Q: Draggable elements hang in IE I've got a bug report from testers: when they double-click on an element (or something similar) and then drag it to the bottom of the page, the mouse pointer ends up below the element, even though the draggable element has containment set. After that you can release the mouse button and the draggable element keeps hanging on the mouse pointer. Only in IE :( Does anyone have any idea why this happens, or how to stop it?
My code looks like:
EDIT: http://jsfiddle.net/j2GyK/
function dragNdrop(){
    for(var i = 0; i < 3; i++){
        dragObjects.push(document.getElementById("box" + i));
        $(dragObjects[i]).css( 'cursor', 'pointer' );
        $(dragObjects[i]).draggable({
            containment:'#decor',
            zIndex: 1000,
            revert: true,
            start: function(event, ui) {
                dragTarget = event.currentTarget; // grab data about the currently dragged object
            }
        });
    }
    $("#drop").droppable({
        drop: function( event, ui )
        {
            dropTarget = event.target; // grab data about the drop target
            $(dropTarget).prop('src', dragTarget.src);
            $(dragObjects).css("visibility", "visible");
            $(dragTarget).css("visibility", "hidden");
            thisSRC = dropTarget.src.charAt(dropTarget.src.length-5);
            checkBtn.disabled = false;
        }
    });
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23362094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a way to override a specific keyword color in a VS Code theme (Material Theme)? I'm trying to override the color of specific keywords (import, export, try, catch, await, return, etc.) in the VS Code "Material Theme"... There's a way to override all the keywords like so:
"editor.tokenColorCustomizations": {
"[Material Theme Darker]": {
"keywords": "#1ed680"
}
},
But this overrides every keyword, and I only need the ones rendered in a specific blue color... Is there any way to do so? The VS Code documentation didn't specify it for particular keywords/tokens.
But obviously there should be a way to do it... Thanks a lot
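One approach (not shown in the question) is to target the finer-grained TextMate scopes instead of the blanket keywords key. The scope names below are illustrative assumptions; you can find the exact scopes your theme colors via the "Developer: Inspect Editor Tokens and Scopes" command:

```json
{
    "editor.tokenColorCustomizations": {
        "[Material Theme Darker]": {
            "textMateRules": [
                {
                    "scope": [
                        "keyword.control.import",
                        "keyword.control.export",
                        "keyword.control.flow"
                    ],
                    "settings": { "foreground": "#1ed680" }
                }
            ]
        }
    }
}
```

Each rule only recolors tokens whose scope matches, so keywords in other scopes keep the theme's original color.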
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57663522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: String to DateTime conversion in .NET Compact Framework I am using the .NET Compact Framework, more precisely targeting Windows Mobile 6.0, and was trying to convert a string to a DateTime, but it seems it is not supported.
Any help would be appreciated.
Dim provider As CultureInfo = New CultureInfo("en-US")
Dim dt As DateTime = Convert.ToDateTime("5/2/2013 5:15:03 PM")
I am getting a FormatException.
A: Please see the following .NET Fiddle here.
Imports System
Imports System.Globalization
Public Module Module1
    Public Sub Main()
        Dim provider As CultureInfo = New CultureInfo("en-US")
        Dim dt As DateTime = Convert.ToDateTime("5/2/2013 5:15:03 PM")
        Console.WriteLine(dt)
        Console.WriteLine(dt.ToString("d", provider))
    End Sub
End Module
Output:
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36336470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Cassandra nodetool rebuild_index stuck at 100% in nodetool compactionstats, how to refresh it and force completion? I have a DC/OS cluster running the Cassandra framework, with three masters and six workers. After a framework crash due to registry issues, the Cassandra nodes are not in sync with the data. In order to sync them, I tried to repair the keyspaces one by one and check the "./nodetool compactionstats" status.
After the repair I got a stuck task in "./nodetool compactionstats":
[root@server-worker1 bin]# ./nodetool compactionstats
pending tasks: 2
- system.IndexInfo: 1
- my_app_prod.profile_activation_history: 1
id compaction type keyspace table completed total unit progress
0d49d3d0-19d6-11eb-a65c-f71e0bcef8b1 Secondary index build my_app_prod profile_activation_history 3010912 3010912 bytes 100.00%
Active compaction remaining time : 0h00m00s
[root@server-worker1 bin]#
The task is stuck at 100%. How can I force it to finish, or refresh the status shown by "./nodetool compactionstats"?
I checked the nodes and no such process is running in memory on any node. I need to continue repairing the keyspaces, but this task is blocking them, because the repair will wait until it is over.
A: Secondary index builds are part of the normal operation of Cassandra when you have secondary indexes on tables. Any new mutation that a node receives will get indexed.
It runs as a compaction thread within the same JVM as the Cassandra process so you won't see a separate process running on a machine's process table.
There's no operation to "force" them to finish. They will finish when the required indexing of data has completed.
Repairs are also part of the normal operation of Cassandra. When new data is streamed to a node during a repair, that data will also be indexed by the receiving node. What I'm getting at is that those operations go hand-in-hand and one does not prevent the other from working. Cheers!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64591002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Sorting of an array using merge sort I have implemented merge sort in C. Although the code seems to be correct, it does not give me the sorted array; rather, it returns the same array that was given to it, which means my merge function is not working.
#include<stdio.h>
#include<stdlib.h>
void re_sort(int arr[],int size);
void merge(int left[],int right[],int arr[],int rightlen,int leftlen);
int main(void)
{
int a[10];
int n;
printf("enter the number\n");
scanf("%d",&n);
printf("enter the elements\n");
for(int i=0;i<n;i++)
{
scanf("%d",&a[i]);
}
re_sort(a,n); //merge sort using recursion
printf("the sorted list is:\n");
for(int i=0;i<n;i++)
{ printf("%d\t",a[i]);
}
return 0;
}
void re_sort(int arr[],int size)
{ int mid,*left,*right;
int k=0;
if(size<2)
return;
else
mid=size/2;
left=(int*)(malloc(mid*(sizeof(int)))); // two sub arrays left and right
right=(int*)(malloc((size-mid)*(sizeof(int))));
for(int i=0;i<mid;i++)
{
left[i]=arr[k++];
}
for(int j=0;j<(size-mid);j++)
{
right[j]=arr[k++];
}
re_sort(left,mid); //recursion until size becomes less than 2
re_sort(right,size-mid);
merge(left,right,arr,size-mid,mid); //both the elements in left and right are merged
}
void merge(int left[],int right[],int arr1[],int rightlen,int leftlen)
{ int arr[100];
int k=0,i=0,j=0;
while(i<leftlen && j<rightlen)
{
if(left[i]<= right[j])
arr[k++]=left[i++];
else
arr[k++]=right[j++];
}
while(i<leftlen)
{
arr[k++]=left[i++];
}
while(j<rightlen)
{
arr[k++]=right[j++];
}
for(int l=0;l<(rightlen+leftlen);l++)
{
arr1[l]=arr[l];
}
free(left);
free(right);
}
A: Here
if(left[i]<= right[j])
arr[k++]=left[i++];
else
arr[k++]=left[j++];
The last left should be right.
Anyway, where do you free the memory you malloc'd...?
A: It is a very bad idea to malloc a new buffer for each sub-array on every recursive call. Remember, malloc is quite an expensive operation, and free often costs even more than malloc!
The subarrays resulting from the recursive splitting do not overlap (except that the result of a merge spans the two merged parts). So you never need more than one buffer at a time for a merge result, and a merge does not interfere with any other merge (except those for which it is a sub-range). As a result, it is enough to create a single copy of the whole input array and use the two arrays alternately as the source and the destination of the recursive merges:
#include <stdlib.h> /* malloc, free */
#include <string.h> /* memcpy */
void merge( int dst[], int src[], int idx1, int idx2, int end2)
{
int idx = idx1;
int end1 = idx2;
while(idx1 < end1 && idx2 < end2)
dst[idx++] = src[idx1] <= src[idx2] ? src[idx1++] : src[idx2++];
while(idx1 < end1)
dst[idx++] = src[idx1++];
while(idx2 < end2)
dst[idx++] = src[idx2++];
}
void mrgsrt( int dst[], int src[], int begin, int len)
{
if(len == 1)
dst[begin] = src[begin];
if(len > 1) {
int mid = len/2;
mrgsrt(src, dst, begin, mid);
mrgsrt(src, dst, begin+mid, len-mid);
merge(dst, src, begin, begin+mid, begin+len);
}
}
void sort( int a[], int len)
{
int *tmp;
if((tmp = malloc(len*sizeof(*a))) != NULL) {
memcpy(tmp, a, len*sizeof(*a));
mrgsrt(a, tmp, 0, len);
free(tmp);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26403686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error:Cause: org.gradle.api.internal.ExtensibleDynamicObject I am trying to use the NDK in Android Studio, but an error occurs:
Error:Cause: org.gradle.api.internal.ExtensibleDynamicObject
My Android Studio version is 2.2 preview 1, and the Gradle version is 2.10.
The gradle-experimental version is 0.7.0.
I have tried this: Gradle build Error:Cause: org.gradle.api.internal.ExtensibleDynamicObject but it did not work.
Does anyone know how to resolve it?
Here are my build.gradle file contents:
apply plugin: 'com.android.model.application'
model {
android {
compileSdkVersion 23
buildToolsVersion "23.0.3"
defaultConfig {
applicationId "me.stupideme.shuclass"
minSdkVersion 16
targetSdkVersion 21
versionCode 1
versionName "1.0"
vectorDrawables.useSupportLibrary = true
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
android.ndk {
moduleName "ndktest"
ldLibs.addAll(['log'])
cppFlags.add("-std=c++11")
cppFlags.add("-fexceptions")
platformVersion 16
stl 'gnustl_shared'
}
buildTypes {
release {
minifyEnabled false
//proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
proguardFiles += file('proguard-rules.pro')
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support:appcompat-v7:23.4.0'
compile 'com.android.support:design:23.4.0'
compile 'com.android.support.constraint:constraint-layout:1.0.0-alpha1'
compile 'com.android.support:support-v4:23.4.0'
compile 'com.android.support:cardview-v7:23.4.0'
compile 'io.github.yavski:fab-speed-dial:1.0.1'
compile 'com.github.akashandroid90:imageletter:1.5'
testCompile 'junit:junit:4.12'
androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2'
androidTestCompile 'com.android.support.test:runner:0.5'
androidTestCompile 'com.android.support:support-annotations:23.4.0'
}
buildTypes {
release {
minifyEnabled false
//proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
proguardFiles += file('proguard-rules.pro')
}
}
}
A: Use
minSdkVersion.apiLevel 16
targetSdkVersion.apiLevel 21
instead of
minSdkVersion 16
targetSdkVersion 21
and
moduleName = 'ndktest'
instead of
moduleName "ndktest"
so the full file should be:
apply plugin: 'com.android.model.application'
model {
android {
compileSdkVersion 23
buildToolsVersion "23.0.3"
defaultConfig {
applicationId "me.stupideme.shuclass"
minSdkVersion.apiLevel 16
targetSdkVersion.apiLevel 21
versionCode 1
versionName "1.0"
vectorDrawables.useSupportLibrary = true
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
android.ndk {
moduleName = 'ndktest'
ldLibs.addAll(['log'])
cppFlags.add("-std=c++11")
cppFlags.add("-fexceptions")
platformVersion 16
stl 'gnustl_shared'
}
buildTypes {
release {
minifyEnabled false
//proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
proguardFiles += file('proguard-rules.pro')
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support:appcompat-v7:23.4.0'
compile 'com.android.support:design:23.4.0'
compile 'com.android.support.constraint:constraint-layout:1.0.0-alpha1'
compile 'com.android.support:support-v4:23.4.0'
compile 'com.android.support:cardview-v7:23.4.0'
compile 'io.github.yavski:fab-speed-dial:1.0.1'
compile 'com.github.akashandroid90:imageletter:1.5'
testCompile 'junit:junit:4.12'
androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2'
androidTestCompile 'com.android.support.test:runner:0.5'
androidTestCompile 'com.android.support:support-annotations:23.4.0'
}
buildTypes {
release {
minifyEnabled false
//proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
proguardFiles += file('proguard-rules.pro')
}
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37968781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Babel doesn't ignore files outside the directory Here is my directory structure:
root/
├── nodejs_project/
│   ├── node_modules/
│   ├── src/
│   ├── babel.config.json
│   ├── package.json
│   └── package-lock.json
└── another_project/
    ├── folder1/
    │   └── static/
    └── folder2/
        └── static/
Here is my babel.config.json
{
"presets": ["@babel/react", "minify"],
"ignore": ["**/*.min.js"]
}
Ignoring works well when the .js files are in the src/ directory:
...\root\nodejs_project>npx babel src/ --out-dir src/ --out-file-extension .min.js
But the ignore rule stops working when the .js files are in the another_project/ directory:
...\root\nodejs_project>npx babel ../another_project/ --out-dir ../another_project/ --out-file-extension .min.js
How can I set babel.config.json to ignore all of the *.min.js files in another_project/ directory?
A: This worked for me in any directory:
{
"presets": ["@babel/react", "minify"],
"ignore": ["../**/*.min.js"]
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69597611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Google Sheets calculations for dates before 1900 - a possible solution I am trying to do calculations without scripts. A solution can be to take a far-future date (31/12/9999), add 1, and use that value to do the calculations:
Cell A1 = 1/2/1872 -> -10.194,00
Cell A2 = 31/12/9999 -> 2.958.465,00
Cell A3 = A1+A2-1 -> 2.948.270,00 -> 01/02/9972
I can adjust the year to a more comfortable one that is distant from the edges (99999 and 1900) -> I use 4000 as the offset, which should preserve the original leap years. A4: =DATE(YEAR(A3)-8100+4000;MONTH(A3);DAY(A3)) -> 01/02/5872
At this point I can use A4 to do most calculations on dates, and calculate back any adjusted date by using DATEVALUE().
Of course, this needs to take into consideration past dates that have issues with the current official calendars; it seems that the 19th century is OK.
I haven't tested/ported it to Excel.
Does anybody confirm it works?
A: The logic of your formula may be correct, but more factors must be considered when playing with the calendar, as humanity likes to adjust even the rules of adjustment. Here are a few examples:
*
*The longest year in history: 46 BCE (708 AUC), lasting 445 days, known as "the last year of confusion", as Caesar added 3 more months (90 days) so that the next year (45 BCE) would start right after the winter solstice.
*The shortest year in history: 1582 CE, lasting 355 days, where October had only 21 days (4th Oct. was followed by 15th Oct.). But it also depends on where you are,
because the British Empire decided to reinvent the wheel by accepting the "1582 wheel" in the year 1752 CE, where September had only 19 days (2nd Sep. was followed by 14th Sep.), resulting in a year of 355 days as well. However, if we are being technical, the British Empire also had a year that lasted only 282 days, because their "old" new year started on 25 March and not 1 January; therefore the year 1751 CE started on 25th Mar. and ended on 31st Dec. Turkey, for example, joined the "Gregorian train" on 1st Jan. 1927 CE after their December of 1926 CE had only 18 days, so that year was only 352 days long. The latest country to adopt the Gregorian calendar was Saudi Arabia in 2016 CE, when they jumped from 1437 AH.
*The year zero: it does not exist. 31st Dec. 1 BCE was followed by 1st Jan. 1 CE:
753 AUC = 1 BCE
754 AUC = 1 CE
Also, the dude who invented this nonsense was born around 1223 AUC (470 CE), so that speaks for itself. This is important because offsetting DATEVALUE needs to be done in such a way that the calculation will not cross 0, e.g. not drop below -693593:
=TO_DATE(-694324) - incorrect datevalue - 01/01/00-1
=TO_DATE(-693678) - incorrect datevalue - 08/10/0000
=TO_DATE(-693593) - 1st valid datevalue - 01/01/0001
=TO_DATE(35830290) - last valid datevalue - 31/12/99999
*It's also worth mentioning that 25th Dec. 200 CE was not a Friday on the Roman peninsula, because people in that era used an 8-day week.
*There are many calendar systems, each with its own set of rules, and to this date there are still countries that do not recognize the Gregorian calendar as a standard. So if you want to re-live the year 2021, go to Ethiopia, where today's 9 Oct. 2022 CE = 29 Mes. 2015 EC. On the other hand, if you prefer to live in the future, try Nepal, where today's 9 Oct. 2022 = 23 Ash. 2079 BS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74004375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to Edit existing google form item (question) using google Apps Script I have a Google script to construct a google form. The script fills the form using a Spreadsheet which contains the questions and corresponding options.
The question displayed in the form needs to be updated at regular intervals.
I hope to update the question in the form by changing the question in the Spreadsheet as follows:
*
*I use onOpen() for the script, so that each time the form is accessed the script reconstructs the most up-to-date form.
However, currently each time I run the script a new question is added to the form, and the previous, outdated questions still stay on the form.
I need to edit the existing question on the form using the script, to update it to show the latest question.
I could not find a way to edit an existing form question using Google Apps Script.
Does anyone know how?
The question and options are updated in the spreadsheet at regular intervals.
I want the script to be able to edit the question automatically.
All my attempts to find a function that can edit an already-existing question on a form have been in vain!
(PS: All the functions I found can create a new question and its options, but cannot edit an existing form question/option.)
A: In order to update an existing item (question), the code must first get that item by its item type, and there are many different methods for getting items by their type.
There is a different method for each type of question. The types of questions are:
*
*CHECKBOX
*DATE
*DATETIME
*DURATION
*GRID
*LIST
*MULTIPLE_CHOICE
*PARAGRAPH_TEXT
*SCALE
*TEXT
*TIME
In order to update an existing item, the code must first get that item by its item type. Here are some examples:
*
*asCheckboxItem()
*asDateItem()
*asListItem()
*Etc
For example:
var myCheckBoxItem = FormApp.openById(id).getItemById(id).asCheckboxItem();
Once the code has obtained the item as its correct type, you can change it the same way that you created it in the first place.
function editFormItem() {
var form = FormApp.getActiveForm();
var allItems = form.getItems();
var i,
L=0,
thisItem,
thisItemType,
myCheckBoxItem;
L = allItems.length;
for (i=0;i<L;i++) {
thisItem = allItems[i];
thisItemType = thisItem.getType();
//Logger.log('thisItemType: ' + thisItemType);
if (thisItemType===FormApp.ItemType.CHECKBOX) {
myCheckBoxItem = thisItem.asCheckboxItem();
myCheckBoxItem.setChoiceValues(values) // 'values' must be defined elsewhere, e.g. read from your spreadsheet
};
};
};
The above script is not complete. You still need to somehow match up each item with its new values. If all your form questions are the same item type, then you won't need to test for the item type.
There are 3 item types that get returned by getItems() that are not question items. They are:
*
*IMAGE
*PAGE_BREAK
*SECTION_HEADER
So, if you have any of those 3 in your form, you should check the item type.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33309046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Java program for adding digits Given num = 38, the process is: 3 + 8 = 11, then 1 + 1 = 2. Since 2 has only one digit, return it.
But my function returns 11. What is wrong with my logic? Help!!
public class Solution {
public int addDigits(int num) {
int result=doSum(num);
return result;
}
public static int doSum(int num){
int sum=0,digit;
while(num!=0){
digit=num%10;
sum+=digit;
num=num/10;
}
if(sum/10!=0){
doSum(sum);
}
return sum;
}
}
A: if(sum/10!=0){
doSum(sum);
}
This is what is wrong with your logic. You recursively call doSum() on the new sum, but you do nothing with the result. So you need to change this to:
if(sum/10!=0){
sum = doSum(sum);
}
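As a side note (not part of the original answer), the repeated digit sum computed here is the classic "digital root", which also has a constant-time closed form; a minimal sketch:

```java
public class DigitalRoot {
    // Digital root without loops or recursion: for positive num,
    // the repeated digit sum equals 1 + (num - 1) % 9.
    static int addDigits(int num) {
        return num == 0 ? 0 : 1 + (num - 1) % 9;
    }

    public static void main(String[] args) {
        System.out.println(addDigits(38)); // 3 + 8 = 11, then 1 + 1 = 2
        System.out.println(addDigits(0));
    }
}
```

This works because a number and its digit sum are congruent modulo 9; the (num - 1) / + 1 shift maps the 0 residue to 9 for multiples of 9.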
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32710911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Loading data from SQL Server to Elasticsearch Looking for suggestions on loading data from SQL Server into Elasticsearch or another data store. The goal is to have transactional data available in real time for reporting.
We currently use a 3rd party tool, in addition to SSRS, for data analytics. The data transfer is done using daily batch jobs and as a result, there is a 24 hour data latency.
We are looking to build something out that would allow for more real time availability of the data, similar to SSRS, for our Clients to report on. We need to ensure that this does not have an impact on our SQL Server database.
My initial thought was to do a full dump of the data, during the weekend, and writes, in real time, during weekdays.
Thanks.
A: ElasticSearch's main use cases are for providing search type capabilities on top of unstructured large text based data. For example, if you were ingesting large batches of emails into your data store every day, ElasticSearch is a good tool to parse out pieces of those emails based on rules you setup with it to enable searching (and to some degree querying) capability of those email messages.
If your data is already in SQL Server, it sounds like it's structured already and therefore there's not much gained from ElasticSearch in terms of reportability and availability. Rather you'd likely be introducing extra complexity to your data workflow.
If you have structured data in SQL Server already, and you are experiencing issues with reporting directly off of it, you should look at building a data warehouse instead to handle your reporting. SQL Server comes with a number of features out of the box to help you replicate your data for this very purpose. The three main features to accomplish this that you could look into are AlwaysOn Availability Groups, Replication, and SSIS.
Each option above (in addition to other out-of-the-box features of SQL Server) has different pros and drawbacks. For example, AlwaysOn Availability Groups are very easy to set up and offer the ability to automatically fail over if your main server has an outage, but they clone the entire database to a replica. Replication lets you more granularly choose to copy only specific tables and views, but then you can't as easily fail over if your main server has an outage. So you should read up on all three options and understand their differences.
Additionally, if you're having specific performance problems trying to report off of the main database, you may want to dig into the root cause of those problems first before looking into replicating your data as a solution for reporting (although it's a fairly common solution). You may find that a simple architectural change like using a columnstore index on the correct Table will improve your reporting capabilities immensely.
I've been down both pathways of implementing ElasticSearch and a data warehouse using all three of the main data synchronization features above, for structured data and unstructured large text data, and have experienced the proper use cases for both. One data warehouse I've managed in the past had Tables with billions of rows in it (each Table terabytes big), and it was highly performant for reporting off of on fairly modest hardware in AWS (we weren't even using Redshift).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65976619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Managing string resources in Qt I have a string that I need at various points in my program. I know that Qt can manage image resources, but I need similar functionality for a couple of strings. Currently I'm using a string resource class, which is a sloppy solution.
class StringRes {
public:
static const QString& appName() { return _appName; }
static const QString& appVersion() { return _appVersion; }
private:
static const QString _appName;
static const QString _appVersion;
};
Besides, this solution causes a segfault at a certain point in my code.
_fileStream << QString("This is ")
+ StringRes::appName()
+ " "
+ StringRes::appVersion()
+ " reporting for duty.\n";
How do Qt programmers (or C++ programmers in general) manage their string resources?
A: For storing just the application's name and version and the organization's name and domain, you can use QCoreApplication's properties applicationName, applicationVersion, organizationDomain and organizationName.
I usually set them in main() function:
#include <QApplication>
#include "MainWindow.h"
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
// These functions are member of QCoreApplication, QApplication's
// parent class:
app.setApplicationName("My Application");
app.setApplicationVersion("3.5.2");
app.setOrganizationName("My Company, or just My Name");
app.setOrganizationDomain("http://example.com/");
MainWindow window;
window.show();
return app.exec();
}
And I can use them to show a nice about message:
#include "MainWindow.h"
#include <QCoreApplication>
...
// Slot called when ? -> About menu is clicked.
void MainWindow::on_aboutAction_triggered()
{
QString message = tr("<strong>%1</strong> %2<br />"
"Developed by %3")
.arg(QCoreApplication::applicationName())
.arg(QCoreApplication::applicationVersion())
.arg(QString("<a href=\"%1\">%2</a>")
.arg(QCoreApplication::organizationDomain())
.arg(QCoreApplication::organizationName()))
;
QMessageBox::about(this, tr("About"), message);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11266776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: AKS Storage Class network acl I'm trying to create a storage class in AKS that needs to have public access disabled and a specific vnet and subnet allowed, basically using a service endpoint.
I can create the storage account with specific params like the storage type, but the other settings don't apply.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ######files2
provisioner: file.csi.azure.com
allowVolumeExpansion: true
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=0
- gid=0
- mfsymlinks
- cache=strict
- actimeo=30
parameters:
skuName: Standard_GRS
vnetResourceGroup: r##########
vnetName: v#######-#####
subnetName: s############s
allowBlobPublicAccess: false
This link seems to imply it's possible:
https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/azure-files-csi.md
But I can't get it working and I can't have storage accounts exposed to the internet.
Any ideas?
Thanks
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74744022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to compile sqlite with ICU? I downloaded sqlite from http://www.sqlite.org/sqlite-autoconf-3070701.tar.gz
How can I compile sqlite with icu ?
A: Whether you build the amalgamation with ICU enabled or just the ICU extension depends on what you want to do with ICU.
If you need the ICU tokenizer (to do FTS) you need to build the amalgamation; if you just need the ICU functions, as listed at https://www.sqlite.org/cgi/src/dir?ci=6cb537bdce85e088&name=ext/icu , then the ICU extension is enough.
When building the ICU extension I found I could not name it libSqliteIcu.so as that README said, because when I loaded it I got this error:
sqlite> .load ./libSqliteIcu.so
Error: dlsym(0x7fa073e02c60, sqlite3_sqliteicu_init): symbol not found
After asking the question on the sqlite mailing list I was told the following, which I have confirmed:
The symbol name is sqlite3_icu_init. When you load module lib<x>.so, the symbol sqlite3_<x>_init is called. You need to either (a) rename the shared library to the correct name (libicu.so), or pass the name of the init function (sqlite3_icu_init) to the loader when you load the module, or (b) change the name of the sqlite3_icu_init function in the icu.c source so that it matches the name that the module loader is looking for ...
A: 1) You can compile it as a dynamic extension of SQLite.
Citing http://www.sqlite.org/cvstrac/fileview?f=sqlite/ext/icu/README.txt
The easiest way to compile and use the ICU extension is to build
and use it as a dynamically loadable SQLite extension. To do this
using gcc on *nix:
gcc -shared icu.c `icu-config --cppflags --ldflags` -o libSqliteIcu.so
You may need to add "-I" flags so that gcc can find sqlite3ext.h
and sqlite3.h. The resulting shared lib, libSqliteIcu.so, may be
loaded into sqlite in the same way as any other dynamically loadable
extension.
(loading is done with .load libSqliteIcu.so at the SQLite prompt)
2) You can compile SQLite with ICU enabled. According to http://www.sqlite.org/compile.html
you should define macro SQLITE_ENABLE_ICU:
Add -DSQLITE_ENABLE_ICU to the CFLAGS variable or add #define SQLITE_ENABLE_ICU in some config file.
Okay, there is something here not described in the standard documentation. Here is an example of calling configure with ICU enabled:
CFLAGS='-O3 -DSQLITE_ENABLE_ICU' CPPFLAGS=`icu-config --cppflags` LDFLAGS=`icu-config --ldflags` ./configure
You also need the icu-config program installed (it comes from the libicu or libicu-dev package).
A: To compile SQLite with ICU enabled, you should define macro SQLITE_ENABLE_ICU.
Make sure that you have libicu-dev (on Debian/Ubuntu) installed.
As @osgx wrote, the standard documentation is lacking ICU-specific flags that you also need to set. icu-config is deprecated and is missing on Ubuntu 20.04 onwards, thus you should use pkg-config instead:
CFLAGS="-O2 -DSQLITE_ENABLE_ICU `pkg-config --libs --cflags icu-uc icu-io`" ./configure
make
See:
*
*How To Use ICU - C++ Makefiles
*Compile-time Options of SQLite
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6578600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Qt 5.2.0 ftp and QNetworkAccessManager I need to be able to create directories on my FTP server.
I know that there's no QFtp in Qt 5.2.1, so how do I mkdir with QNetworkAccessManager?
A: QNetworkAccessManager doesn't support that.
A: Although it is recommended to use QNetworkAccessManager as much as possible, you can always fall back to the QtFtp add-on as follows:
QT += ftp
Then, you will be able to use the mkdir method of the QFtp class.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22250898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: @Destroy Annotation with Page Scoped Beans I have a page-scoped Seam component, and it has a no-parameter void method annotated with @Destroy, as shown below. My problem is that the destroy method is never called, even if the browser page is changed (i.e. the page scope has ended).
@Name("myPageBean")
@Scope(ScopeType.PAGE)
public class MyPageBean {
@Destroy
public void destroy() {
// Code runs when the component is destroyed.
}
}
Do you have an idea for this issue?
Thanks in advance.
A:
When does the page context get destroyed?
The page scope is indistinguishable from the UI component tree.
Therefore, the page context is destroyed when JSF removes the UI
component tree (also called the view) from the session. However, when
this happens, Seam does not receive a callback and therefore the
@Destroy method on a page-scoped component never gets called. If the
user clicks away from the page or closes the browser, the page context
has to wait to get cleaned up until JSF kills the view to which it is
bound. This typically happens when the session ends or if the number
of views in the session exceeds the limit. This limit is established
using the com.sun.faces.numberOfViewsInSession and
com.sun.faces.numberOfLogicalViews context parameters in the Sun
implementation. Both default to 15. However, it's generally best not
to mess with these values.
The page scope should be seen merely as a way to keep data associated
with a view as a means of maintaining the integrity of the UI
component. This focus is especially relevant for data tables, which
have historically been problematic. I would not use the page scope as
a general storage mechanism for use case or workflow data. A good way
to think of it is as a cache.
http://www.seamframework.org/42514.lace
A: Do you ever use this bean in a page? If not, I guess the @Destroy method will not be called because the bean is never created.
Or you can add @Startup to force creation of the bean when the scope is initialized.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5652696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: jQuery selector - select new elements too after append/load within variable I have:
$elements = $('.elements');
$element = $('.element');
function appendText(element){
element.append('<em> appended text</em>');
}
appendText($element);
$('button').on('click', function(){
$elements.append('<span class="element">Span Element Appended after load</span>');
appendText($element);
});
The appendText function, after the button click, appends only to the element selected initially, which I presume is due to the cached jQuery object.
I know that I can do appendText($('.element')); and the problem will be solved, but I don't want to change all my code now.
Is there any way to make jQuery consider this $element variable as not a cached element and look into the full DOM each time I call that variable?
Please find the jsfiddle if you wish to play or understand better: http://jsfiddle.net/adyz/733Xd/
A: If you add this:
$element = $('.element:last-child')
before
appendText($element);
I think it will solve your problem.
jsFiddle here: http://jsfiddle.net/733Xd/5/.
Best regards!
A: That is an expensive thing to do. I would advise against it for performance reasons.
I wrote this plugin at the beginning of last year: https://github.com/fmsf/jQuery-obj-update
It doesn't trigger on every call; you have to request the update yourself:
$element.update();
The code is small enough to be pasted on the answer:
(function ( $ ) {
$.fn.update = function(){
var newElements = $(this.selector),i;
for(i=0;i<newElements.length;i++){
this[i] = newElements[i];
}
for(;i<this.length;i++){
this[i] = undefined;
}
this.length = newElements.length;
return this;
};
})(jQuery);
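Note that the plugin above depends on `this.selector`, which later jQuery versions deprecated and eventually removed, so with modern jQuery the selector has to be remembered explicitly. A dependency-free sketch of the same "refresh on demand" idea (the helper name and query function are made up for illustration, not part of the plugin):

```javascript
// A "live" handle re-runs its query on demand instead of caching the result
// forever. `queryFn` stands in for a DOM query such as
// () => document.querySelectorAll('.element').
function makeLiveHandle(queryFn) {
  return {
    items: queryFn(), // initial snapshot, like $('.element')
    update() {        // re-run the query, like the plugin's .update()
      this.items = queryFn();
      return this;
    },
  };
}
```

Calling `update()` plays the same role as re-evaluating `$('.element')`: the handle's `items` only changes when you explicitly ask for a refresh.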
A: I think the point below will explain your problem:
appendText($element); // here you are always referring to the node that was there initially
http://jsfiddle.net/s9udJ/
A: A possible solution would be:
$(function(){
$elements = $('.elements');
$element = $('.element');
function appendText(element){
element.append('<em> appended text</em>');
}
appendText($element);
$('button').on('click', function(){
$elements.append('<span class="element">Span Element Appended after load</span>');
appendText($elements.find('span').last());
});
})
A: I don't think what you're asking is easily possible - when you call $element = $('.element'); you define a variable equal to a set of objects (well, one object). When calling appendText($element); you're operating on that object. It's not a cache - it's just how JS (and other programming languages) works.
The only solution I can see is to have a function that will update the variable, every time jquery calls one of its DOM manipulation methods, along the lines of this:
<div class='a'></div>
$(document).ready(function()
{
var element = $('.a');
$.fn.appendUpdate = function(elem)
{
// ugly because this is an object
// also - not really taking account of multiple objects that are added here
// just making an example
if ($(elem).is(this.selector))
{
this[this.length] = $(this).append(elem).get(0);
this.length++;
}
return this;
}
element.appendUpdate("<div class='a'></div>");
console.log(element);
});
Then you can use sub() to roll out your own version of append = the above. This way your variables would be up to date, and you wouldn't really need to change your code. I also need to say that I shudder about the thing I've written (please, please, don't use it).
Fiddle
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23802428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to activate two keyboard events simultaneously I'm trying to execute two javascript functions:
<body onkeydown="movimentation()">
<script>
function movimentation(){
var key=event.key;
if(key==='8'){
up2();
console.log("the key pressioned is " + key);
};
if(key==="5"){
down2();
console.log("the key pressioned is " + key);
};
if(key==="6" ){
rigth2();
console.log("the key pressioned is " + key);
};
if(key==="4"){
left2();
console.log("the key pressioned is " + key);
};if(key==="w"){
up1();
console.log("the key pressioned is " + key);
};
if(key==="s" ){
down1();
console.log("the key pressioned is " + key);
};
if(key==="d"){
rigth1();
console.log("the key pressioned is " + key);
};
if(key==="a"){
left1();
console.log("the key pressioned is " + key);
}else{};
}
function p2movimentation(key){
if(key==='8'){
up2();
console.log("the key pressioned is " + key);
};
if(key==="5" && p2x<98){
down2();
console.log("the key pressioned is " + key);
};
if(key==="6" ){
rigth2();
console.log("the key pressioned is " + key);
};
if(key==="4"){
left2();
console.log("the key pressioned is " + key);
}else{};
};
function p1movimentation(key){
if(key==="w"){
up1();
console.log("the key pressioned is " + key);
};
if(key==="s"){
down1();
console.log("the key pressioned is " + key);
};
if(key==="d"){
rigth1();
console.log("the key pressioned is " + key);
};
if(key==="a" ){
left1();
console.log("the key pressioned is " + key);
}else{};
};
(up1(), down1(), rigth1() and left1() move player 1; up2(), down2(), rigth2() and left2() move player 2)
But the activation of the first function prevents the activation of the second one.
I would like the two functions to be executed simultaneously.
A: You can make a wrapper function which will call the two other functions like this:
function wrapperFunction(e){
p1movimentation(e);
p2movimentation(e);
}
function p1movimentation(e){
console.log("p1movimentation");
}
function p2movimentation(e){
console.log("p2movimentation");
}
<body onkeydown="wrapperFunction();"></body>
A: There's no way to do it.
Javascript is single-threaded. It can only do one thing at a time. Even when you attach multiple event listeners to something, javascript will still run each event listener one at a time. If another event fires while javascript is busy doing something, it won't go run the relevant code until it finishes what it's currently doing. The way javascript does this is referred to as the event loop.
To get your desired effect, you'll likely have to rethink what your two functions are doing. Maybe you need to split up the functions, so that the functions can take turns? e.g. onkeydown="p1move();p2move();p1cleanUp();p2cleanUp()". If you edit your question with the bodies of these functions, we might be able to help out more.
EDIT
I think I've figured out what the issue is here. You're relying on the fact that keydown is repeatedly fired as the user holds down a key, but the issue is if the user holds down a second key, then that second key starts repeatedly firing instead.
What you need to do instead is keep track of which keys are currently being held down. Then, you could, for example, make a game loop that runs every so often that'll check which keys are currently being pressed, and react accordingly.
In order to know which keys are currently being pressed, you'll need to listen to both the keydown event and the keyup event. (when keydown is fired, add the key to a list. When keyup is fired, remove the key from the list).
This stackoverflow answer explains the same thing but in a lot more detail. The question is referring to the same thing you're struggling with.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65229069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I translate only one object of the scene in OpenGL? I have a wall pattern DrawWall and an airplane DrawAirplane in my OpenGL game. How can I push and pop the current matrix and translate only the wall in the scene?
I expect the airplane to be fixed.
private: void DrawWall(){
glPushMatrix();
glBegin(GL_POLYGON);
LeftWallPattern();
glEnd();
glBegin(GL_POLYGON);
RightWallPattern();
glEnd();
glPopMatrix();
}
private: void DrawAirplane(){
glPushMatrix();
glBegin(GL_LINE_LOOP);
//...
glEnd();
glPopMatrix();
}
public: void Display(){
glClear(GL_COLOR_BUFFER_BIT);
glTranslatef(0, -0.02, 0);
DrawWall();
DrawAirplane();
glFlush();
}
A: Use glPushMatrix() to push the current matrix, do the glTranslate and draw the wall, then glPopMatrix() and draw the plane. This should translate only the wall. The problem is that you seem to be doing the translate in Display() instead of in DrawWall() where it should be.
A: A few things to expand on what Jesus was saying.
When drawing the airplane you don't want to apply any transformations to it, so you need to have the identity matrix loaded:
Push the current modelview matrix
Load the identity matrix <=== this is the step you're missing
Draw the airplane
Pop the modelview matrix
When drawing the wall you want the current transformations to apply, so you do not push the current matrix or else you've wiped out all of the translations you've built up.
Remove the Push/Pop operations from DrawWall()
At some point in your initialization, before Display is called for the first time, you need to set the modelview matrix to the identity matrix. For each subsequent call to Display, -0.02 will then be added to your translation in the y-direction.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17179665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I create a Mono project compatible winforms app using dotnet? I'd like to create a winforms app on Windows using the dotnet command line which is compatible with the Mono project running on Linux. In my particular case the framework should be .NET Framework 4.7.2.
If I use Visual Studio I can create a winforms project specifically for the .NET framework. However if I use dotnet new winforms to create a new winforms app I can set the framework with -f only to either netcoreapp3.1 or net5.0. It is just about scaffolding the app project, not developing it.
Can I run them under Linux using Mono?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69658637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: PowerShell / PowerShell ISE - How to enter a tab indentation in the command window? In PowerShell at the command line, how do I enter a tab indentation for multiple line commands?
Obviously, [tab] or [shift] + [tab] does not work, otherwise I would not ask this question.
A: Using PSReadline (built-in to PS 5.1 or available via Install-Module) you can make a custom key handler:
Set-PSReadlineKeyHandler -Chord 'ctrl+tab' -ScriptBlock {
$text = ''
$cursor = 0
[Microsoft.PowerShell.PSConsoleReadLine]::GetBufferState([ref]$text, [ref]$cursor)
$lastNewLine = [math]::max(0, $text.LastIndexOf("`n", $cursor - 1))
[Microsoft.PowerShell.PSConsoleReadLine]::Replace([math]::min($cursor, $lastNewLine + 1), 0, " ")
}
Then Ctrl+Tab will indent the line which the cursor is on, no matter where in the line the cursor is.
Extending this to multiple lines, when you can't really select multiple lines in the console, is left as an exercise for the reader.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49905495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
} |
Q: Worker process cannot be executed in a Java Tomcat server deployed to Heroku I have 1 web and 1 worker process communicating with each other using the CloudAMQP library, consisting of 2 channels/queues (one channel to send a simple message from the web process, and the other to receive a message from the worker). Since I'm deploying my webapp in WAR file format, I had to put my worker class into a jar by creating a local Maven repository and including it as a dependency, so that the file ends up in the WEB-INF/lib folder and I can execute the worker process using the worker sh script generated by the appassembler-maven-plugin.
worker sh script :
BASEDIR=`dirname $0`/..
BASEDIR=`(cd "$BASEDIR"; pwd)`
# OS specific support. $var _must_ be set to either true or false.
cygwin=false;
darwin=false;
case "`uname`" in
CYGWIN*) cygwin=true ;;
Darwin*) darwin=true
if [ -z "$JAVA_VERSION" ] ; then
JAVA_VERSION="CurrentJDK"
else
echo "Using Java version: $JAVA_VERSION"
fi
if [ -z "$JAVA_HOME" ] ; then
JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/${JAVA_VERSION}/Home
fi
;;
esac
if [ -z "$JAVA_HOME" ] ; then
if [ -r /etc/gentoo-release ] ; then
JAVA_HOME=`java-config --jre-home`
fi
fi
# For Cygwin, ensure paths are in UNIX format before anything is touched
if $cygwin ; then
[ -n "$JAVA_HOME" ] && JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
[ -n "$CLASSPATH" ] && CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
fi
# If a specific java binary isn't specified search for the standard 'java' binary
if [ -z "$JAVACMD" ] ; then
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
else
JAVACMD=`which java`
fi
fi
if [ ! -x "$JAVACMD" ] ; then
echo "Error: JAVA_HOME is not defined correctly."
echo " We cannot execute $JAVACMD"
exit 1
fi
if [ -z "$REPO" ]
then
REPO="$BASEDIR"/repo
fi
CLASSPATH=$CLASSPATH_PREFIX:"$BASEDIR"/etc:"$REPO"/com/rabbitmq/amqp-client/3.3.4/amqp-client-3.3.4.jar:"$REPO"/javax/servlet/jstl/1.2/jstl-1.2.jar:"$REPO"/com/example/worker/1.0/worker-1.0.jar:"$REPO"/cloudamqp/example/amqpexample/1.0-SNAPSHOT/amqpexample-1.0-SNAPSHOT.war
EXTRA_JVM_ARGUMENTS=""
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
[ -n "$CLASSPATH" ] && CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
[ -n "$JAVA_HOME" ] && JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
[ -n "$HOME" ] && HOME=`cygpath --path --windows "$HOME"`
[ -n "$BASEDIR" ] && BASEDIR=`cygpath --path --windows "$BASEDIR"`
[ -n "$REPO" ] && REPO=`cygpath --path --windows "$REPO"`
fi
exec "$JAVACMD" $JAVA_OPTS \
$EXTRA_JVM_ARGUMENTS \
-classpath "$CLASSPATH" \
-Dapp.name="worker" \
-Dapp.pid="$$" \
-Dapp.repo="$REPO" \
-Dbasedir="$BASEDIR" \
WorkerProcess \
"$@"
The worker sh script is then referenced in the Heroku Procfile:
web: java $JAVA_OPTS -jar target/dependency/webapp-runner.jar --port $PORT target/*.war
worker: sh target/bin/worker
I tested the app locally using heroku local -f Procfile.windows, and all works as expected. However, when I deploy to Heroku (I have tried both Heroku CLI deployment and git), only the web process can be executed properly. The worker process gives back error 127 when I check the log, saying it cannot open the target/bin/worker script. I've tried to google what error 127 means for Heroku, but so far there's no match, and I have no idea why my remotely deployed app cannot open target/bin/worker even though it works fine in my local deployment. Any help will be appreciated.
Below here are the full error logs from Heroku, servlet class, worker class, and POM respectively :
2016-10-31T14:27:32.629768+00:00 heroku[slug-compiler]: Slug compilation finished
2016-10-31T14:27:34.807925+00:00 heroku[web.1]: State changed from down to starting
2016-10-31T14:27:34.941663+00:00 heroku[worker.1]: State changed from down to starting
2016-10-31T14:27:37.334213+00:00 heroku[worker.1]: Starting process with command `sh target/bin/worker`
2016-10-31T14:27:37.825597+00:00 heroku[worker.1]: State changed from starting to up
2016-10-31T14:27:38.100208+00:00 app[worker.1]: sh: 0: Can't open target/bin/worker
2016-10-31T14:27:38.104979+00:00 heroku[worker.1]: State changed from up to crashed
2016-10-31T14:27:38.104979+00:00 heroku[worker.1]: State changed from crashed to starting
2016-10-31T14:27:38.291551+00:00 heroku[web.1]: Starting process with command `java $JAVA_OPTS -jar target/dependency/webapp-runner.jar $WEBAPP_RUNNER_OPTS --port 54481 target/amqpexample-1.0-SNAPSHOT.war`
2016-10-31T14:27:40.725610+00:00 app[web.1]: Setting JAVA_TOOL_OPTIONS defaults based on dyno size. Custom settings will override them.
2016-10-31T14:27:40.732741+00:00 app[web.1]: Picked up JAVA_TOOL_OPTIONS: -Xmx350m -Xss512k -Dfile.encoding=UTF-8
2016-10-31T14:27:41.385588+00:00 app[web.1]: Expanding amqpexample-1.0-SNAPSHOT.war into /app/target/tomcat.54481/webapps/expanded
2016-10-31T14:27:41.385722+00:00 app[web.1]: Adding Context for /app/target/tomcat.54481/webapps/expanded
2016-10-31T14:27:41.817336+00:00 heroku[worker.1]: Starting process with command `sh target/bin/worker`
2016-10-31T14:27:42.172560+00:00 app[web.1]: INFO: Initializing ProtocolHandler ["http-nio-54481"]
2016-10-31T14:27:42.172549+00:00 app[web.1]: Oct 31, 2016 2:27:42 PM org.apache.coyote.AbstractProtocol init
2016-10-31T14:27:42.201060+00:00 app[web.1]: Oct 31, 2016 2:27:42 PM org.apache.tomcat.util.net.NioSelectorPool getSharedSelector
2016-10-31T14:27:42.201064+00:00 app[web.1]: INFO: Using a shared selector for servlet write/read
2016-10-31T14:27:42.205036+00:00 app[web.1]: Oct 31, 2016 2:27:42 PM org.apache.catalina.core.StandardService startInternal
2016-10-31T14:27:42.205038+00:00 app[web.1]: INFO: Starting service Tomcat
2016-10-31T14:27:42.206587+00:00 app[web.1]: Oct 31, 2016 2:27:42 PM org.apache.catalina.core.StandardEngine startInternal
2016-10-31T14:27:42.206588+00:00 app[web.1]: INFO: Starting Servlet Engine: Apache Tomcat/8.0.30
2016-10-31T14:27:42.378327+00:00 heroku[worker.1]: State changed from starting to up
2016-10-31T14:27:42.441771+00:00 app[web.1]: Oct 31, 2016 2:27:42 PM org.apache.catalina.startup.ContextConfig getDefaultWebXmlFragment
2016-10-31T14:27:42.441781+00:00 app[web.1]: INFO: No global web.xml found
2016-10-31T14:27:42.765321+00:00 heroku[web.1]: State changed from starting to up
2016-10-31T14:27:43.507545+00:00 app[worker.1]: sh: 0: Can't open target/bin/worker
2016-10-31T14:27:43.591348+00:00 heroku[worker.1]: Process exited with status 127
2016-10-31T14:27:43.603552+00:00 heroku[worker.1]: State changed from up to crashed
2016-10-31T14:27:44.410603+00:00 app[web.1]: Oct 31, 2016 2:27:44 PM org.apache.jasper.servlet.TldScanner scanJars
2016-10-31T14:27:44.410621+00:00 app[web.1]: INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
2016-10-31T14:27:44.498715+00:00 app[web.1]: Oct 31, 2016 2:27:44 PM org.apache.coyote.AbstractProtocol start
2016-10-31T14:27:44.498718+00:00 app[web.1]: INFO: Starting ProtocolHandler ["http-nio-54481"]
2016-10-31T14:27:47.410949+00:00 heroku[router]: at=info method=GET path="/" host=amqpexample.herokuapp.com request_id=334f7e05-d4e5-4f78-a444-1e6e7403e52c fwd="14.192.210.168" dyno=web.1 connect=1ms service=4305ms status=200 bytes=370
2016-10-31T14:27:50.003897+00:00 heroku[router]: at=info method=GET path="/" host=amqpexample.herokuapp.com request_id=51c9fac7-6763-4374-9f2b-672f0d7f7be7 fwd="14.192.210.168" dyno=web.1 connect=1ms service=21ms status=200 bytes=295
Servlet class (responsible for displaying a buffered message and, when refreshed, waiting for a new message sent from the worker):
package herokutest;
import com.rabbitmq.client.*;
import java.io.IOException;
import java.net.URISyntaxException;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class HelloWorld extends javax.servlet.http.HttpServlet {
private final static String QUEUE_NAME = "hello";
private final static String ANOTHER_QUEUE = "ANOTHER";
public Channel channel;
public Channel anotherChannel;
public DefaultConsumer consumer;
public String message = "Temporary";
public void init(final ServletConfig config) throws ServletException {
try {
createChannel();
} catch (Exception e) {
e.printStackTrace();
}
finally {
consumer = new DefaultConsumer(anotherChannel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
String innerMessage = new String(body, "UTF-8");
message = innerMessage;
}
};
try {
anotherChannel.basicConsume(ANOTHER_QUEUE, true, consumer);
} catch (IOException e) {
e.printStackTrace();
}
}
}
public void createChannel() throws NoSuchAlgorithmException, KeyManagementException, URISyntaxException, IOException {
String uri = System.getenv("CLOUDAMQP_URL");
if (uri == null) uri = "amqp://guest:guest@localhost";
ConnectionFactory factory = new ConnectionFactory();
factory.setUri(uri);
factory.setRequestedHeartbeat(30);
factory.setConnectionTimeout(30);
Connection connection = factory.newConnection();
channel = connection.createChannel();
anotherChannel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
anotherChannel.queueDeclare(ANOTHER_QUEUE, false, false, false, null);
}
protected void doGet(final HttpServletRequest request, final HttpServletResponse response) throws ServletException, IOException {
String test="test";
channel.basicPublish("", QUEUE_NAME, null, test.getBytes());
request.setAttribute("message", message);
request.getRequestDispatcher("/index.jsp").forward(request, response);
}
}
Worker class (responsible for sending the message "Another message" back to the servlet):
import com.rabbitmq.client.*;
import java.io.IOException;
public class WorkerProcess {
private final static String QUEUE_NAME = "hello";
private final static String ANOTHER_QUEUE = "ANOTHER";
static String message;
public static void main(String[] argv) throws Exception {
ConnectionFactory factory = new ConnectionFactory();
String uri = System.getenv("CLOUDAMQP_URL");
if (uri == null) uri = "amqp://guest:guest@localhost";
factory.setUri(uri);
factory.setRequestedHeartbeat(30);
factory.setConnectionTimeout(30);
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
final Channel anotherChannel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
anotherChannel.queueDeclare(ANOTHER_QUEUE, false, false, false, null);
System.out.println(" [*] Connected to Worker");
final Consumer consumer = new DefaultConsumer(channel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
message = new String(body, "UTF-8");
String anotherMessage= "Another message";
anotherChannel.basicPublish("", ANOTHER_QUEUE, null, anotherMessage.getBytes());
}
};
channel.basicConsume(QUEUE_NAME, true, consumer);
}
}
POM :
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>cloudamqp.example</groupId>
<artifactId>amqpexample</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>war</packaging>
<dependencies>
<dependency>
<groupId>com.rabbitmq</groupId>
<artifactId>amqp-client</artifactId>
<version>3.3.4</version>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<version>3.0.1</version>
<scope>provided</scope>
</dependency>
<!-- This is my worker jar included as dependency -->
<dependency>
<groupId>com.example</groupId>
<artifactId>worker</artifactId>
<version>1.0</version>
</dependency>
</dependencies>
<!-- My local repository containing my worker .jar file-->
<repositories>
<repository>
<id>project.local</id>
<name>project</name>
<url>file:${project.basedir}/localrepo</url>
</repository>
</repositories>
<build>
<plugins>
<plugin>
<groupId>com.heroku.sdk</groupId>
<artifactId>heroku-maven-plugin</artifactId>
<version>1.1.1</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.4</version>
<configuration>
<webResources>
<resource>
<directory>src/web</directory>
</resource>
</webResources>
<archiveClasses>false</archiveClasses>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>appassembler-maven-plugin</artifactId>
<version>1.1.1</version>
<configuration>
<assembleDirectory>target</assembleDirectory>
<programs>
<program>
<mainClass>WorkerProcess</mainClass>
<name>worker</name>
</program>
</programs>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>assemble</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.3</version>
<executions>
<execution>
<phase>package</phase>
<goals><goal>copy</goal></goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>com.github.jsimone</groupId>
<artifactId>webapp-runner</artifactId>
<version>8.0.30.2</version>
<destFileName>webapp-runner.jar</destFileName>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
A: This is the important message in the logs:
2016-10-31T14:27:43.507545+00:00 app[worker.1]: sh: 0: Can't open target/bin/worker
Does your maven build generate this target/bin/worker script?
I would not expect the CLI deploy to work because it does not include the target dir by default. But you can include it with the --include option.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40345594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: OpenGLES 2.0 app not working on tablet while working on smartphones - Black screen I have developed a small OpenGL ES 2.0 app which is compatible from Android version 8 to 17.
It basically renders a sphere model with simple texturing in an empty scene. I also keep track of device sensors like accelerometer. All is working fine and cool on a wide variety of android phones (GS2, GS3, GN2, Nexus...).
My simple problem is that I can't get it working on tablets (a GT2 running 4.1.1, for example). The app installs correctly and doesn't crash at all; I just see a black screen instead of my sphere model. The part of the application that doesn't use OpenGL runs perfectly.
I can't believe that OpenGL ES 2.0 is not working on the GT2, nor the sensors or internet connection that I also use. Is there something to check/enable to get it working? Maybe something related to the larger screen size on tablets? I could post a bit of code, but I think the problem is elsewhere...
Thanks for your time !
A: When you use GLES 2.0 on devices you must be careful with indices. On many devices you cannot use an int index, but every device supports unsigned short indices. To render, you must use:
GLushort indices[] = { 0, 1, 2, 0, 2, 3 };
...
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17799079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Run Django Celery Periodically I have the task file tasks.py
@shared_task
def add(x, y):
print(x + y)
return x + y
and celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from celery.schedules import crontab
# setting the Django settings module.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'onboarding.settings')
app = Celery('onboarding',broker='pyamqp://guest@localhost//')
app.config_from_object('django.conf:settings', namespace='CELERY')
# Looks up for task modules in Django applications and loads them
app.autodiscover_tasks()
app.conf.beat_schedule = {
# Executes every Monday morning at 7:30 a.m.
'add-every-1-seconds': {
'task': 'tasks.add',
'schedule': 30,
'args': (16, 16),
},
}
When I run celery -A onboarding beat
and celery -A onboarding worker,
it seems the tasks are being scheduled, but the worker is not executing them. What can the problem be?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71935826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: deserialize EOF Exception, Serialize may write file improperly package thegreatestrpgevermade;
import java.util.HashMap;
import java.lang.reflect.*;
import javax.swing.JPanel;
import javax.swing.JFrame;
import javax.swing.JTextField;
import javax.swing.JButton;
import java.awt.event.ActionListener;
import java.awt.event.ActionEvent;
import java.io.Externalizable;
import javax.swing.Action;
import javax.swing.AbstractAction;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.ObjectInputStream;
import java.io.FileOutputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
public class Room implements Externalizable
{
static Room room;
HashMap<Integer,ParentObject> ParentObjectMap ;
int roomNumber ;
int width ;
int height ;
int idNumber ;
static ActionClass actionClass;
static JFrame creationWindow;
static JPanel creationPanel;
static JButton addObjectButton;
static JButton removeObjectButton;
static JButton finalizeRoomButton;
static JTextField classNameField;
static JTextField xSpawnField;
static JTextField ySpawnField;
static JTextField argument3Field;
static JTextField numberOfArgumentsField;
static JTextField roomNumberField;
static JTextField roomWidthField;
static JTextField roomHeightField;
public Room()
{
HashMap<Integer,ParentObject> ParentObjectMap = new HashMap();;
int roomNumber=0 ;
int width =0 ;
int height =0;
int idNumber =0;
}
static Object StringToObj (String name, Object[] argArray, int numArgs)
{
try
{
Class o = Class.forName(name);
Class argTypes[] = new Class[numArgs];
//System.out.println("A");
for(int I = 0; I<numArgs ; I ++)
{
argTypes[I] = Class.forName(argArray[I].getClass().getName());//.Type;
if (Class.forName(argArray[I].getClass().getName()) == Integer.class)
{
argTypes[I] = Integer.TYPE;
}
//System.out.println(argArray[I].getClass().getName());
//argTypes[1] = Integer.TYPE;
}
Constructor constructor = o.getConstructor(argTypes);
//Object arguments[] = new Object[numArgs];
//System.out.println("C");
//arguments[0] = A;
//arguments[1] = B;
Object newObject = constructor.newInstance(argArray);
//System.out.println("D");
return newObject;
}
catch(ClassNotFoundException cnfe)
{
System.out.println("Class not found exception ");
System.exit(1);
}
catch(NoSuchMethodException nsme)
{
System.out.println("No Such Method Exception");
System.exit(1);
}
catch(InstantiationException instExcep)
{
System.out.println("Instantiation Exception");
System.exit(1);
}
catch(IllegalAccessException iae)
{
System.out.println("Illegal Access Exception");
System.exit(1);
}
catch(InvocationTargetException ite)
{
System.out.println("Invocation target exception");
System.exit(1);
}
//so , sort through an array of all the sub classes of parent object,
//or maybe all the classes in the package, and test their getName
//against the argument until it comes out right, then return that class.
System.out.println("String to Obj is broken");
return null;
}
class ActionClass implements ActionListener
{
public void actionPerformed (ActionEvent e)
{
if(e.getSource() == addObjectButton)
{
Object[] arguments = new Object[Integer.parseInt(numberOfArgumentsField.getText())];
arguments[0] = Integer.parseInt(xSpawnField.getText());
arguments[1] = Integer.parseInt(ySpawnField.getText());
//arguments[2] = idNumber;
ParentObject placeHolder = (ParentObject)StringToObj("thegreatestrpgevermade." +
classNameField.getText(), arguments,Integer.parseInt(numberOfArgumentsField.getText()));
placeHolder.IDNUMBER = idNumber;
idNumber ++;
ParentObjectMap.put(new Integer(idNumber),placeHolder);
System.out.println("idnumber :" +placeHolder.IDNUMBER);
}
if(e.getSource() == removeObjectButton)
{
System.out.println(ParentObjectMap.get(0).spriteDirectory);
System.out.println("removeObjectButton Pressed");
}
if(e.getSource() == finalizeRoomButton)
{
try
{
roomNumber = Integer.parseInt(roomNumberField.getText());
width = Integer.parseInt(roomWidthField.getText());
height = Integer.parseInt(roomHeightField.getText());
FileOutputStream fos = new FileOutputStream("C:\\thegreatestrpgevermade\\room"
+ roomNumber +".ser");
ObjectOutputStream oos = new ObjectOutputStream(fos);
System.out.println("Finalize: room line 152");
oos.writeObject(room);
System.out.println("Finalize: room line 154");
oos.flush();
oos.close();
}
catch(IOException ioe)
{
System.out.println("io exception room");
System.exit(1);
}
}
}
}
public void InitializeCreationMode()
{
room = new Room();
ParentObjectMap = new HashMap();
actionClass = new ActionClass();
creationWindow = new JFrame();
creationPanel = new JPanel();
addObjectButton = new JButton("Add Object");
removeObjectButton = new JButton("Remove Object");
finalizeRoomButton = new JButton("Finalize Room");
classNameField = new JTextField("Class Name",20);
xSpawnField = new JTextField("X Coordinate(arg1)" , 20);
ySpawnField = new JTextField("Y Coordinate(arg2)" , 20);
argument3Field = new JTextField("Argument 3" , 20);
numberOfArgumentsField = new JTextField("Number Of Arguments" , 20);
roomNumberField = new JTextField("Room Number" , 20);
roomWidthField = new JTextField("Room Width" , 20);
roomHeightField = new JTextField("Room Height" , 20);
creationWindow.setSize(300,400);
creationWindow.setResizable(false);
creationWindow.setLocation(0,0);
creationWindow.add(creationPanel);
creationWindow.setDefaultCloseOperation(creationWindow.EXIT_ON_CLOSE);
creationPanel.setSize(300,400);
creationPanel.add(addObjectButton);
creationPanel.add(removeObjectButton);
creationPanel.add(finalizeRoomButton);
creationPanel.add(roomNumberField);
creationPanel.add(roomWidthField);
creationPanel.add(roomHeightField);
creationPanel.add(classNameField);
creationPanel.add(numberOfArgumentsField);
creationPanel.add(xSpawnField);
creationPanel.add(ySpawnField);
creationPanel.add(argument3Field);
addObjectButton.addActionListener(actionClass);
removeObjectButton.addActionListener(actionClass);
finalizeRoomButton.addActionListener(actionClass);
creationWindow.setVisible(true);
//im thinking ID numbers can't stay random.
//so basically, say HERO hits ENEMY. if ENEMY gets hit, it gives
//HERO its ID number which can be put into the hashmap to modify
// ENEMY'S health.
}
public void readExternal(ObjectInput in)
{
try
{
idNumber = in.readInt();
width = in.readInt();
height = in.readInt();
roomNumber = in.readInt();
// ParentObjectMap = (HashMap)in.readObject();
}
catch(IOException ioe)
{
System.out.println(ioe.toString());
ioe.printStackTrace();
}
catch(ClassNotFoundException cnfe)
{
System.out.println("cnfe exception readExternal");
}
}
public void writeExternal(ObjectOutput out)
{
try
{
System.out.println("Checkpoint 1 writeExternal");
out.writeInt(idNumber);
System.out.println("Checkpoint 2 writeExternal: width:" + width);
out.writeObject(width);
System.out.println("Checkpoint 3 writeExternal : width:" + width);
out.writeInt(height);
System.out.println("Checkpoint 4 writeExternal");
out.writeInt(roomNumber);
System.out.println("Checkpoint 5 writeExternal");
//out.writeObject(ParentObjectMap);
}
catch(IOException ioe)
{
System.out.println(ioe.toString());
ioe.printStackTrace();
}
}
/*public static void main(String[] args)
{
room = new Room();
room.InitializeCreationMode();
}*/
}
The other class is MainClass; I'll include the relevant snippet.
FileInputStream fis = new FileInputStream("C:\\thegreatestrpgevermade\\room0.ser");
ObjectInputStream ois = new ObjectInputStream(fis);
room = (Room)ois.readObject();
// room = new Room();
room.InitializeCreationMode();
System.out.println("Room Width: " +room.width);
The stack trace is this:
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at java.io.ObjectInputStream$BlockDataInputStream.readInt(ObjectInputStream.java:2820)
at java.io.ObjectInputStream.readInt(ObjectInputStream.java:971)
at thegreatestrpgevermade.Room.readExternal(Room.java:220)
at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at thegreatestrpgevermade.MainClass.main(MainClass.java:176)
The InitializeCreationMode method creates a JFrame with buttons and fields
to assign values to width, height, etc. The ActionClass listens to its buttons,
and the finalize button serializes the fields. Something is wrong with the serialization:
printing out width at any point gives zero. I have no idea why.
A: Assuming the question is the one stated in your title: yes, the file wasn't written correctly. Note that writeExternal() writes width with writeObject() (a boxed Integer) while readExternal() reads it back with readInt(), so the stream gets out of sync and readInt() eventually hits end-of-file. But you don't need any of this. Change the class to implement Serializable instead of Externalizable, and remove the readExternal() and writeExternal() methods.
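A minimal sketch of that suggested change (not the full Room class — the Swing fields and the ParentObject map are omitted for brevity; the map would serialize too, as long as ParentObject is itself Serializable). The round-trip goes through a byte buffer instead of a file, but the mechanics are the same:

```java
import java.io.*;

// Sketch: implement Serializable, drop readExternal()/writeExternal(),
// and let default serialization handle all non-transient fields.
class Room implements Serializable {
    private static final long serialVersionUID = 1L;
    int roomNumber, width, height, idNumber;
}

public class Main {
    public static void main(String[] args) throws Exception {
        Room room = new Room();
        room.roomNumber = 0;
        room.width = 640;
        room.height = 480;

        // Serialize to a byte buffer (flush before reading the bytes back).
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(room);
        oos.flush();

        // Deserialize: the fields come back without any hand-written I/O.
        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        Room back = (Room) ois.readObject();
        System.out.println("Room Width: " + back.width);  // Room Width: 640
    }
}
```

Because default serialization writes and reads the fields symmetrically, the write/read mismatch that produced the EOFException cannot occur.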
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34982396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: performSegue after completion handler ANSWER BELOW
I'm facing a little issue that I hope you can help me with.
The app I'm working on lets you request content based on your location.
The first ViewController is essentially a form that grabs your location / a specified location, plus some other information to target specific answers.
I need to perform a segue to pass the "question" variables to my second ViewController, where I load "answers" with a query based on the question details.
What is causing me trouble: whenever the question is geolocalized, I can't retrieve the information using prepareForSegue, because it doesn't wait for the geoPoint to be completed. The second controller displays my latitude and longitude as nil.
I see that I can call the "prepareForSegue" method using "performSegueWithIdentifier" and retrieve the information in my second view controller, but it performs the segue twice... How can I trigger the segue only when I'm ready, while preserving the prepareForSegue data I need?
Is there a way to pass variables from one controller to another using performSegue?
Any help would be awesome.
Also, while I don't think the code is relevant for my question, here is the code I use.
geoPointing method
@IBAction func doPostQuestion(sender: UIButton) {
var thereQ:PFObject = PFObject(className: "tquestion")
if(somewhereLabel.text == "my location"){
println("Location is geolocalized")
PFGeoPoint.geoPointForCurrentLocationInBackground {
(geoPoint: PFGeoPoint!, error: NSError!) -> Void in
if error == nil {
self.geoLati = geoPoint.latitude as Double
self.geoLong = geoPoint.longitude as Double
self.performSegueWithIdentifier("goto_results", sender:self) // call prepareForSegue when ready but implies to have a segue done on click... (performed twiced)
}
}
}
self.navigationController?.popToRootViewControllerAnimated(true)
}
prepareForSegue
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
if(segue.identifier == "goto_results"){
// Get Label
let theDestination = (segue.destinationViewController as displayAnswersViewController)
theDestination.lat = self.geoLati
theDestination.lng = self.geoLong
}
}
A: To solve this problem you just need to create your segue from your viewController1 to your viewController2 and not from a button. This way you can trigger prepareForSegue programatically using the "performSegue" method that will call prepareForSegue anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26996822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Makefile gtkmm; collect2.exe: error: ld returned 1 exit status I'm pretty sure the problem lies somewhere in my makefile, as when I make the program, the error in the title points me to line 12, the linker command. I've tinkered with it for a while but can't seem to get anything to work. The following is my code/makefile. I am building this in mingw32.
makefile:
CXXFLAGS := -std=c++11 -Wall -Werror -g $(shell pkg-config gtkmm-3.0 --cflags)
LDLIBS = -lpthread $(shell pkg-config gtkmm-3.0 --libs)
all: test
test: sample.count
sample.count: InIT_Printer_Install_Assistant
./InIT_Printer_Install_Assistant
InIT_Printer_Install_Assistant: main.o win_home.o
g++ $(CXXFLAGS) $(LDLIBS) -o $@ $^ `pkg-config gtkmm-3.0 --cflags --libs`
main.o: main.cpp win_home.h
win_home.o: win_home.cpp win_home.h
clean:
-rm -f *.o *~
spotless: clean
-rm -f InIT_Printer_Install_Assistant
main.cpp:
#include <gtkmm.h>
#include <iostream>
#include "win_home.h"
int main(int argc, char *argv[])
{
auto app = Gtk::Application::create(argc, argv, "com.InIT.PrinterApp");
HomeGUI win_home;
win_home.set_default_size(600,400);
win_home.set_title("InIT Self-Service Printer Management");
return app->run(win_home);
}
win_home.cpp:
#include "win_home.h"
HomeGUI::HomeGUI()
{
//build interface/gui
this->buildInterface();
//retrieve printers
//create printer Buttons
//register Handlers
//this->registerHandlers();
}
void HomeGUI::buildInterface()
{
//combo boxes
/*
Gtk::HBox combo_rowbox = Gtk::HBox(false, 10);
Gtk::ComboBox combobox_department = Gtk::ComboBox(false);
Gtk::ComboBox combobox_building = Gtk::ComboBox(false);
combo_rowbox.pack_start(child, false, false, padding=0)
add(combo_rowbox);
*/
return;
}
win_home.h:
#ifndef GTKMM_INIT_PRINTER_INSTALL_ASSISTANT_H
#define GTKMM_INIT_PRINTER_INSTALL_ASSISTANT_H
#include <vector>
#include <string>
#include <iostream>
#include <gtkmm.h>
class HomeGUI : public Gtk::Window
{
public:
HomeGUI();
virtual ~HomeGUI();
void buildInterface();
void registerHandlers();
//void defaultToFloorPlan();
protected:
//Signal Handlers
//Member variables
std::string m_selected_department;
std::string m_selected_building;
std::string m_selected_floor;
//Member widgets
//std::vector<Gtk::Button> m_printbuttons;
//HelpGUI m_win_help;
//UninstallGUI m_win_uninstall;
//Member logic
//ClientLogic logic;
};
#endif
Result after making:
C:\msys32\home\PrintApplication/win_home.cpp:3: undefined reference to `VTT for HomeGUI'
C:\msys32\home\PrintApplication/win_home.cpp:3: undefined reference to `VTT for HomeGUI'
C:\msys32\home\PrintApplication/win_home.cpp:3: undefined reference to `vtable for HomeGUI'
C:\msys32\home\PrintApplication/win_home.cpp:3: undefined reference to `vtable for HomeGUI'
C:\msys32\home\PrintApplication/win_home.cpp:3: undefined reference to `vtable for HomeGUI'
C:\msys32\home\PrintApplication/win_home.cpp:3: undefined reference to `vtable for HomeGUI'
C:\msys32\home\PrintApplication/win_home.cpp:3: undefined reference to `VTT for HomeGUI'
collect2.exe: error: ld returned 1 exit status
make: *** [Makefile:12: InIT_Printer_Install_Assistant] Error 1
If anyone can enlighten me as to why this is happening, that would be greatly appreciated.
Note: as you can tell, I'm still a noob with makefiles, so feel free to correct me as needed when it comes to general makefile formatting.
A: @Unimportant's comment solved the issue: every declared non-pure virtual member function (including the destructor) must have a definition, otherwise the compiler never emits the vtable and the linker reports undefined vtable/VTT references. I added bodies for the destructor and registerHandlers() in win_home.cpp (the original post said win_home.h, but the file shown below, which includes win_home.h, is the .cpp):
#include "win_home.h"
HomeGUI::HomeGUI()
{
//build interface/gui
this->buildInterface();
//retrieve printers
//create printer Buttons
//register Handlers
//this->registerHandlers();
}
HomeGUI::~HomeGUI()
{
}
void HomeGUI::buildInterface()
{
//combo boxes
/*
Gtk::HBox combo_rowbox = Gtk::HBox(false, 10);
Gtk::ComboBox combobox_department = Gtk::ComboBox(false);
Gtk::ComboBox combobox_building = Gtk::ComboBox(false);
combo_rowbox.pack_start(child, false, false, padding=0)
add(combo_rowbox);
*/
return;
}
void HomeGUI::registerHandlers()
{
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41579419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: iBook page flip transition I'm trying to recreate the iBooks-style page transition for pages in landscape mode in a PDF reader app. I only need the animation, not the touch handling of the iBooks app; the user turns the page with a simple swipe. I have tried different sample code, including Leaves, but I can't find anything simple.
Can anyone help me create this simple animation, or is there a simple way to recreate the iBooks transition?
A: Ole Begemann has done something like this. You can find the project here on GitHub.
Ole also writes a superb blog summary of some of the best developer links and tutorials around. Well worth subscribing to!
A: Look at the UIView documentation for animation types available. Here is what I'd use:
UIViewAnimationOptions animation;
if (pageNumberLower) {
animation = UIViewAnimationOptionTransitionCurlDown;
} else {
animation = UIViewAnimationOptionTransitionCurlUp;
}
[UIView transitionWithView:myChangingView
duration:0.5
options:animation
animations:^{ CHANGE PAGE HERE }
completion:NULL];
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5805038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Display own places on custom Google Map I'd like to create a map of a space system from a computer game.
I understand that you can use the Google Maps API to render your own map with custom tiles and placemarks etc. (which I've done successfully), but I'd really like to be able to see hierarchical place names in the same way that you see New York, Brooklyn, Manhattan, Queens etc. when viewing New York from this zoom level, and Chinatown, East Village, Hudson Square when viewing New York from this zoom level.
I've also had a look at Google Fusion tables, but they appear to be restricted to Earth locations only.
I suppose ideally I'd like a modified version of the google.map.Marker object that displays the name of the marker next to the marker itself and allows specification of the text-size and at what zoom level the marker text appears. But that feels like a hack.
Is this possible using the Google Maps API, or another browser-based mapping system?
EDIT:
D'oh, should have kept Googling. Someone's basically done what I was looking for here.
A: Self-answering the question so it doesn't keep coming up as unanswered.
I've used the code from Uncle Tomm's blog to solve the problem.
I just need a good algorithm for displaying nearby placenames without them overlapping... but that's another question!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11759230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Read JSON list and load it a dataframe My input JSON file has list of objects as below:
[
{
"name": "Hariharan",
"place": "Chennai",
"items": [
{"item_name":"This is a book shelf", "item_level": 1},
{"item_name":"Introduction", "item_level": 1},
{"item_name":"ABCDEF", "item_level": 2},
{"item_name":"grains", "item_level": 3},
{"item_name":"market place", "item_level": 1},
{"item_name":"Vegentables", "item_level": 1},
{"item_name":"Fruits", "item_level": 4},
{"item_name":"EFGHIJ", "item_level": 2},
{"item_name":"Conclusion", "item_level": 1}
],
"descriptions": [
{"item_name": "Books"}
]
}
]
I want to read this JSON file and load the data into a DataFrame.
The DataFrame columns should be name, place, items, and descriptions. Both items and descriptions are lists of objects.
I am able to read the JSON:
import json
with open('data/test.json', 'r', encoding='utf8') as fp:
data = json.load(fp)
print(type(data)) is <class 'list'>
I get an error when I try to load the list into a DataFrame:
df = pd.read_json(data)
ValueError: Invalid file path or buffer object type: <class 'list'>
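(For what it's worth, a sketch of one way around this — an assumption, not from the original thread: pd.read_json expects a file path or JSON string, whereas pd.DataFrame accepts the already-parsed list of dicts directly, keeping the list-valued fields as Python lists in their columns.)

```python
import pandas as pd

# data as json.load would produce it from the file in the question
data = [
    {
        "name": "Hariharan",
        "place": "Chennai",
        "items": [
            {"item_name": "This is a book shelf", "item_level": 1},
            {"item_name": "Introduction", "item_level": 1},
        ],
        "descriptions": [{"item_name": "Books"}],
    }
]

# A list of dicts maps straight onto rows, one column per key.
df = pd.DataFrame(data)
print(list(df.columns))  # ['name', 'place', 'items', 'descriptions']
```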
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74793091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Remove text and link from e-mail I have auto-generated receipts coming to my e-mail (Outlook) that I have built rules to forward to different CSRs based on text in the body of the e-mail.
There is an "unsubscribe" option, and occasionally either our CSRs or their clients click on it, which removes me from getting these e-mails.
Is there a way to remove this string of text along with the link?
The string is as follows:
If you no longer wish to receive emails from Self Service Terminal, please click on the 'Unsubscribe' link below:
Unsubscribe
Thanks
A: Something like this will help:
With Selection.Find
.ClearFormatting
.Text = "If you no longer wish to receive emails from Self Service Terminal, please click on the 'Unsubscribe' link below: Unsubscribe"
.Replacement.ClearFormatting
.Replacement.Text = ""
.Execute Replace:=wdReplaceAll, Forward:=True, _
Wrap:=wdFindContinue
End With
See Om3rs link in the comment for more information on this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39088799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How can I set Crontab for a specific hour? I'd like to set my cron job to run at a specific hour.
In particular, I'd like it to run at 1 PM and 7 PM every day of the year. How can I do that?
I wrote the two lines below:
0 13 * * * /usr/bin/php path/myphp.php
0 19 * * * /usr/bin/php path/myphp.php
but neither works! Can someone help me?
A: 0 13,19 * * * /usr/bin/php path/myphp.php should work, check your log / user mail for errors.
A: Keep in mind that there's a difference in format between a user's crontab (accessed with the command contab -e or what have you) and the system's crontab, managed in files like /etc/cron.d and others.
In a user's personal crontab, the format you used should work.
In the system crontab, (if you place a new file or edit anything under /etc), make sure you specify the user name to run as before the command, for example:
0 13,19 * * * www-data /usr/bin/php path/myphp.php
Will run the command /usr/bin/php path/myphp.php as user www-data every day at 13:00 and 19:00.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18670906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: PHP key => value array find prev and next entry I have an array with key => value and I want to receive the previous and next entry for a given array key.
Example:
$array = array(
'one' => 'first',
'two' => 'second',
'three' => '3rd',
'four' => '4th'
)
When I have the given key 'two' I want to receive the entries $array['one'] and $array['three']; is there any nice non-foreach solution?
A: The two functions you are looking for are next() and prev(). Native PHP functions for doing exactly what you are after:
$previousPage = prev($array);
$nextPage = next($array);
These functions move the internal pointer so, for example if you are on $array['two'] and use prev($array) then you are now on $array['one']. What I'm getting at is if you need to get one and three then you need to call next() twice.
A: $array = array(
'one' => 'first',
'two' => 'second',
'three' => '3rd',
'four' => '4th'
);
function getPrevNext($haystack,$needle) {
$prev = $next = null;
$aKeys = array_keys($haystack);
$k = array_search($needle,$aKeys);
if ($k !== false) {
if ($k > 0)
$prev = array($aKeys[$k-1] => $haystack[$aKeys[$k-1]]);
if ($k < count($aKeys)-1)
$next = array($aKeys[$k+1] => $haystack[$aKeys[$k+1]]);
}
return array($prev,$next);
}
var_dump(getPrevNext($array,'two'));
var_dump(getPrevNext($array,'one'));
var_dump(getPrevNext($array,'four'));
A: You can try to whip something up with an implementation of an SPL CachingIterator.
A: You could define your own class that handles basic array operations:
Here is an example posted by adityabhai [at] gmail com [Aditya Bhatt] 09-May-2008 12:14 on php.net
<?php
class Steps {
private $all;
private $count;
private $curr;
public function __construct () {
$this->count = 0;
}
public function add ($step) {
$this->count++;
$this->all[$this->count] = $step;
}
public function setCurrent ($step) {
reset($this->all);
for ($i=1; $i<=$this->count; $i++) {
if ($this->all[$i]==$step) break;
next($this->all);
}
$this->curr = current($this->all);
}
public function getCurrent () {
return $this->curr;
}
public function getNext () {
self::setCurrent($this->curr);
return next($this->all);
}
public function getPrev () {
self::setCurrent($this->curr);
return prev($this->all);
}
}
?>
Demo Example:
<?php
$steps = new Steps();
$steps->add('1');
$steps->add('2');
$steps->add('3');
$steps->add('4');
$steps->add('5');
$steps->add('6');
$steps->setCurrent('4');
echo $steps->getCurrent()."<br />";
echo $steps->getNext()."<br />";
echo $steps->getPrev()."<br />";
$steps->setCurrent('2');
echo $steps->getCurrent()."<br />";
echo $steps->getNext()."<br />";
echo $steps->getPrev()."<br />";
?>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5376424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: WPF ListBoxItem Visibility and ScrollBar I was hoping to collapse certain ListBoxItems based on a property of their data context.
I came up with the following (trimmed for brevity)
<ListBox ItemsSource="{Binding SourceColumns}">
<ListBox.ItemContainerStyle>
<Style TargetType="{x:Type ListBoxItem}">
<Style.Triggers>
<DataTrigger Binding="{Binding IsDeleted}" Value="True">
<Setter Property="Visibility" Value="Collapsed"/>
</DataTrigger>
</Style.Triggers>
</Style>
</ListBox.ItemContainerStyle>
<ListBox.ItemTemplate>
<DataTemplate>
<TextBlock VerticalAlignment="Center" Margin="5,0" Text="{Binding ColumnName}"/>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
This "works" in that it does collapse the listboxitems that are marked as "IsDeleted", however the vertical scrollbar does not adjust for the "missing" items. As I'm scrolling, all of a sudden the bar gets bigger and bigger (without moving) until I scroll past the point of the hidden items, and then finally starts to move.
I also tried explicitly setting the height and width to 0 as well in the data trigger, to no avail.
Does anyone know if there's a workaround for this?
A: Enter CollectionViewSource
One thing you can do is connect your ListBox to your items through a CollectionViewSource.
What you do is create the collectionViewSource in XAML:
<Window.Resources>
<CollectionViewSource x:Key="cvsItems"/>
</Window.Resources>
Connect to it in your CodeBehind or ViewModel
Dim cvsItems as CollectionViewSource
cvsItems = MyWindow.FindResource("cvsItems")
and set its Source property to your collection of items.
cvsItems.Source = MyItemCollection
Then you can do filtering on it. The collectionViewSource maintains all of the items in the collection, but alters the View of those items based on what you tell it.
Filtering
To filter, create a CollectionView using your CollectionViewSource:
Dim MyCollectionView as CollectionView = cvsItems.View
Next write a filtering function:
Private Function FilterDeleted(ByVal item As Object) As Boolean
Dim MyObj = CType(item, MyObjectType)
If MyObj.Deleted = True Then Return False Else Return True
End Function
Finally, write something that makes the magic happen:
MyCollectionView.Filter = New Predicate(Of Object)(AddressOf FilterDeleted)
I usually have checkboxes or Radiobuttons in a hideable expander that lets me change my filtering options back and forth. Those are bound to properties each of which runs the filter function which evaluates all the filters and then returns whether the item should appear or not.
Let me know if this works for you.
Edit:
I almost forgot:
<ListBox ItemsSource="{Binding Source={StaticResource cvsItems}}"/>
A: The answer is to set VirtualizingStackPanel.IsVirtualizing="False" on your ListBox.
Why don't my listboxitems collapse?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6655968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: keycode for exclamation mark with adb shell input How can I enter a "!" via adb? I can't find the keycode anywhere, and it isn't coming across when I use adb shell input text !
adb shell input keycode <???>
A: you need to escape the following characters:
( ) < > | ; & * \ ~ " '
'escaping' means putting a backslash ( \ ) before the offending character.
space is escaped by using %s
adb shell input text \!
There is no keycode for !.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18468260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: FileNotFoundError when using django.test.client.post I had some Django test code that used to work in Python 2 but started throwing errors in Python 3. Here's a snippet:
def test_filter(self):
c1 = dt.Client()
c1.login(username='user1', password='pass')
f = open('data/tiny.txt', 'rb')
test_file = {
'datafile': f,
'uid': 'bb',
'private': 'True',
'name': "tr1"
}
response = c1.post(
'/api/v1/files',
test_file,
format='multipart'
)
When I run this using python3 I get the following Exception:
Exception ignored in: <bound method _TemporaryFileCloser.__del__ of <tempfile._TemporaryFileCloser object at 0x115904160>>
Traceback (most recent call last):
File "/Users/pete/miniconda3/envs/cenv3/lib/python3.6/tempfile.py", line 450, in __del__
self.close()
File "/Users/pete/miniconda3/envs/cenv3/lib/python3.6/tempfile.py", line 446, in close
unlink(self.name)
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/yb/3jwdnlf17gd1dcq65th786d00000gn/T/tmpw_t0j9bw.upload'
Interestingly, this exception doesn't affect the outcome of the tests. They still pass. It does get reported in the output and is rather annoying.
Any ideas on what's causing this? Creating the object with a file using Object.create works fine and doesn't throw an error.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46569172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Python regex with newlines doesn't match I have a file which contains
Line1
Line2
Line3
Line4
and in a Python program I am searching for
Line1
Line2
Line3
The program is
import re
file = open("blah.log","r")
file_contents = file.read()
pattern='''Line1
Line2
Line3'''
matchObj = re.search(pattern, file_contents, re.M|re.I)
if matchObj:
print matchObj.group(0)
else:
print "No match!!"
However, it reports no match even though the pattern is in the file.
But if the
file_contents = '''Line1
Line2
Line3
Line4''' # not reading from the file
Now it matches with regex pattern.
What is the reason for this?
How can I make the program work when reading the contents from the file?
A: Since the lines in your file are delimited by '\r\n', the pattern you search for should account for that.
For convenience, you can still use triple quotes to initialize the string you want to search for, but then use the str.replace() method to replace all occurrences of '\n' with '\r\n':
pattern='''Line1
Line2
Line3'''.replace('\n', '\r\n')
Furthermore, if all you need is a substring match, you can use the in operator instead of the more costly regex match:
if pattern in file_contents:
print pattern
else:
print "No match!!"
A: The newline character in a file can be '\n', '\r', or '\r\n', depending on the OS. To be on the safe side, match all newline variants:
pattern='''Line1(\n|\r|\r\n)Line2(\n|\r|\r\n)Line3'''
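A quick self-contained check of that idea (a sketch; note the alternation still matches '\r\n' correctly because the regex engine backtracks from the lone '\r' when the following literal fails):

```python
import re

# Simulate file contents saved with Windows-style line endings.
file_contents = "Line1\r\nLine2\r\nLine3\r\nLine4"

# One newline alternative between each line handles \n, \r, or \r\n.
pattern = r"Line1(\n|\r|\r\n)Line2(\n|\r|\r\n)Line3"

match = re.search(pattern, file_contents)
print(match is not None)  # True
```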
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53617360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Swagger Codegen: Inheritance and Composition not working as expected I have the following short YAML:
# Transaction Request object with minimal information that we need
Parent:
required:
- a
- b
- c
properties:
a:
type: number
format: double
b:
type: string
c:
type: string
# Full transaction
Child:
required:
- a
- b
- c
allOf:
- $ref: "#/definitions/Parent"
properties:
date:
type: string
format: date-time
state:
type: string
enum:
- 1
- 2
- 3
In Swagger UI and Editor these objects show up as I wish them to: Child inherits the a, b, and c fields from Parent and has a few additional ones.
I would have expected:
public class Parent {
private Double a;
private String b;
private String c;
...}
and
public class Child extends Parent {
// Superclass fields as well as:
private Date date;
private enum State {...};
...}
However, while the Parent class looked as expected, my Child class consisted of the following:
public class Child {
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
Child child = (Child) o;
return true;
}
... }
It even lacks extends. When using a discriminator it works, but I don't really want polymorphism, just plain inheritance. How can I accomplish this with Swagger Codegen?
Relevant pom.xml entry:
<plugin>
<groupId>io.swagger</groupId>
<artifactId>swagger-codegen-maven-plugin</artifactId>
<version>2.2.2-SNAPSHOT</version>
<configuration>
<inputSpec>${project.basedir}/src/main/resources/test.yaml</inputSpec>
<language>jaxrs-resteasy</language>
<output>${project.build.directory}/generated-sources/payment</output>
<configOptions>
<sourceFolder>src/java/main</sourceFolder>
<dateLibrary>java8</dateLibrary>
</configOptions>
<groupId>net.product</groupId>
<artifactId>product_api</artifactId>
<modelPackage>net.product.product_api.model</modelPackage>
<invokerPackage>net.product.product_api</invokerPackage>
<apiPackage>net.product.product_api</apiPackage>
</configuration>
<executions>
<execution>
<id>generate-server-stubs</id>
<goals>
<goal>generate</goal>
</goals>
<configuration>
</configuration>
</execution>
</executions>
</plugin>
A: Cat:
allOf:
- $ref: "#/definitions/Animal"
- type: "object"
properties:
declawed:
type: "boolean"
Animal:
type: "object"
required:
- "className"
discriminator: "className"
properties:
className:
type: "string"
color:
type: "string"
default: "red"
You need to add the following to the parent class:
required:
- "className"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39578692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: List content being ignored I have a list of calendar events. The HTML looks like this:
<li data-id="1">
<a href="#calendar-item/1">
<div class="calendar" style="">
<div class="calendar-header"></div>
<div class="calendar-month">Dec</div>
<div class="calendar-day">11</div>
</div>
<p>Parents Association Non-Uniform Day</p>
<span class="chevron"></span>
</a>
</li>
I have given the list item padding, but it is ignoring the content of the div tag, see the image:
Here is the jsfiddle.
A: It works in Firefox for me, but you definitely need to clear your float. The easiest way to do that is using overflow: hidden on the list item, so it takes the space of the floating icon and wraps its padding around that instead of just the text next to it.
A: Try this, it may solve your problem.
CSS
Give float:left to the below class:
li p:nth-of-type(1) {float:left;}
And give float:left to the below class:
li{float:left;}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20398894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Deadlock found when trying to get lock I am facing this exception very often.
2015-Jan-06 14:24:59.167 (SEVERE) deadlock found when trying to get
lock [0] System.Exception Message = deadlock found when trying to get
lock Source = Data.ResultAccumulator
My application is a VB.Net application and I am using the MySQL Connector for .NET, version 6.8.3.
While doing a bulk update using DataTable in DataAdapter.Update method I am getting this exception.
The whole update process is running in Transactions.
I found various threads about this exception in which users suggested analyzing the InnoDB status. So I have taken the status after getting this exception. But I don't know how to analyze it. Here's the InnoDB status.
=====================================
2015-01-06 14:25:07 2ec INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 29 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 7208 srv_active, 0 srv_shutdown, 74547 srv_idle
srv_master_thread log flush and writes: 81579
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 3583
OS WAIT ARRAY INFO: signal count 4766
Mutex spin waits 3514, rounds 53287, OS waits 1608
RW-shared spins 2281, rounds 46994, OS waits 1262
RW-excl spins 1304, rounds 36969, OS waits 674
Spin rounds per wait: 15.16 mutex, 20.60 RW-shared, 28.35 RW-excl
------------
TRANSACTIONS
------------
Trx id counter 1859662
Purge done for trx's n:o < 1859649 undo n:o < 0 state: running but idle
History list length 1653
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 1859651, not started
MySQL thread id 127, OS thread handle 0xb00, query id 140079 N0414LEDF0275 10.0.52.40 opkeyapi cleaning up
---TRANSACTION 1859446, not started
MySQL thread id 98, OS thread handle 0xf9c, query id 140081 localhost 127.0.0.1 root cleaning up
---TRANSACTION 0, not started
MySQL thread id 97, OS thread handle 0x1c28, query id 140080 localhost 127.0.0.1 root cleaning up
---TRANSACTION 1859658, not started
MySQL thread id 107, OS thread handle 0x1f78, query id 140109 N0414LEDF0240 10.0.52.129 opkeyapi cleaning up
---TRANSACTION 1859652, not started
MySQL thread id 102, OS thread handle 0xed4, query id 140085 N0414LEDF0240 10.0.52.129 opkeyapi cleaning up
---TRANSACTION 0, not started
MySQL thread id 90, OS thread handle 0x1f60, query id 81724 localhost 127.0.0.1 root cleaning up
---TRANSACTION 1859634, not started
MySQL thread id 46, OS thread handle 0x159c, query id 140015 N0414LEDF0065 10.0.52.133 opkeyapi cleaning up
---TRANSACTION 1859633, not started
MySQL thread id 45, OS thread handle 0x1464, query id 140011 N0414LEDF0065 10.0.52.133 opkeyapi cleaning up
---TRANSACTION 1859630, not started
MySQL thread id 42, OS thread handle 0x1c00, query id 139999 N0414LEDF0250 10.0.52.128 opkeyapi cleaning up
---TRANSACTION 1859629, not started
MySQL thread id 41, OS thread handle 0x1768, query id 139995 N0414LEDF0250 10.0.52.128 opkeyapi cleaning up
---TRANSACTION 1859649, not started
MySQL thread id 38, OS thread handle 0x1ac8, query id 140071 N0414LEDF0085 10.0.52.63 opkeyapi cleaning up
---TRANSACTION 1859650, not started
MySQL thread id 36, OS thread handle 0x17d8, query id 140075 N0414LEDF0085 10.0.52.63 opkeyapi cleaning up
---TRANSACTION 1859661, ACTIVE 0 sec
39 lock struct(s), heap size 2496, 100 row lock(s), undo log entries 224
MySQL thread id 126, OS thread handle 0x2ec, query id 140356 N0414LEDF0275 10.0.52.40 opkeyapi init
show engine innodb status
Trx read view will not see trx with id >= 1859662, sees < 1859662
--------
FILE I/O
--------
I/O thread 0 state: wait Windows aio (insert buffer thread)
I/O thread 1 state: wait Windows aio (log thread)
I/O thread 2 state: wait Windows aio (read thread)
I/O thread 3 state: wait Windows aio (read thread)
I/O thread 4 state: wait Windows aio (read thread)
I/O thread 5 state: wait Windows aio (read thread)
I/O thread 6 state: wait Windows aio (write thread)
I/O thread 7 state: wait Windows aio (write thread)
I/O thread 8 state: wait Windows aio (write thread)
I/O thread 9 state: wait Windows aio (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
22934 OS file reads, 39569 OS file writes, 9117 OS fsyncs
3.07 reads/s, 16384 avg bytes/read, 0.55 writes/s, 0.38 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 36, seg size 38, 2078 merges
merged operations:
insert 16494, delete mark 25, delete 2
discarded operations:
insert 0, delete mark 0, delete 0
Hash table size 1602143, node heap has 366 buffer(s)
53.93 hash searches/s, 61.00 non-hash searches/s
---
LOG
---
Log sequence number 4140114089
Log flushed up to 4139522989
Pages flushed up to 4138539042
Last checkpoint at 4138539042
0 pending log writes, 0 pending chkp writes
5282 log i/o's done, 0.17 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 410091520; in additional pool allocated 0
Dictionary memory allocated 608237
Buffer pool size 24704
Free buffers 1024
Database pages 23314
Old database pages 8586
Modified db pages 249
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 9750, not young 279685
0.59 youngs/s, 19.55 non-youngs/s
Pages read 22848, created 10791, written 32759
3.07 reads/s, 3.28 creates/s, 0.31 writes/s
Buffer pool hit rate 993 / 1000, young-making rate 1 / 1000 not 50 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 23314, unzip_LRU len: 0
I/O sum[15]:cur[77], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread id 2656, state: sleeping
Number of rows inserted 21162, updated 2520, deleted 131, read 48116841
7.48 inserts/s, 0.31 updates/s, 0.00 deletes/s, 53.38 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================
Can anybody show me the way to debug this problem, So that I can find a fix for it.
A: The status in your question doesn't contain any deadlock information, so you haven't taken it just after the deadlock. It could be that the server had been restarted before you took the status.
The status must contain a section that looks like this example:
------------------------
LATEST DETECTED DEADLOCK
------------------------
2015-01-06 11:47:02 da8
*** (1) TRANSACTION:
TRANSACTION 24103246, ACTIVE 16 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 376, 2 row lock(s)
MySQL thread id 3, OS thread handle 0xde8, query id 102 localhost 127.0.0.1 test
updating
update test set test=1 where test=1
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 489 page no 3 n bits 80 index `PRIMARY` of table `test`.`t
est` trx id 24103246 lock_mode X locks rec but not gap waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
0: len 4; hex 80000001; asc ;;
1: len 6; hex 0000016fc940; asc o @;;
2: len 7; hex a80000026e0110; asc n ;;
*** (2) TRANSACTION:
TRANSACTION 24103245, ACTIVE 63 sec starting index read, thread declared inside
InnoDB 5000
mysql tables in use 1, locked 1
3 lock struct(s), heap size 376, 2 row lock(s)
MySQL thread id 4, OS thread handle 0xda8, query id 103 localhost 127.0.0.1 test
updating
update test set test=4 where test=4
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 489 page no 3 n bits 80 index `PRIMARY` of table `test`.`t
est` trx id 24103245 lock_mode X locks rec but not gap
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
0: len 4; hex 80000001; asc ;;
1: len 6; hex 0000016fc940; asc o @;;
2: len 7; hex a80000026e0110; asc n ;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 489 page no 3 n bits 80 index `PRIMARY` of table `test`.`t
est` trx id 24103245 lock_mode X locks rec but not gap waiting
Record lock, heap no 5 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
0: len 4; hex 80000004; asc ;;
1: len 6; hex 0000016fc940; asc o @;;
2: len 7; hex a80000026e0137; asc n 7;;
*** WE ROLL BACK TRANSACTION (2)
------------
TRANSACTIONS
------------
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27795862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How to make one component of two different but similar by passing a value? I have a menu that routes clicking to a named outlet presenting submenus. Each submenu is a separate component. After a clean-up I noticed that the only difference between those submenus is a single index to pick captions and urls from. Naturally, I'd like to get rid of all but one such submenu component to lower the complexity of my project.
HTML
<div class="nav-sub">
<div *ngFor="let menu of menus;"
(click)="onClick(menu,menu.link)"
[ngClass]="{'active':menu.active}"
routerLink="{{menu.link}}">{{menu.caption}}</div>
</div>
TS
@Component({ selector: "submenu1", templateUrl: "./submenu1.component.html", ... })
export class Submenu1Component implements OnInit {
constructor() { }
ngOnInit() { }
onClick(target, link) { this.menus.forEach(_ => _.active = _ === target); }
private menus = Settings.menus[1].subs;
}
Routing
const routes: Routes = [
{ path: "submenu1", component: Submenu1Component, outlet: "menus" },
{ path: "submenu2", component: Submenu2Component, outlet: "menus" },
{ path: "linky-from-submenu-1a", component: SomeComponent1a },
{ path: "linky-from-submenu-1b", component: SomeComponent1b },
...
];
Is it possible? How?
I've tried to play around with using different paths and trying to make the router tell me the ID targeted but the complexity grew dramatically and I suspect that I was doing more harm than good (i.e. increasing, not lowering, the complexity).
I also made some other attempts, but I dislike the general idea there (i.e. a global holder of info about what's been clicked, imposing the right texts/links on the submenu bar). Using the same HTML template but a different TS file was a good idea for decreasing the number of files, but still... It feels like there's an opportunity to gain skills by doing it the proper way.
A: You could use the ActivatedRoute service and subscribe to the url observable. Based on the value you get there you decide on what data to load.
Another approach would be to use params in the path, so you would have something like 'submenu/1' and 'submenu/2' where 1 and 2 are an 'id' parameter and in the activated route you read them from the params Observable. In this way it's more easily extendable and you could use something more meaningful for the param values.
Routes config:
const routes: Routes = [
{ path: "submenu/:menuId", component: SubmenuComponent, outlet: "menus" },
...
];
And then in the SubmenuComponent you can handle the OnInit lifecycle event and fetch the param:
ngOnInit() {
this.route.params
.subscribe(({ menuId }) => { /* load data based on menuId */ });
}
this.route is an instance of the ActivatedRoute service which you can inject in constructor:
constructor(private route: ActivatedRoute) { }
A: TS
import { Component, OnInit } from "@angular/core";
import { Router, Event, NavigationEnd } from "@angular/router";
import { SubMenu, Settings } from "../navbar.model";
@Component({ ... })
export class SubmenuComponent implements OnInit {
constructor(private router: Router) {
router.events.subscribe((event: Event) => {
if (event instanceof NavigationEnd) {
let url : string = router.url;
let index : number = getIndexFromUrl(url);
this.menus = Settings.menus[index].subs;
}
});
}
ngOnInit() { }
private menus: SubMenu[];
}
Routing
const routes: Routes = [
{ path: "submenu1", component: SubmenuComponent, outlet: "menus" },
{ path: "submenu2", component: SubmenuComponent, outlet: "menus" },
...
];
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45802386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Set the name for each ParallelFor iteration in KFP v2 on Vertex AI I am currently using kfp.dsl.ParallelFor to train 300 models. It looks something like this:
...
models_to_train_op = get_models()
with dsl.ParallelFor(models_to_train_op.outputs["data"], parallelism=100) as item:
prepare_data_op = prepare_data(item)
    train_model_op = train_model(prepare_data_op.outputs["train_data"])
...
Currently, the iterations in Vertex AI are labeled in a dropdown as something like for-loop-worker-0, for-loop-worker-1, and so on. For tasks (like prepare_data_op, there's a function called set_display_name. Is there a similar method that allows you to set the iteration name? It would be helpful to relate them to the training data so that it's easier to look through the dropdown UI that Vertex AI provides.
A: I reached out to a contact I have at Google. They recommended that you can pass the list that is passed to ParallelFor to set_display_name for each 'iteration' of the loop. When the pipeline is compiled, it'll know to set the corresponding iteration.
# Create component that returns a range list
model_list_op = model_list(n_models)
# Parallelize jobs
with dsl.ParallelFor(model_list_op.outputs["model_list"], parallelism=100) as x:
    x.set_display_name(str(model_list_op.outputs["model_list"]))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73052584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Open a second winform asynchronously but still behave as a child to the parent form? I am creating an application and I would like to implement a progress window that appears when a lengthy process is taking place.
I've created a standard Windows Forms project in which I've built my app using the default form. I've also created a new form to use as a progress window.
The problem arises when I open the progress window (in a function) using:
ProgressWindow.ShowDialog();
When this command is encountered, the focus is on the progress window, and I assume it's now the window whose main loop is being processed for events. The downside is that it blocks the execution of my lengthy operation in the main form.
If I open the progress window using:
ProgressWindow.Show();
Then the window opens correctly and doesn't block the execution of the main form, but it doesn't act as a child (modal) window should, i.e. it allows the main form to be selected, is not centered on the parent, etc.
Any ideas how I can open a new window but continue processing in the main form?
A: I implemented something very similar to this for another project. This form allows you to popup a modal dialog from within a worker thread:
public partial class NotificationForm : Form
{
public static SynchronizationContext SyncContext { get; set; }
public string Message
{
get { return lblNotification.Text; }
set { lblNotification.Text = value; }
}
public bool CloseOnClick { get; set; }
public NotificationForm()
{
InitializeComponent();
}
public static NotificationForm AsyncShowDialog(string message, bool closeOnClick)
{
if (SyncContext == null)
throw new ArgumentNullException("SyncContext",
"NotificationForm requires a SyncContext in order to execute AsyncShowDialog");
NotificationForm form = null;
//Create the form synchronously on the SyncContext thread
SyncContext.Send(s => form = CreateForm(message, closeOnClick), null);
//Call ShowDialog on the SyncContext thread and return immediately to calling thread
SyncContext.Post(s => form.ShowDialog(), null);
return form;
}
public static void ShowDialog(string message)
{
//Perform a blocking ShowDialog call in the calling thread
var form = CreateForm(message, true);
form.ShowDialog();
}
private static NotificationForm CreateForm(string message, bool closeOnClick)
{
NotificationForm form = new NotificationForm();
form.Message = message;
form.CloseOnClick = closeOnClick;
return form;
}
public void AsyncClose()
{
SyncContext.Post(s => Close(), null);
}
private void NotificationForm_Load(object sender, EventArgs e)
{
}
private void lblNotification_Click(object sender, EventArgs e)
{
if (CloseOnClick)
Close();
}
}
To use, you'll need to set the SyncContext from somewhere in your GUI thread:
NotificationForm.SyncContext = SynchronizationContext.Current;
A: Another option:
Use ProgressWindow.Show() & implement the modal-window behavior yourself. parentForm.Enabled = false, position the form yourself, etc.
A: You probably start your lengthy operation in a separate worker thread (e.g. using a background worker). Then show your form using ShowDialog() and on completion of the thread close the dialog you are showing.
Here is a sample - in this I assume that you have two forms (Form1 and Form2). On Form1 I pulled a BackgroundWorker from the Toolbox. Then I connected the RunWorkerComplete event of the BackgroundWorker to an event handler in my form. Here is the code that handles the events and shows the dialog:
public partial class Form1 : Form
{
public Form1() {
InitializeComponent();
}
private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e) {
Thread.Sleep(5000);
e.Result = e.Argument;
}
private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) {
var dlg = e.Result as Form2;
if (dlg != null) {
dlg.Close();
}
}
private void button1_Click(object sender, EventArgs e) {
var dlg = new Form2();
this.backgroundWorker1.RunWorkerAsync(dlg);
dlg.ShowDialog();
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2435752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: R Hadoop counting I'm new to R, and I have a problem with MapReduce rmr2. I have a file to read of this kind, where each row contains a date and some words (A, B, C, ...):
2016-05-10, A, B, C, A, R, E, F, E
2016-05-18, A, B, F, E, E
2016-06-01, A, B, K, T, T, E, G, E, A, N
2016-06-03, A, B, K, T, T, E, F, E, L, T
and I want to obtain as output something like:
2016-05: A 3
2016-05: E 4
2016-05: E 4
I've done the same task with a Java implementation; now I have to do the same in R code, but I have to figure out how to write my reducer. Also, is there a way to do some printing inside my mapper and reducer code? Using the print command inside the mapper or reducer, I get an error in RStudio.
Sys.setenv(HADOOP_STREAMING = "/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.8.0.jar")
Sys.setenv(HADOOP_HOME = "/usr/local/hadoop/bin/hadoop")
Sys.setenv(HADOOP_CMD = "/usr/local/hadoop/bin/hadoop")
library(stringr)
library(rmr2)
library(stringi)
customMapper = function(k,v){
#words = unlist(strsplit(v,"\\s"))
#words = unlist(strsplit(v,","))
tmp = unlist(stri_split_fixed(v, pattern= ",",n = 2))
data = tmp[1]
onlyYearMonth = unlist(stri_split_fixed(data, pattern= "-",n = 3))
#print(words)
words = unlist(strsplit(tmp[2],","))
compositeK = paste(onlyYearMonth[1],"-",onlyYearMonth[2])
keyval(compositeK,words)
}
customReducer = function(k,v) {
#Here there are all the value with same date ???
elementsWithSameDate = unlist(v)
#defining something similar to java Map to use for counting elements in same date
# myMap
for(elWithSameDate in elementsWithSameDate) {
words = unlist(strsplit(elWithSameDate,","))
for(word in words) {
compositeNewK = paste(k,":",word)
# if myMap contains compositeNewK
# myMap (compositeNewK, 1 + myMap.getValue(compositeNewK))
# else
#myMap (compositeNewK, 1)
}
}
#here i want to transorm myMap in a String, containing the first 3 words with max occurrencies
#fromMapToString = convert(myMap)
keyval(k,fromMapToString)
}
wordcount = function(inputData,outputData=NULL){
mapreduce(input = inputData,output = outputData,input.format = "text",map = customMapper,reduce = customReducer)
}
hdfs.data = file.path("/user/hduser","folder2")
hdfs.out = file.path("/user/hduser","output1")
result = wordcount(hdfs.data,hdfs.out)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45020540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Best way to display an array of images to the canvas in HTML5? As the title states, I'm trying to put a series of images into an array and then draw them onto the canvas in HTML5. I'm getting no errors, but nothing is showing up. I'm new to HTML5, so I am having a bit of trouble.
Here's what I have:
var canvas = document.querySelector('Canvas');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
var ctx = canvas.getContext('2d');
function drawImagesToCanvas() {
var imageArray = new Array();
imageArray[0] = "graphics/1.png";
imageArray[1] = "graphics/2.png";
imageArray[2] = "graphics/3.png";
imageArray[3] = "graphics/4.png";
imageArray[4] = "graphics/5.png";
imageArray[5] = "graphics/6.png";
imageArray[6] = "graphics/7.png";
imageArray[7] = "graphics/8.png";
imageArray[8] = "graphics/9.png";
imageArray[9] = "graphics/10.png";
ctx.drawImage(imageArray[0], 120, 280, 220, 150);
}
I thought it may be the file path but I've tried multiple variations.
A: You don't feed a source URL to drawImage; it wants the image itself. This can be read from a DOM element or by constructing an Image object in code, as shown here:
var canvas = document.querySelector('Canvas');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
var ctx = canvas.getContext('2d');
function drawImagesToCanvas() {
var imageArray = new Array();
imageArray[0] = "https://placehold.it/220x150";
// ...
var img = new Image();
img.onload = function() {
ctx.drawImage(img, 20, 80, 220, 150);
};
img.src = imageArray[0];
}
drawImagesToCanvas();
<canvas></canvas>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49239352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Multiple subnets in a GCP network Subnets are regional resources, while networks are global resources. I am doing a Google lab and I have this doubt.
There is this kind of network:
networkA with subnet-a and subnet-b both in region us-central1
How is it possible?
A: I can see no issue with such configuration.
Please have a look at the documentation Networks and subnets:
Each VPC network consists of one or more useful IP range partitions called subnets. Each subnet is associated with a
region.
and
A network must have at least one subnet before you can use it. Auto
mode VPC networks create subnets in each region automatically. Custom
mode VPC networks start with no subnets, giving you full control over
subnet creation. You can create more than one subnet per region.
So, accordingly to the documentation, it's possible to have a network test-network with two subnets subnet-a and subnet-b both in same region us-central1, for example:
$ gcloud compute networks create test-network --subnet-mode=custom --mtu=1460 --bgp-routing-mode=regional
$ gcloud compute networks subnets create subnet-a --range=10.0.1.0/24 --network=test-network --region=us-central1
$ gcloud compute networks subnets create subnet-b --range=10.0.2.0/24 --network=test-network --region=us-central1
$ gcloud compute networks list
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
test-network CUSTOM REGIONAL
$ gcloud compute networks subnets list
NAME REGION NETWORK RANGE
subnet-a us-central1 test-network 10.0.1.0/24
subnet-b us-central1 test-network 10.0.2.0/24
In addition have a look at the documentation section Communication within the network:
Except for the default network, you must explicitly create higher
priority ingress firewall rules to allow instances to communicate with
one another. The default network includes several firewall rules in
addition to the implied ones, including the default-allow-internal
rule, which permits instance-to-instance communication within the
network. The default network also comes with ingress rules allowing
protocols such as RDP and SSH.
Please update your question if you have other doubts.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64803008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to update property in angular? I am trying to update a property after some time, but it is not reflected in the view. Why?
Here is my code:
https://stackblitz.com/edit/angular-grjd1u?file=src%2Fapp%2Fhello.component.ts
ngOnInit(){
this.ls = "dddd"
setTimeout(function(){
this.name ="helllllll"
}, 3000);
}
I am trying to update my name property after 3 seconds, but it is not updating.
Why is change detection not working?
A: Because in your callback for setTimeout, this is the Window object, not your component instance. You can fix this by using an arrow function, which binds this to the context in which the function was declared:
ngOnInit(){
this.ls = "dddd"
setTimeout(() => {
this.name = "helllllll"
}, 3000);
}
A: You need to implement the OnChanges interface to get the name reference and make the change to the name variable. This function is executed after ngOnInit, and ngOnChanges is triggered when you are using the @Input() decorator (demo).
ngOnChanges(changes: any): void {
this.ls = "dddd"
setTimeout(() => {
this.name = "helllllll"
}, 3000);
}
A: Use Observable:
// Create observable
const myObservable = new Observable((observer) => {
setInterval(() => observer.next({ name: 'John Doe', time: new Date().toString() }), 3000);
})
// Subscribe to the observable
myObservable.subscribe((x: any) => {
this.name = `${x.name}, Time: ${x.time}`;
});
https://stackblitz.com/edit/angular-ce3x4v
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50614496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Text Wrapping differences in IE7, IE8, and FF When I have this <table> below, the text wraps as needed in FF and IE8, but when I run it in compatibility mode or IE7, the text does not wrap and the width set on the cell above is basically ignored. Any way to get around this? Here is a simplified example.
<table>
<tr>
<td style="width:125px">
hi
</td>
<td>bye</td>
</tr>
<tr>
<td>
line of text that will equal more than the above width
</td>
<td>bye</td>
</tr>
</table>
A: <table>
<tr>
<td style="width:125px">
hi
</td>
<td>bye</td>
</tr>
<tr>
<td style="width:125px">
line of text that will equal more than the above width
</td>
<td>bye</td>
</tr>
</table>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2595241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: ARM template link to refactor template values Summary: We have the below-mentioned release pipelines:
1. Release1 -This pipeline will create resources like Application insights, App service plan, Key vault. (ARM files -azuredeploy.json and azuredeployparameters.json)
2. Release2 Pipeline: This pipeline will create resources like App service/Function App using Release1 components like Application insights, App service plan, Key vault. (ARM files -azuredeploy.json and azuredeployparameters.json)
We have multiple micro services In Release2 pipelines,
Environments like Dev, QA, Test .
Each environment has separate resource group.
azuredeployparameters.json all values are same for all services except webapp name.
Issue: If we want to change or update any value in all azuredeployparameters.json files in all Release2 pipeline services, we are updating them manually.
Kindly suggest the solution on below:
Can we link all our release2 azuredeployparameters.json files to one centralized azuredeployparameters.json file.
If we modify centralized azuredeployparameters.json file, it should update all azuredeployparameters.json files in all release 2 services.
A: You can put your azuredeployparameters.json in your central/main repo. If you use release pipelines, for instance, you should create a build for your central repo and publish azuredeployparameters.json as an artifact. You can later use this artifact in any release pipeline you want, so you can get it in both Release1 and Release2.
If you also use build pipelines to deploy, you can use multiple repos and get source code (in Release1) from your central repo and from the repo dedicated to this release. In the same way, you have this file available.
If you want to customize the file a bit in the release pipeline, you can tokenize your azuredeployparameters.json file and replace those tokens during the release. Here you have an extension for this.
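As an illustrative sketch (paths, trigger branch, and artifact name are hypothetical), a central-repo build that publishes the shared parameter file as a pipeline artifact might look like this in Azure Pipelines YAML:

```yaml
# Hypothetical build in the central repo: publish the shared ARM
# parameter file so release pipelines can consume it as an artifact.
trigger:
  - main

steps:
  - publish: $(Build.SourcesDirectory)/arm/azuredeployparameters.json
    artifact: shared-arm-parameters
```

Each Release2 pipeline would then download the shared-arm-parameters artifact instead of keeping its own copy of the file, so a change to the central file flows to all services on the next release.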
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61005544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Mercurial to Manage Existing Website? Getting ready to launch a website/project that was in beta testing. I want to switch it over to version control (Mercurial since I'm familiar with it).
The problem is, I am not sure how to go about doing it, since the code on the website is already up and in use, nor how to deal with the directories I do not need to manage (vendor and web/Upload).
Whats the best way to go about this?
Would I put the entire site into a folder, init a Merc repo, use hgignore to not track vendor and web/Upload, commit, then clone it to the live server?
Thanks! Just confused on what to do since the site is live and has user uploads.
A: I'm assuming you want to turn the website directory on your web server into a Mercurial repository. If that's the case, you would create a new repository somewhere on that computer, then move the .hg directory in the new repository into the website directory you want to be the root of the repository. You should then be able to run
hg add * --exclude vendor --exclude web/Upload
hg commit -m "Adding site to version control."
to get all the non-user files into version control.
I recommend, however, that you write a script or investigate tools that will deploy your website out of a repository outside your web root. You don't want your .hg directory exposed to the world. Until you get a deploy script/tool working, make sure you tell your webserver to prohibit/reject all requests to your .hg directory.
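For example, an illustrative fragment for Apache 2.4 that rejects all requests to the .hg directory (adapt the syntax to whatever web server you actually run):

```apache
# Deny web access to the Mercurial metadata directory.
<DirectoryMatch "/\.hg">
    Require all denied
</DirectoryMatch>
```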
| {
"language": "en",
"url": "https://stackoverflow.com/questions/10936897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to remove the default alphabetical ordering of SerializeJSON() I'm trying to add serialized data to a request to a third-party API which needs a specific order of the data to be maintained, but SerializeJSON orders keys alphabetically, which breaks the format required by the third-party API. Could someone help me figure it out?
INPUT:
<cfset data ={
"Booking": {
"ActionCode":"DI",
"AgencyNumber":"23",
"Touroperator":"TVR",
"BookingNumber":"323",
},
"Payment": {
"__type":"paymenttype",
"PaymentProfile": {
"Value": 4,
"Manual": false
},
"PaymentType": 4,
"PaymentAction":2,
"Details": {
"IBAN": "DE02120300000000202051",
"BIC": "BYLADEM1001"
}
},
"Login":{
"UserCode": "usercode",
"Password": "password"
}
}>
When this method SerializeJSON() is used on my data:
SerializeJSON(data)
Current Output
"{"Booking":{"Touroperator":"TVR","ActionCode":"DI","BookingNumber":"323","AgencyNumber":"23"},"Login":{"UserCode":"usercode","Password":"password"},"Payment":{"PaymentProfile":{"Manual":false,"Value":4},"PaymentType":4,"PaymentAction":2,"__type":"paymenttype","Details":{"BIC":"BYLADEM1001","IBAN":"DE02120300000000202051"}}}"
Expected Output:
"{"Booking":{"ActionCode":"DI","AgencyNumber":"23","Touroperator":"TVR","BookingNumber":"323",},"Payment":{"__type":"paymenttype","PaymentProfile":{"Value":4,"Manual":false},"PaymentType":4,"PaymentAction":2,"Details":{"IBAN":"DE02120300000000202051","BIC":"BYLADEM1001"}},"Login":{"UserCode":"usercode","Password":"password"}}"
A: Structs in ColdFusion are unordered HashMaps, so there is no order at all. You can keep insertion order by using structNew("Ordered") (introduced with ColdFusion 2016). Unfortunately, you can no longer use the literal syntax, but I assume you are generating the data dynamically anyway.
<cfset data = structNew("Ordered")>
<cfset data["Booking"] = structNew("Ordered")>
<cfset data["Booking"]["ActionCode"] = "DI">
<cfset data["Booking"]["AgencyNumber"] = "TVR">
<cfset data["Booking"]["BookingNumber"] = "323">
<cfset data["Payment"] = structNew("Ordered")>
<cfset data["Payment"]["__type"] = "paymenttype">
<cfset data["Payment"]["PaymentProfile"] = structNew("Ordered")>
<cfset data["Payment"]["PaymentProfile"]["Value"] = 4>
<cfset data["Payment"]["PaymentProfile"]["Manual"] = false>
etc.
If you are stuck on an older ColdFusion version, you will have to use Java's LinkedHashMap.
<cfset data = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Booking"] = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Booking"]["ActionCode"] = "DI">
<cfset data["Booking"]["AgencyNumber"] = "TVR">
<cfset data["Booking"]["BookingNumber"] = "323">
<cfset data["Payment"] = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Payment"]["__type"] = "paymenttype">
<cfset data["Payment"]["PaymentProfile"] = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Payment"]["PaymentProfile"]["Value"] = 4>
<cfset data["Payment"]["PaymentProfile"]["Manual"] = false>
etc.
But be aware: LinkedHashMap is case-sensitive (and also type-sensitive: if your keys are numbers, the key type matters!).
<cfset data = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Test"] = "">
<!---
accessing data["Test"] = works
accessing data["test"] = doesn't work
accessing data.Test = doesn't work
--->
Another issue you might encounter: Due to ColdFusion's internal type casting, serializeJSON() might stringify numbers and booleans in an unintended way. Something like:
<cfset data = structNew("Ordered")>
<cfset data["myBoolean"] = true>
<cfset data["myInteger"] = 123>
could easily end up like:
{
"myBoolean": "YES",
"myInteger": 123.0
}
(Note: The above literal syntax would work perfectly fine, but if you are passing the values around as variables/arguments, casting eventually happens.)
The easiest workaround is explicitly casting the value before serializing:
<cfset data = structNew("Ordered")>
<cfset data["myBoolean"] = javaCast("boolean", true)>
<cfset data["myInteger"] = javaCast("int", 123)>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55738639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: appending child div to each div with for loop I want to append a child div to each of the given parent divs. The child div is supposed to be decorated by calling decorateDiv().
For example, after appendChildren is executed, the divs
should take the following form (assuming decorateDiv does nothing)
function appendChildren()
{
var allDivs = document.getElementsByTagName("div");
for (var i = 0; i < allDivs.length; i++)
{
var newDiv = document.createElement("div");
decorateDiv(newDiv);
allDivs[i].appendChild(newDiv);
}
}
// Mock of decorateDiv function for testing purposes
function decorateDiv(div) {}
What am I doing wrong?
A: Using allDivs.length in the condition field accesses the length property each time the loop iterates. So declare a variable for the length before the loop and use that variable in the condition field.
function appendChildren()
{
var allDivs = document.getElementsByTagName("div");
var len = allDivs.length;
for (var i = 0; i < len ; i++)
{
var newDiv = document.createElement("div");
decorateDiv(newDiv);
allDivs[i].appendChild(newDiv);
}
}
// Mock of decorateDiv function for testing purposes
function decorateDiv(div) {}
A: getElementsByTagName returns a live NodeList, which keeps growing in length for every new div added.
Either use document.querySelectorAll(), which does not return a live NodeList.
let allDivs = document.querySelectorAll("div");
or convert the live NodeList to an array. With the ES6 [...spread] operator it's really simple.
let allDivs = [...document.getElementsByTagName("div")];
A: You're running into the fact that .getElementsByTagName() returns a live NodeList. That means that the new <div> elements that you're adding to the page become part of the list as soon as you do so.
What you can do is turn that NodeList into a plain array beforehand:
var allDivs = document.getElementsByTagName("div");
allDivs = [].slice.call(allDivs, 0);
Now using "allDivs" in the loop will just append your new elements into the ones that were there when you originally went looking for them.
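The conversion idiom above is not DOM-specific; it works on any array-like object (numeric keys plus a length property). Here is a minimal, browser-free sketch, where a plain object stands in for the NodeList purely so the snippet can run outside a browser:

```javascript
// A plain array-like object standing in for a NodeList (assumption: we only
// need numeric keys and a length to demonstrate the conversion idioms).
const arrayLike = { 0: "div-a", 1: "div-b", 2: "div-c", length: 3 };

// ES5 idiom from the answer above: borrow Array.prototype.slice.
const viaSlice = [].slice.call(arrayLike, 0);

// ES6 alternative that also works on non-iterable array-likes.
const viaFrom = Array.from(arrayLike);

console.log(viaSlice); // [ 'div-a', 'div-b', 'div-c' ]
console.log(viaFrom);  // [ 'div-a', 'div-b', 'div-c' ]
```

Both copies are snapshots: elements added to the original collection afterwards do not appear in them, which is exactly why the loop stops growing.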
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29605980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SYCL ComputeCpp: issues with the matrix_multiply SDK example I just managed to install SYCL ComputeCpp + OpenCL (from CUDA) and to run cmake to generate the samples VS2019 sln successfully.
I've tried to run the matrix_multiply example ONLY, for now.
It ran successfully using the Intel FPGA emulator as a default device.
Changing the devices to the Device CPU worked well as well.
Choosing the host device took ages without exiting.
When I tried to change the device to the nVidia GeForce GTX 1650 Ti,
I got this exception error from there: ComputeCpp:RT0100, etc. etc.
Googling a bit, I found I'd probably have to output the PTX instead of the SPIR.
So I regenerated the sln using -DCOMPUTECPP_BITCODE=ptx64
After doing that, the kernel ran successfully on the nVidia GPU.
My first question is: is that needed since nVidia does NOT support spir yet at the time of this writing, but only PTX?
However this broke the other devices, which are now reporting:
[ComputeCpp:RT0107] Failed to create program from binary
This happens now for all devices: Intel GPU, Device CPU, Device FPGA (While were formerly working)
Inspecting the .sycl I found now SYCL_matrix_multiply_cpp_bin_nvptx64[].
My question is: how to support nVidia with ptx and "normal" devices with spir together in the same exe? I made a menu from which the user can choose a device to play with, but now it's working only for nVidia.
What am I doing wrong, please?
I would expect to be able to run the same .sycl code for all the devices regardless of whether it contains ptx or spir. How can I manage that?
EDIT: I just tried to retarget the bitcode to spirv64, since the computecpp_info told me all my devices are supposed to support it.
However, now no device is anymore working with that setting :-(
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74402774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to use 2 xpaths to send keys (elements from a dictionary) via loop on Python? Selenium related I'm trying to automate a process on this page, and I'm stuck at the part where I need to take the keys and values from a dictionary called current_dictionary and paste them into the Type and Name textboxes; here's the code I have managed to write so far:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
opt = Options() #the variable that will store the selenium options
opt.add_experimental_option("debuggerAddress", "localhost:9222") #this allows bulk-dozer to take control of your Chrome Browser in DevTools mode.
s = Service(r'C:\Users\ResetStoreX\AppData\Local\Programs\Python\Python39\Scripts\chromedriver.exe') #Use the chrome driver located at the corresponding path
driver = webdriver.Chrome(service=s, options=opt) #execute the chromedriver.exe with the previous conditions
def wait_xpath(code): #function to wait for the element to be located by its XPATH
    WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.XPATH, code)))
current_dictionary = {'Background': 'Ocean',
'Body': 'Crab',
'Colour': 'Dark green',
'Eyes type': 'Antennae',
'Claws': 'None',
'Spikes': 'None'}
if driver.current_url == 'https://opensea.io/asset/create':
    button_plus_properties = driver.find_element(By.XPATH, '//*[@id="main"]/div/div/section/div[2]/form/section/div[1]/div/div[2]/button').click() #click on the "+" button of Properties
    wait_xpath('/html/body/div[5]/div/div/div') #wait for "Add properties" dialog to be loaded and located
After having logged in to this site with the Metamask extension, the code above will look for the + button from the Properties element and click on it:
Then, it will wait until this dialog has been located:
I know I can use list(current_dictionary.keys()) and list(current_dictionary.values()) to get the corresponding arrays of both elements in the current_dictionary:
In [120]: list(current_dictionary.keys())
Out[120]: ['Background', 'Body', 'Colour', 'Eyes type', 'Claws', 'Spikes']
In [121] list(current_dictionary.values())
Out[121]: ['Ocean', 'Crab', 'Dark green', 'Antennae', 'None', 'None']
And I have observed that the following paths from this page can be used for looping:
xpath_type = '/html/body/div/div/div/div/section/table/tbody/tr/td[1]/div/div/input'
xpath_name = '/html/body/div/div/div/div/section/table/tbody/tr/td[2]/div/div/input'
For instance: If I paste the xpath_type in to the Xpath search bar within the html source of this page and press several times Enter:
It can be seen that it will iterate over the Type textboxes, the same applies for the Name textboxes and the xpath_name.
So, how could I code a loop that manage to get the following output? (assume there are 6 textboxes available for Type element and Name element as well):
A: Unfortunately, when trying to use the XPATHs in this page, they ended up breaking after the first try, not sure why it happened.
However, their corresponding CSS_SELECTOR counterparts did manage to fulfill my expectations pretty well.
Here's the solution:
###improvements were applied from this part
if driver.current_url == 'https://opensea.io/asset/create':
    button_plus_properties = driver.find_element(By.XPATH, '//*[@id="main"]/div/div/section/div[2]/form/section/div[1]/div/div[2]/button').click() #click on the "+" button of Properties
    wait_xpath('/html/body/div[5]/div/div/div') #wait for "Add properties" dialog to be loaded and located
    type_array = list(current_dictionary.keys()) #get the keys, which will be sent as types
    name_array = list(current_dictionary.values()) #get the values, which will be sent as names
    i = 1
    while i <= len(current_dictionary): #iterate over the types and values
        css_type = f'body > div:nth-child(25) > div > div > div > section > table > tbody > tr:nth-child({i}) > td:nth-child(1) > div > div > input' #selector of the ith type
        css_name = f'body > div:nth-child(25) > div > div > div > section > table > tbody > tr:nth-child({i}) > td:nth-child(2) > div > div > input' #selector of the ith value
        button_css_type = driver.find_element(By.CSS_SELECTOR, css_type).send_keys(type_array[i-1]) #find the ith type textbox and paste the (i-1)th element from the type_array variable
        button_css_name = driver.find_element(By.CSS_SELECTOR, css_name).send_keys(name_array[i-1]) #find the ith value textbox and paste the (i-1)th element from the name_array variable
        if i != len(current_dictionary): #as long as i is not equal to the length of the current_dictionary
            button_add_more = driver.find_element(By.XPATH, '/html/body/div[5]/div/div/div/section/button').click() #add a new type textbox and value textbox
        i += 1
    button_save_metadata = driver.find_element(By.XPATH, '/html/body/div[5]/div/div/div/footer/button') #find the save button in this dialog
    button_save_metadata.click() #save the metadata
The improvement above first creates an array for both Type and Name elements to store the Keys and Values of the current_dictionary respectively.
Then, it starts a while loop in which the general CSS_SELECTOR for each new textbox created after clicking the Add more button is set, until the counter i equals the length of the current_dictionary; it sends the corresponding Key and Value to their respective textboxes, and finally it saves everything by clicking the Save button.
Output:
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70892498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the difference between a Kubernetes Controller and a Kubernetes Operator? As I understand the purpose of the Kubernetes Controller is to make sure that current state is equal to the desired state. Nevertheless, Kubernetes Operator does the same job.
The list of controller in the Control-plane:
*
*Deployment
*ReplicaSet
*StatefulSet
*DaemonSet
*etc
From the Google Search, I found out that there are K8s Operators such as
*
*etcd Operator
*Prometheus Operator
*kong Operators
However, I was not able to understand why this cannot be done using a Controller.
Is an Operator complementing the Controllers?
What is the difference between these two designs in purpose and functionality?
What should be kept in mind when choosing between a Controller and an Operator?
A: TL;DR:
*
*Controller == Works on vanilla K8s resources
*Operator == a Controller that adds custom resources (CRDs) required for it's operation
Change my mind, but in my opinion the difference is negligible and the terms rather confuse people than actually add value to a discussion. I would therefore use them interchangeably.
A: In Kubernetes, most of the operations happen in an asynchronous manner.
For instance, when one creates a ReplicaSet object (picking a simpler object), this is the sequence that happens:
*
*We send the request to the Kube api-server.
*The kube-api server has a complex validation
*
*Ensures that the user has the RBAC credential to create the RS in the given namespace
*The request is validated by all the configured admission controllers
*Finally the object is just written to ETCD - nothing more nothing less
Now, it is the responsibility of the various Kubernetes controllers to watch the ETCD changes and actually execute the necessary operations. In this case, the ReplicaSet controller would be watching for the changes in ETCD (e.g. CRUD of ReplicaSets) and would create the Pods as per the replica count etc.
Now, coming to Operators, conceptually they are very similar to Kubernetes controllers, but they are used with third-party entities. In Kubernetes, there is a concept of CRDs, where vendors can define their own CRD, which is nothing but a custom (e.g. vendor-specific) Kubernetes object type. Very similar to the manner in which Kubernetes controllers react to the CRUD of Kubernetes objects, these operators respond to the operations on the corresponding CRDs. E.g. the Kong operator can create new API entries in the Kong API server when a new API CRD object is created in the Kubernetes cluster.
A: I believe the term "kubernetes operator" was introduced by the CoreOS people here
An Operator is an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts, but also includes domain or application-specific knowledge to automate common tasks better managed by computers.
So basically, a kubernetes operator is the name of a pattern that consists of a kubernetes controller that adds new objects to the Kubernetes API, in order to configure and manage an application, such as Prometheus or etcd.
In one sentence: an Operator is a domain-specific controller.
Update
There is a new discussion on Github about this very same topic, linking to the same blog post. Relevant bits of the discussion are:
All Operators use the controller pattern, but not all controllers are Operators. It's only an Operator if it's got: controller pattern + API extension + single-app focus.
Operator is a customized controller implemented with CRD. It follows the same pattern as built-in controllers (i.e. watch, diff, action).
Update 2
I found a new blog post that tries to explain the difference as well.
A: Controllers are objects innate to Kubernetes that follow the control-loop theory and ensure the desired state matches the actual. ReplicaSet, DaemonSet, and ReplicationController are all pre-configured/pre-installed controllers.
Operators also have controllers. Operators are a means to customize or extend the functionality of Kubernetes by means of a CRD (Custom Resource Definition). For example, if you need to auto-inject a specialized monitoring or initialization container when a new app pod is created, then you will need to write some customization (operators), as this functionality is not available in Kubernetes.
Operators can be written in any language with the ability to communicate with the Kubernetes API server; I have mostly seen them written in Golang.
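The "watch, diff, action" control loop mentioned in the answers above can be sketched independently of Kubernetes. This is a toy illustration only; the reconcile function and the object shapes are invented for this example and are not part of any Kubernetes API:

```javascript
// Toy sketch of the controller reconcile pattern: compare desired state with
// actual state and emit the actions needed to converge. A real controller
// would run this in a loop triggered by watch events.
function reconcile(desired, actual) {
  const actions = [];
  if (actual.replicas < desired.replicas) {
    actions.push({ op: "create", count: desired.replicas - actual.replicas });
  } else if (actual.replicas > desired.replicas) {
    actions.push({ op: "delete", count: actual.replicas - desired.replicas });
  }
  return actions; // an empty array means the states already match
}

console.log(reconcile({ replicas: 3 }, { replicas: 1 }));
// -> [ { op: 'create', count: 2 } ]
```

An Operator follows the same loop; the difference is that "desired state" comes from a custom resource the Operator itself defined.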
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47848258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "126"
} |
Q: How to get underlying event in Esper I have created a listener implementing the UpdateListener interface, which is attached to an event (example: TestEvent). Now every time this event is raised, I want to get the underlying event for TestEvent and print it.
Example:
Statement 1 -
on ParentEvent1
insert into TestEvent
Statement 2 -
on ParentEvent2
insert into TestEvent
Statement 3 -
on ParentEvent3
insert into TestEvent
Statement 4 -
on ParentEvent4
insert into TestEvent
So whenever TestEvent is raised, I need to print "TestEvent is raised because of ParentEvent4","TestEvent is raised because of ParentEvent1" etc.
Thanks,
Regards,
Ankit Jain
A: This is done by selecting some information regarding the triggering event itself from the stream.
on ParentEvent1 as p1 insert into TestEvent select p1, somemoreinformation from MyNamedWindow
Instead of selecting the event itself, it's also fine to select some text:
on P1 insert into TestEvent select 'P1' as triggeredBy from ...
on P2 insert into TestEvent select 'P2' as triggeredBy from ...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44428146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: mongodb aggregate sort by grouped fields MongoDB 4.0.
This is the data set (sales-aggregate-test.js):
use Test123;
const HOW_MANY_PRODUCTS = 1000
const HOW_MANY_SALES_PER_PRODUCT = 50
for(let i = 0; i < HOW_MANY_PRODUCTS; i++) {
const productNumber = (i + 10001)
const productId = '5bd9d139d96b8fce000' + productNumber
db.getCollection('products').insert({
_id: ObjectId(productId),
title: 'Product ' + productNumber,
})
for(let j = 0; j < HOW_MANY_SALES_PER_PRODUCT; j++) {
const saleNumber = (j + 10001)
const saleId = '5bd9d139d96b8f' + productNumber + saleNumber
db.getCollection('sales').insert({
_id: ObjectId(saleId),
product: ObjectId(productId),
quantity: i + j + 1,
})
}
}
Insert it with: mongo < ./sales-aggregate-test.js.
Now this is the query (sales-aggregate-test-actual-query.js):
use Test123;
db.getCollection('sales').aggregate(
[
{
$sort: { product: 1, remoteVariantId: 1, quantity: -1, }
},
{
$lookup: {
from: 'products',
localField: 'product',
foreignField: '_id',
as: 'productModel',
}
},
{
$unwind: '$productModel'
},
{
$match: {
'productModel.archived': { $ne: true }
}
},
{
$project: {
product: 1,
quantity: 1,
}
},
//{ $limit: 10 },
{
$group: {
_id: '$product',
saleModelsCount: { $sum: 1 },
quantity : { $sum: '$quantity' },
}
},
{
$sort: { quantity: -1, }
},
{ $limit: 3 },
]
// ,{ allowDiskUse: true }
)
What am I trying to achieve? Getting this faster:
{ "_id" : ObjectId("5bd9d139d96b8fce00011000"), "saleModelsCount" : 50, "quantity" : 51225 }
{ "_id" : ObjectId("5bd9d139d96b8fce00010999"), "saleModelsCount" : 50, "quantity" : 51175 }
{ "_id" : ObjectId("5bd9d139d96b8fce00010998"), "saleModelsCount" : 50, "quantity" : 51125 }
This is basically: Give me the best selling product. Since sales include quantity, I need to first group them by quantity and then sort.
Now on this test data set it's "fast" - just 2.5 seconds. The problem is with a real data set, where the product models are much bigger, and more factors involved (like a 'price' field in a sale model).
The issue seems to be caused by the last $group and $sort stages together. Commenting out both makes the query return quickly. Commenting out just one still leaves the query slow.
How do I make it faster? Open for suggestions - a different approach is also possible.
A: A few thoughts that might be useful for you:
First of all, you can get rid of the first $sort, as you have another one in the last pipeline stage and that one will guarantee the right order.
There are a few ways to replace $lookup + $unwind + $match + $project + $group.
You can use $addFields with $filter to filter out some elements before you $unwind:
{
$lookup: {
from: 'products',
localField: 'product',
foreignField: '_id',
as: 'productModel',
}
},
{
$addFields: {
productModel: {
$filter: {
input: '$productModel',
as: 'model',
cond: { $ne: [ '$$model.archived', true ] }
}
}
}
},
{
$unwind: '$productModel'
}
In this case you can remove $match, since this filtering is performed on the nested array.
Second way might be to use $lookup with custom pipeline, so that you can perform this additional filtering inside $lookup:
{
$lookup: {
from: 'products',
let: { productId: "$product" },
pipeline: [
{
$match: { $expr: { $and: [ { $eq: [ "$$productId", "$_id" ] }, { $ne: [ "$archived", true ] } ] } }
}
],
as: 'productModel',
}
}
As another optimization, in both cases you don't need $unwind, as your productModel array is already filtered; you can just modify your $group:
{
$group: {
_id: '$product',
saleModelsCount: { $sum: { $size: "$productModel" } },
quantity : { $sum: '$quantity' },
}
}
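As a side note for checking correctness: the final $group/$sort/$limit stages compute the same thing as the plain-JavaScript reduction below. This is a sketch for understanding the semantics only, not a substitute for running the pipeline server-side:

```javascript
// Plain-JS equivalent of the final pipeline stages: sum quantity and count
// sales per product, then order products by total quantity, descending.
function topProducts(sales, limit) {
  const totals = new Map();
  for (const { product, quantity } of sales) {
    const t = totals.get(product) || { saleModelsCount: 0, quantity: 0 };
    t.saleModelsCount += 1;
    t.quantity += quantity;
    totals.set(product, t);
  }
  return [...totals.entries()]
    .map(([product, t]) => ({ _id: product, ...t }))
    .sort((a, b) => b.quantity - a.quantity)
    .slice(0, limit);
}

const sample = [
  { product: "A", quantity: 5 },
  { product: "B", quantity: 10 },
  { product: "A", quantity: 7 },
];
console.log(topProducts(sample, 2));
// -> [ { _id: 'A', saleModelsCount: 2, quantity: 12 },
//      { _id: 'B', saleModelsCount: 1, quantity: 10 } ]
```

Running a reduction like this over a small sample of documents is a cheap way to verify that a rewritten pipeline still produces the expected totals.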
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53088484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: JavaScript Blockly.FieldDropdown parameters I'm doing a project with Blockly, which is a JavaScript library; I can't understand what type of parameter the menuGenerator variable of the Blockly.FieldDropdown function accepts.
Here you can see the relevant bit of code:
/**
* Class for an editable dropdown field.
* @param {(!Array.<!Array>|!Function)} menuGenerator An array of options
* for a dropdown list, or a function which generates these options.
* @param {Function=} opt_validator A function that is executed when a new
 * option is selected, with the newly selected value as its sole argument.
 * If it returns a value, that value (which must be one of the options) will
 * become selected in place of the newly selected option, unless the return
 * value is null, in which case the change is aborted.
* @extends {Blockly.Field}
* @constructor
*/
Blockly.FieldDropdown = function(menuGenerator, opt_validator) {
I can't understand what @param {(!Array.<!Array>|!Function)} means
A: The documentation has more information and examples, including the basic structure of the dropdown options and how to make a dynamic dropdown menu.
The basic structure is:
Each dropdown menu is created with a list of menu options. Each option
is made up of two strings. The first is the human-readable text to
display. The second is a string constant which is used when saving the
option to XML. This separation allows a dropdown menu's setting to be
preserved between languages. For instance an English version of a
block may define [['left', 'LEFT'], ['right', 'RIGHT']] while a German
version of the same block would define
[['links', 'LEFT'], ['rechts', 'RIGHT']].
And for a dynamic menu:
Instead of providing a static list of options, one can provide a
function that returns a list of options when called. Every time the
menu is opened, the function is called and the options are
recalculated.
If menuGenerator is an array it is used; if it's a function it is run when the menu is opened. The function does not take in any arguments; it returns a list of options, structured as described above.
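For illustration, both forms of menuGenerator can be sketched as plain data and a plain function. Blockly itself is not loaded here, and the directions list below is a made-up stand-in for runtime state:

```javascript
// Static options: an array of [human-readable label, language-neutral value] pairs.
const staticOptions = [
  ["left", "LEFT"],
  ["right", "RIGHT"],
];

// Dynamic options: a zero-argument function the dropdown calls each time it
// opens; it must return the same [label, value] pair structure.
let directions = ["up", "down"]; // pretend this list changes at runtime
function dynamicOptions() {
  return directions.map((d) => [d, d.toUpperCase()]);
}

// Either form would be passed the same way, e.g.:
//   new Blockly.FieldDropdown(staticOptions)
//   new Blockly.FieldDropdown(dynamicOptions)
console.log(dynamicOptions()); // [ [ 'up', 'UP' ], [ 'down', 'DOWN' ] ]
```

Because the generator is re-invoked on every open, mutating `directions` later changes what the menu shows without rebuilding the field.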
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50080076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to perform text search on datagrid? May I know how I can perform a search on this default DataGrid after values have been added to it?
<DataGrid Name="table" AutoGenerateColumns="False" CanUserAddRows="False" CanUserDeleteRows="False" IsTextSearchEnabled="True" Background="White">
<DataGrid.Columns>
<DataGridTextColumn Header="Timestamp" Binding="{Binding StartDate}" SortDirection="Descending" SortMemberPath="StartDate" IsReadOnly="True" />
<DataGridTextColumn Header="Title" Binding="{Binding Title}" IsReadOnly="True" />
<DataGridTextColumn Header="Description" Binding="{Binding Description}" IsReadOnly="True" />
<DataGridTextColumn Header="Type" Binding="{Binding Tag}" IsReadOnly="True" />
</DataGrid.Columns>
</DataGrid>
A: How to Create and Use a CollectionView
The following example shows you how to create a collection view and bind it to a ListBox. You can use it with a DataGrid in the same way.
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <ListBox ItemsSource="{Binding Customers}" />
</Window>
public class CustomerView : Window
{
public CustomerView()
{
DataContext = new CustomerViewModel();
}
}
public class CustomerViewModel
{
private ICollectionView _customerView;
public ICollectionView Customers
{
get { return _customerView; }
}
public CustomerViewModel()
{
IList<Customer> customers = GetCustomers();
_customerView = CollectionViewSource.GetDefaultView(customers);
}
}
Filtering
To filter a collection view you can define a callback method that determines if the item should be part of the view or not. That method should have the following signature: bool Filter(object item). Now set the delegate of that method to the Filter property of the CollectionView and you're done.
ICollectionView _customerView = CollectionViewSource.GetDefaultView(customers);
_customerView.Filter = CustomerFilter;
private bool CustomerFilter(object item)
{
Customer customer = item as Customer;
return customer.Name.Contains( _filterString );
}
Refresh the filter
If you change the filter criteria and you want to refresh the view, you have to call Refresh() on the collection view
public string FilterString
{
get { return _filterString; }
set
{
_filterString = value;
NotifyPropertyChanged("FilterString");
_customerView.Refresh();
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19107271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: SocketIO transferring data in bursts rather than continuously I have a program where the backend is written in Python, the frontend is written in React/Electron and I use websockets, specifically Socket.IO to communicate between the two (I use Flask-SocketIO in the backend).
I'd like to continuously transfer small amounts of data a few times a second (I'm aiming for 10 times a second) but on the receiving end of the browser the data comes in weird bursts:
data length time
42["TRANSFER_DATA",250] 35 18:13:42.253
42["TRANSFER_DATA",253] 35 18:13:42.255
42["TRANSFER_DATA",259] 35 18:13:42.258
42["TRANSFER_DATA",265] 35 18:13:42.553
42["TRANSFER_DATA",270] 35 18:13:42.556
42["TRANSFER_DATA",276] 35 18:13:42.557
42["TRANSFER_DATA",281] 35 18:13:42.854
42["TRANSFER_DATA",287] 35 18:13:42.855
42["TRANSFER_DATA",292] 35 18:13:42.857
42["TRANSFER_DATA",298] 35 18:13:43.156
42["TRANSFER_DATA",303] 35 18:13:43.157
42["TRANSFER_DATA",309] 35 18:13:43.160
Taken from the dev tools logs
You can see that instead of coming in continuously (every 100ms), they come in bursts of 3 every 300ms.
I have tested whether it's the backend freezing up and can confirm that it is not; the issue is probably somewhere in Socket.IO itself.
Very simplified backend code:
app = Flask(__name__)
sio = SocketIO(app, cors_allowed_origins="*")
def send_single(data, event_name, jsonify=False):
    if jsonify:
        data = json.dumps(data)
    sio.emit(event_name, data, json=jsonify)

def main_loop():
    while True: # This code is really simplified; I don't actually do while True loops in the code itself
        data = get_this_data_from_somewhere()
        send_single(data, "TRANSFER_DATA")
        time.sleep(0.1)

if __name__ == '__main__':
    thread = threading.Thread(target=main_loop)
    thread.start()
    sio.run(app, host='127.0.0.1', port=58989)
Very simplified frontend code:
import React, {Component} from 'react';
import io from 'socket.io-client'
const socketURL = "http://127.0.0.1:58989";
class App extends Component {
state = {value: 0};
async initSocket(){
const socket = io(socketURL);
this.setState({socket});
await socket.on('connect', async () => {
console.log("SocketIO connection established");
});
};
setupListeners = () => {
this.state.socket.on('TRANSFER_DATA', (data) => {
this.setState({value: data});
});
};
componentDidMount() {
this.initSocket().then(this.setupListeners);
}
render() {
return(
<React.Fragment>
<span>{this.state.value}</span>
</React.Fragment>
)
}
}
export default App;
EDIT:
I have tried to completely get rid of Flask to see if it is a Flask-related issue, and after switching to gevent and python-socketio the issue remains.
I also did some logging of the SocketIO server, and it seems to be emitting correctly at 100ms intervals:
2019-12-10 21:15:15,710 INFO emitting event "TRANSFER_DATA" to all [/]
2019-12-10 21:15:15,811 INFO emitting event "TRANSFER_DATA" to all [/]
2019-12-10 21:15:15,911 INFO emitting event "TRANSFER_DATA" to all [/]
2019-12-10 21:15:16,013 INFO emitting event "TRANSFER_DATA" to all [/]
2019-12-10 21:15:16,114 INFO emitting event "TRANSFER_DATA" to all [/]
2019-12-10 21:15:16,214 INFO emitting event "TRANSFER_DATA" to all [/]
2019-12-10 21:15:16,315 INFO emitting event "TRANSFER_DATA" to all [/]
2019-12-10 21:15:16,415 INFO emitting event "TRANSFER_DATA" to all [/]
2019-12-10 21:15:16,516 INFO emitting event "TRANSFER_DATA" to all [/]
A: Thanks to @Miguel's response I realized that I had forgotten to monkey patch the standard library, and that seems to have done the trick!
from gevent import monkey
monkey.patch_all()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/59272679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Check user is locked in LDAP? I need to verify whether the user account in LDAP is locked.
I am using the code below:
const int ADS_UF_LOCKOUT = 0x00000010;
DirectoryEntry entry = new DirectoryEntry (_path, domainAndUsername, pwd);
if (((int)entry.Properties["useraccountcontrol"].Value & ADS_UF_LOCKOUT) != 0)
{
return true;
}
But if the user account is locked, I am receiving "Login Failed: Bad username/password".
Please help.
A: If you want to determine if a user account is locked, then you can't use the user account information that you're checking for to determine this fact - because the user account is locked, you will be denied access.
You will not be told that the reason for being unable to log on is due to the account being locked, that would be considered excessive information disclosure.
If you want to determine if the reason for not being permitted to log on is due to the account being locked then you will need an already logged on account that can check the account lock state instead of trying from the failed connection.
A: You can use the lockoutTime attribute for this too: it is 0 if the user is not locked. (Connect to AD using administrator credentials.)
DirectoryEntry _de = new DirectoryEntry (_path, domainAdmininstratorName, pwd); // get user as directory entry object
object largeInteger = _de.Properties["lockoutTime"].Value; // it's a large integer, so we need to extract its value in a slightly more complex way
long highPart =
(Int32)
largeInteger.GetType()
.InvokeMember("HighPart", BindingFlags.GetProperty, null, largeInteger, null);
long lowPart =
(Int32)
largeInteger.GetType()
.InvokeMember("LowPart", BindingFlags.GetProperty, null, largeInteger, null);
long result = (long) ((uint) lowPart + (((long) highPart) << 32));
if (result == 0)
{
// account is not locked
}
else
{
// account is locked
}
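The two InvokeMember calls above split the ADSI LargeInteger into signed 32-bit halves, and the final line reassembles them. That arithmetic is easy to sanity-check outside of .NET; here is a small Python sketch (illustrative only, not part of the original answer):

```python
def combine_large_integer(high_part, low_part):
    # The low half must be treated as unsigned before adding the
    # shifted high half, mirroring the (uint) cast in the C# snippet.
    return (low_part & 0xFFFFFFFF) + (high_part << 32)

# lockoutTime == 0 means the account is not locked:
print(combine_large_integer(0, 0))    # 0
# A negative signed low half still combines correctly:
print(combine_large_integer(1, -1))   # 8589934591
```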
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17271864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Detecting Escape presses on open select lists in Firefox? I am writing an interface where when Escape is pressed it triggers the cancel button, if there is one. But I do not want to interfere with the standard logic that if a select drop down list is open that Escape will close the list. If a select element just happens to have focus though then the interface should be cancelled.
In Chrome and Opera no events are triggered by pressing Escape when a select list is open, so everything works as expected by default.
In Safari there is a keyup event but no preceding keydown one in this context. I have been able to resolve this by setting a flag on keydown that I can then check on keyup.
var _menuEscapeState = false
,_menu = document.getElementById( 'menu' );
_menu.addEventListener( 'keydown' ,function( e ) {
var e = e || window.event;
_menuEscapeState = ( e.which === 27 );
} );
_menu.addEventListener( 'keyup' ,function( e ) {
var e = e || window.event;
if( e.which === 27 && _menuEscapeState ) {
// DO STUFF
}
_menuEscapeState = false;
} );
But in Firefox it seems both a keyup and keydown fire when closing a select list. So is there any way to detect whether the list is open to be able to filter this out? Or is the only option to detect the use of Firefox and simply disable the Escape-to-cancel functionality in this browser?
Incidentally, I do not have IE to test with which is why it is not mentioned at this time, although the site is primarily aimed at use on a Mac.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35083585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: php mysql building a search query at ten tables + different columns I am trying to develop a search page for a website, but cannot come up with a single query.
Here is a list of those ten tables and their fields
*
*tmp_auction_auto
id order category manufacturer model price price_type location year run run_type doors airbags gear engine horsepower cylinders drivetype fuel color abs electronicwindows climatcontrol disks hatch boardcomputer alarm rightsteer turbo parkingcontrol conditioner leathersalon navigation centrallock chairwarm hydraulics noprice rent exchange customclearance status other contact
*tmp_auction_estate
id order type transaction price price_type price_sqm noprice city address area height repair condition project destination land veranda mansard conference stairs_total stair rooms bedrooms balcony sanitary_arr loggia fireplace conditioner garage parking land_destination buildings distance_central_street distance_tbilisi storeroom jacuzzi bathroom shower sauna furniture technique telephone internet generator pool businesscenter ate network inventory wardobe elevator gas hotwater heating intercom cabletv alarmsystem entrancesecurity windowguards security duplex triplex satelite kitchen showcase land_railway land_electricity land_gas land_water land_drainage status other contact
*tmp_auction_other
id order title price price_type noprice info contact
*tmp_branch
id lang title content x y
*tmp_comments
id reply_id path username email title content likes dislikes time admin
*tmp_news
id lang title content date
*tmp_pages
id lang title content date
*tmp_polls
id name question answers ip
*tmp_presentation
id lang title order
*tmp_sitemap
id parent lang title link order
I know I can write multiple queries for each table with any order (bad practice) and then combine it to a PHP array for output, but I rather need a professional approach to this subject.
P.S. I don't want to use memcache, solr, sphinxs and such libs (server won't support those)
*
*I will also appreciate other seach suggestions like content search,etc (website is written in php in mvc pattern with url rewrites and relies on mysql database though)
A: I guess you can join these tables and create a view into which the data obtained from the joined tables can be saved. Now the search must be conducted on this view, which will speed up the search.
For eg.
mysql> SELECT CONCAT(UPPER(supplier_name), ' ', supplier_address) FROM suppliers;
+-----------------------------------------------------+
| CONCAT(UPPER(supplier_name), ' ', supplier_address) |
+-----------------------------------------------------+
| MICROSOFT 1 Microsoft Way |
| APPLE, INC. 1 Infinate Loop |
| EASYTECH 100 Beltway Drive |
| WILDTECH 100 Hard Drive |
| HEWLETT PACKARD 100 Printer Expressway |
+-----------------------------------------------------+
CREATE VIEW suppformat AS
SELECT CONCAT(UPPER(supplier_name), ' ', supplier_address) FROM suppliers;
mysql> SELECT * FROM suppformat;
+-----------------------------------------------------+
| CONCAT(UPPER(supplier_name), ' ', supplier_address) |
+-----------------------------------------------------+
| MICROSOFT 1 Microsoft Way |
| APPLE, INC. 1 Infinate Loop |
| EASYTECH 100 Beltway Drive |
| WILDTECH 100 Hard Drive |
| HEWLETT PACKARD 100 Printer Expressway |
+-----------------------------------------------------+
Please check this link, which will give you some idea of views: http://www.techotopia.com/index.php/An_Introduction_to_MySQL_Views
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13878691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: WSO2, convert an empty SOAP response to an empty JSON I have an EI 6.6.0 service that can return an empty response, but the responses have to be JSON. In the case of an empty response,
TID: [-1234] [] [2021-08-26 09:03:16,819] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: http://www.w3.org/2005/08/addressing/anonymous, WSAction: , SOAPAction: , MessageID: urn:uuid:5be867bf-2210-4ccd-8ecd-97a6078500f8, Direction: response, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body/></soapenv:Envelope>
the converted response should be {}
I'm using the property
<property name="messageType" value="application/json" scope="axis2"/>
to manage the conversion, but I get an error
TID: [-1234] [] [2021-08-26 09:06:17,135] ERROR {org.apache.synapse.commons.json.JsonUtil} - #writeAsJson. Payload could not be written as JSON. MessageID: urn:uuid:28a3eea3-a226-483a-8d5d-68d359d0fc08
TID: [-1234] [] [2021-08-26 09:06:17,136] ERROR {org.wso2.carbon.integrator.core.json.JsonStreamFormatter} - Error occurred while writing to application/json java.lang.reflect.InvocationTargetException
And if I use
<property name="NO_ENTITY_BODY" value="true" scope="axis2" type="BOOLEAN" />
then the conversion doesn't fail, but it returns nothing.
The following is my output flow:
<log level="full" xmlns="http://ws.apache.org/ns/synapse"/>
<switch source="//soapenv:Body/*[1]" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<case regex="">
<property name="NO_ENTITY_BODY" value="true" scope="axis2" type="BOOLEAN" />
</case>
<default/>
</switch>
<property name="messageType" value="application/json" scope="axis2"/>
<respond/>
A: You can try the PayloadFactory Mediator
<payloadFactory media-type="json">
<format>{}</format>
<args/>
</payloadFactory>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68935765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Exclude column from export in jQuery Datatables I'm using jQuery DataTables 1.10.11 and its export button functionality as described here:
I want to skip the last column from being exported into the Excel file, as this column has edit/delete buttons in it. My columns are generated dynamically, so I can't use the following method:
$('#reservation').DataTable({
dom: 'Bfrtip',
buttons: [
{
extend: 'excel',
text: 'Export Search Results',
className: 'btn btn-default',
exportOptions: {
columns: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
}
}
]
});
I know this question has been asked multiple times, but none of the answers worked for me; it might be a version issue.
A: You can add a class:
<th class='notexport'>yourColumn</th>
then exclude by class:
$('#reservation').DataTable({
dom: 'Bfrtip',
buttons: [
{
extend: 'excel',
text: 'Export Search Results',
className: 'btn btn-default',
exportOptions: {
columns: ':not(.notexport)'
}
}]
});
A: Try using a CSS selector that excludes the last column for the columns option.
$('#reservation').DataTable({
dom: 'Bfrtip',
buttons: [
{
extend: 'excel',
text: 'Export Search Results',
className: 'btn btn-default',
exportOptions: {
columns: 'th:not(:last-child)'
}
}
]
});
A: I just thought I'd add this in because the accepted answer only works to exclude if you are not already including something else (such as visible columns).
In order to include only visible columns except for the last column, so that you can use this in conjunction with the Column Visibility Button, use
$('#reservation').DataTable({
dom: 'Bfrtip',
buttons: [
{
extend: 'excel',
text: 'Export Search Results',
className: 'btn btn-default',
exportOptions: {
columns: ':visible:not(:last-child)'
}
}]
});
And if you want to explicitly add your own class:
$('#reservation').DataTable({
dom: 'Bfrtip',
buttons: [
{
extend: 'excel',
text: 'Export Search Results',
className: 'btn btn-default',
exportOptions: {
columns: ':visible:not(.notexport)'
}
}]
});
A: For Excel, CSV, and PDF:
dom: 'lBfrtip',
buttons: [
{
extend: 'excelHtml5',
text: '<i class="fa fa-file-excel-o"></i> Excel',
titleAttr: 'Export to Excel',
title: 'Insurance Companies',
exportOptions: {
columns: ':not(:last-child)',
}
},
{
extend: 'csvHtml5',
text: '<i class="fa fa-file-text-o"></i> CSV',
titleAttr: 'CSV',
title: 'Insurance Companies',
exportOptions: {
columns: ':not(:last-child)',
}
},
{
extend: 'pdfHtml5',
text: '<i class="fa fa-file-pdf-o"></i> PDF',
titleAttr: 'PDF',
title: 'Insurance Companies',
exportOptions: {
columns: ':not(:last-child)',
},
},
]
A: You can try this code; I copied it from the PDF button example.
E.g. columns: [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ].
Check this documentation: https://datatables.net/extensions/buttons/examples/html5/columns.html
buttons: [
{
extend: 'excelHtml5',
exportOptions: {
columns: [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ]
}
},
{
extend: 'pdfHtml5',
exportOptions: {
columns: [ 0, 1, 2, 3, 4, 5, 6, 7, 8]
}
},
'colvis'
]
A: Javascript Part:
$(document).ready(function() {
$('#example').DataTable( {
dom: 'Bfrtip',
buttons: [
{
extend: 'print',
exportOptions: {
// columns: ':visible' or
columns: 'th:not(:last-child)'
}
},
'colvis'
],
columnDefs: [ {
targets: -1,
visible: false
} ]
} );
} );
And the js files to be included:
https://code.jquery.com/jquery-3.3.1.js
https://cdn.datatables.net/1.10.19/js/jquery.dataTables.min.js
https://cdn.datatables.net/buttons/1.5.2/js/dataTables.buttons.min.js
https://cdn.datatables.net/buttons/1.5.2/js/buttons.print.min.js
https://cdn.datatables.net/buttons/1.5.2/js/buttons.colVis.min.js
Hope this was helpful for you.
Thanks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36763832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Emu8086 shorten compile time I've written an 1100-line assembly program with macros in emu8086 (without macros it would be around 2700 lines), and compiling it takes around 2 minutes; it does 8 passes. Is it possible to cut down on the compile time?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47529523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: an iterator that constructs a new object on dereference I have a Visual Studio 2013 C++11 project where I'm trying to define an iterator. I want that iterator to dereference to an object, but internally it actually iterates over some internal data the object requires for construction.
class my_obj
{
public:
my_obj(some_internal_initialization_value_ v);
std::wstring friendly_name() const;
// ...
};
class my_iterator
: public boost::iterator_facade<
my_iterator,
my_obj,
boost::forward_traversal_tag>
{
// ...
private:
my_obj& dereference() const
{
// warning C4172: returning address of local variable or temporary
return my_obj(some_internal_initialization_value_);
}
};
int main( int argc, char* argv[])
{
my_container c;
for (auto o = c.begin(); o != c.end(); ++o)
printf( "%s\n", o->friendly_name().c_str() );
}
These internal values are unimportant implementation details to the user and I'd prefer not to expose them. How can I write the iterator that does this correctly? The alternative is that I would have to do something like this:
my_container c;
for (auto i = c.begin(); i != c.end(); ++i)
{
my_obj o(*i);
printf( "%s\n", o.friendly_name().c_str() );
}
A: From the boost page on iterator_facade, the template arguments are: derived iterator, value_type, category, reference type, difference_type. Ergo, merely tell it that references are not references
class my_iterator
: public boost::iterator_facade<
my_iterator,
my_obj,
boost::forward_traversal_tag,
my_obj> //dereference returns "my_obj" not "my_obj&"
See it working here: http://coliru.stacked-crooked.com/a/4b09ddc37068368b
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23115388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Looping through multiple excel files in python using pandas I know this type of question is asked all the time. But I am having trouble figuring out the best way to do this.
I wrote a script that reformats a single excel file using pandas.
It works great.
Now I want to loop through multiple excel files, perform the same reformat, and place the newly reformatted data from each excel sheet at the bottom, one after another.
I believe the first step is to make a list of all excel files in the directory.
There are so many different ways to do this so I am having trouble finding the best way.
Below is the code I currently using to import multiple .xlsx and create a list.
import os
import glob
os.chdir('C:\ExcelWorkbooksFolder')
for FileList in glob.glob('*.xlsx'):
print(FileList)
I am not sure if the previous glob code actually created the list that I need.
Then I have trouble understanding where to go from there.
The code below fails at pd.ExcelFile(File).
I believe I am missing something....
# create for loop
for File in FileList:
for x in File:
# Import the excel file and call it xlsx_file
xlsx_file = pd.ExcelFile(File)
xlsx_file
# View the excel files sheet names
xlsx_file.sheet_names
# Load the xlsx files Data sheet as a dataframe
df = xlsx_file.parse('Data',header= None)
# select important rows,
df_NoHeader = df[4:]
    #then it does some more reformatting.
Any help is greatly appreciated
A: I solved my problem. Instead of using the glob function I used the os.listdir to read all my excel sheets, loop through each excel file, reformat, then append the final data to the end of the table.
#first create an empty appended_data list to store the info.
appended_data = []
folder = r'C:\ExcelFiles'
for WorkingFile in os.listdir(folder):
    # os.listdir returns bare file names, so join them back onto the folder
    FullPath = os.path.join(folder, WorkingFile)
    if os.path.isfile(FullPath):
        # Import the excel file and call it xlsx_file
        xlsx_file = pd.ExcelFile(FullPath)
        # View the excel file's sheet names
        xlsx_file.sheet_names
        # Load the xlsx file's Data sheet as a dataframe
        df = xlsx_file.parse('sheet1', header=None)
        #.... do some reformatting, call the finished sheet reformatedDataSheet
        appended_data.append(reformatedDataSheet)
appended_data = pd.concat(appended_data)
And that's it; it does everything I wanted.
A: you need to change
os.chdir('C:\ExcelWorkbooksFolder')
for FileList in glob.glob('*.xlsx'):
print(FileList)
to just
os.chdir('C:\ExcelWorkbooksFolder')
FileList = glob.glob('*.xlsx')
print(FileList)
Why does this fix it? glob returns a single list. Since you put for FileList in glob.glob(...), you're going to walk that list one by one and put the result into FileList. At the end of your loop, FileList is a single filename - a single string.
When you do this code:
for File in FileList:
for x in File:
the first line will assign File to the first character of the last filename (as a string). The second line will assign x to the first (and only) character of File. This is not likely to be a valid filename, so it throws an error.
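The difference is easy to demonstrate with plain stdlib Python (the file names below are made up for illustration):

```python
# Stand-in for what glob.glob('*.xlsx') would return: a plain list.
FileList = ["a.xlsx", "b.xlsx", "c.xlsx"]

# The buggy pattern: the loop variable is overwritten on every pass,
# so after the loop only the last element survives.
for f in FileList:
    pass
print(f)        # 'c.xlsx' -- a single string, not a list

# Looping over that leftover string then walks its characters:
print(list(f))  # ['c', '.', 'x', 'l', 's', 'x']
```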
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37397037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to delete spaces of grep search result? I managed to find a pattern using Gnu grep 2.5.4 (OS is Windows 7)
What grep returns looks like:
Word1 ( 1.22 )
Word2 ( -111.999 )
Word3 ( 123 )
So between the end of the word and the bracket '(' there is always a varying number of spaces.
Can I use grep to eliminate all spaces, or all spaces but one, so the result will look like:
Word1( 1.22 )
Word2( -111.999 )
Word3( 123 )
or (better)
Word1 ( 1.22 )
Word2 ( -111.999 )
Word3 ( 123 )
(Spaces within the brackets () may or may not be removed; this doesn't matter.)
Do I need additional tools like sed or others?
I'm looking for a command-line tool, so a text editor's search & replace cannot do this job.
Thanks for any hint!
A: You can use pipe your result to sed:
some_command | sed 's/[[:blank:]]*(/ (/'
Word1 ( 1.22 )
Word2 ( -111.999 )
Word3 ( 123 )
Instead of grep you may consider using awk also:
awk '/Word/{sub(/[[:blank:]]*\(/, " (")} 1' file
A: Simply pipe your result to the tr command.
your_grep_command | tr -s ' '
tr -s ' ': it will squeeze multiple spaces into one on each line.
Ex:
$ echo "Word1 ( 1.22 )" | tr -s ' '
Word1 ( 1.22 )
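If one of those "additional tools" may be a small script, the same squeeze can be reproduced with Python's standard re module (a sketch, not from the answers above; the sample lines are hypothetical grep output):

```python
import re

lines = [
    "Word1      (  1.22    )",
    "Word2    (  -111.999  )",
    "Word3        (  123   )",
]

for line in lines:
    # Collapse the run of blanks before '(' to a single space,
    # mirroring sed 's/[[:blank:]]*(/ (/'
    print(re.sub(r"[ \t]+\(", " (", line))
```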
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46242506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Copying Excel formulas with Delphi 7 My team leader wants me to check if this is possible.
Our app has a grid (we use TAdvStringGrid from tmssoftware) that displays some values. Our users then copy and paste to Excel (2010).
I'm thinking of exporting it as an Excel file (with some kind of excel component) with the formulas but team leader first want to see if the copying will work or not.
I never worked with Excel (using Delphi) before. :-(
Thanks
A: Sounds like you need TAdvSpreadGrid from TMS instead. It's an enhanced version of TAdvStringGrid that has support for formulas as well.
If you need even more Excel Support they have TMS FlexCel Studio that is very nice.
A: I use TAdvSpreadGrid from TMS also. For reading and writing really spiffy spreadsheets with support for formulas, nice formatting and even pane freezing to make data editing easier for my clients, I use Native Excel. It's fast, has good documentation, and is easy to use. It's worth a look.
A: While the previous answers aren't wrong, I found another solution.
I tried adding the calculation (e.g. =A1+B1) to the cell as plain text. When copying to Excel it accepts my formula as an Excel formula and calculates it just like I want it.
No need to splash out more money on TAdvSpreadGrid or something else. :-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3512863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Access parameters of JQuery object within onload I'm having some trouble changing some legacy code. I have little to no experience with jQuery, though I do with JavaScript.
The code is like the following:
(function($) {
$.imgLoader= {
isDebug: true,
img: null,
someText: null,
/* example */
testFunction: function(imgObject) {
this.img = imgObject;
// wait for the img to load
this.img.onload = function() {
// here I want to access the `someText` variable but this refers to `this.img.someText` instead of `this.someText`
this.someText = 'someText';
            };
        }
    };
}(jQuery));
As also described in the code above, I want to access the someText variable but this refers to this.img.someText instead of this.someText. Is there some way around this?
Preferably I'd even want to return a value from the testFunction() after the image has loaded.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46846254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: submit form with files attached I have a form, with AJAX, that contains a textarea and an upload file field. I can submit only one of them. How can I fix that?
I want to send "info" + "filesData" to the server. Please advise.
AJAX :
$(function() {
$("#submit").click(function() {
var file_data = $('#files').prop('files')[0];
var form_data = new FormData();
form_data.append('file', file_data);
var files_data = form_data;
alert(files_data);
var act = 'add';
var $form = $("#addCommentForm");
var info = $form.serialize();
info += '&act=' + act ;
alert(info);
$.ajax({
type: "POST",
url: "ajax/addPost.php",
dataType: 'text', // what to expect back from the PHP script, if anything
cache: false,
contentType: false,
processData: false,
data: files_data,
success: function(data)
{
// alert(data); // show response from the php script.
$('#commentsBox').html(data);
$("#addCommentForm")[0].reset();
}
});
return false;
});
});
HTML:
<form class="form-horizontal" action='#' method="post" id="addCommentForm" enctype="multipart/form-data">
<div class="form-group">
<div class="col-md-8 col-xs-12">
<textarea class="form-control" name="post[text]"></textarea>
</div>
</div>
<div class="form-group">
<div class="col-md-8 col-xs-12">
<input type="file" class="form-control" name="file" id="files">
</div>
</div>
<div class="form-group">
<label class="col-xs-2 control-label" for="textinput"></label>
<div class="col-md-8 col-xs-12">
<a class="btn btn-primary" id="submit">submit</a>
</div>
</div>
</form>
PHP
print_r ($_FILES);
print_r ($_POST);
A: Before the AJAX call, the data you are sending is not in the right format. That's probably the reason you are not getting the values in the backend. Try something like this.
var file_data = $('#files').prop('files')[0];
var form_data = new FormData();
form_data.append('file', file_data);
var files_data = form_data;
var act = 'add';
form_data.append('act', act);
form_data.append('textarea', $("#addCommentForm").find("textarea").val());
And in the AJAX call, the data to be passed should be:
data: form_data,
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31365664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Python2: Writing to stdin of interactive process, using Popen.communicate(), without trailing newline I am trying to write what I thought would be a simple utility script to call a different command, but Popen.communicate() seems to append a newline. I imagine this is to terminate input, and it works with a basic script that takes an input and prints it out, but it's causing problems when the other program is interactive (such as e.g. bc).
Minimal code to reproduce, using bc in lieu of the other program (since both are interactive, getting it to work with bc should solve the problem):
#!/usr/bin/env python
from subprocess import Popen, PIPE
command = "bc"
p = Popen(command, stdin=PIPE, stdout=PIPE, stderr=PIPE)
stdout_data = p.communicate(input="2+2")
print(stdout_data)
This prints ('', '(standard_in) 1: syntax error\n'), presumably caused by the appended newline character, as piping the same string to bc in a shell, echo "2+2" | bc, prints 4 just fine.
Is it possible to use Popen.communicate() without appending the newline, or would I need to use a different method?
A: I guess I'm an idiot, because the solution was the opposite of what I thought: adding a newline to the input: stdout_data = p.communicate(input="2+2\n") makes the script print ('4\n', '') as it should, rather than give an error.
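The fix is easy to verify without bc: any line-oriented child process shows the same behavior. The sketch below substitutes a tiny Python child for bc (an assumption made for portability — bc itself behaves the same way):

```python
import sys
from subprocess import Popen, PIPE

# A child that echoes back one line from stdin stands in for bc here.
child = [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.readline())"]

p = Popen(child, stdin=PIPE, stdout=PIPE, stderr=PIPE,
          universal_newlines=True)  # text mode, works on Python 2 and 3

# communicate() sends exactly what you pass -- include the "\n" yourself
# so the interactive program sees a complete line:
out, err = p.communicate(input="2+2\n")
print(repr(out))   # '2+2\n'
```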
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53705093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to extend a C# ASP.NET class if DLL may or may not be available? Ok, think along the lines of a plugin.
I have my own DLL and it has its own functionality. If a third-party DLL is present, I'm extending a class from that DLL.
Everything works great except if the third party DLL is missing. This is the crux of the problem.
I get this exception when the dll is not present:
"Could not load file or assembly 'SOME_THIRD_PARTY_ASSEMBLY,
Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its
dependencies. The system cannot find the file specified."
The idea is to allow additional functionality if the third party dll is present, don't allow the functionality if not present.
I know I can use Reflection to test whether a type exists, but in order to get to that part of the code, I have to make it past the above exception.
It's not that I JUST need to know if a class is available, I am also extending the class.
So in order for MY dll to compile, I need to add a reference to the third party dll in Visual Studio.
Can I catch this exception somewhere? Should I go about this differently?
A: You could separate the code that was extending the third party DLL into another DLL. Then, in your "extension manager" dll, use a config file to match your extending assemblies with the 3rd party ones.
So, the config file could have an item with two entries like "someClass;inSome3rdPartDll" and then "yourClass;inYourDll".
Go through the config file to see if the listed 3rd party assemblies are present and, if they are, then load you associated assemblies in the app domain.
Now, if you want to extend future 3rd party assemblies, you need only add your dll and add a line to the config file.
Here's a link for help loading assemblies into app domains.
A: You could also look into MEF and leverage composition to accomplish this.
https://mef.codeplex.com
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16016908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Sort algorithm used by the SNOWFLAKE database I am trying to understand the kind of sorting algorithm used by the ORDER BY clause in Snowflake SQL.
When we put ORDER BY on columns, it sorts data ASC or DESC, with NULLS FIRST or LAST.
Is it merge sort, or some other hybrid sort technique?
A: This is mentioned in the documentation @ https://docs.snowflake.com/en/sql-reference/constructs/order-by.html
All data is sorted according to the numeric byte value of each character in the ASCII table. UTF-8 encoding is supported.
For numeric values, leading zeros before the decimal point and trailing zeros (0) after the decimal point have no effect on sort order.
Unless specified otherwise, NULL values are considered to be higher than any non-NULL values. As a result, the ordering for NULLS depends on the sort order:
If the sort order is ASC, NULLS are returned last; to force NULLS to be first, use NULLS FIRST.
If the sort order is DESC, NULLS are returned first; to force NULLS to be last, use NULLS LAST.
An ORDER BY can be used at different levels in a query, for example in a subquery or inside an OVER() subclause. An ORDER BY inside a subquery or subclause applies only within that subquery or subclause. For example, the ORDER BY in the following query orders results only within the subquery, not the outermost level of the query:
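Note that the documentation describes the collation (the result ordering), not the internal sort algorithm, which Snowflake does not disclose. The byte-value ordering itself can be illustrated with a quick sketch (Python here purely for demonstration):

```python
words = ["banana", "Apple", "_x", "123", "apple"]

# Sort by UTF-8 byte value, as the Snowflake docs describe:
# digits (0x30+) < uppercase (0x41+) < '_' (0x5F) < lowercase (0x61+)
print(sorted(words, key=lambda s: s.encode("utf-8")))
# ['123', 'Apple', '_x', 'apple', 'banana']
```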
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63480185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: angularjs undefined main module when minifying using typescript I am trying to minify and uglify my angularjs + typescript app using grunt-minified. Currently I am getting an error that my main module for the app is not available when I minify. I know why this occurs: variable names no longer match the names of the modules they reference. How would I set up annotation so angular is able to identify my main module after minification?
declare module BB {
}
module BB.MyModule {
// initialize the module
export var module = angular
// load the dependencies
.module("MyModule", [
// dependancies
]);
}
This basic setup is working fine unminified, but MyModule is not defined when I minify it. How would I go about defining it so it is safe for minification?
A: You have:
declare module BB {
}
Probably BB has been minified to something else. That would make module BB.MyModule be different from BB.
Solution: Your code is already safe for minification if the point where you bootstrap angular https://docs.angularjs.org/api/ng/function/angular.bootstrap is minified through the same pipeline as BB.module is passed through.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27697770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why does PHP header(); work for him and not me INTRO:
FIRST: I Know about How to fix "Headers already sent" error in PHP
My question isn't about why it doesn't work. I know why it doesn't work, there is output before header();. My question is why it works for a guy in a YT video, that I am learning from, when he has the same output before header();
POST:
I am a super noob PHP programmer, self-learning for just a few weeks. I am trying to do a login system with PHP, learning from a youtube video. I got an error
Cannot modify header information - headers already sent by (output started at Sites/Login/inc/header.php:5...
and I think I know why I got the error.
My code:
index.php
<?php
require_once("inc/header.php");
?>
<?php
if (func::checkLoginState($conn))
{
echo "Welcome" . $_SESSION["username"] . "!";
} else
{
header("location:login.php");
}
?>
<?php
require_once("inc/footer.php");
?>
functions.php (Short version, so it returns false)
<?php
class func {
public static function checkLoginState ($conn)
{
return false;
}
}
?>
header.php
<?php
include_once("config/config.php");
include_once("functions.php");
?>
<html>
<head>
<title>Login project</title>
</head>
<body>
config.php contains just PDO database connection setup
footer.php closing HTML tags
Now, if I delete HTML tags from header.php, the header(); function works. What I can't for the life of me figure out is, why it works for the guy, who I am learning from in the youtube video (he HAS the html tags in header.php).
at 4:30 you can see his header.php
at 6:58 you can see his index.php (He actually has html code in this file before header(); and at 7:15 it just works..)
at 7:15 you can see index.php in the browser, it doesn't throw an error and redirects him to login.php
https://youtu.be/3xdxhfNg3Xg?t=4m30s
He has HTML tags in header.php as well, and he includes it before the if statement in index.php. I have exactly the same code, basically. Just to shorten it, I returned false in my checkLoginState function, so it executes the else branch in index.php. He doesn't use ob_start(); anywhere (which is supposed to solve this somehow).
It's probably some basic thing that I missed but it bugs me so much. Thank you for clarifying this. Also, how am I supposed to redirect to login.php, since header(); seems kinda useless?
Thank you and have a great day!
A: That error only occurs when something is sent back to the browser before you try to send headers.
You have multiple PHP tags at the top of your page, and there's whitespace between them. Also, you are sending text back in your header file.
Copying your code straight out of your post, there is a space between your two sets of tags on line 3, after the ?>.
1. <?php
2. require_once("inc/header.php");
3. ?>
4. <?php
You don't need to open and close PHP tags around those lines.
<?php
session_start(); // start the session here so you don't forget
if (func::checkLoginState($conn))
{
require_once("inc/header.php"); // only include your top html template if you know you are good to go
echo "Welcome" . $_SESSION["username"] . "!";
} else
{
header("location:login.php");
}
require_once("inc/footer.php");
?>
this is perfectly fine.
edit based on your comments
YouTube videos can be edited any way he wants. If you skip to the end, he's removed the header redirect and put in a login form, so I would bet it wasn't working for him either.
A: Why do we get this type of error?
Cannot modify header information - headers already sent by
*
*Newline characters or spaces before <?php
Solution: Remove everything before <?php
*A BOM (Byte Order Mark) in UTF-8 encoded PHP files
Solution: Change the file encoding to "without BOM" (e.g. using Notepad++) or remove the BOM
*An echo before header
Solution: Remove the echo before the header call
Use ob_start(); and ob_end_flush(); for buffered output
Reference
A: You can use output buffering control to prevent headers being sent out prematurely. The video you reference probably used it; otherwise he would have had to in order to make it functional.
For example,
<?php
ob_start(); // start buffering all "echo" to an output buffer
?>
<html>
<?php
if (func::checkLoginState()) {
echo "Login";
} else {
header('Location: /login.php');
}
?>
</html>
<?php ob_end_flush(); ?>
The function ob_start can be put into header.php or config. The function call ob_end_flush can be put into the footer, or totally omit it.
Since all output are written to output buffer, and not directly to the browser, you can use header call after any echo, but before ob_end_flush.
A: So to sum up, we couldn't figure out why it works for the guy in the video. Thank you everyone for trying to figure it out. Theories are:
*
*He edited the video so it seemed this version of the code worked
*There is an ob_start(); and ob_end_flush(); somewhere in the code and it isn't visible in the video
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52381172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to drag and drop a page from the application to the desktop/Explorer How do I drag and drop a page from my application to the desktop/Explorer in Win32? My application is a document management system.
Please help me understand how to get the handle of the desktop/Explorer window, or of any other window outside of the application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32711001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Mail is not sending in PHP I followed a tutorial to send mail from PHP.
public function send_credentials($beneficiary_user){
    $this->load->library('email');
$email_config = Array(
'protocol' => 'smtp',
'smtp_host' => 'ssl://smtp.googlemail.com',
'smtp_port' => '465',
'smtp_user' => '[email protected]',
'smtp_pass' => 'apptesting',
'mailtype' => 'html',
'starttls' => true,
'newline' => "\r\n"
);
$this->email->from('[email protected]', 'invoice');
$this->email->to('[email protected]');
$this->email->subject('Invoice');
$this->email->message('Test');
$this->email->send();
}
What other settings do I have to apply to make it work?
After running echo $this->email->print_debugger(); I got:
Unable to send email using PHP mail(). Your server might not be configured to send mail using this method.User-Agent: CodeIgniter
Date: Sun, 9 Feb 2014 14:58:44 +0530
From: "invoice"
Return-Path:
Reply-To: "[email protected]"
X-Sender: [email protected]
X-Mailer: CodeIgniter
X-Priority: 3 (Normal)
Message-ID: <[email protected]>
Mime-Version: 1.0
Content-Type: multipart/alternative; boundary="B_ALT_52f74a4c41e88"
=?utf-8?Q?Invoice?=
This is a multi-part message in MIME format.
Your email application may not support this format.
--B_ALT_52f74a4c41e88
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Test
--B_ALT_52f74a4c41e88
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Test
--B_ALT_52f74a4c41e88--
A: Since we found the answer to your issue in the comments, it seemed prudent to write up an answer.
The problem was that you weren't doing anything with your email configuration array ($email_config). While you may or may not have had the right settings defined there, they meant nothing as they were not used properly.
Thus, at the very least, you must change your code to reflect the following changes:
$email_config = Array(
'protocol' => 'smtp',
'smtp_host' => 'ssl://smtp.googlemail.com',
'smtp_port' => '465',
'smtp_user' => '[email protected]',
'smtp_pass' => 'apptesting',
'mailtype' => 'html',
'starttls' => true,
'newline' => "\r\n"
);
$this->load->library('email', $email_config);
Please note that this will merely fix the issue with your approach, I cannot verify the credibility of your settings/access credentials.
EDIT:
As per jtheman's suggestion I decided to dig a bit deeper. You may want to look at this https://stackoverflow.com/a/17274496/2788532.
EDIT #2:
You can access useful error messages from CI's email class by using the following code (after you attempt to send an email, of course):
<?php echo $this->email->print_debugger(); ?>
A: Just add this at the start of the function where you write the send-email code:
$config = Array(
'protocol' => 'sendmail',
'mailtype' => 'html',
'charset' => 'utf-8',
'wordwrap' => TRUE
);
$this->email->initialize($config);
The email will be sent, but the same error will still show.
A: you may try this
*
*Open system/libraries/email.php
*Edit
var $newline = "\n";
var $crlf = "\n";
to
var $newline = "\r\n";
var $crlf = "\r\n";
A: make changes like this
'smtp_crypto'=>'ssl', //add this one
'protocol' => 'smtp',
'smtp_host' => 'smtp.gmail.com',
'smtp_port' => 465,
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21656790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: ng-table not showing the filter when the fields are nested I'm working with ng-table. The problem is that it isn't showing the filter when the fields in question are nested. I want to filter by the user's name field or the event's title field. This is my JSON data:
[
{
"id": 1,
"date_in": "2016-06-15 09:07:20",
"date_out": "0000-00-00 00:00:00",
"event_id": 1,
"event": {
"id": 1,
"title": "Programaci",
"authorize": "pending",
"user_id": 1,
"room_id": 1,
"user": {
"id": 1,
"name": "andres",
"email": "[email protected]"
}
}
}, {
"id": 2,
"date_in": "2016-06-15 10:07:20",
"date_out": "0000-00-00 00:00:00",
"event_id": 1,
"event": {
"id": 1,
"title": "tecnology",
"authorize": "pending",
"user_id": 1,
"room_id": 1,
"user": {
"id": 1,
"name": "peter",
"email": "[email protected]"
}
}
}
]
You can see my complete code here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38340788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Seize and release transporters in AnyLogic [Model]
https://i.stack.imgur.com/xRIhB.jpg
I am trying to model batches loaded by 2 cranes (a service block) onto a fleet of transporters. The transporters then move to an unloading area with another 2 cranes (for now represented in the flowchart as the delay2 block).
The problem I am facing with the transporters: I want them to make a first stop (delay 1) before they carry the batch and move along the path. I tried to model the first stop with a seize transporter (destination: the delay node), then a delay; after they stop, they carry on to their agent's location (the batch) to be loaded. I represent that with another seize (destination: the agent), a move by, and a release transporter after a second delay.
I get misbehavior after the transporters are delayed and released in the first delay: they don't see the agents and carry on along their path (sometimes they go to load and sometimes they don't).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74886709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I verify the ID token with Firebase and Django Rest? I just can't wrap my head around how the authentication is done if I use Firebase auth and I wish to connect it to my django rest backend.
I use the getIdTokenResult provided by firebase as such:
async login() {
this.error = null;
try {
const response = await firebase.auth().signInWithEmailAndPassword(this.email, this.password);
const token = await response.user.getIdTokenResult();
/*
No idea if this part below is correct
Should I create a custom django view for this part?
*/
fetch("/account/firebase/", {
method: "POST",
headers: {
"Content-Type": "application/json",
"HTTP_AUTHORIZATION": token.token,
},
body: JSON.stringify({ username: this.email, password: this.password }),
}).then((response) => response.json().then((data) => console.log(data)));
} catch (error) {
this.error = error;
}
},
The only thing I find in the firebase docs is this lackluster two line snippet: https://firebase.google.com/docs/auth/admin/verify-id-tokens#web
where they write
decoded_token = auth.verify_id_token(id_token)
uid = decoded_token['uid']
# wtf, where does this go?
# what do I do with this? Do I put it in a django View?
I found a guide here that connects django rest to firebase: https://www.oscaralsing.com/firebase-authentication-in-django/
But I still don't understand how its all tied together. When am I supposed to call this FirebaseAuthentication. Whenever I try to call the login function I just get a 403 CSRF verification failed. Request aborted.
This whole FirebaseAuthentication class provided by the guide I linked to above - should I add that as a path on the backend?
path("firebase/", FirebaseAuthentication, name="api-firebase"),
Which is the api endpoint my frontend calls?
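To make the question concrete, here is roughly the flow I imagine, with everything Firebase- and DRF-specific stubbed out. This is a hedged sketch, not real Firebase/DRF code: verify_token below is a stand-in for firebase_admin.auth.verify_id_token, a real implementation would subclass rest_framework.authentication.BaseAuthentication, and all names here are placeholders.

```python
# Hypothetical sketch: verify_token stands in for
# firebase_admin.auth.verify_id_token, and FirebaseAuthentication would
# really subclass rest_framework.authentication.BaseAuthentication.

def verify_token(id_token):
    """Stand-in for the Firebase Admin SDK's token verification."""
    if id_token == "valid-token":
        return {"uid": "user-123"}
    raise ValueError("invalid token")

class FirebaseAuthentication:
    def authenticate(self, request):
        # DRF would call this for every request; the frontend sends the
        # Firebase ID token in the Authorization header, not a password.
        header = request.get("HTTP_AUTHORIZATION", "")
        if not header:
            return None  # unauthenticated request
        decoded = verify_token(header)
        # Here you would look up (or create) the Django user by UID.
        return decoded["uid"]

auth = FirebaseAuthentication()
print(auth.authenticate({"HTTP_AUTHORIZATION": "valid-token"}))  # user-123
```

The point of the sketch: the verify_id_token call lives inside an authentication class that DRF runs on every request, rather than in a view of its own.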
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65370776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to update text box with new values and check the count using Selenium WebDriver I am facing this issue: I have to check whether the number of values in the text box field has exceeded 5 or not. If it has not, I can enter values; if it has, I can't enter any more. A screenshot of the screen and the HTML is attached. Any help would be appreciated.
A: The "keywords" in your input box is stored as a span with class name "tag label label-info". You can simply get the count of that element and verify it to be 5 or not:
JAVA -
List<WebElement> tags_list = driver.findElements(By.xpath("//div[@class = 'bootstrap-tagsinput']/span[@class = 'tag label label-info']"));
if(tags_list.size() >= 5)
{
// do whatever you want
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51589360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-4"
} |
Q: Changing font family for a paragraph in LaTeX R Markdown Does anyone know how to change the font family in the middle of an R Markdown document?
I am currently using the following code, but it resets the given text to Computer Modern; it is as if it just never finds any other font. Please help.
---
title: ''
output:
pdf_document:
latex_engine: xelatex
keep_tex: yes
geometry: left=0.35in,right=0.35in,top=0.3in,bottom=0.3in
header-includes:
- \usepackage{graphicx}
- \usepackage{fancyhdr}
- \usepackage{fontspec}
- \pagestyle{fancy}
- \renewcommand{\headrulewidth}{0.0pt}
fontsize: 9pt
---
\begingroup
\fontfamily{pag}\fontsize{18}{16}\selectfont
\textcolor{black}{tasks}
\endgroup
In addition, does anyone know which fonts I can use? I can't find a list that works.
A: You are using xelatex, so you can use whatever font you have installed on your operating system
---
title: ''
output:
pdf_document:
latex_engine: xelatex
keep_tex: yes
geometry: left=0.35in,right=0.35in,top=0.3in,bottom=0.3in
header-includes:
- \usepackage{graphicx}
- \usepackage{fancyhdr}
- \usepackage{fontspec}
- \pagestyle{fancy}
- \renewcommand{\headrulewidth}{0.0pt}
fontsize: 9pt
---
Global
\begingroup
\setmainfont{Arial}
\fontsize{18}{16}\selectfont
\textcolor{black}{tasks}
\endgroup
Global
| {
"language": "en",
"url": "https://stackoverflow.com/questions/67806430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: JavaScript object setting values to object but not updating to Firebase There is a node named ReaderRanks containing 14 ranks. When I calculate points with static code it works fine for me; here is some code for that:
rank_r = getReaderRank(parseFloat(oldPointsOfReder));
userDataReaderRef.update(rank_r);
function getReaderRank(points) {
let obj = {};
if (points >= 0 && points <= 50) {
obj.userRankBadge = 'Bronze';
obj.userRank = 'rank1';
obj.userPoints = points.toFixed(2);
obj.commission = 25;
obj.userRankPrice = 0.49;
} else if (points >= 51 && points <= 100) {
obj.userRankBadge = 'Silver';
obj.userRank = 'rank2';
obj.userPoints = points.toFixed(2);
obj.commission = 30;
obj.userRankPrice = 0.99;
} else if (points >= 101 && points <= 200) {
obj.userRankBadge = 'Gold';
obj.userRank = 'rank3';
obj.userPoints = points.toFixed(2);
obj.commission = 35;
obj.userRankPrice = 1.49;
} else if (points >= 201 && points <= 400) {
obj.userRankBadge = 'Platinum';
obj.userRank = 'rank4';
obj.userPoints = points.toFixed(2);
obj.commission = 40;
obj.userRankPrice = 1.99;
} else if (points >= 401 && points <= 800) {
obj.userRankBadge = 'Diamond';
obj.userRank = 'rank5';
obj.userPoints = points.toFixed(2);
obj.commission = 45;
obj.userRankPrice = 2.49;
} else if (points >= 801 && points <= 1600) {
obj.userRankBadge = 'Master';
obj.userRank = 'rank6';
obj.userPoints = points.toFixed(2);
obj.commission = 50;
obj.userRankPrice = 2.99;
} else if (points >= 1601 && points <= 3200) {
obj.userRankBadge = 'Challenger';
obj.userRank = 'rank7';
obj.userPoints = points.toFixed(2);
obj.commission = 55;
obj.userRankPrice = 3.49;
} else if (points >= 3201 && points <= 6400) {
obj.userRankBadge = 'Herald';
obj.userRank = 'rank8';
obj.userPoints = points.toFixed(2);
obj.commission = 60;
obj.userRankPrice = 3.99;
} else if (points >= 6401 && points <= 12800) {
obj.userRankBadge = 'Guardian';
obj.userRank = 'rank9';
obj.userPoints = points.toFixed(2);
obj.commission = 65;
obj.userRankPrice = 4.49;
} else if (points >= 12801 && points <= 25600) {
obj.userRankBadge = 'Crusader';
obj.userRank = 'rank10';
obj.userPoints = points.toFixed(2);
obj.commission = 70;
obj.userRankPrice = 4.99;
} else if (points >= 25601 && points <= 51200) {
obj.userRankBadge = 'Archon';
obj.userRank = 'rank11';
obj.userPoints = points.toFixed(2);
obj.commission = 75;
obj.userRankPrice = 5.49;
} else if (points >= 51201 && points <= 100000) {
obj.userRankBadge = 'Legend';
obj.userRank = 'rank12';
obj.userPoints = points.toFixed(2);
obj.commission = 80;
obj.userRankPrice = 5.99;
} else if (points >= 100001 && points <= 150000) {
obj.userRankBadge = 'Ancient';
obj.userRank = 'rank13';
obj.userPoints = points.toFixed(2);
obj.commission = 85;
obj.userRankPrice = 7.99;
} else if (points >= 150001 && points <= 10000000000) {
obj.userRankBadge = 'Archon';
obj.userRank = 'rank14';
obj.userPoints = points.toFixed(2);
obj.commission = 90;
obj.userRankPrice = 9.99;
} else if (points >= 10000000000) {
obj.userRankBadge = 'Archon';
obj.userRank = 'rank14';
obj.userPoints = points.toFixed(2);
obj.commission = 90;
obj.userRankPrice = 9.99;
} else {
obj.userRankBadge = 'Bronze';
obj.userRank = 'rank1';
obj.userPoints = '0';
obj.commission = 25;
obj.userRankPrice = 0.49;
}
return obj;
}
But when I try to make it dynamic and get all the point values from the Firebase database, the following function sets all the values on the object, but the object's values are never written to the database.
*Dynamic method
function getReaderRank(points) {
let obj = {};
let rank = firebase.database().ref("ReaderRanks/").once("value", function(snaps) {
snaps.forEach(function(child) {
let item = child.val();
let i;
for (i = 0; i <= item; i++) {
let strtVal = item.startValue;
let endVal = item.endValue;
if (points >= strtVal && points <= endVal) {
obj.userRankBadge = item.name;
obj.userRank = item.rank_no;
obj.userPoints = points.toFixed(2);
obj.commission = item.commission;
obj.userRankPrice = item.price;
}
}
});
});
return obj;
}
A:
But object is not updating values to database
With the code from your question you are only reading data from the database (with the once() method).
If you want to update the values in the database, you would need to use the update() or set() methods, depending on your exact needs.
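Beyond that, note that once() is asynchronous, so the question's getReaderRank returns obj before the callback has populated it. The range-lookup logic itself can be separated from the Firebase read. Here is a rough, illustrative Python sketch of just that lookup (the field names mirror the ReaderRanks children shown in the question; their exact shape is an assumption):

```python
def pick_rank(points, ranks):
    """Return the rank entry whose [startValue, endValue] range contains points.

    `ranks` is a plain list of dicts standing in for the ReaderRanks
    children from the question (startValue, endValue, name, rank_no,
    commission, price). Illustrative only; not the Firebase API.
    """
    for item in ranks:
        if item["startValue"] <= points <= item["endValue"]:
            return {
                "userRankBadge": item["name"],
                "userRank": item["rank_no"],
                "userPoints": round(points, 2),
                "commission": item["commission"],
                "userRankPrice": item["price"],
            }
    return None  # no matching range

ranks = [
    {"startValue": 0, "endValue": 50, "name": "Bronze", "rank_no": "rank1",
     "commission": 25, "price": 0.49},
    {"startValue": 51, "endValue": 100, "name": "Silver", "rank_no": "rank2",
     "commission": 30, "price": 0.99},
]
print(pick_rank(75, ranks)["userRankBadge"])  # Silver
```

Once the lookup is a pure function like this, the JavaScript version only needs to run it inside the once() callback (or after awaiting the returned promise) and perform the update() there, instead of returning obj immediately.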
| {
"language": "en",
"url": "https://stackoverflow.com/questions/59764333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Why are starting the jar from the terminal and starting from IDEA inconsistent? When starting the Spring Boot program from IDEA, the value from the auto-configuration class cannot be obtained.
For example, in the following code, PassConfig.getKey() is null:
private static final String encryptionFormat = String.format("to_base64(aes_encrypt(?,'%s'))", PassConfig.getKey());
@Component
@ConfigurationProperties(prefix = "mybatis.key")
public class PassConfig {
private static String key;
public static String getKey() {
return key;
}
public void setKey(String key) {
PassConfig.key = key;
}
}
At present, it can be worked around with the org.springframework.beans.factory.annotation.Value annotation.
@Value("${mybatiskey}")
public void setKey(String key) {
PassConfig.key = key;
}
A: In PassConfig you specified the prefix to be "mybatis.key" and the field name as key.
According to this, in application.properties your property is expected to be mybatis.key.key (prefix.variablename).
In setKey you specified @Value to take mybatiskey from application.properties.
Property names should be consistent. It is null because most probably you don't have the property mybatis.key.key in your application.properties file.
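For example, assuming the field really is named key, a matching entry would look like this (illustrative; the value is a placeholder):

```properties
# application.properties: property name = prefix + "." + field name
mybatis.key.key=mySecretKey
```

With the @Value("${mybatiskey}") approach from the question, the property would instead have to be named mybatiskey.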
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75406662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Difference between PHP includes and Java Import Declarations In PHP, we have:
<?php include 'external_file.php'; ?>
Whereas in Java, you have imports:
import javax.servlet.http.HttpServlet;
It is to my understanding that PHP includes simply dump the contents of the external file into the file that contains the include statement.
My gut feeling is that Java handles these includes/imports differently than PHP. What are the key differences?
A: PHP's include is pretty much the exact same thing as having literally cut/pasted the raw contents of the included file at the point where the include() directive is.
Java is compiled, so there's no source code to "include"; the JVM is simply loading object/class definitions and making them available for use. It's much like including a header in C: you're not loading the implementation's source code, just function definitions/prototypes/signatures for later use.
A: In php it simply dumps the contents of the file in the current file.
In Java, an imported class is used:
*
*For compiling the source to byte code using the imported classes.
*At runtime when the JVM sees that your program references the imported class, it loads it and uses it(for method invocations and member accesses if it is the case)
A: PHP simply just includes whatever is in that file. It's simply merging the two files together.
Java's import function gives you access to the methods specified in that import. Basically, PHP is just a rudimentary combining of the two files while Java gives you access to that file's methods and interface.
A: They are very different. PHP just includes the source code from the included file.
Java uses the ClassLoader to load the compiled class located somewhere on the CLASSPATH. The import just tells the compiler that you want to reference those classes in the current namespace. The import does not load anything by itself; only when you use new will the JVM load the class.
A: You have <jsp:include> in Java similar to PHP include.
Java import is similar to PHP load module.
A: The closest to a php include in Java is a static import. I.e. something like: import static javax.servlet.http.HttpServlet. This allows you to reference methods in the same class file as if they were declared locally (this only applies for static members of the imported class. However, this is very seldom used. It's a tighter form of coupling and should be avoided in most cases. The only time I find it helpful is for Junit test cases. Doing a static import of org.junit.Assert allows you to use the shorter form assertEquals(...) instead of Assert.assertEquals(...). Check out Oracle's documentation on static imports here.
A: The main difference, from my experience, is that PHP allows you to do anything. You can treat PHP includes the same way as Java uses its imports. A PHP file can be all functions, or it can simply execute from start to finish.
So your php file could be
<?php
echo(1 + 4)
?>
or it could include function which you call later on
<?php
function addTwoNumbers()
{
return 1 + 4;
}
?>
If you included the second PHP file, you could call the addTwoNumbers function below your include statement. I prefer to define individual functions rather than create many PHP files.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11401906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: CSS transition dropdown menu on hover - absolute positioning left (_s wordpress theme) I have a drop down menu that I would like to transition using css.
The drop down is displayed on hover using absolute positioning left:-999em and left:100% and I would like it to gently ease in and out on hover.
Amongst other things I've tried the following:
.main-navigation ul ul li:hover > ul {
left: 100%;
transition: left 0.5s ease;
}
I keep on leaving this issue and coming back to it and now I really have thrown in the towel and I'm asking for help. I'm clearly doing something silly.
jsfiddle of this example
Im using _s theme from Wordpress and SASS.
.main-navigation {
clear: both;
display: block;
float: left;
width: 100%;
ul {
list-style: none;
margin: 0;
padding-left: 0;
ul {
box-shadow: 0 3px 3px rgba(0, 0, 0, 0.2);
float: left;
position: absolute;
top: 1.5em;
left: -999em;
z-index: 99999;
ul {
left: -999em;
top: 0;
}
li {
&:hover > ul {
left: 100%;
}
}
a {
width: 200px;
}
:hover > a {
}
a:hover {
}
}
li:hover > ul {
left: auto;
}
}
li {
float: left;
position: relative;
&:hover > a {
}
}
a {
display: block;
text-decoration: none;
}
.current_page_item a,
.current-menu-item a {
}
}
A: Your SASS is quite complex and nested quite a lot, so it looks like you've missed a level out somewhere.
Using CSS (converted the SASS via SassMeister) it was possible to see that the hover effect had not been applied to the first-level li.
Also, 999em is a lot; you might want to consider reducing that or speeding up the transition.
Reduced CSS using available classes.
.main-navigation {
clear: both;
display: block;
float: left;
width: 100%;
}
.main-navigation ul {
list-style: none;
margin: 0;
padding-left: 0;
}
.sub-menu {
box-shadow: 0 3px 3px rgba(0, 0, 0, 0.2);
float: left;
position: absolute;
top: 1.5em;
left: -999em;
z-index: 99999;
transition: left .25s ease;
}
.main-navigation ul li:hover > .sub-menu {
left:0;
}
.main-navigation ul ul a {
width: 200px;
padding: 1em;
}
.main-navigation li {
float: left;
position: relative;
}
.main-navigation a {
display: block;
text-decoration: none;
margin-right: 1em;
}
JSfiddle Demo
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25851605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: fatal error: 'clang-c/Index.h' file not found on OSX 10.9.4 I was trying to install clang_complete on OSX 10.9.4. However while running make I get the following error:
[ 66%] Building CXX object CMakeFiles/clic_add.dir/clic_add.cpp.o
/Users/bharat/Desktop/clang_indexer/clic_add.cpp:5:10: fatal error: 'clang-c/Index.h'
file not found
#include <clang-c/Index.h>
^
1 error generated.
I have clang, llvm-g++ and llvm-gcc. How do I resolve this error?
A: You can download source code for clang
svn checkout http://llvm.org/svn/llvm-project/cfe/trunk
The header should be in the include/clang-c folder; you can edit the CMake file and pass in the location of the header via the CPP flags (-I).
The CMake file would be here:
ycm/CMakeFiles/ycm_core.dir/flags.make
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25373692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to add/use colon ":" in the YML value? How to add/use colon : in the YML value?
spring:
application:
name: app
profiles:
active: DEV
config:
import: configserver:
The above example with configserver: gives an error.
A: Double quotes "" work:
spring:
application:
name: app
profiles:
active: DEV
config:
import: "configserver:"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69061394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Using classes in different .ts files in Jasmine I am writing simple tests in the Jasmine framework. I have the following files stored in one folder:
*
*maintest.ts
*helper.ts
*Workflow1.ts
*Workflow2.ts
Workflow files have content as following (example):
import {element, by, browser, protractor} from "protractor";
import {Helper} from "../../helper";
export class Workflow1/2
{
static Foo1() {
let element1;
let element2;
describe('check all fields', function () {
it('check foobar', function () {
element1.isVisible();
});
it('check foobar2', function () {
element2.isVisible();
});
});
}
static Foo2() {
let element3;
let element4;
describe('check all fields', function () {
it('check foobar', function () {
element4.isVisible();
});
it('check foobar2', function () {
element3.isVisible();
});
});
}
}
And the maintest.ts is:
import {browser} from "protractor";
import {Helper} from "./helper";
import {Workflow1} from "./Workflow1";
import {Workflow2} from "./Workflow2";
describe ('Regression Tests', function() {
beforeAll(function () {
console.log('====================Start');
});
describe('Basic workflow', function () {
Workflow1.Foo1();
Workflow1.Foo2();
Workflow2.Foo2();
Workflow2.Foo2();
});
});
but when I run it, nothing has run correctly - I get this error:
Error: Error while waiting for Protractor to sync with the page: "window.angular is undefined. This could be either because this is a non-angular page or because your test involves client-side navigation, which can interfere with Protractor's bootstrapping. See http://git.io/v4gXM for details"
but if I comment:
//Workflow1.Foo2();
//Workflow2.Foo2();
//Workflow2.Foo2();
the Workflow1.Foo1 works perfectly fine.
Can't I use different methods from different files? It works with helper, where I have login and logout methods...
A: I think I got this. My code was 'quite' long with different describes; when I minimized it to 2, it started working :)
EDIT: As I mentioned in the comment below, each method in the Workflow1 and Workflow2 files must have at least one describe with at least one it inside it; having only a describe without an it throws an error.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47750953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What's the difference, and how do I generate these two encodings? I am working with these two types of encoded strings:
%ueb08%u8b09%u3c40%u5756%u5ebe%u3440%u408d
\x26\x04\x9e\x8e\xf9\xd0
To generate the first type I found this function:
function encoder($s)
{
$res = strtoupper(bin2hex($s));
$g = round(strlen($res)/4);
if($g != (strlen($res)/4))
$res .= "00";
$out = "";
for($i = 0; $i < strlen($res); $i += 4)
$out .= "%u" . substr($res, $i + 2, 2) . substr($res, $i, 2);
return $out;
}
Now I need to convert the first type of string to the second type, and I don't even know what kind of encoding it is. How could I do that?
A: The bottom is just standard notation for representing hex values in the ASCII space.
If you want the number 0, it is \x00, if you want 10, it would be \x0A, and 16 (hex's 10) is \x10 (15 would be \x0F)
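Mechanically, converting the first form to the second is just regrouping hex digits: the PHP encoder above writes each byte pair swapped (little-endian), so each %uXXXX unit has to be swapped back when emitting \xNN bytes. A quick sketch of that transformation, written in Python purely for illustration:

```python
import re

def percent_u_to_hex(s):
    """Convert '%uABCD' units to '\\xCD\\xAB' byte notation.

    The PHP encoder above stored each byte pair swapped, so we swap
    them back when emitting the bytes. Illustrative sketch only.
    """
    out = []
    for unit in re.findall(r"%u([0-9A-Fa-f]{4})", s):
        hi, lo = unit[:2], unit[2:]      # '%uABCD' -> 'AB', 'CD'
        out.append("\\x" + lo.lower())   # low byte was stored second
        out.append("\\x" + hi.lower())
    return "".join(out)

print(percent_u_to_hex("%ueb08"))  # \x08\xeb
```

The same regroup-and-swap loop is straightforward to port back to PHP if that is where the conversion needs to live.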
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6220778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to convert the PHP mail() function to SMTP I made a simple contact form for WordPress using AJAX and PHP mail(), but most of my mail is going to the spam folder. What is wrong with my code? Or is there another solution, like SMTP?
<?php
$errorMSG = "";
// NAME
if (empty($_POST["name"])) {
$errorMSG = "Name is required ";
} else {
$name = $_POST["name"];
}
// EMAIL
if (empty($_POST["email"])) {
$errorMSG .= "Email is required ";
} else {
$email = $_POST["email"];
}
// Subject
if (empty($_POST["subject"])) {
$errorMSG .= "Subject is required ";
} else {
$subject = $_POST["subject"];
}
// MESSAGE
if (empty($_POST["message"])) {
$errorMSG .= "Message is required ";
} else {
$message = $_POST["message"];
}
//receiver email address
$EmailTo = "[email protected]";
$Subject = $subject;
$form = $name;
// prepare email body text
$Body = "";
$Body .= "Name: ";
$Body .= $name;
$Body .= "\n";
$Body .= "Email: ";
$Body .= $email;
$Body .= "\n";
$Body .= "Message: ";
$Body .= $message;
$Body .= "\n";
// send email
$headers = "From: " .($email) . "\r\n";
$headers .= "Reply-To: ".($email) . "\r\n";
$headers .= "Return-Path: ".($email) . "\r\n";
$headers .= "MIME-Version: 1.0\r\n";
$headers .= "Content-Type: text/html; charset=ISO-8859-1\r\n";
$headers .= "X-Priority: 3\r\n";
$headers .= "X-Mailer: PHP". phpversion() ."\r\n";
// send email
$success = mail($EmailTo, $Subject, $Body, $headers);
// redirect to success page
if ($success && $errorMSG == ""){
echo "success";
}else{
if($errorMSG == ""){
echo "Something went wrong :(";
} else {
echo $errorMSG;
}
}
A: There are multiple reasons for emails being treated as spam; usually it relates to the server sending the mail, apart from the content or headers of the mail itself. E.g. the DNS might not be configured correctly. There are some tools on the web that might help you troubleshoot; unfortunately I can't find the tool I used some years ago. Google might bring up some tools for checking the DNS configuration.
There was already some discussion on stackoverflow about this topic, please check this thread to avoid a duplicate discussion.
I recommend using a library like PHPMailer for handling mails.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52577207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Iterate efficiently on large OneToMany / ManyToOne associations I have two entities, User and Period, with a many-to-many association: a user belongs to many periods, and a period has multiple users. This association uses a UserPeriod entity.
class Period
{
/**
* @ORM\OneToMany(targetEntity="UserPeriod", mappedBy="period")
*/
private $users;
}
class UserPeriod
{
/**
* @ORM\Id
* @ORM\ManyToOne(targetEntity="User", inversedBy="periods")
*/
private $user;
/**
* @ORM\Id
* @ORM\ManyToOne(targetEntity="Period", inversedBy="users")
*/
private $period;
}
class User extends BaseUser
{
/**
* @ORM\OneToMany(targetEntity="UserPeriod", mappedBy="user")
*/
protected $periods;
}
What I'm trying to achieve is getting a list of all users from a given period. Since there are a lot of users, I can't load them all into memory and must iterate over them (batch processing). Here is what I tried:
public function getUsersOfQuery($period)
{
return $this->_em->createQueryBuilder()
->select('u')
->from('SGLotteryUserBundle:LotteryUser', 'u')
->innerJoin('u.periods', 'p')
->where('p.period = :id')
->setParameter('id', $period->id())
->getQuery();
}
$it = $repo->getUsersOfQuery($period)->iterate();
But, this exception is raised:
[Doctrine\ORM\Query\QueryException]
Iterate with fetch join in class UserPeriod using association user not allowed.
I cannot use native queries since User uses table inheritance.
A: Github issue
This happens when you use either a MANY_TO_MANY or ONE_TO_MANY join in
your query: you cannot iterate over it because the same entity could
potentially appear in multiple rows.
If you add a distinct to your query then all will work, as it
guarantees each record is unique.
$qb = $this->createQueryBuilder('o');
$qb->distinct()->join('o.manyRelationship');
$i = $qb->getQuery()->iterate();
echo 'Profit!';
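Once distinct is in place, a typical batch-processing loop detaches entities after each row so memory stays flat. This is a sketch built around the getUsersOfQuery() method from the question; the $em variable is assumed to be the entity manager:

```php
<?php
$iterable = $repo->getUsersOfQuery($period)->iterate();

foreach ($iterable as $row) {
    $user = $row[0];   // iterate() yields one-element arrays per result
    // ... process $user ...
    $em->clear();      // detach managed entities so memory stays bounded
}
```

Without the clear() call, Doctrine's identity map keeps every hydrated entity in memory, which defeats the purpose of iterating.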
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32543535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Java type is unknown on the JS side Please help point out what the problem might be in the code below. I have this JavaScript code (in a file '/arithmetic.js') which I'm testing using GraalVM's Context in Java
function sum(num, b) {
return num.add(b)
}
function diff(num, b) {
return num.sub(b)
}
The java side looks like this
public class Num {
final int a;
public Num(int a) {
this.a = a;
}
public int add(int b) {
return this.a + b;
}
public int sub(int b) {
return this.a - b;
}
}
In the main class, I have
Engine engine = Engine.create();
try (Context context = Context.newBuilder().engine(engine).build()) {
context.eval(Source.newBuilder("js", loadFile("/arithmetic.js"), "arithmetic").build());
Num ten = new Num(10);
System.out.printf("10 + 3 = %d%n", ten.add(3)); //<-- just checking
context.getBindings("js").putMember("num", ten); //<-- is this line even necessary?
Value getSum = context.getBindings("js").getMember("sum");
Value result = getSum.execute(ten, 20);
System.out.println(result.asInt());
}
I get the output below:
> Task :graalvm-js-app:SampleJs.main() FAILED
10 + 3 = 13
Exception in thread "main" TypeError: invokeMember (add) on example.Num@31e4bb20 failed due to: Unknown identifier: add
at <js> sum(arithmetic:2:34-43)
A: In this particular case, it appears that I needed just one slight tweak when building the context. It's not so obvious but it kinda makes sense in the context of the problem encountered
Context.newBuilder().engine(engine).allowAllAccess(true).build()
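Note that allowAllAccess(true) grants far more than host-object access (threads, native access, IO, and so on). A narrower alternative, sketched here under the assumption that the GraalVM SDK is on the classpath, is an explicit HostAccess policy that only exposes annotated members:

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.HostAccess;

public class Num {
    final int a;

    public Num(int a) {
        this.a = a;
    }

    // HostAccess.EXPLICIT exposes only members annotated with @Export
    @HostAccess.Export
    public int add(int b) {
        return this.a + b;
    }

    @HostAccess.Export
    public int sub(int b) {
        return this.a - b;
    }

    public static void main(String[] args) {
        try (Context context = Context.newBuilder("js")
                .allowHostAccess(HostAccess.EXPLICIT)
                .build()) {
            context.getBindings("js").putMember("num", new Num(10));
            System.out.println(context.eval("js", "num.add(3)").asInt());
        }
    }
}
```

With this policy, any method without @HostAccess.Export stays invisible to the script, so the "Unknown identifier" error becomes a deliberate security boundary rather than a surprise.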
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65998591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Configure asp.net site with ASP.NET Ajax Control Toolkit I have an application that has been converted from VS2005 2.0 framework to VS2008 3.5 framework. I am attempting to add the ability to use the AjaxControlToolkit DLL [AjaxControlToolkit-Framework3.5SP1-DllOnly.zip] download only within my project. I have followed the configuration setups to get the project to build, and have not been successful in getting a control to load.
How do I install and use the ASP.NET AJAX Control Toolkit in my .NET 3.5 web applications?
and
Configuring ASP.NET AJAX
I am currently running into an error after adding all the web.config settings to my web application.
Server Error in '/' Application.
Configuration Error
Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
Parser Error Message: Could not load file or assembly 'System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
Source Error:
<compilation defaultLanguage="vb" debug="true">
<assemblies>
<add assembly="System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</assemblies>
<expressionBuilders>
I imagine others have had this problem, but can't find any resources that will help me fix this. Thanks in advance for the help.
A: That is the old version. Change the line in your web.config to use the 3.5 version:
<add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
(Yes, this is a common conversion error.)
A: I run into this issue a lot with web.config inheritance. You can also add binding redirects. The snippet below redirects any calls to the old version to the new version; you can also set this up to work the other way round.
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/>
<bindingRedirect oldVersion="1.0.61025.0" newVersion="3.5.0.0"/>
</dependentAssembly>
</assemblyBinding>
</runtime>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/447369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Sansation Light bug in Firefox I am using the Sansation Light font on my web site, and in Firefox I've noticed the undesirable effect that the sequences ff, fi, fl render as a square instead.
Any ideas how to fix it?
A: Disable ligatures to make them show properly. Worked for me.
-moz-font-feature-settings: "liga" off;
https://developer.mozilla.org/en/CSS/font-feature-settings
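The -moz- prefixed declaration only covers Firefox; it may be safer to pair it with the standard property as well. A sketch (the body selector is just an example; scope it to wherever Sansation Light is actually used):

```css
/* Disable standard ligatures for text set in Sansation Light */
body {
  -moz-font-feature-settings: "liga" off;
  font-feature-settings: "liga" off;
}
```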
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15523766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Can't change modules' buildVariant
I have a project with a main app and 3 modules. They depend on each other like
app (android application)
|
--- module1 (android library)
|
--- module2 (android library)
|
--- module3 (android library)
I'm using AS 3.0 with Build Tools 3.0.0-alpha5. I applied the changes described in the doc: https://developer.android.com/studio/build/gradle-plugin-3-0-0-migration.html#variant_dependencies
Here is my
build.gradle (app)
apply plugin: 'com.android.application'
android {
...
buildTypes {
debug {}
qa {}
release {}
}
flavorDimensions "default"
productFlavors {
flavor1 {dimension = "default"}
flavor2 {dimension = "default"}
}
}
dependencies {
...
implementation project(path: ':module1')
...
}
Here is my
build.gradle (module1)
apply plugin: 'com.android.library'
android {
...
buildTypes {
debug {}
qa {}
release {}
}
}
dependencies {
...
implementation project(path: ':module2')
...
}
Here is my
build.gradle (module2)
apply plugin: 'com.android.library'
android {
...
buildTypes {
debug {}
qa {}
release {}
}
}
dependencies {
...
implementation project(path: ':module3')
...
}
My problem: my app is set on "flavor1Debug" but all my modules' variants are stuck on "qa". I can't switch them to "debug" in the Build Variants window.
I have a Gradle Sync warning:
Warning:Module 'module1' has variant 'debug' selected, but the module
''app'' depends on variant 'qa'Select 'module1'
in "Build Variants" window
Does anyone have an idea of what I missed?
A: You can change the selected variant for each module inside the Build Variants panel.
A: The latest AS 3.0 canary 8 fixed this issue.
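For completeness: the modules in this question do declare all three build types, so the sync warning was the IDE bug fixed in canary 8. If a library module ever genuinely lacks one of the app's build types, the plugin 3.0 migration guide's matchingFallbacks is the intended fix. A sketch for the app's build.gradle (the fallback order shown is only an example):

```groovy
android {
    buildTypes {
        qa {
            // if a library module has no 'qa' build type, tell AGP
            // which of its build types to substitute, in order
            matchingFallbacks = ['debug', 'release']
        }
    }
}
```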
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44913945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Array Index Out Of Bounds Exception Basic When I compile this program it has no errors, but when I run it I get an out-of-bounds error. Any help? Thanks!
The Exact Error Is As Follows :
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 4
at javaapplication.JavaApplication.main(JavaApplication.java:32)
C:\Users\Juwon\AppData\Local\NetBeans\Cache\8.2\executor-snippets\run.xml:53: Java returned: 1
BUILD FAILED (total time: 0 seconds)
public class JavaApplication
{
    public static void main(String[] args)
    {
        int[] grades = new int[4];
        grades[0] = 77;
        grades[1] = 84;
        grades[2] = 80;
        grades[3] = 96;
        String[] Students = new String[] {"Tom", "Ed", "Joe", "Bob"};
        double sum = 0.0;
        System.out.print("#\tStudent\tGrade\n");
        System.out.print("-\t-------\t-----\n");
        int i = 0;
        for(;i<grades.length;i++);
        {
            System.out.printf("%d\t%s\t%d\n", i, Students[i], grades[i]);
            sum += grades[i];
        }
        double average = sum / grades.length;
        System.out.printf("Class Average %f\n", average);
    }
}
A: for (; i < grades.length; i++); <--- See the semi-colon at the end, basically this is executing all the code between the ) and the ;, which isn't much, meaning you could actually remove the loop with only a minor side effect to the code.
Instead, you should be doing something more like...
for (int i = 0; i < grades.length; i++) {
    System.out.println(i);
    System.out.printf("%d\t%s\t%d\n", i, Students[i], grades[i]);
    sum += grades[i];
}
Note, now I can include i in the definition of the loop without issue
This is not an uncommon issue, and one you will likely repeat a few more times in the future
A: for(;i<grades.length;i++);
The semicolon is the troublemaker here.
This leads to your variable i being more than the last index of the array. Hence the out of bounds exception!!
A: As Elliot said, remove the semicolon from the end of your for-loop and your code should work fine. It's also common practice to initialize your counter variable inside the for-loop. The for-loop header should look like
for(int i = 0;i<grades.length;i++)
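Putting both fixes together (no trailing semicolon, counter declared in the loop header), a corrected version might look like the sketch below. The average() helper and the class name are additions here, not part of the original code:

```java
public class GradesFixed {
    static double average(int[] grades) {
        double sum = 0.0;
        for (int i = 0; i < grades.length; i++) {
            sum += grades[i];
        }
        return sum / grades.length;
    }

    public static void main(String[] args) {
        int[] grades = {77, 84, 80, 96};
        String[] students = {"Tom", "Ed", "Joe", "Bob"};
        System.out.print("#\tStudent\tGrade\n");
        System.out.print("-\t-------\t-----\n");
        // no stray ';' after the loop header here
        for (int i = 0; i < grades.length; i++) {
            System.out.printf("%d\t%s\t%d%n", i, students[i], grades[i]);
        }
        System.out.printf("Class Average %f%n", average(grades));
    }
}
```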
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48313660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-4"
} |
Q: Move file but rename if existing in python I am trying to move my files in Windows. Currently the files are in a folder under drive C: but I want to move them to a location in D:.
I am using shutil.move function but that function overwrites the file if it is existing. I want to keep a copy of the file in the destination not overwrite it. Is there a function to do that?
def movefiles(strsrc, strdest, strextension):
    filelistsrc = []  # source files full path
    # store the destination of the current file
    dictfiles = {}
    for f in os.listdir(strsrc):
        if os.path.isfile(os.path.join(strsrc, f)):
            filefullname = os.path.join(strsrc, f)
            if filefullname.endswith(".html"):
                filelistsrc.append(filefullname)
                dictfiles[filefullname] = os.path.join(strdest, f)
    if not filelistsrc:
        return -1
    print("Start moving files from:")
    printstrlist(filelistsrc)
    for filename in filelistsrc:
        shutil.move(filename, dictfiles[filename])
    return 0
A: Before moving a file in the last for-loop, you can check whether it already exists and act depending on the result. I made a recursive function that checks the filename and increments it until the name is new:
import os

def renamefile(ffpath, idx=1):
    # Rename the file from test.jpeg to test1.jpeg, test2.jpeg, ...
    root, ext = os.path.splitext(ffpath)
    path = root + str(idx) + ext
    # Check if the file exists. If it does, increment the suffix and retry;
    # otherwise return the new, free filename.
    if os.path.exists(path):
        print("Filename {} already exists".format(path))
        return renamefile(ffpath, idx=idx + 1)
    return path

for filename in filelistsrc:
    dest = dictfiles[filename]
    if os.path.exists(dest):
        dest = renamefile(dest)
    shutil.move(filename, dest)
A: Here is another solution,
def move_files(str_src, str_dest):
    for f in os.listdir(str_src):
        if os.path.isfile(os.path.join(str_src, f)):
            # if not .html continue..
            if not f.endswith(".html"):
                continue
            # insert a numeric suffix before the extension while a file
            # with the same name already exists in the dest folder
            name, ext = os.path.splitext(f)
            dst_file, count = f, 1
            while os.path.exists(os.path.join(str_dest, dst_file)):
                count += 1
                dst_file = "{}_{}{}".format(name, count, ext)
            shutil.move(os.path.join(str_src, f),
                        os.path.join(str_dest, dst_file))
A: If the file already exists, we want to create a new one and not overwrite it.
for filename in filelistsrc:
    if os.path.exists(dictfiles[filename]):
        i, temp = 1, dictfiles[filename]
        file_name, ext = os.path.splitext(os.path.basename(filename))
        while os.path.exists(temp):
            temp = os.path.join(strdest, f"{file_name}_{i}{ext}")
            dictfiles[filename] = temp
            i += 1
    shutil.move(filename, dictfiles[filename])
Check if the destination exists. If yes, create a new destination and move the file.
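All three answers share the same core idea, which can be wrapped into one small helper. This is a sketch; the names unique_path and move_no_overwrite are made up here and are not from the original question:

```python
import os
import shutil

def unique_path(dest):
    """Return dest unchanged if it is free, otherwise append _1, _2, ...
    before the extension until an unused name is found."""
    if not os.path.exists(dest):
        return dest
    root, ext = os.path.splitext(dest)
    i = 1
    while os.path.exists("{}_{}{}".format(root, i, ext)):
        i += 1
    return "{}_{}{}".format(root, i, ext)

def move_no_overwrite(src, dest):
    # Move src to dest, renaming instead of overwriting on collision.
    shutil.move(src, unique_path(dest))
```

Keeping the collision logic in one function makes it easy to test independently of the actual move.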
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63323275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |