Q: How to sync light to the current song with Raspberry Pi? I want to make an LED attached to my Raspberry Pi blink in sync with the music playing on the Raspberry.
The problem is that I don't know how to get the current volume level of the track that is playing over the line audio output. How can I do this?
I use Raspbian.
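One hedged sketch of a possible approach (this is not from the original post; the GPIO pin, sample rate, and threshold are made-up values, and it assumes the pyaudio, audioop and RPi.GPIO modules are available and that the playing audio can be captured as an input stream): read audio buffers, compute an RMS level per buffer, and switch the LED when the level crosses the threshold.
import audioop
import pyaudio
import RPi.GPIO as GPIO

CHUNK = 1024      # frames per buffer
RATE = 44100      # sample rate in Hz (assumed)
LED_PIN = 18      # example GPIO pin (assumed)
THRESHOLD = 2000  # example RMS threshold (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

try:
    while True:
        data = stream.read(CHUNK, exception_on_overflow=False)
        level = audioop.rms(data, 2)  # 2 bytes per sample for paInt16
        GPIO.output(LED_PIN, level > THRESHOLD)
finally:
    stream.stop_stream()
    stream.close()
    pa.terminate()
    GPIO.cleanup()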
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28799107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Wrapping long text into a container causes overflow in Flutter I'm creating a notification center for my application. It contains fairly long strings for the title and the body, and they always cause an overflow. I've tried using Expanded widgets and the following Stack Overflow questions: Question 1, Question 2, but I cannot apply them to my code. Here is what it looks like right now:
I get the text from my firebase cloud firestore, and I'm outputting the texts using a listview.
The title text is this: "Allen Indefenzo has rescinded consent for you to access his/her data"
The body text is this : "He/she has removed all your access and you will no longer be able to view / change his data, to have access again ask for the user to give you his/her special code again."
Here is my code:
Container(
color: Colors.white,
child: StreamBuilder(
stream: db
.collection('users')
.document(userData.userID)
.collection('notifications')
.snapshots(),
builder: (BuildContext context, AsyncSnapshot snapshot) {
if (!snapshot.hasData) {
return Center(
child: CircularProgressIndicator(),
);
}
if (snapshot.data.documents.length != 0) {
return ListView.builder(
itemCount: snapshot.data.documents.length,
itemBuilder: (BuildContext context, int index) {
return Container(
child: Row(
children: <Widget>[
Container(
height: 150,
width: 100,
child: Image(
image: NetworkImage(
snapshot.data.documents[index]['dpURL']), fit: BoxFit.contain,
),
),
Column(
children: <Widget>[
Container(
height: 50,
width: MediaQuery.of(context).size.width * 0.8,
child:
Text(snapshot.data.documents[index]['title']),
),
Container(
height: 50,
width: MediaQuery.of(context).size.width * 0.8,
child:
Text(snapshot.data.documents[index]['body']),
),
],
),
],
),
);
},
);
}
},
),
),
Any help is appreciated and if you can explain it would be amazing! Thank you so much!
A: Try wrapping your Column inside an Expanded:
Container(
child: Row(
children: <Widget>[
Container(
height: 150,
width: 100,
child: Image(
image: NetworkImage(
'https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcQLBtBU3nUOK7osT41ZsQhP4VBV7fd9euqXvXVKQH0Q7bl4txeD'),
fit: BoxFit.contain,
),
),
Expanded(
child: Column(
children: <Widget>[
Text(
'Reallly looooooooooooooooooooooooooooooong textttttttttttttttttttttttttttttttttttttttttttttttt'),
Text(
'Reallly looooooooooooooooooooooooooooooong textttttttttttttttttttttttttttttttttttttttttttttttt'),
],
),
),
],
),
);
output:
Additionally, if you want to limit the number of lines of the Text, use:
Text(
'Reallly looooooooooooooooooooooooooooooongtextttttttttttttttttttttttttttttttttttttttttttttttt',
maxLines: 2,
overflow: TextOverflow.ellipsis,
),
Output:
A: Just wrap your Column inside an Expanded widget:
Expanded(child:
Column(
children: <Widget>[
Container(
height: 50,
width: MediaQuery.of(context).size.width * 0.8,
child:
Text(snapshot.data.documents[index]['title']),
),
Container(
height: 50,
width: MediaQuery.of(context).size.width * 0.8,
child:
Text(snapshot.data.documents[index]['body']),
),
],
),
),
The Expanded widget is useful when you want to fill all the remaining space when a Column or Row widget has two or more child widgets, so wrapping a child inside an Expanded widget will make it fill up the remaining space accordingly.
A: return Container(child: Row(
children: <Widget>[
Container(
height: 150,
width: 100,
child: Image(
image: NetworkImage(snapshot.data.documents[index]['dpURL']), fit: BoxFit.contain,
),
),
new Expanded(
child: Column(
children: <Widget>[
Padding(
padding: const EdgeInsets.all(8.0),
child: TitleText(
text:snapshot.data.documents[index]['title'] ,
maxLine: 5,
),
),
Container(
height: 50,
width: MediaQuery.of(context).size.width * 0.8,
child: TitleText(
text:snapshot.data.documents[index]['body']),
),
],
),
),
],
),
);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/59208066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to insert an image in the linear layout? I just wanted to insert an image in a linear layout, similar to this.
Can anyone help me with this?
A: You are probably looking for ImageButton since your picture shows something similar to that.
Alternatively, take a look at ImageView
A: Set the layout's orientation to horizontal, then add the four images in XML, one after another, and give them ids like image1, image2, etc. Then look them up in your onCreate, like
ImageView image1 = (ImageView) findViewById(R.id.image1);
and so on. You can then either set the "src" attribute in your XML, or call image1.setImageBitmap() (or any other method available in the ImageView class) to set the image you want from your res/drawable/.
A: You can use ImageButton in a horizontal LinearLayout: make the background of each ImageButton @null, then put margins between them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19768076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How could I implement a trap to indicate ASP.NET session expiry I need to create a user login/logout/session-expiry tracking page (ASP.NET).
It is obvious that I can invoke my tracking page when the user logs in and logs out.
How do I detect session expiry?
A: Your most obvious way in a stateful app is to assume that any hit on a non-login page without being logged in implies that the session has expired.
A: Use the Session_End event in Global.asax. Keep in mind this event does not fire when sessions are persisted outside of the process (in SQL Server, for example).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2300665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: CSS3 animation not working in Chrome or Safari I'm trying to get a simple CSS3 animation to work. So far it works fine in Firefox, but not in Chrome or Safari.
I added @-webkit-keyframes so it should work on all browsers (maybe not in IE).
This is my css:
.myFace {
display: block;
width: 266px;
height: 266px;
background: url(http://cl.ly/image/443k292m2C24/face2x.png) no-repeat 0 0;}
.myFace:hover {
animation: threesixty 1s;
-webkit-animation: threesixty 1s; /* Safari and Chrome */
}
@keyframes threesixty {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
@-webkit-keyframes threesixty {
0% {transform: rotate(0deg);}
100% {transform: rotate(360deg);}
}
This is what I'm trying to do:
http://jsfiddle.net/dansku/JTcxH/4/
A: I believe that's because inside your keyframes you're using transform:, which will work for IE10, Firefox and Opera, but you haven't also got the webkit equivalent, specifically -webkit-transform, which is needed for Chrome and Safari.
@keyframes threesixty {
0% {
transform: rotate(0deg);
-webkit-transform: rotate(0deg);
-ms-transform:rotate(0deg);
}
100% {
transform: rotate(360deg);
-webkit-transform: rotate(360deg);
-ms-transform:rotate(360deg);
}
}
@-webkit-keyframes threesixty {
0% {
transform: rotate(0deg);
-webkit-transform: rotate(0deg);
-ms-transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
-webkit-transform: rotate(360deg);
-ms-transform: rotate(360deg);
}
}
You should also include the -ms- prefix as I've shown above, for versions of IE before 10.
A: From the MDN article on transforms:
Webkit browsers (Chrome, Safari) still need the -webkit prefix for transforms, while Firefox, as of version 16, does not.
This code should do the job for you:
@keyframes threesixty {
0% {
transform: rotate(0deg);
-webkit-transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
-webkit-transform: rotate(360deg); }
}
@-webkit-keyframes threesixty {
0% {
transform: rotate(0deg);
-webkit-transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
-webkit-transform: rotate(360deg);
}
}
For Opera and IE9+ support, use -o and -ms prefixes respectively. (IE10 does not require the prefix)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16847042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Website images overlapping (Shopify) I am using Shopify and added a section to it using my own HTML. I added the picture of the lipstick, the face, and the one under those, which is being overlapped by one of the products from the 'featured products' section that comes built into the theme. I was wondering how I would be able to fix this? Do I need to go in and fix something with Liquid?
I have solved this issue by using a table in HTML instead; I've provided my solution below for future readers.
HTML
<div class="hover">
<center>
<table style="width: 182px; height: 159px;">
<tbody>
<tr style="height: 33px;">
<td style="width: 57.125px; height: 20px;">
<figure><img align=left valign=top width=350px type="image/jpeg" src="{{ 'lipstickpink.jpg' | asset_url}}" /></figure>
<figure> <img align=left valign=bottom width=400px type="image/jpeg" src="{{ 'shadow.jpg' | asset_url}}" /> </figure>
</td>
<td style="width: 900px; ">
<figure><img align=center valign=top width=900px type="image/jpeg" src="{{ 'facehomepage.jpg' | asset_url}}" /></figure>
</td>
</tr>
</tbody>
</table>
</center>
</div>
A: Add padding to the photographs. Padding (e.g. of 10px) will add 10px of white space on all sides (unless you specify padding-top, padding-bottom, etc., which you can do if you prefer). You could also add a margin to the container. A negative margin-top will shift the apparent position of the container upwards; likewise it can be applied to the photographs as well (positive or negative).
Hope this helps
Rachel
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44331018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Response to preflight request doesn't pass access control check with Angular 2 I am trying to connect to an API using Angular 2. I am unable to connect to it, as it is giving me an OPTIONS error as well as:
Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. The response had HTTP status code 500.
I have the Google Chrome plugin as well as the headers. Here is the code
map.service.ts
import { Injectable } from '@angular/core';
import { Headers, Http } from '@angular/http';
import 'rxjs/add/operator/map';
@Injectable()
export class MapService {
private headers = new Headers();
constructor(private http: Http) { }
getMarkers() {
this.headers.append("Access-Control-Allow-Origin", '*');
this.headers.append("Access-Control-Allow-Methods", 'GET, POST, PATCH, PUT, DELETE, OPTIONS');
this.headers.append("Access-Control-Allow-Headers", 'Origin, Content-Type, X-Auth-Token');
this.headers.append("Authorization", 'Bearer ' + 'mysecretcode');
return this.http.get('https://api.yelp.com/v3/businesses/search?location=suo&limit=50', { headers: this.headers})
.map(response => response.json());
}
}
map.component.ts
import { Component } from '@angular/core';
import { OnInit } from '@angular/core';
import { Marker } from '../../models/map/marker';
import { MapService } from '../../services/map/map.service';
import {Observable} from 'rxjs/Rx';
import {Http, Response, Headers } from '@angular/http';
@Component({
moduleId: module.id,
selector: 'map-view',
templateUrl: 'map.component.html',
styleUrls: ['map.component.css'],
})
export class MapComponent {
marker: Object;
constructor(mapService: MapService) {
mapService.getMarkers()
.subscribe(
marker => this.marker = marker,
error => console.error('Error: ' + error),
() => console.log('Completed!')
);
}
I understand that this is most likely a problem with CORS. Is this something I can fix, or is it on Yelp's end?
EDIT:
Here is one of the errors:
Error: Response with status: 0 for URL: null(anonymous function) @ map.component.ts:24SafeSubscriber.__tryOrUnsub @ Subscriber.ts:238SafeSubscriber.error @ Subscriber.ts:202Subscriber._error @ Subscriber.ts:139Subscriber.error @ Subscriber.ts:109Subscriber._error @ Subscriber.ts:139Subscriber.error @ Subscriber.ts:109onError @ http.umd.js:1186ZoneDelegate.invokeTask @ zone.js:262onInvokeTask @ core.umd.js:7768ZoneDelegate.invokeTask @ zone.js:261Zone.runTask @ zone.js:151ZoneTask.invoke @ zone.js:332
A: The reason for pre-flight failing is a standard problem with CORS:
How does Access-Control-Allow-Origin header work?
It is fixed by having server to set correct Access-Control-Allow-Origin.
However, the root cause of your problem could be a misconfiguration of your server or your client, since 500 is an internal server error. So the request might not be reaching the intended server code, but some generic handler in Apache or whatever is hosting the service. Most generic error handlers don't provide CORS support by default.
Note: if you are using a REST API, 500 is not a code you should see in a successful API usage scenario. It means that the server is not capable of handling the request properly.
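To make the mechanism concrete, here is a minimal, purely illustrative sketch (not related to Yelp's actual backend, which you cannot change) of a server sending the headers that make the preflight pass, using a hypothetical Flask app:
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    # These headers must come from the *server*; setting them on the
    # client request (as in the Angular service above) has no effect.
    response.headers['Access-Control-Allow-Origin'] = 'http://localhost:3000'
    response.headers['Access-Control-Allow-Methods'] = 'GET, POST, OPTIONS'
    response.headers['Access-Control-Allow-Headers'] = 'Authorization, Content-Type'
    return response

@app.route('/businesses/search')
def search():
    return {'businesses': []}  # placeholder payload
In the Yelp case you cannot add these headers yourself, which is why such calls are usually proxied through your own backend instead of being made directly from the browser.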
Maybe this link would have relevant information:
EXCEPTION: Response with status: 0 for URL: null in angular2
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41171262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to scrape latitude and longitude from JavaScript I am fairly new to BeautifulSoup4 and am having trouble extracting latitude and longitude values out of JavaScript. The file is quite long, and I have to prepare a data frame from all the latitude/longitude pairs.
The JavaScript file has strings like this:
var marker_9795626cfd584471ab4406d756a00baf = L.marker([19.041691972000024, 72.85052482000003],{}).addTo(feature_group_ad623471194f451d9f1cf7fc718747c5);
The marker id, here, would be - 9795626cfd584471ab4406d756a00baf
The latitude would be - 19.041691972000024
And the longitude would be - 72.85052482000003
How can I extract the marker id, latitude and longitude from these strings using BeautifulSoup?
A: If all you need is to isolate those two numbers from that string, try this:
def parse(text):
return [float(i) for i in text.split('[', 1)[1].split(']', 1)[0].split(', ')]
long_lat = parse(your_string_var)
EDIT:
Oh, and to get the id, something like this should do:
def parse2(text):
return text.split('_', 1)[1].split(' ', 1)[0]
id = parse2(your_string_var)
A: This is JavaScript, so BeautifulSoup won't execute or parse it. You can use the re module to get the information.
For example:
import re
txt = '''var marker_9795626cfd584471ab4406d756a00baf = L.marker([19.041691972000024, 72.85052482000003],{}).addTo(feature_group_ad623471194f451d9f1cf7fc718747c5);'''
marker_id, lat, lon = re.search(r'marker_([a-f\d]+).*?\[(.*?), (.*?)\]', txt).groups()
print(marker_id)
print(lat)
print(lon)
Prints:
9795626cfd584471ab4406d756a00baf
19.041691972000024
72.85052482000003
EDIT: To parse the variables from the file, you can use this script:
import re
with open('<YOUR FILE>', 'r') as f_in:
for line in f_in:
m = re.search(r'marker_([a-f\d]+).*?\[(.*?), (.*?)\]', line)
if m:
marker_id, lat, lon = m.groups()
print(marker_id, lat, lon)
EDIT2: New version:
import re
with open('<YOUR FILE>', 'r') as f_in:
data = f_in.read()
for marker_id, lat, lon in re.findall(r'marker_([a-fA-F\d]+).*?\[(.*?),\s*(.*?)\]', data):
print(marker_id, lat, lon)
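Since the question also mentions preparing a data frame from all the latitude/longitude pairs, here is a follow-up sketch (not part of the original answer; it assumes pandas is available, reuses the same regex, and keeps the file name as a placeholder):
import re

import pandas as pd

with open('<YOUR FILE>', 'r') as f_in:
    data = f_in.read()

matches = re.findall(r'marker_([a-fA-F\d]+).*?\[(.*?),\s*(.*?)\]', data)
df = pd.DataFrame(matches, columns=['marker_id', 'latitude', 'longitude'])
# The captured groups are strings; convert the coordinates to floats.
df[['latitude', 'longitude']] = df[['latitude', 'longitude']].astype(float)
print(df.head())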
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62934114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Sphinx MVA sql_attr_multi results always match on the first field in a FACET query I am trying to build a filter in Sphinx so that I can filter the results on two MVA attributes from the same joined table (what I need is a double condition, but an MVA is only two fields). Whatever I do, I seem to keep getting the same results, since there is always a match on the first field in the multi attribute.
Maybe there's another solution, but I can't seem to figure out how to get my desired result. I have simplified it to 4 tables: PRODUCT, CATEGORY, PRICE and POSITION.
Data set-up
Below a sample of the data structure:
PRODUCT
+------+-------------+-----------------+---------------+
| ID | CATEGORY_ID | MANUFACTURER_ID | TITLE |
+------+-------------+-----------------+---------------+
| 1000 | 1000 | 1000 | Apple iPhone |
| 1001 | 1000 | 1000 | Apple iPad |
| 1002 | 1000 | 1000 | Apple iPod |
| 1003 | 1001 | 1001 | Do not show |
+------+-------------+-----------------+---------------+
CATEGORY
+------+-------+
| ID | TITLE |
+------+-------+
| 1000 | Apple |
| 1001 | Other |
+------+-------+
PRICE
+------+--------+---------+
| ID | USERID | PRICE |
+------+--------+---------+
| 1000 | 1000 | 359.00 |
| 1001 | 1001 | 1058.30 |
| 1002 | 1002 | 1078.00 |
| 1003 | 1003 | 1160.45 |
| 1004 | 1004 | 1180.00 |
| 1005 | 1000 | 1190.00 |
| 1006 | 1000 | 228.76 |
+------+--------+---------+
POSITION
+------+------------+--------+------+
| ID | PRODUCT_ID | USERID | RANK |
+------+------------+--------+------+
| 1000 | 1000 | 1000 | 1 |
| 1001 | 1001 | 1001 | 1 |
| 1002 | 1001 | 1002 | 2 |
| 1003 | 1001 | 1003 | 3 |
| 1004 | 1001 | 1004 | 4 |
| 1005 | 1001 | 1000 | 5 |
| 1006 | 1002 | 1000 | 1 |
+------+------------+--------+------+
Sphinx Set-up:
source product
{
type = mysql
sql_host = localhost
sql_user = ...
sql_pass = ...
sql_db = ...
sql_query_pre = SET NAMES utf8
sql_query = SELECT P.ID AS ID, C.ID AS SEARCH_CAT_ID, P.CATEGORY_ID, P.MANUFACTURER_ID, P.TITLE AS TITLE_SORT FROM CATEGORY C, PRODUCT P WHERE P.CATEGORY_ID=C.ID
sql_attr_uint = CATEGORY_ID
sql_attr_uint = MANUFACTURER_ID
sql_attr_string = TITLE_SORT
sql_attr_multi = uint POS_RANK from query; SELECT PRODUCT_ID, RANK FROM POSITION
sql_attr_multi = uint POS_USERID from query; SELECT PRODUCT_ID, USERID FROM POSITION
}
index product
{
source = product
path = ...
docinfo = extern
min_word_len = 1
}
MySQL Dump:
CREATE TABLE IF NOT EXISTS `CATEGORY` (
`ID` int(11) NOT NULL,
`TITLE` varchar(255) NOT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `CATEGORY` (`ID`, `TITLE`) VALUES
(1000, 'Apple'),
(1001, 'Other');
CREATE TABLE IF NOT EXISTS `POSITION` (
`ID` int(11) NOT NULL,
`PRODUCT_ID` int(11) NOT NULL,
`USERID` int(11) NOT NULL,
`RANK` int(11) NOT NULL,
PRIMARY KEY (`ID`),
KEY `USERID` (`USERID`),
KEY `PRODUCT_ID` (`PRODUCT_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `POSITION` (`ID`, `PRODUCT_ID`, `USERID`, `RANK`) VALUES
(1000, 1000, 1000, 1),
(1001, 1001, 1001, 1),
(1002, 1001, 1002, 2),
(1003, 1001, 1003, 3),
(1004, 1001, 1004, 4),
(1005, 1001, 1000, 5),
(1006, 1002, 1000, 1);
CREATE TABLE IF NOT EXISTS `PRICE` (
`ID` int(11) NOT NULL,
`USERID` int(11) NOT NULL,
`PRICE` decimal(9,2) NOT NULL,
PRIMARY KEY (`ID`),
KEY `USERID` (`USERID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `PRICE` (`ID`, `USERID`, `PRICE`) VALUES
(1000, 1000, '359.00'),
(1001, 1001, '1058.30'),
(1002, 1002, '1078.00'),
(1003, 1003, '1160.45'),
(1004, 1004, '1180.00'),
(1005, 1000, '1190.00'),
(1006, 1000, '228.76');
CREATE TABLE IF NOT EXISTS `PRODUCT` (
`ID` int(11) NOT NULL,
`CATEGORY_ID` int(11) NOT NULL,
`MANUFACTURER_ID` int(11) NOT NULL,
`TITLE` varchar(255) NOT NULL,
PRIMARY KEY (`ID`),
KEY `CATEGORY_ID` (`CATEGORY_ID`),
KEY `MANUFACTURER_ID` (`MANUFACTURER_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `PRODUCT` (`ID`, `CATEGORY_ID`, `MANUFACTURER_ID`, `TITLE`) VALUES
(1000, 1000, 1000, 'Apple iPhone'),
(1001, 1000, 1000, 'Apple iPad'),
(1002, 1000, 1000, 'Apple iPod'),
(1003, 1001, 1001, 'Do not show');
ALTER TABLE `POSITION`
ADD CONSTRAINT `POSITION_ibfk_1` FOREIGN KEY (`PRODUCT_ID`) REFERENCES `POSITION` (`ID`) ON DELETE CASCADE;
ALTER TABLE `PRODUCT`
ADD CONSTRAINT `PRODUCT_ibfk_1` FOREIGN KEY (`CATEGORY_ID`) REFERENCES `CATEGORY` (`ID`) ON DELETE CASCADE;
Test 1
SphinxQL Query for filter USERID:
SELECT ID FROM product WHERE MATCH('1000') AND POS_USERID IN (1000) ORDER BY WEIGHT() DESC LIMIT 0,20 FACET POS_RANK LIMIT 5;
Result for USERID (first set seems OK this time, second returns everything):
+------+
| id |
+------+
| 1000 |
| 1001 |
| 1002 |
+------+
+----------+----------+
| pos_rank | count(*) |
+----------+----------+
| 5 | 1 |
| 4 | 1 |
| 3 | 1 |
| 2 | 1 |
| 1 | 3 |
+----------+----------+
What I expected:
* Product ID (1000, 1001, 1002)
* Position Rank (1 with count 3, 5 with count 1)
Test 2
SphinxQL Query for filter USERID 1000 and RANK 5:
SELECT ID FROM product WHERE MATCH('1000') AND POS_USERID IN (1000) AND POS_RANK IN (5) ORDER BY WEIGHT() DESC LIMIT 0,20 FACET POS_RANK LIMIT 5;
Result for USERID and RANK (first set seems OK this time, second returns everything):
+------+
| id |
+------+
| 1001 |
+------+
+----------+----------+
| pos_rank | count(*) |
+----------+----------+
| 5 | 1 |
| 4 | 1 |
| 3 | 1 |
| 2 | 1 |
| 1 | 1 |
+----------+----------+
What I expected:
* Product ID (1001)
* Position Rank (5 with count 1)
Test 3
SphinxQL Query for filter USERID 1000 and RANK 4:
SELECT ID FROM product WHERE MATCH('1000') AND POS_USERID IN (1000) AND POS_RANK IN (4) ORDER BY WEIGHT() DESC LIMIT 0,20 FACET POS_RANK LIMIT 5;
Result for USERID and RANK (first still returns the product, second returns everything):
+------+
| id |
+------+
| 1001 |
+------+
+----------+----------+
| pos_rank | count(*) |
+----------+----------+
| 5 | 1 |
| 4 | 1 |
| 3 | 1 |
| 2 | 1 |
| 1 | 1 |
+----------+----------+
What I expected:
* Product ID (empty set)
* Position Rank (empty set)
I hope you guys understand what I am trying to achieve and can help me out?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51609980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Mixing objects and rows, carrying multiple objects. How to go from row datasets to datasets of objects (a Dataset where + means "join")? The following question is rather long. It depicts this:
* I have a row dataset made of primitive types a1, a2...a10, b1, b2...b8, c1..c4, d1.
* They are hiding objects A, B, C, sometimes accompanied by other primitive attributes not in a class: d1, for example.
* I could return, instead of a Dataset<Row>, a Dataset<E> where E would be a class having as members the A, B, C objects and the attribute d1.
* But I want to avoid this if I can, for now, and see how closely I can approach a solution where I would return a dataset of joined objects [and attributes]: Dataset<A+B+C+d1>
(where the + sign means that the objects are linked together by a join).
If it's possible, is it really manageable?
My code uses mostly Dataset<Row>.
For example, I have a method that builds a (French) city description:
/**
* Get a Dataset of the municipalities (communes).
* @param session Spark session.
* @param anneeCOG Year of the reference Code Officiel Géographique (COG).
* @param verifications Requested verifications.
* @return Dataset of municipalities.
* @throws TechniqueException if an incident occurs.
*/
public Dataset<Row> rowCommunes(SparkSession session, int anneeCOG, Verification... verifications) throws TechniqueException {
String nomStore = "communes_par_codeCommune";
Dataset<Row> communes = loadFromStore(session, "{0}_{1,number,#0}", nomStore, anneeCOG, verifications);
if (communes != null) {
return communes;
}
LOGGER.info("Constitution du dataset des communes depuis pour le Code Officiel Géographique (COG) de l'année {}...", anneeCOG);
Dataset<Row> c = loadAndRenameCommmunesCSV(session, anneeCOG, false, verifications);
Dataset<Row> s = this.datasetSirenCommunaux.rowSirenCommunes(session, anneeCOG, TriSirenCommunaux.CODE_COMMUNE);
Column condition1 = c.col("codeCommune").equalTo(s.col("codeCommune"));
Column condition2 = c.col("codeCommuneParente").equalTo(s.col("codeCommune"));
verifications("jonction communes et siren par codeCommune", c, null, s, condition1, verifications, SHOW_REJETS, COMPTAGES_ET_STATISTIQUES);
Dataset<Row> join1 = c.join(s, condition1)
.drop(s.col("codeCommune"))
.drop(s.col("nomCommune"))
.drop(s.col("codeRegion"))
.drop(s.col("codeDepartement"));
verifications("jonction communes et siren par codeCommune, join1", c, null, null, null, verifications);
verifications("jonction communes et siren par codeCommuneParente", c, null, s, condition2, verifications, SHOW_REJETS, COMPTAGES_ET_STATISTIQUES);
Dataset<Row> join2 = c.join(s, condition2)
.drop(s.col("codeCommune"))
.drop(s.col("nomCommune"))
.drop(s.col("codeRegion"))
.drop(s.col("codeDepartement"));
verifications("jonction communes et siren par codeCommuneParente, join2", c, null, null, null, verifications);
communes = join1.union(join2);
// The municipal stratum must match the one used in the individual municipal accounts.
communes = communes.withColumn("strateCommune",
when(s.col("populationTotale").between(0, 249), lit(1)) // municipalities with fewer than 250 inhabitants
.when(s.col("populationTotale").between(250, 499), lit(2)) // municipalities with 250 to 500 inhabitants
.when(s.col("populationTotale").between(500, 1999), lit(3)) // municipalities with 500 to 2,000 inhabitants
.when(s.col("populationTotale").between(2000, 3499), lit(4)) // municipalities with 2,000 to 3,500 inhabitants
.when(s.col("populationTotale").between(3500, 4999), lit(5)) // municipalities with 3,500 to 5,000 inhabitants
.when(s.col("populationTotale").between(5000, 9999), lit(6)) // municipalities with 5,000 to 10,000 inhabitants
.when(s.col("populationTotale").between(10000, 19999), lit(7)) // municipalities with 10,000 to 20,000 inhabitants
.when(s.col("populationTotale").between(20000, 49999), lit(8)) // municipalities with 20,000 to 50,000 inhabitants
.when(s.col("populationTotale").between(50000, 99999), lit(9)) // municipalities with 50,000 to 100,000 inhabitants
.otherwise(lit(10))); // municipalities with more than 100,000 inhabitants
// Get the boundaries (contours) of the municipalities.
// "(SQL query) contours" is the substitution form for Spark. cf https://stackoverflow.com/questions/38376307/create-spark-dataframe-from-sql-query
String format = "(select insee as codecommuneosm, nom as nomcommuneosm, surf_ha as surface2, st_x(st_centroid(geom)) as longitude, st_y(st_centroid(geom)) as latitude from communes_{0,number,#0}) contours";
String sql = MessageFormat.format(format, anneeCOG);
Dataset<Row> contours = sql(session, sql).load();
contours = contours.withColumn("surface", col("surface2").cast(DoubleType)).drop(col("surface2"))
.orderBy("codecommuneosm");
Column conditionJoinContours = col("codeCommune").equalTo(col("codecommuneosm"));
verifications("jonction communes et contours communaux OSM (centroïde, surface)", communes, null, contours, conditionJoinContours, verifications, SHOW_REJETS, COMPTAGES_ET_STATISTIQUES);
communes = communes.join(contours, conditionJoinContours, "left_outer")
.drop(col("codecommuneosm")).drop(col("nomcommuneosm"));
verifications("jonction communes et contours communaux OSM (centroïde, surface)", communes, null, null, null, verifications);
// Associate each municipality with its intercommunality code, if it has one (commune-communities may not have one).
Dataset<Row> perimetres = this.datasetPerimetres.rowPerimetres(session, anneeCOG, EPCIPerimetreDataset.TriPerimetresEPCI.CODE_COMMUNE_MEMBRE).selectExpr("sirenCommuneMembre", "sirenGroupement as codeEPCI", "nomGroupement as nomEPCI");
Column conditionJoinPerimetres = communes.col("sirenCommune").equalTo(perimetres.col("sirenCommuneMembre"));
verifications("jonction communes et périmètres", communes, null, perimetres, conditionJoinPerimetres, verifications, SHOW_REJETS, COMPTAGES_ET_STATISTIQUES);
communes = communes.join(perimetres, conditionJoinPerimetres, "left");
// Associate the departments with them.
communes = this.datasetDepartements.withDepartement(session, "codeDepartementRetabli", communes, "codeDepartement", null, true, anneeCOG)
.drop("codeRegionDepartement")
.drop("codeDepartementRetabli");
communes = communes.repartition(col("codeDepartement"))
.sortWithinPartitions(col("codeCommune"))
.persist(); // Important: improves performance.
saveToStore(communes, new String[] {"codeDepartement"}, "{0}_{1,number,#0}", nomStore, anneeCOG);
LOGGER.info("Le dataset des communes du Code Officiel Géographique de l'année {} est prêt et stocké.", anneeCOG);
return communes;
}
Sometimes it's useful to convert these rows to a Commune object, because business objects, at least on the server side, can have methods that give them some kind of intelligence (limited to looking at themselves or at the objects of their package).
For example, the Commune object has this method to help detect whether it has the same name as another one when an article can be found in its name.
/**
* Determine whether our municipality has the same name as the one given as a parameter.
* @param nomCandidat Municipality name: it may contain an article (charnière).
* @return true if that is the case.
*/
public boolean hasMemeNom(String nomCandidat) {
// If the submitted name is null, answer no.
if (nomCandidat == null) {
return false;
}
// First do a direct comparison of the municipality names, because using the collator is very expensive.
if (nomCandidat.equalsIgnoreCase(this.nomCommune)) {
return true;
}
// Then, search with the various articles (charnières).
if (nomCandidat.equalsIgnoreCase(nomAvecType(false, PrefixageNomCommune.AUCUN))) {
return true;
}
if (nomCandidat.equalsIgnoreCase(nomAvecType(false, PrefixageNomCommune.A))) {
return true;
}
if (nomCandidat.equalsIgnoreCase(nomAvecType(false, PrefixageNomCommune.POUR))) {
return true;
}
// On failure, repeat these tests, but with the collator, which is slower but will ignore accented characters.
if (collator.equals(nomCandidat, this.nomCommune)) {
return true;
}
if (collator.equals(nomCandidat, nomAvecType(false, PrefixageNomCommune.AUCUN))) {
return true;
}
if (collator.equals(nomCandidat, nomAvecType(false, PrefixageNomCommune.A))) {
return true;
}
if (collator.equals(nomCandidat, nomAvecType(false, PrefixageNomCommune.POUR))) {
return true;
}
return false;
}
To do that conversion from a Dataset<Row> to a Dataset<Commune>, I wrote this function, which works but bothers me because it looks clumsy:
/**
* Get a Dataset of municipalities (as Commune objects, including the municipal SIREN data).
* @param session Spark session.
* @param anneeCOG Year of the reference Code Officiel Géographique (COG).
* @return Dataset of municipalities.
* @throws TechniqueException if an incident occurs.
*/
public Dataset<Commune> obtenirCommunes(SparkSession session, int anneeCOG) throws TechniqueException {
Dataset<Row> communes = rowCommunes(session, anneeCOG);
return communes
.select(communes.col("typeCommune"), communes.col("codeCommune"), communes.col("codeRegion"), communes.col("codeDepartement"),
communes.col("arrondissement"), communes.col("typeNomEtCharniere"), communes.col("nomMajuscules"), communes.col("nomCommune"),
communes.col("codeCanton"), communes.col("codeCommuneParente"), communes.col("sirenCommune"), communes.col("populationTotale").alias("population"),
communes.col("strateCommune"), communes.col("codeEPCI"), communes.col("surface"), communes.col("longitude"), communes.col("latitude"),
communes.col("nomDepartement"), communes.col("nomEPCI"))
.as(Encoders.bean(Commune.class))
.cache();
}
However, my main problem is:
The Commune object (or its rows) is almost never used (or returned) alone in most datasets after that.
Often a Commune will come with some employment data associated with it, or with financial information linked to it. Or sometimes a calculation method can associate only one or two raw (primitive) data values to a city, if they make sense.
So, until today, what happened?
* I was starting with a Dataset<Row> of, let's say, 20 columns.
* If I was doing some joins, calculations, enrichments, etc., that dataset sometimes reached 40 or 50 columns.
  * Let's say that a record was: a1, a2...a10, b1, b2...b8, c1..c4, d1, the row columns having primitive types.
* Then I was extracting from that record, by means of the Encoders.bean(...) method, the various business objects it was hiding, depending on my needs.
  * From a record, I could extract A from a1, a2...a10 or C from c1...c4.
It worked (and it's still working), but I'm not really proud of this way of doing things.
I would much prefer to start with a plain Dataset<Commune>, completely hiding the row phase from the users of my API, and then be able to return to them, depending on their needs:
* a dataset containing Commune and Employment business objects together, or Commune and Accounting;
* a dataset containing Commune, but also a few raw column values "d1", "h1" and "k1" (if some calculation was needed to provide a specific piece of information for that city, for some exceptional purpose/case that doesn't justify changing the whole business object description, but only, at this time, returning an extra column value alongside that city description).
It means that I will encounter cases where I would like to return a dataset showing, per "record":
A, B (concrete objects coming together)
or
A, C
or even sometimes A, B, C.
Or what I fear the most:
A, C, d1 (two concrete objects, plus... a primitive value).
Can you give me some ideas about how to treat such problems before I make big moves?
Warning: this problem isn't a simple one. It might not even have a clear solution.
* Starting from row records made of primitive types a1, a2...a10, b1, b2...b8, c1..c4, d1, I am able to extract A, B, C objects from them, if I accept doing up to n additional transformations for the n different object types I can find in my attributes.
* But I wonder how closely I can approach the situation where, given that the + sign means "join", I had a dataset:
Dataset<A+B+C>
and even: Dataset<A+B+C+d1>, where d1 would still be a primitive type not in an object.
* If this could be achieved, it could cause trouble.
What is a Dataset<A+B+C+d1>.foreach?
An A, B or C object?
How would you manage it after that?
For now I'm living in a magma of attributes, and I wonder if I can improve my datasets into new ones using objects: objects that are tied together to describe a single record.
This question could have the solution of creating an E object having as members the A, B or C objects and the d1 primitive attribute, and returning a Dataset<E>.
But that is what I want to avoid the most, at this time, and I first want to see what else I can do.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71283305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to find and remove optgroup and option in select? I have this select structure:
<select id="select-service" class="required span4" multiple="">
<optgroup label="Capelli">
<option selected="false" value="14">Colore capelli</option>
</optgroup>
<optgroup label="Nessuna categoria">
<option selected="false" value="13">Taglio capelli</option>
<option selected="false" value="15">Mesh</option>
</optgroup>
</select>
Now my goal is to remove all the options in the select, so I've written this code:
$('#select-service')
.find('option')
.remove()
.end();
but the problem is that this code removes only the options, not the optgroups. How can I remove both options and optgroups in one step?
A: You could simply pass two comma-separated selectors to the find() method:
$('#select-service').find('optgroup, option').remove();
You could also just remove all children elements for the same result:
$('#select-service').children().remove();
// or:
$('#select-service > *').remove();
The most concise approach would be to just empty the element using the .empty() method:
$('#select-service').empty();
A: The simplest way:
$('#select-service optgroup').remove()
Since optgroup is the parent of option, removing it will remove the options as well.
A: Like this:
$('#select-service')
.find('optgroup')
.remove();
This will not only remove your optgroup, but all contents within it, meaning your option will also be removed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33695586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Include assets in Twig extension with Symfony2 I want to include an asset with a script file in a Twig extension. E.g. I want to declare a Twig function {{ init_project() }}, but when I write '<script type="text/javascript" src="{{ asset('bundles/mybundle/js/script.js') }}"></script>' in my Twig function, it doesn't work; it returns a 404 error in the browser's debug panel. So how can I do this?
A: Look at the 'asset' Twig function:
You can find it in \Symfony\Bundle\TwigBundle\Extensions\AssetsExtension
public function getAssetUrl($path, $packageName = null, $absolute = false, $version = null)
{
$url = $this->container->get('templating.helper.assets')->getUrl($path, $packageName, $version);
if (!$absolute) {
return $url;
}
return $this->ensureUrlIsAbsolute($url);
}
Instead of calling '{{asset}}' in your extension, just call the public function getAssetUrl()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28737444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Selecting the second child of an ancestor with XPath I need to fetch some text from an HTML page. I'm trying to avoid relying on tag names or classes, because they will change.
Starting from an element that contains the text "Hello", I'm looking for the text stored in the grandparent's second element.
<...>
<...>
<...>
Hello
</...>
</..>
<...> <!-- UNCLE -->
<...>
World <!-- I need this! -->
</...>
<...>
</...>
I tried fetching the element UNCLE using XPath, with: //*[text()=="Hello")]/../..[2], but it doesn't work. It seems that [] cannot be applied to ..?
How can I fetch the second child of a node's grandparent?
Are there better ways to retrieve the text I'm looking for, instead of a similar XPath query and document.evaluate?
A: Try this:
//*[*[1][*[normalize-space()='Hello']]]/*[2]/*
It will select the element that contains "World".
It's testing the value of the descendant in the first child and then selecting the descendant of the second child.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64486687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Correct way for a thread-safe indexing operator in C++ I want to have a thread-safe indexing operator, and I came up with the following code, which seems to work.
Can you see any problems with it, apart from the missing bounds checking?
Is there a better way to do the same thing (by overloading the indexing operator, not with get/set functions)?
class A
{
public:
A()
{
for (auto i = 0; i < 100; ++i)
{
v[i] = i + 1;
}
};
class Proxy
{
int* val;
A* parent;
public:
Proxy(int& a, A* p) : parent(p)
{
parent->z.lock();
val = &a;
};
~Proxy()
{
parent->z.unlock();
}
int operator=(int a)
{
*val = a;
return a;
};
operator int() const
{
return *val;
};
};
int operator[](int i) const
{
z.lock();
int r = v[i];
z.unlock();
return r;
}
Proxy operator[](int i)
{
return Proxy(v[i], this);
}
int v[100];
Z z; // some locking mechanism, not important
};
A: Since the locking mechanism isn't specified, I'd assume it uses a normal mutex, in which case the obvious problem is this:
A a;
a[0] = a[1];
Put differently, it is very easy to dead-lock the program. This problem is avoided with recursive mutexes.
The other obvious problem is that the code depends on copy-elision which is not guaranteed to always happen. If the copy is not elided upon return the temporary will release the lock and the copy will release it again which normally is undefined behavior. In addition, access to the member is unguarded and will potentially introduce data races. To avoid this problem you should define a move constructor and make sure that the moved from state doesn't try to release the mutex, e.g.:
A::Proxy::Proxy(Proxy&& other)
: val(other.val)
, parent(other.parent) {
other.parent = 0;
}
A::Proxy::~Proxy() {
if (parent){
parent->unlock();
}
}
The not so obvious issue is that it is unlikely to result in an efficient implementation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28751373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Differentiate between Table Views I have three table views inside a view controller (I'm going to show/hide the table views to display a list of options in different contexts).
Just wondering what the best way is to distinguish between different table views that are using the same delegate.
Thanks
A: Use three separate instance variables in your view controller to store the table views. Then in the delegate methods you can do something like this:
if (tableView == myFirstTableView) {
// Do whatever you need for table view 1.
} else if (tableView == mySecondTableView) {
// Do whatever you need for table view 2.
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11268086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Use VisualBasic DLLs in Java I have to access VBA code of Office applications from my Java application. I found THIS, which says I can access VBA code through VisualBasic DLLs using JNI. I don't want to use a COM bridge if not necessary; I'd rather go with a DLL solution.
I created a VisualBasic Class Library in Visual Studio 2013 (a simple example to test if it works):
Public Class Test1
Public Function box()
MsgBox("boxtest!")
End Function
End Class
I built it as a release and put this in my Java project:
public class Test1 {
static{
System.loadLibrary("Test1");
}
public native void box();
}
The function is being called by new Test1().box();.
I receive the following exception: Exception in thread "main" java.lang.UnsatisfiedLinkError: test.Test1.box()V
I also used JNA to access the DLL but after hours of trying I couldn't get it to work (I also read that it can't be used with VisualBasic DLLs).
I set the Native Library Folder of my src folder to the folder containing the DLL.
Question: Can I use VisualBasic DLLs in Java? If yes, with JNA or JNI (or both)? And if so, what did I do wrong, and how can I access the function properly? (I guess the rest, with return values and parameters, is easy then...)
Thank you very much in advance and merry christmas to you all! :)
A: No idea why it cannot find your library. From what I remember of JNI, it does not appear that you've done the JNI setup for calling a native routine, but the error message just says it cannot find it. You might try figuring out if the library load statement worked.
A DLL is a library following certain rules and conventions; I am not aware of any great difference between a "Visual Basic DLL" and any other kind. At some level they need to be the same, because Windows programs don't distinguish among DLLs written with different languages, afaik, and I've done VB enough to know that I haven't seen documentation that says "this can be used from VB but not from other languages" etc.
Getting JNI/JNA stuff to work is tricky and tedious. The normal stuff that a language runtime tells you, especially a Java runtime, are not there for you in this case. You must painstakingly go through every line of whatever documentation you have, every parameter you are passing, every use of value versus reference, etc.
I once got things to work with the GitHub library here.
Good luck.
A: I have not found an answer on how to call VB DLLs directly from Java, but after some days of research I found out that you can call VB DLLs with the help of a C++ wrapper.
It may be possible to call VB DLL methods with JNI, but there is no documentation on how to do it.
You can find a lot about how to create C++ libraries that are able to communicate with JNI in the JNI specification from Oracle.
In this special case (controlling Office applications with Java) I suggest writing the code that accesses the Office application in C++ and creating a DLL.
The basic approach to creating a C++ DLL that can interact with JNI is:
1. Think of names for the methods you want to create in C++ and a DLL name, [NAME].dll.
2. Create a Java class for the DLL, loading the library:
static{
System.loadLibrary([NAME].dll);
}
The Native Library path has to be set (in Eclipse, right-click on the folder containing your class and click Build-Path).
3. Include the method names: public native void [methodname]();.
4. Compile the .java file using javac.exe (or let, for example, Eclipse do the work).
5. Create a C++ header file using javah.exe with the -jni parameter.
6. Create a new project in Visual Studio (Visual C++ MFC DLL).
7. Copy the created header file (from your Java project), jni.h (JDK) and jni_md.h (JDK).
8. Include all three header files in your Visual C++ project header file [Project-Name].h.
9. Include the created header file and jni.h in [Project-name].cpp.
10. Write the desired code in your [Project-name].cpp.
11. Build the DLL and put it inside your defined path for Native Libraries (see step 2).
12. Run and be happy!
Sorry for any mistakes!
An example with a Visual Basic DLL and JNI can be found HERE, and elsewhere; google "classle" and "JNI" (can't post 2 links).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27637475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Customize List's cells in DashCode I have my site created with Dashcode and I am using the List object but I don't like the default blue background when a cell is selected.
How can I customize this? For example change it to grey or white, etc.
(As far as i know, everything is customizable in Dashcode, is just sometimes you have to do it using code and not Dashcode UI.)
Thanks in advance.
A: Answering myself:
looking at main.css I found something like:
.listRowTemplate_template.selected {
background-color: rgb(56, 0, 217);
}
Which is the color I want to change ;)
A: Which would have been my answer, shall I vote you up? ;-)
It is easy to forget that in Dashcode it is "just" JavaScript, CSS and HTML, and so many problems will often succumb to the type of solution you have used.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3758557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Enable JPA 2nd level cache in Spring Data JPA repository I am trying to enable the JPA second-level cache for Spring Data JPA. I enabled the cache in persistence.xml:
<persistence-unit name="ds-default">
<jta-data-source>SAMPLE_DATASOURCE</jta-data-source>
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
<properties>
<property name="hibernate.cache.use_second_level_cache" value="true" />
<property name="hibernate.cache.use_query_cache" value="true" />
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.cache.region.factory_class" value="org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory" />
</properties>
</persistence-unit>
In my repository I provided query hints for the cache as well:
public interface SampleRepository extends JpaRepository<SampleEntity, String> {
@Override
@QueryHints(value = {
@QueryHint(name = "org.hibernate.cacheable", value = "true")
})
List<SampleEntity> findAll();
}
I expect that the result of the findAll query should be cached and that I should see only one select query in the logs. However, I see it more than once, which leads me to the conclusion that the cache is not configured correctly. What am I missing?
I do not want to use the Spring Cache mechanism (I do not use Spring Boot etc.).
A: I can give you some steps to check and follow, and a sample project on GitHub:
1. Check persistence.xml:
<persistence-unit name="testPersistenceUnit" transaction-type="JTA">
...
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
<properties>
...
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.cache.region_prefix" value="hibernate.test"/><!-- Optional -->
<!-- Use Infinispan second level cache provider -->
<property name="hibernate.cache.region.factory_class" value="org.infinispan.hibernate.cache.v53.InfinispanRegionFactory"/>
<!-- Optional: Initialize a caches (for standalone mode / test purpouse) -->
<property name="hibernate.cache.infinispan.cfg"
value="META-INF/infinispan-configs-local.xml"/>
...
</properties>
</persistence>
PS: the "org.infinispan.hibernate.cache.v53.InfinispanRegionFactory" package depends on your Maven dependencies and Hibernate version.
2. If you want to use JCache interceptors, add to beans.xml:
<interceptors>
<class>org.infinispan.jcache.annotation.InjectedCacheResultInterceptor</class>
<class>org.infinispan.jcache.annotation.InjectedCachePutInterceptor</class>
<class>org.infinispan.jcache.annotation.InjectedCacheRemoveEntryInterceptor</class>
<class>org.infinispan.jcache.annotation.InjectedCacheRemoveAllInterceptor</class>
</interceptors>
PS: keep in mind that Infinispan has a variant of the interceptors without "Injected" in the name, for environments where the cache manager is not injected in a managed environment.
3. Check that your cache is configured (in your application server) or in your CDI producer, e.g.:
@ApplicationScoped
public class CacheConfigProducer {
@Produces
public Configuration defaultCacheConfiguration() {
return new ConfigurationBuilder().simpleCache(false).customInterceptors().addInterceptor()
.interceptor(new TestInfinispanCacheInterceptor()).position(Position.FIRST).expiration().lifespan(60000l)
.build();
}
}
4. Add your query hints to your repository, as you are already doing:
@Eager public interface CacheableReadWriteRepository extends ReadWriteRepository<CacheableEntity, Integer>{
@QueryHints(value = { @QueryHint(name = org.hibernate.jpa.QueryHints.HINT_CACHEABLE, value = "true")})
List<CacheableEntity> findByValue(Integer value);
}
5. Take care to add @Cacheable to your entity.
You have this project with Spring Data JPA and CDI integration (spring-data-jpa with cdi). It has tests with JCache, the Hibernate second-level cache and the query cache.
I hope it helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57788432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Generic types in array with Google HTTP Client Library I have a base HTTP response which looks like:
{
"data" : [],
"total" : 0,
"hasMore" : false
}
The data array can contain any object: Users, FeedItems, etc.
So I want to create a base class, something like:
public class BaseDataReponse<T> {
@Key
public List<T> data;
@Key
public Integer total;
@Key
public Boolean hasMore;
}
I can do this with the Retrofit library for Android.
But I can't understand how to do it with the Google HTTP Client Library.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37523237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: I'm not sure what functions to use when sorting a dictionary I'm not sure what functions to use to sort a dictionary which gets added to as the program runs. The format of the dictionary is (name: score, name: score, ...).
print(" AZ : print out the scores of the selected class alphabteically \n HL : print out the scores of the selected class highest to lowest \n AV : print out the scores of the selected class with there average scores highest to lowest")
choice = input("How would you like the data to be presented? (AZ/HL/AV)")
while True:
if choice.lower() == 'az':
for entry in sorted(diction1.items(), key=lambda t:t[0]):
print(diction1)
break
elif choice.lower()=='hl':
for entry in sorted(diction1.items(), key=lambda t:t[1]):
print(diction1)
break
elif choice.lower() == 'av':
print(diction1)
break
else:
print("invalid entry")
break
A: A dictionary is unordered.
You can sort the data for output.
>>> data = {'b': 2, 'a': 3, 'c': 1}
>>> for key, value in sorted(data.items(), key=lambda x: x[0]):
... print('{}: {}'.format(key, value))
...
a: 3
b: 2
c: 1
>>> for key, value in sorted(data.items(), key=lambda x: x[1]):
... print('{}: {}'.format(key, value))
...
c: 1
b: 2
a: 3
Using an OrderedDict is not an option here, because you don't want to maintain insertion order, but want to sort with different criteria (see the sketch below for a descending sort).
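One detail the question needs that the example above does not show (this is an addition, reusing the asker's diction1 name with example data): the 'HL' option wants scores from highest to lowest, which is the same sort with reverse=True.
diction1 = {'Alice': 7, 'Bob': 3, 'Carol': 9}  # example data

# AZ: alphabetical by name
for name, score in sorted(diction1.items(), key=lambda item: item[0]):
    print('{}: {}'.format(name, score))

# HL: highest score to lowest
for name, score in sorted(diction1.items(), key=lambda item: item[1], reverse=True):
    print('{}: {}'.format(name, score))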
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27776299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-4"
} |
Q: C Programming: fgets() function is not reading the values that I think it should I am trying to open and read a .txt file that simply contains a 2 digit number from 00 to 99.
Currently the file contains the following numbers: 05
When I read the first two values with fgets(), I instead get 48 and 53, and I've noticed whatever number I put in there, the number that fgets() grabs is 48 higher. While I could just subtract 48 from the number like I am doing now, I would like to figure out what I am doing wrong so that I don't have to do that. Here is the relevant section of my code:
char buff[3];
FILE *fp;
fp = fopen("savedata/save.txt","r");
fgets(buff, 3, fp);
SampleLevelIndex = (buff[0]-48)*10 + buff[1]-48;
fclose(fp);
A: Your file contains the characters '0' and '5'. Note that the ASCII code for '0' is 48.
You are reading the values of the bytes, not the number represented. If you had an 'A' in the file, you would have the byte 65.
Your approach works for manually converting numeric characters into a number (although you might want some extra checking; it will break if the file doesn't contain two digits). Or you could use a function like atoi(), which converts a numeric string into a number.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64235107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Connect to gitlab production postgresql database with psycopg2? I am trying to connect to GitLab production (installed with omnibus package) postgresql database with psycopg2.
My configuration is like below:
conn = psycopg2.connect(database="gitlabhq_production", user="gitlab-psql", host="/var/opt/gitlab/postgresql", port="5432")
It gives the following error:
FATAL: Peer authentication failed for user "gitlab-psql"
I can connect to the postgresql server on command line with:
sudo -u gitlab-psql -i bash /opt/gitlab/embedded/bin/psql --port 5432 -h /var/opt/gitlab/postgresql -d gitlabhq_production
Does anyone know what will be the correct parameters to pass into?
A: Peer authentication works by checking the user the process is running as. In your command line example you switch to gitlab-psql using sudo.
There are two ways to fix this:
1. Assign a password to the gitlab-psql postgres user (not the system user!) and use that to connect via Python (see the sketch after this list). Setting the password is just another query you need to run as a superuser, like so:
sudo -u postgres psql -c "ALTER USER gitlab-psql WITH PASSWORD 'ReplaceThisWithYourLongAndSecurePassword';"
2. Run your Python script as gitlab-psql, like so:
sudo -u gitlab-psql python /path/to/your/script.py
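For option 1, a minimal sketch of what the psycopg2 call could then look like (illustration only: it assumes pg_hba.conf allows password authentication for this connection, and the password is the placeholder from the ALTER USER command above):
import psycopg2

# Password authentication instead of peer authentication (option 1 above).
conn = psycopg2.connect(
    database="gitlabhq_production",
    user="gitlab-psql",
    password="ReplaceThisWithYourLongAndSecurePassword",
    host="/var/opt/gitlab/postgresql",
    port="5432",
)
cur = conn.cursor()
cur.execute("SELECT count(*) FROM users;")
print(cur.fetchone())
conn.close()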
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37957085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: GCC regular expressions How do I use regular expressions in GNU G++ / GCC for matching, searching and replacing substrings? E.g. could you provide any tutorial on regex_t and others?
Googling for over an hour gave me no understandable tutorial or manual.
A: I strongly suggest using the Boost C++ regex library. If you are developing serious C++, Boost is definitely something you must take into account.
The library supports both Perl and POSIX regular expression syntax. I personally prefer Perl regular expressions since I believe they are more intuitive and easier to get right.
http://www.boost.org/doc/libs/1_46_0/libs/regex/doc/html/boost_regex/syntax.html
But if you don't have any knowledge of this fine library, I suggest you start here:
http://www.boost.org/doc/libs/1_46_0/libs/regex/doc/html/index.html
A: I found the answer here:
#include <regex.h>
#include <stdio.h>
int main()
{
int r;
regex_t reg;
if (r = regcomp(&reg, "\\b[A-Z]\\w*\\b", REG_NOSUB | REG_EXTENDED))
{
char errbuf[1024];
regerror(r, &reg, errbuf, sizeof(errbuf));
printf("error: %s\n", errbuf);
return 1;
}
char* argv[] = { "Moo", "foo", "OlOlo", "ZaooZA~!" };
for (int i = 0; i < sizeof(argv) / sizeof(char*); i++)
{
if (regexec(&reg, argv[i], 0, NULL, 0) == REG_NOMATCH)
continue;
printf("matched: %s\n", argv[i]);
}
return 0;
}
The code above will provide us with
matched: Moo
matched: OlOlo
matched: ZaooZA~!
A: Manuals should be easy enough to find: POSIX regular expression functions. If you don't understand that, I would really recommend trying to brush up on your C and C++ skills.
Note that actually replacing a substring once you have a match is a completely different problem, one that the regex functions won't help you with.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5179451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Hibernate - automatic and generic way to check if a defined unique key constraint exists? Imagine a City to Postalcode relation mapping. (For simplicity using no foreign-keys)
<class name="CityToPostalcode" table="city_to_postalcode" catalog="database">
<id name="id" type="java.lang.Integer">
<column name="id" />
<generator class="identity" />
</id>
<property name="city" type="String">
<column name="city" not-null="true"/>
</property>
<property name="postalcode" type="Integer">
<column name="postalcode" not-null="true"/>
</property>
<properties name="businessKey" unique="true">
<property name="city"/>
<property name="postalcode"/>
</properties>
</class>
Is there a function in the framework to check if the unique key "businessKey" for a given combination is unique (also for single-column unique constraints)?
Maybe in combination of mapping "businessKey" to a class? (Similar to usage of composite-id)
It is just so much redundance to write the code for each table to check its business-key, if it definetly could be done automatic.
A: There is natural-id, which automatically creates a unique constraint on schema creation and can be used to query for it more efficiently. In XML:
<natural-id>
<property name="name"/>
<property name="org"/>
</natural-id>
Natural ids use the second-level cache more efficiently, and from Hibernate 4.1 on, loading by natural id uses the first-level cache and can save roundtrips. Other than that, natural ids are just like normal properties. You could, however, write a generic natural-id checker (a sketch follows the steps below):
*
*create a new object with the properties set
*access through the sessionfactory the classmetadata of the object
*iterate the natural-id properties, read the corresponding values from your object, and set them on the query via using(propertyName, propertyValue)
*if something is found, copy the data; otherwise just save the object you created
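A rough sketch of that checker (method names are from Hibernate 4.x -- ClassMetadata, byNaturalId(), NaturalIdLoadAccess -- so treat it as an outline rather than a drop-in; imports come from org.hibernate and org.hibernate.metadata):
@SuppressWarnings("unchecked")
public <T> T findOrSave(Session session, T candidate) {
    ClassMetadata meta = session.getSessionFactory()
            .getClassMetadata(candidate.getClass());
    String[] names = meta.getPropertyNames();

    NaturalIdLoadAccess loader = session.byNaturalId(candidate.getClass());
    for (int idx : meta.getNaturalIdentifierProperties()) {
        // use each natural-id property value from the candidate object
        loader = loader.using(names[idx], meta.getPropertyValue(candidate, names[idx]));
    }

    Object existing = loader.load();
    if (existing != null) {
        return (T) existing;        // found: copy / reuse the data as needed
    }
    session.save(candidate);        // nothing found: persist the new object
    return candidate;
}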
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11217746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Read TIFF ICC profile using Twelvemonkeys ImageIO I need to extract the embedded ICC profile from TIFF files. I can read the IIOMetadata and my IDE shows the ifd field containing the ICC profile (tag ID 34675). But how can I read it into an ICC_Profile object?
ImageInputStream input = ImageIO.createImageInputStream(file);
try {
ImageReader reader = ImageIO.getImageReaders(input).next();
if (reader == null) {
throw new IllegalArgumentException("No image reader for file: " + file);
}
try {
reader.setInput(input);
IIOMetadata metadata = reader.getImageMetadata(0);
// metadata contains a field "ifd" containing the ICC profile
// How to extract it?
} finally {
reader.dispose();
}
} finally {
input.close();
}
A: You can use the function getProfile() of the ICCProfile class.
Usage:
int profileId = ...;
ICCProfile iccp = new ICCProfile(profileId, input);
ICC_Profile icc_p = iccp.getProfile();
In accordance with the code at Google result #1 for twelvemonkeys icc_profile.
A: Found a solution. For this, the Twelvemonkeys package imageio-metadata is needed in version 3.4; older versions do not contain the TIFFEntry class.
/**
* Extract ICC profile from an image file.
*
* @param file image file
* @return ICC profile
* @throws IOException on file errors
*/
protected ICC_Profile extractICCProfile(File file) throws IOException {
ICC_Profile profile;
try (ImageInputStream input = ImageIO.createImageInputStream(file)) {
ImageReader reader = ImageIO.getImageReaders(input).next();
if (reader == null) {
throw new IllegalArgumentException("No image reader for file: " + file);
}
try {
reader.setInput(input);
TIFFImageMetadata metadata = (TIFFImageMetadata) reader.getImageMetadata(0);
TIFFEntry entry = (TIFFEntry) metadata.getTIFFField(TIFF.TAG_ICC_PROFILE);
byte[] iccBytes = (byte[]) entry.getValue();
profile = ICC_Profile.getInstance(iccBytes);
} finally {
reader.dispose();
}
}
return profile;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54200755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to use a 3D stl viewer on Ruby on Rails I found this javascript plugin that allows to visualize STL files in 3D:
https://www.viewstl.com/plugin/
The example works very well, the problem is that I can not find how to put that into a rails template. I took all the javascript files to my assets/javascript, then I added the respective '// = require' in aplication.js, I took the small script with the div:
<div id="stl_cont" style="width:500px;height:500px;margin:0 auto;"></div>
<script>
var stl_viewer=new StlViewer(
document.getElementById("stl_cont"),
{ models: [ { filename:"viewstl_plugin.stl" } ] }
);
</script>
and put them in my template, but it does not work. Watching the console of my browser I found the following error: ReferenceError: importScripts is not defined. I saw that it has to do with web workers and that importScripts only works inside them, but this problem does not appear in the test.html, so I guess I'm doing something wrong when putting them into Rails that blocks or prevents importScripts from working correctly.
I apologize for my lack of fluency in English.
Help :(
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56709849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Server Side Hooks on Bitbucket I'm new to creating git hooks. I've successfully created a local git hook but I am having a hard time figuring out how to install a server side hook on Bitbucket.
I've tried using a plugin called External Hooks and making a External Pre Receive Hook, but that results in my push to Bitbucket being rejected with:
remote: Hook external-pre-receive-hook blocked the push
! [remote rejected] master -> master (pre-receive hook declined).
I've tried putting the hook in the .git folder on the server. But there's not a .git folder that I can find. I did find ApplicationData/Bitbucket/bin/git-hooks. I tried putting a pre-receive hook file in there but that was not successful. It did not prevent a push to the repo but the file also did not execute.
The hook/file that I'm using is as simple as can be, so I don't think that's the problem. It has this text:
#!/bin/sh
#
echo 'hi there soldier'
A: I found out where to add a pre-receive or post-receive hook on a repository basis by adding a file to the Bitbucket server. In the Atlassian folder, it is in ApplicationData\Bitbucket\shared\data\repositories\[repository#]\hooks\.
Bitbucket keeps track of repos internally using numbers and not names so in the above replace [repository#] with the repo number. That can be found out this way.
Put the pre-receive hook in the pre-receive.d folder. Put the post-receive hook in the post-receive.d folder.
The names of the hooks/files should begin with a number. That determines what order the hooks are 'activated'. Begin the numbers with at least 21 because the default hook in the folder begins with 20. You want your hook to be activated after the one shipped with Bitbucket server. So a file name for a pre-receive hook could be 21_pre_receive.
Don't change the default hooks that are in the folder because they are needed to help Bitbucket work.
More information can be found here.
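For reference, a minimal sketch of what a 21_pre_receive file could contain (git passes one "<old-sha> <new-sha> <refname>" line per updated ref on stdin; a non-zero exit rejects the push):
#!/bin/sh
while read oldrev newrev refname; do
    echo "received push to $refname"
    # exit 1 here to reject the push
done
exit 0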
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45907945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How can I get the maximum amount of the total amounts for different products in a month in Postgresql? I've just begun using Postgresql recently. I have a table named 'sales'.
create table sales
(
cust varchar(20),
prod varchar(20),
day integer,
month integer,
year integer,
state char(2),
quant integer
)
insert into sales values ('Bloom', 'Pepsi', 2, 12, 2001, 'NY', 4232);
insert into sales values ('Knuth', 'Bread', 23, 5, 2005, 'PA', 4167);
insert into sales values ('Emily', 'Pepsi', 22, 1, 2006, 'CT', 4404);
insert into sales values ('Emily', 'Fruits', 11, 1, 2000, 'NJ', 4369);
insert into sales values ('Helen', 'Milk', 7, 11, 2006, 'CT', 210);
...
There are 500 rows, 10 distinct products and 5 distinct customers in total.
It looks like this:
Now I need to find the most “popular” and least “popular” products (those products with the most and least total sales quantities) and the corresponding total sales quantities (i.e., SUMs) for each of the 12 months (regardless of the year).
The result should be like this:
Now I can only write query like this:
select month,
prod,
sum(quant)
from sales
group by month,prod
order by month,prod;
And it gives me the result like this:
Now I need to pick up the maximum value for each month. For example, the biggest value in the first 10 sums of month 1, and so on...
I also need to get the minimum value of the sums (regardless of the year). And combine them horizontally... I have no idea about this...
A: Note: for a TLDR, skip to the end.
Your problem is a very interesting textbook case as it involves multiple facets of Postgres.
I often find it very helpful to decompose the problem into multiple subproblems before joining them together for the final result set.
In your case, I see two subproblems: finding the most popular product for each month, and finding the least popular product for each month.
Let's start with the most popular products:
WITH months AS (
SELECT generate_series AS month
FROM generate_series(1, 12)
)
SELECT DISTINCT ON (month)
month,
prod,
SUM(quant)
FROM months
LEFT JOIN sales USING (month)
GROUP BY month, prod
ORDER BY month, sum DESC;
Explanations:
*
*WITH is a common table
expression,
which acts as a temporary table (for the duration of the query) and
helps clarify the query. If you find it confusing, you could also opt
for a subquery.
*generate_series(1, 12) is a Postgres function which generate a series of integers, in this case from 1 to 12.
*the LEFT JOIN allows us to associate each sale to the corresponding month. If no sale can be found for a given month, a row is returned with the month and the joined columns with NULL values. More information on joins can be found here. In your case, using LEFT JOIN is important, as using INNER JOIN would exclude products that have never been sold (which in that case should be the least popular product).
*GROUP BY is used to sum over the quantities.
*at this stage, you should -potentially- have multiple products for any given month. We only want to keep those with the most quantities for each month. DISTINCT ON is especially useful for that purpose. Given a column, it allows us to keep the first iteration of each value. It is therefore important to ORDER the sales by sum first, as only the first one will be selected. We want the bigger numbers first, so DESC (for descending order) should be used.
We can now repeat the process for the least popular products:
WITH months AS (
SELECT generate_series AS month
FROM generate_series(1, 12)
)
SELECT DISTINCT ON (month)
month,
prod,
SUM(quant)
FROM months
LEFT JOIN sales USING (month)
GROUP BY month, prod
ORDER BY month, sum;
Conclusion (and TLDR):
Now we need to merge the two queries into one final query.
WITH months AS (
SELECT generate_series AS month
FROM generate_series(1, 12)
), agg_sales AS (
SELECT
month,
prod,
SUM(quant)
FROM months
LEFT JOIN sales USING (month)
GROUP BY month, prod
), most_popular AS (
SELECT DISTINCT ON (month)
month,
prod,
sum
FROM agg_sales
ORDER BY month, sum DESC
), least_popular AS (
SELECT DISTINCT ON (month)
month,
prod,
sum
FROM agg_sales
ORDER BY month, sum
)
SELECT
most_popular.month,
most_popular.prod AS most_popular_prod,
most_popular.sum AS most_pop_total_q,
least_popular.prod AS least_popular_prod,
least_popular.sum AS least_pop_total_q
FROM most_popular
JOIN least_popular USING (month);
Note that I used an intermediate agg_sales CTE to try and make the query a bit clearer and avoid repeating the same operation twice, although it shouldn't be a problem for Postgres' optimizer.
I hope you find my answer satisfactory. Do not hesitate to comment otherwise!
EDIT: although this solution should work as is, I would suggest storing your dates as a single column of type TIMESTAMPTZ. It is often much easier to manipulate dates using that type and it is always good practice in case you need to analyze and audit your database further down the line.
You can get the month of any date by simply using EXTRACT(MONTH FROM date).
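For example, assuming a single sale_date TIMESTAMPTZ column replaced the day/month/year columns, the aggregation could start like this (a sketch):
SELECT EXTRACT(MONTH FROM sale_date) AS month,
       prod,
       SUM(quant)
FROM sales
GROUP BY EXTRACT(MONTH FROM sale_date), prod;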
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58291694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Using Breeze With Web Api And Angular js I want to select an item from a drop-down list and then see the selected customer's data in inputs,
then I want to edit the inputs and then save them to the database with Breeze JS and Web API.
I have Web Api Controller like This :
[BreezeController]
public class ZzaController : ApiController
{
readonly EFContextProvider<ZzaDbContext> _contextProvider =
new EFContextProvider<ZzaDbContext>();
// ~/breeze/Zza/Metadata
[HttpGet]
public string Metadata()
{
return _contextProvider.Metadata();
}
// ~/breeze/Zza/Customers
[HttpGet]
public IQueryable<Customer> Customers()
{
var customers = _contextProvider.Context.Customers;
return customers;
}
// ~/breeze/Zza/SaveChanges
[HttpPost]
public SaveResult SaveChanges(JObject saveBundle)
{
return _contextProvider.SaveChanges(saveBundle);
}
}
Angular Service Like This :
var services = function (http) {
breeze.config.initializeAdapterInstance("modelLibrary", "backingStore");
this.getBybreeze = function (successed) {
var dataService = new breeze.DataService({
serviceName: 'breeze/Zza',
hasServerMetadata: false
});
var manager = new breeze.EntityManager(
{
dataService: dataService
});
var entityQuery = breeze.EntityQuery;
return entityQuery.from('Customers').using(manager).execute().then(successed).catch();
}
this.saveByBreeze =function () {
var dataService = new breeze.DataService({
serviceName: 'breeze/Zza',
hasServerMetadata: false
});
var manager = new breeze.EntityManager(
{
dataService: dataService
});
manager.saveChanges().fail(function (error) { alert("Failed save to server: " + error.message); });
}
}
services.$inject = ["$http"];
app.service("TestService", services);
And Angular Controller Like This:
var controller = function (scope, testService, ngTableParams, filter, upload, notification) {
var self = this;
self.title = "Test";
self.customers = [];
self.selected = "";
self.selectedFirstName="";
self.selectedLastName="";
testService.getBybreeze(function (data) {
self.customers = data.results;
});
self.selectedCustomer = function () {
angular.forEach(self.customers, function (item) {
if (item.Id === self.selected) {
self.selectedFirstName = item.FirstName;
self.selectedLastName = item.LastName;
}
});
}
self.save = function () {
testService.saveByBreeze();
}
}
controller.$inject = ["$scope", "TestService", "NgTableParams", "$filter", "Upload", "Notification"];
app.controller("TestController", controller)
View :
<div class="col-md-12" style="margin-top:20px">
<div class="col-md-2 " style="margin-top: 7px">
<label class="">
Customers:
</label>
</div>
<div class="col-md-10">
<div class="col-md-3">
<select class=" form-control" ng-change="self.selectedCustomer()" name="Id" ng-model="self.selected" ng-options="item.Id as item.FullName for item in self.customers"></select>
</div>
</div>
<div class="col-md-10 form-group">
<hr style="border-color: #000080" />
<fieldset data-bind="with: currentCustomer">
<legend>Customer:</legend>
<label for="customerName">Name:</label>
<br/>
<input class="form-control" id="customerName" value="{{self.selectedFirstName}}" />
<label for="customerPhone">Tell:</label>
<br/>
<input id="customerPhone" class="form-control" value="{{self.selectedLastName}}" />
<br />
<button class=" btn btn-default" id="saveButton" ng-click="self.save()">Save</button>
</fieldset>
</div>
</div>
I think everything is OK,
but when I save it, nothing is saved.
A: Your code creates a new empty EntityManager with each query and save operation. Instead, you should create a single EntityManager in your TestService, and use it for all query and save operations.
var services = function (http) {
breeze.config.initializeAdapterInstance("modelLibrary", "backingStore");
var dataService = new breeze.DataService({
serviceName: 'breeze/Zza',
hasServerMetadata: false
});
var manager = new breeze.EntityManager(
{
dataService: dataService
});
this.getBybreeze = function (successed) {
var entityQuery = breeze.EntityQuery;
return entityQuery.from('Customers').using(manager).execute().then(successed);
}
this.saveByBreeze = function () {
manager.saveChanges().catch(function (error) { alert("Failed save to server: " + error.message); });
}
}
services.$inject = ["$http"];
app.service("TestService", services);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33125996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Regex replace total string I have an XML file with items that contain this string:
<field name="itemid">xx</field>
Where xx = a number from 50 to 250.
I need to remove the entire string from the whole file.
How would I do this with a Regex replace?
A: you can use this:
str = str.replace(/<.*>/g,'');
See an example for match here
var str = "<field name='itemid'>xx</field>";
str = str.replace(/<.*>/g, 'replaced');
console.log(str)
Explanation:
*
*< matches the character < literally
*.* matches any character (except newline)
*
*Quantifier: * Between zero and unlimited times, as many times as possible, giving back as needed [greedy]
*> matches the character > literally
*g modifier: global. All matches (don't return on first match)
If you want to be more restrictive you can do this:
str = str.replace(/<field name\=\"\w*\">\d*<\/field>/g, '');
See an example for match here
var str = '<field name="test">200</field>';
str = str.replace(/<field name\=\"\w*\">\d*<\/field>/g, 'replaced');
console.log(str)
Explanation:
*
*<field name matches the characters
*\= matches the character = literally
*\" matches the character " literally
*\w* match any word character [a-zA-Z0-9_]
*
*Quantifier: * Between zero and unlimited times, as many times as possible, giving back as needed [greedy]
*\" matches the character " literally
*> matches the character > literally
*\d* match a digit [0-9]
- Quantifier: * Between zero and unlimited times, as many times as possible, giving back as needed [greedy]
*< matches the character < literally
*\/ matches the character / literally
*field> matches the characters field> literally (case sensitive)
*g modifier: global. All matches (don't return on first match)
A: When you replace a tag, especially an XML tag, you must be sure that you capture everything from the opening to the closing tag. In this case the RegExp should use a backreference.
var re = /<(\S+)[^>]*>.*<\/\1>/g;
var m ='some text <ab id="aa">aa <p>qq</p> aa</ab>test test <p>paragraph </p>'.replace(re,'');
console.log(m);//some text test test
\1 matches (\S+)
\S+ one or more non-white-space characters
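And a sketch targeting just the tag from the question (a numeric value inside a field named itemid):
var xml = 'before <field name="itemid">125</field> after';
xml = xml.replace(/<field name="itemid">\d+<\/field>/g, '');
console.log(xml); // "before  after"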
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38058122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Soft delete in sails.js I'm trying to implement soft delete on a model in a sails.js project by overriding the delete action in the respective controller to just update a boolean attribute called isDeleted.
The problem I'm facing is that I need to override the find action for the respective controller so that it'll ignore the "deleted" records, but keep the rest of the original functionality. To do this I'm simply copying the code of the original find action into the override, but it depends on the actionUtil module, and when doing the require for that module, no matter how I change the path it never manages to find the module.
So, this is what my controller looks like:
var actionUtil = require('/sails/lib/hooks/blueprints/actionUtil'),
_ = require('lodash');
module.exports = {
find: function(req, res){
// Look up the model
var Model = actionUtil.parseModel(req);
if ( actionUtil.parsePk(req) ) {
return require('./findOne')(req,res);
}
// Lookup for records that match the specified criteria
var query = Model.find()
.where( actionUtil.parseCriteria(req) )
.limit( actionUtil.parseLimit(req) )
.skip( actionUtil.parseSkip(req) )
.sort( actionUtil.parseSort(req) );
// TODO: .populateEach(req.options);
query = actionUtil.populateEach(query, req);
query.exec(function found(err, matchingRecords) {
if (err) return res.serverError(err);
// Only `.watch()` for new instances of the model if
// `autoWatch` is enabled.
if (req._sails.hooks.pubsub && req.isSocket) {
Model.subscribe(req, matchingRecords);
if (req.options.autoWatch)
{ Model.watch(req); }
// Also subscribe to instances of all associated models
_.each(matchingRecords, function (record) {
actionUtil.subscribeDeep(req, record);
});
}
res.ok(matchingRecords);
});
},//End .find
destroy: function(req,res){
console.log("Parametro: " + req.param('id') );
Vendedor.update({ id_vendedor: req.param('id') }, { isDeleted: true })
.exec(function (err, goal)
{
if (err)
return res.status(400).json(err);
return res.json(goal[0]);
});
}
};
But, when starting the server I get an error, saying that the module actionUtil couldn't be found. Does anyone know how I could do this?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41969216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Joining Tables to Find the Playlists without any track of the genres “Latin”, “Rock” or “Pop” I have a basic problem creating a query to find the playlists without any track of the genres “Latin”, “Rock” or “Pop”. I am really struggling with joins; some exercises I can do well, but this one I can't figure out.
I tried This:
SELECT p.PlaylistId as Playlist, p.Name
FROM Playlist p INNER JOIN PlaylistTrack pt ON p.PlaylistId = pt.PlaylistId
INNER JOIN Track t ON pt.TrackId = t.TrackId
JOIN Genre g ON g.GenreId = t.GenreId
WHERE g.Name <> "Latin " AND "Rock" AND "Pop"
My output returns only 1 value, AC/DC, and null values when I use LEFT JOIN.
This is my DDL:
CREATE TABLE Genre (GenreId int PRIMARY KEY, `Name` CHAR(255) );
CREATE TABLE MediaType ( MediaTypeId int PRIMARY KEY, `Name` CHAR(30) );
CREATE TABLE Artist(ArtistId INT PRIMARY KEY, `Name` CHAR(255) ); -- LONG CHAR????
CREATE TABLE Album(AlbumId INT PRIMARY KEY, Title CHAR(120), ArtistId INT );
CREATE TABLE Track( TrackId int PRIMARY KEY, `Name` CHAR(255), AlbumId INT,
MediaTypeId INT, GenreId INT, Composer CHAR(220), Milliseconds int, Bytes INT, UnitPrice decimal(8,2)); -- UnitPrice number
CREATE TABLE Playlist(PlaylistId int PRIMARY KEY, `Name` CHAR(30));
If you are more interested and want to test this database that I built, check this link; it is a little dirty and needs to be improved, but commenting out some rows will make it work.
As I said: my output returns only 1 value, AC/DC, and null values when I use LEFT JOIN.
This is the expected value:
PlaylistId | Name
2          | Movies
5          | TV Shows
6          | Audiobooks
7          | Audiobooks
8          | Movies
9          | Music Videos
10         | TV Shows
11         | Classical
12         | Classical 101 - Deep Cuts
13         | Classical 101 - Next Step
14         | Classical 101 - The Basics
15         | On-The-Go 1
I thought that this question was really easy. I don't understand why I can't figure it out. Can someone help me?
A: SELECT *
FROM Playlist
WHERE NOT EXISTS ( SELECT NULL
FROM PlaylistTrack
INNER JOIN Track USING (TrackId)
INNER JOIN Genre USING (GenreId)
WHERE Playlist.PlaylistId = PlaylistTrack.PlaylistId
AND Genre.Name IN ('Latin', 'Rock', 'Pop') )
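The reason a plain WHERE g.Name <> ... can't work is that it filters individual track rows, while the question asks for a condition on the whole playlist ("has no track in these genres"); NOT EXISTS expresses exactly that. An equivalent sketch using an outer join and HAVING:
SELECT p.PlaylistId, p.Name
FROM Playlist p
LEFT JOIN PlaylistTrack pt ON pt.PlaylistId = p.PlaylistId
LEFT JOIN Track t ON t.TrackId = pt.TrackId
LEFT JOIN Genre g ON g.GenreId = t.GenreId
                 AND g.Name IN ('Latin', 'Rock', 'Pop')
GROUP BY p.PlaylistId, p.Name
HAVING COUNT(g.GenreId) = 0;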
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69099958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to compress (resize?) >60k images on mac? There are >60k images of 10KB-1MB in size and I need them all to weigh <80KB; how do I go about this? I can't open so many images with Preview. I guess there is a solution for this in the terminal.
A: First install ImageMagick and GNU Parallel with homebrew:
brew install imagemagick
brew install parallel
Then go to the directory where the images are and create an output directory:
cd where/the/images/are
mkdir output
Now run ImageMagick from GNU Parallel
find . -name "*.jpg" -print0 | parallel -0 magick {} -define jpeg:extent=80kb output/{}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56725146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ASP.net Recover JavaScript Created Attribute from PostBack I've created a control that derives from a TextBox. That class adds a call to JavaScript code that, when the content of the text changes, adds an attribute called "post" with value "true" to the control. I wish to collect that value.
So far i have this:
ASP
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="default.aspx.cs" Inherits="CWEB.Web.UI.USR.DefPage" %>
<%@ Register TagPrefix="Controls" Namespace="CWEB.Web.Controls" Assembly="Web" %>
<html>
<head>
<script type="text/javascript">
function SetPost(element){if(!element.hasAttribute('post')){z=document.createAttribute('post');z.value='true';element.setAttributeNode(z);}else{element.setAttribute('post','true');}}
</script>
</head>
<body>
<Controls:TextBox runat="server" ID="abc" />
</body>
</html>
CodeBehind (and TextBox derivated class)
using SysWeb = global::System.Web.UI;
using CtrWeb = global::CWEB.Web.Controls;
namespace CWEB.Web
{
internal static class Binder
{
internal const string PostableKey = "post";
internal const string AuxFieldKey = "_ck";
internal static bool GetCheck(string ck) { return (ck != null && ck != "" && (ck == global::CWEB.Data.Fields.Boolean.ValueTrue || ck.Contains("t") || ck.Contains("v"))); }
private static bool GetCheckInput(CtrWeb.Controls.Generic.FieldControl Control, ref bool Found)
{
if (Control == null || Control.Page == null) { Found = false; }
else
{
string value = Control.Page.Request.Form[Control.ClientID + CtrWeb.Binder.AuxFieldKey]; //If then it's child is null, it means it's unchecked.
Found = (value != null && value != "");
return ((Found) ? (value == "1" || value == "on" || value.Contains("t")) : false);
}
return false;
}
internal static bool GetCheck(CtrWeb.Controls.Generic.FieldControl Control, bool OutSideInput, string ViewStateKey = CtrWeb.Binder.CheckAttribute)
{
string value = Control.Page.Request.Form[Control.UniqueID]; //If the main control exists then it's child shal too.
if (value == null || value == "") { return CtrWeb.Binder.GetCheck((string)Control.GetViewState()[ViewStateKey]); }
else if (OutSideInput)
{
bool Found = false;
return CtrWeb.Binder.GetCheckInput(Control, ref Found);
} else { return CtrWeb.Binder.GetCheck((string)Control.GetAttributes()[ViewStateKey]); }
}
internal static void SetCheck(CtrWeb.Controls.Generic.FieldControl Control, bool OutSideInput, string ViewStateKey = CtrWeb.Binder.CheckAttribute)
{
bool FValue = CtrWeb.Binder.GetCheck(Control, OutSideInput, ViewStateKey: ViewStateKey);
Control.GetViewState()[ViewStateKey] = ((FValue) ? "true" : "false");
}
}
namespace Controls
{
public interface FieldControl
{
string ClientID { get; }
string UniqueID { get; }
SysWeb.Page Page { get; }
SysWeb.AttributeCollection GetAttributes();
SysWeb.StateBag GetViewState();
}
public class TextBox : SysWeb.WebControls.TextBox, CtrWeb.FieldControl
{
public bool Postable
{
get { return CtrWeb.Binder.GetCheck(this, false, ViewStateKey: CtrWeb.Binder.PostableKey); }
set { CtrWeb.Binder.SetCheck(this, false, ViewStateKey: CtrWeb.Binder.PostableKey); }
}
protected override void LoadViewState(object savedState)
{
this.BaseLoadViewState(savedState);
this.Postable = CtrWeb.Binder.GetCheck(this, false, ViewStateKey: CtrWeb.Binder.PostableKey);
}
protected override void OnInit(global::System.EventArgs e)
{
if (!this.Page.IsPostBack)
{
this.Attributes.Add("onchange", "SetPost(this)");
this.Attributes.Add(MdWeb.Binder.PostableKey, "false");
}
base.OnInit(e);
}
}
}
namespace UI.USR
{
public class DefPage : SysWeb.Page { protected CtrWeb.TextBox abc; }
}
}
Some code was not copied here because it has nothing to do with the issue.
A: In order for that data to be posted to the server, you have to store it in a form value.
HTML elements in their entirety aren't posted to the server. Only the key/value pairs of form elements are. (WebForms is trying to trick you into thinking that the whole page is posted to the server, but it's a lie.)
Add a hidden form field to the page, something as simple as this:
<asp:Hidden id="someHiddenField" />
Then in the JavaScript, set the value of that field:
document.getElementById('<%= someHiddenField.ClientID %>').value = 'true';
Then when the page posts back to the server, the 'true' value will be in that hidden field:
someHiddenField.Value
A: asp:Hidden doesn't exist. Only form elements can actually post back data. The server side will take the postback data and load it into the form element, provided that runat=server is set.
in markup or html:
<input type="hidden" runat="server" ID="txtHiddenDestControl" />
javascript:
document.getElementById('<%= txtHiddenDestControl.ClientID %>').value = '1';
code behind:
string postedVal = txtHiddenDestControl.Value.ToString();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31386107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Failing to upload a local aar file to artifactory using gradle I have a simple problem which I have not been able to solve. I have downloaded an external .aar file that I want to upload to our in-house Artifactory, but I haven't been able to solve the problem at all. Here is my build.gradle file:
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'digital.wup:android-maven-publish:3.6.2'
}
}
apply plugin: 'maven'
apply plugin: 'maven-publish'
apply plugin: 'digital.wup.android-maven-publish'
group 'com.hello'
def datadogVersion = '3.1'
def groupName = 'com.hello'
def artifactName = 'world'
def intermediateDirName = 'outputs/aar'
def aarFileLocation = "${buildDir}/${intermediateDirName}/${artifactName}-${datadogVersion}.aar"
configurations {
aarLocal
}
dependencies {
aarLocal files('world-3.1.aar')
}
task copyFromLocal(type: Copy) {
from configurations.aarLocal
into "$buildDir/$intermediateDirName"
}
publishToMavenLocal.dependsOn copyFromLocal
publish.dependsOn copyFromLocal
publishing {
publications {
publishAarToArtifactory(MavenPublication) {
groupId groupName
artifactId artifactName
version datadogVersion
artifact "${buildDir}/${intermediateDirName}/${artifactName}-${version}.aar"
}
}
}
publishToMavenLocal works without any problems, but the publish task gets skipped.
Following is the error message I get: Skipping task ':publish' as it has no actions. Any help?
Basically, uploading a local aar to Artifactory is all that I am trying to achieve.
NOTE : The repo credentials are in our CI/CD system for artifactory.
A: Both publishToMavenLocal and publish are aggregate tasks without actions. They are used to trigger a bunch of publishPubNamePublicationToMavenLocal tasks and publishPubNamePublicationToRepoNameRepository tasks respectively.
$ ./gradlew publishToMavenLocal --info
...
> Task :publishMavenPublicationToMavenLocal
...
> Task :publishToMavenLocal
Skipping task ':publishToMavenLocal' as it has no actions.
:publishToMavenLocal (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
$ ./gradlew publish --info
...
> Task :publishMavenPublicationToMavenRepository
...
> Task :publish
Skipping task ':publish' as it has no actions.
:publish (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
Reference: https://docs.gradle.org/current/userguide/publishing_maven.html#publishing_maven:tasks
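One thing worth checking: publish only aggregates the publish...ToRepository tasks that exist, and those are only generated when the publishing block declares a repository. A sketch of such a block (the URL and the environment variable names are placeholders -- point them at your Artifactory repository and at wherever your CI/CD system injects the credentials):
publishing {
    repositories {
        maven {
            url "https://artifactory.example.com/artifactory/libs-release-local"
            credentials {
                username System.getenv("ARTIFACTORY_USER")
                password System.getenv("ARTIFACTORY_PASSWORD")
            }
        }
    }
}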
| {
"language": "en",
"url": "https://stackoverflow.com/questions/67275446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Best way of writing subroutines in 6502 Assembler? I'm new to assemblers, so here is a simple question:
My custom subroutines change the X, Y, and A registers. They manipulate these to produce the desired results. Is it a good idea to push these values to the stack when the routine starts and restore them before RTS?
I mean, this way I can write routines which can be called from anywhere without messing up the "state" or affecting other routines. But is it OK to use the stack this way? Or is there a better way to do this?
A:
But is it OK to use the stack this way? Or is there a better way to do this?
Absolutely; BASIC does it all the time, as do many routines in the kernal.
But, there is no right answer to this, it comes down to at least speed, portability, and style.
*
*If you use the stack a lot, there are some speed considerations. Your typical pha txa pha tya pha at the start, and then the reverse (pla tay pla tax pla), eats up 3 bytes of your stack and adds some cycle time due to the 2 x 5 operations (a skeleton of this pattern follows at the end of this list)
*You could use zero page, but that takes away some portability between different machines; VIC-20, C64, C128, the free zero page addresses may not be the same across platforms. And your routine can't be called "more than once" without exiting first (e.g. no recursion) because if it is called while it is active, it will overwrite zero page with new values. But, you don't need to use zero page...
*...because you can just create your own memory locations as part of your code:
myroutine = *
; do some stuff..
rts
mymem =*
.byt 0, 0, 0
*the downside to this is that your routine can only be called "once", otherwise subsequent calls will overwrite your storage areas (e.g. no recursion allowed!!, same problem as before!)
*You could write your own mini-stack
put_registers =*
sei ; turn off interrupts so we make this atomic
sty temp
ldy index
sta a_reg,y
stx x_reg,y
lda temp
sta y_reg,y
inc index
cli
rts
get_registers =*
sei ; turn off interrupts so we make this atomic
dec index
ldy index
lda y_reg,y
sta temp
lda a_reg,y
ldx x_reg,y
ldy temp
cli
rts
a_reg .buf 256
x_reg .buf 256
y_reg .buf 256
index .byt 0
temp .byt 0
*This has the added benefit that you now have 3 virtual stacks (one for each of .A, .X, .Y), but at a cost (not exactly a quick routine). And because we are using SEI and CLI, you may need to re-think this if doing it from an interrupt handler. But this also keeps the "true" stack clean and more than triples your available space.
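For completeness, a skeleton of the plain hardware-stack approach from the first point above (save on entry, restore in reverse order before the RTS):
myroutine =*
        pha             ; save A
        txa
        pha             ; save X
        tya
        pha             ; save Y

        ; ... body can freely clobber A, X and Y ...

        pla
        tay             ; restore Y
        pla
        tax             ; restore X
        pla             ; restore A
        rts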
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69515736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Running multiple containers without compose? Hey all, I have three containers that I am currently running in compose:
version: '2'
services:
web:
restart: always
build:
context: ./
dockerfile: deploy-1w/web.dockerfile
ports:
- "9000:9000"
links:
- redis:redis
volumes:
- /usr/src/app
- /usr/src/app/source/static
env_file:
- .env
environment:
- 'DEBUG=true'
- 'DOCKER_COMPOSE_MODE=true'
- 'APP_ENV=local'
nginx:
restart: always
build:
context: ./
dockerfile: deploy-1w/nginx.dockerfile
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
env_file:
- .env
links:
- web:web
environment:
- "BACKEND_ADDR=web:9000"
- 'DOCKER_COMPOSE_MODE=true'
- 'APP_ENV=local'
redis:
restart: always
build:
context: ./
dockerfile: deploy-1w/cache.dockerfile
ports:
- "6379:6379"
volumes:
- ./redisdata:/data
I am trying to run these without using compose (more for debugging purposes and understanding what compose is actually DOING than anything else), but due to the links etc. I am not able to get them running (nginx is using proxy_pass and looking for an upstream of web).
Here's the Dockerfile for web
FROM python:2-onbuild
WORKDIR /usr/src/app/
RUN cp -rf /usr/src/app/src/* /usr/src/app/
# build dependencies
RUN apt-get update && apt-get install -y git-core rubygems ruby-dev gettext nano \
&& wget nodejs.org/dist/v6.10.0/node-v6.10.0-linux-x64.tar.gz \
&& tar -C /usr/local --strip-components 1 -xzf node-v6.10.0-linux-x64.tar.gz
RUN pip install -r requirements.txt
RUN apt-get install nodejs-legacy -y \
&& gem install sass compass \
&& npm install -g grunt-cli bower
# Bad command?
#RUN npm install -g
RUN npm install [email protected] -g --save-dev \
&& npm install \
&& bower install --allow-root
RUN grunt buildcss \
&& grunt buildjs --force
# Add the local settings file
COPY ./deploy-1w/config/local_settings.py /usr/src/app/source/source/local_settings.py
# copy the bootstrap script
COPY ./deploy-1w/web-bootstrap.sh /usr/local/bin/web-bootstrap.sh
RUN chmod 775 /usr/local/bin/web-bootstrap.sh
WORKDIR /usr/src/app/source
# run bootstrap script
CMD ["/usr/local/bin/web-bootstrap.sh"]
Nginx
FROM tutum/nginx
# backend web address
ENV BACKEND_ADDR 0.0.0.0:0
# build dependencies
RUN apt-get update && apt-get install -y gettext
# copy config template
COPY ./deploy-1w/config/nginx.conf /etc/nginx/nginx.conf.template
# copy static file to use as a health check
COPY ./deploy-1w/config/nginx-health-check /var/www/html/public/health-check
# copy the bootstrap script
COPY ./deploy-1w/nginx-bootstrap.sh /usr/local/bin/nginx-bootstrap.sh
RUN chmod 775 /usr/local/bin/nginx-bootstrap.sh
# run bootstrap script
CMD ["/usr/local/bin/nginx-bootstrap.sh"]
and Redis
FROM debian:jessie
ENV REDIS_MAJOR_MINOR_VERSION 4.0
ENV REDIS_VERSION 4.0.10
ENV REDIS_TARBALL_SHA1 d2738d9b93a3220eecc83e89a7c28593b58e4909
ENV BACKEND_ADDR 0.0.0.0:0
RUN apt-get -q update && \
DEBIAN_FRONTEND=noninteractive apt-get -qy --no-install-recommends install \
build-essential \
curl && \
curl -O http://download.redis.io/releases/redis-$REDIS_VERSION.tar.gz && \
[ $(shasum redis-$REDIS_VERSION.tar.gz | awk '{ print $1 }') = $REDIS_TARBALL_SHA1 ] && \
tar zxf redis-$REDIS_VERSION.tar.gz && \
cd redis-$REDIS_VERSION && \
make -j$(nproc) && \
cd src && \
cp \
redis-benchmark \
redis-check-aof \
redis-check-rdb \
redis-cli \
redis-sentinel \
redis-server \
/usr/local/bin && \
cd ../.. && \
rm -rf redis-$REDIS_VERSION redis-$REDIS_VERSION.tar.gz /tmp/* /var/tmp/* && \
apt-get -qy purge build-essential curl && \
apt-get -qy clean autoclean autoremove && \
rm -rf /var/lib/{apt,dpkg,cache,log}/
COPY ./deploy-1w/config/redis.conf /etc/redis.conf
VOLUME /var/lib/redis
CMD ["/usr/local/bin/redis-server", "/etc/redis.conf"]
I am literally trying to do what I'm doing in compose... without compose...
What I'm doing is this, and it's failing:
docker build -t redis-test -f deploy-1w/cache.dockerfile .
docker build -t nginx-test -f deploy-1w/nginx.dockerfile .
docker build -t web-test -f deploy-1w/web.dockerfile .
docker network create test-network
docker run -d -it -p --network test-network '6379:6379' redis-test
docker run -d -it -p --network test-network '9000:9000' --link $(docker ps -a | grep redis-test |awk '{print$1}') web-test
docker run -d -it -p --network test-network '80:80' -e 'BACKEND_ADDR=web:9000' --link $(docker ps -a | grep web-test |awk '{print$1}') nginx-test
The above gives me an error...
bash-4.4$ docker logs bc28bcd28653
nginx: [emerg] host not found in upstream "web" in /etc/nginx/nginx.conf:35
Here's the pertinent part of the nginx conf
location / {
proxy_pass http://${BACKEND_ADDR}; <-- Line 35
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
A: I used to have so many issues with docker and nginx because I didn't understand everything very well.
So here is my recommendation:
Quick fix :
Add
nginx:
restart: always
....
depends_on:
- web
Explanation :
Nginx with upstream can be useful but if the upstream doesn't exist, then nginx will never start.
And in the docker case, you have to say that web should run before nginx using depends_on parameter.
But after upgrading my stack to docker swarm, I discovered that depends_on can't be used anymore across multiple instances
for a reason I don't remember.
So I wanted to start nginx even if my web server is not running, and the easy way to do it is to use a variable
in nginx like this:
set $server_1 web;
proxy_pass http://$server_1:8080;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header HOST $host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
I definitely advise you to use the latest docker-compose version because you get so many better features.
Also, using link is quite deprecated; I recommend you create an internal network using the docker network command.
It's very useful to add or remove an instance from it, and you save time and maintainability.
Running multiple containers without compose
You need to use the docker run command for each container:
docker run -d --name web -p 9000:9000 ....
docker run -d --name nginx -p 80:80 ....
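And for the name resolution problem from the question, a sketch using the image names built above: on a user-defined network, containers can reach each other by their --name, so naming the web container web lets nginx resolve its upstream (and web can reach redis the same way):
docker network create test-network
docker run -d --name redis --network test-network -p 6379:6379 redis-test
docker run -d --name web --network test-network -p 9000:9000 web-test
docker run -d --name nginx --network test-network -p 80:80 -e BACKEND_ADDR=web:9000 nginx-test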
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50868396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Copy data from excel to excel with openpyxl Couldn't find answers I can understand. So I've decided to ask.
I'm learning Python, and now I'm trying to solve a problem: collecting data from the active spreadsheet in one Excel file and pasting it into another Excel file. The first file contains a table and a few cells with information to the right of it. I'm trying to fully copy the spreadsheet data.
import openpyxl, os
from openpyxl.cell import get_column_letter
os.chdir('D:\\Python')
wb = openpyxl.load_workbook('auto.xlsx')
ws = wb.active
# Create new workbook and chose active worksheet.
wbNew = openpyxl.Workbook()
wsNew = wbNew.active
# Loop through all cells (rows, then columns) in the first file
# and fill in the second one.
for allObj in ws['A1':get_column_letter(ws.max_column) + str(ws.max_row)]:
for cellObj in allObj:
for allNewObj in wsNew:
for newCellObj in allNewObj:
wsNew[cellObj.coordinate].value = cellObj.value
wbNew.save('example.xlsx')
Finally it works.
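For what it's worth, a simpler sketch of the same copy loop (there is no need to iterate over the empty destination sheet; this assumes the ws/wsNew objects from the code above and a reasonably recent openpyxl):
# copy every cell from the source sheet into the same coordinate of the new sheet
for row in ws.iter_rows():
    for cell in row:
        wsNew[cell.coordinate].value = cell.value
wbNew.save('example.xlsx')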
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35604099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Are usings guarding for all eventualities? I've found the following passage.
using (TextWriter w1 = new StringWriter(...))
using (XmlWriter w2 = new XmlTextWriter(w1))
using (StringReader r1 = new StringReader(...))
using (XmlTextReader r2 = new XmlTextReader(r1))
{
_xslt.Transform(r2, w2);
...
FileOperations.LockFiles();
w1.Close();
w2.Close();
r1.Close();
r2.Close();
}
My suggestion is (besides renaming, of course) that we could remove the last four statements, since those are declared using using and will be closed and disposed properly when the framework feels like it.
However, one of the developers questioned me and asked a very disturbing question: "Are you entirely sure?". Then I got cold feet and postponed the answer. Is there anything I could be missing?
A: Yes, the using blocks will always dispose the objects, no matter what. (Well, short of events like a power outage...)
Also, the disposing of the objects is very predictable: it will always happen at the end of the using block. (Once the objects are disposed, they are removed from the finalizer queue and are just regular managed objects that can be garbage collected, which happens when the garbage collector finds it convenient.)
In contrast, the Close calls will only be made if there are no unhandled exceptions inside the block. If you wanted them to surely be executed, you would place them in finally blocks so that they are executed even if there is an exception. (That's what the using blocks are using to make sure that they can always dispose the objects.)
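To make the finally point concrete, this is roughly what the compiler generates for one using block (a sketch):
// roughly what "using (TextWriter w1 = new StringWriter(...)) { ... }" expands to:
TextWriter w1 = new StringWriter();
try
{
    // ... body of the using block ...
}
finally
{
    // disposal runs even when an exception escapes the block
    if (w1 != null)
        ((IDisposable)w1).Dispose();
}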
A: There are plenty of circumstances in which a using won't dispose of the resource in question. For example, if you pull the plug on the machine while the using is running, the finally block (which the using is translated to) won't run.
There is no situation in which the finally block wouldn't run where duplicating the same cleanup steps inside the using block would handle things any better.
A: If disputes like this ever come up, just fire up ILSpy (http://ilspy.net/) and peek inside the framework to see what it is actually doing on the Dispose that gets called from the end of the using statement.
In this case, you are right and all four of StringWriter, StringReader, XmlTextReader, and XmlTextWriter do their disposal work in the Dispose method and Close simply calls Dispose, so those four lines are redundant. (But note that StringWriter and StringReader don't seem to do anything interesting in their Dispose methods.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27022870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Not sure what I am doing wrong - getting is not a function errors I am not sure what I am doing wrong, but I have a few functions inside my component. However, I'm not able to pass one of those functions down as a prop; I receive a "this.nextScene is not a function" error.
Here's a snippet from my component, and I have commented out where I am having the issue:
nextScene() {
this.refs.navigator.push('browse');
}
renderNavigationView() {
return (
<View style={styles.drawer}>
<Touchable
onPress={this.nextScene()} //issue here, "this.nextScene is not a function"
>
<View style={styles.container}>
<Text style={styles.title}>Browse</Text>
</View>
</Touchable>
<Touchable>
<View style={styles.container}>
<Text style={styles.title}>Button</Text>
</View>
</Touchable>
</View>
);
}
render() {
return (
<DrawerLayoutAndroid
ref="drawer"
drawerWidth={300}
drawerPosition={DrawerLayoutAndroid.positions.Left}
renderNavigationView={this.renderNavigationView}>
<Navigator
ref="navigator"
configureScene={(route) => {
if (Platform.OS === 'android') {
return Navigator.SceneConfigs.FloatFromBottomAndroid;
}
} }
initialRoute={{}}
renderScene={this.renderScene}
/>
</DrawerLayoutAndroid>
);
}
Thanks!
A: If you take a look at the component you are rendering, and at the renderNavigationView prop:
renderNavigationView={this.renderNavigationView}
It seems fine, but since the this context in functions is window by default, this refers to window in renderNavigationView. Consider your onPress event handler:
onPress={this.nextScene()}
Since you use this.nextScene() and this refers to window in a function, you're effectively trying to do window.nextScene which does not exist, thus throwing the error. (Also note that that is an invocation - not a reference. Remove the parentheses).
So if I try this.nextScene.bind(this), I get a cannot read property 'bind' of undefined
This is because the function is undefined because window.nextScene doesn't exist. To fix this, use Function.prototype.bind to bind the this correctly on both renderNavigationView and nextScene:
renderNavigationView={this.renderNavigationView.bind(this)}
What bind does in this situation is set the this context in the function. Since this here refers to the class, the class will be used to execute the nextScene method which should work correctly. You must also use bind on nextScene because inside nextScene we want this to refer to the class, not window:
onPress={this.nextScene.bind(this)}
A: Another alternative to using the bind method that winter pointed out in his answer is to use arrow functions which automatically bind this to the parent context for you.
class MyComponent extends React.Component {
clickHandler = (e) => {
// do stuff here
}
render() {
return (
<button onClick={this.clickHandler}></button>
)
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40144961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: A class extending Ext.Window ignores constructor parameters I am trying to create a popup window by extending Ext.Window class:
Ext.define('mine.nameCreationPopup',{
extend: 'Ext.Window',
alias: 'nameCreationPopup',
config:{
title: 'aTitle', //default value
width: 700 //default value
},
constructor: function(config){
this.initConfig(config);
this.superclass.constructor.call
(this, {
title: config.title,
width: config.width,
height: 300, //ignored from now on
layout: 'fit',
bodyStyle: 'padding:5px;',
modal: true,
resizable: false
});
}
Now, creation:
var pops = Ext.create('nameCreationPopup',{title:'Dunno'});
pops.show();
pops.center();
Width and title are correctly set (title is Dunno, while width is 700 as the default value); however, height and the rest of the window attributes are just ignored.
How come?
A: That is not how Ext.define should look. Either configure the window directly (inline) or use initComponent. Inline configuration:
Ext.define('mine.nameCreationPopup',{
extend: 'Ext.Window',
alias: 'widget.nameCreationPopup',
title: 'aTitle',
width: 700,
height: 300, //ignored from now on
layout: 'fit',
bodyStyle: 'padding:5px;',
modal: true,
resizable: false
});
initComponent:
Ext.define('mine.nameCreationPopup',{
extend: 'Ext.Window',
alias: 'widget.nameCreationPopup',
initComponent:function() {
Ext.applyIf(this, {
title: 'aTitle',
width: 700,
height: 300, //ignored from now on
layout: 'fit',
bodyStyle: 'padding:5px;',
modal: true,
resizable: false
});
this.callParent(arguments);
}
});
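Creating the window afterwards would then look like this (a sketch; with the widget. prefix on the alias you can also use Ext.widget):
var pops = Ext.create('mine.nameCreationPopup', { title: 'Dunno' });
pops.show();
pops.center();

// or, via the alias:
var pops2 = Ext.widget('nameCreationPopup', { title: 'Dunno' });
pops2.show();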
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28497755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: xstream.fromXML returns a Class Using XStream 1.2.2
The XML document:
<?xml version="1.0" encoding="ISO-8859-1"?>
<Document xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" protocol="OCI" xmlns="C">
<sessionId xmlns="">192.168.1.19,299365097130,1517884537</sessionId>
<command xsi:type="AuthenticationRequest" xmlns="">
<userId>[email protected]</userId>
</command>
</Document>
I'm trying to parse into to a Document;
public class Document {
private String sessionId;
public Command command;
public Command getCommand() {
return this.command;
}
public void setCommand(Command command) {
this.command = command;
}
public String getSessionId() {
return sessionId;
}
public void setSessionId(String sessionId) {
this.sessionId = sessionId;
}
}
Parsing code is:
XStream xstream = new XStream();
xstream.alias("Document", Document.class);
xstream.alias("sessionId", String.class);
xstream.alias("command", Command.class);
xstream.alias("userId", String.class);
Document doc = (Document) xstream.fromXML(theInput, Document.class);
but this throws:
java.lang.ClassCastException: java.lang.Class cannot be cast to com.mycompany.ocip.server.model.Document
because the returned object from fromXml is of type: Class<com.mycompany.ocip.server.model.Document>
Shouldn't I expect it to return a com.mycompany.ocip.server.model.Document instance?
A: That needs to be:
Document doc = (Document) xstream.fromXML(theInput);
If you pass in a second parameter, XStream will try to populate that with the values from the XML. Since in your code, you're passing in a class object, XStream will try to populate the class object and return it.
The JavaDoc has the details.
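If you do want a two-argument call, the overload that takes a root object (where your XStream version provides it) expects an instance to populate rather than a Class -- a sketch:
Document doc = new Document();
xstream.fromXML(theInput, doc);   // fills the fields of the given instance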
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48634668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: changing slice colour of a pie chart in excel VBA Initially I wrote a function which changes the appearance of a series of pie-charts according to predefined colour themes
Function GetColorScheme(i As Long) As String
Const thmColor1 As String = "C:\Program Files\Microsoft Office\Document Themes 14\Theme Colors\Blue Green.xml"
Const thmColor2 As String = "C:\Program Files\Microsoft Office\Document Themes 14\Theme Colors\Orange Red.xml"
Select Case i Mod 2
Case 0
GetColorScheme = thmColor1
Case 1
GetColorScheme = thmColor2
End Select
End Function
However, the paths are not constant and I would like to define each pie chart slice on its own by an RGB colour.
I found here on Stack Overflow in a previous topic (How to use VBA to colour pie chart) a way to change the colour of each slice of a pie chart,
but I don't know how to incorporate that code into the function mentioned above. Could I potentially write
Function GetColorScheme(i As Long) As String
Select Case i Mod 2
Case 0
Dim clr As Long, x As Long
For x = 1 To 3
clr = RGB(0, x * 8, 0)
With ActiveSheet.ChartObjects(1).Chart.SeriesCollection(1).Points(x)
.Format.Fill.ForeColor.RGB = clr
End With
Next x
Case 1
Dim clr As Long, x As Long
For x = 1 To 3
clr = RGB(0, x * 8, 0)
With ActiveSheet.ChartObjects(1).Chart.SeriesCollection(1).Points(x)
.Format.Fill.ForeColor.RGB = clr
End With
Next x
End Select
End Function
The function is linked to the main part of the script (which is)
For Each rngRow In Range("PieChartValues").Rows
chtMarker.SeriesCollection(1).Values = rngRow
ThisWorkbook.Theme.ThemeColorScheme.Load GetColorScheme(thmColor)
chtMarker.Parent.CopyPicture xlScreen, xlPicture
lngPointIndex = lngPointIndex + 1
chtMain.SeriesCollection(1).Points(lngPointIndex).Paste
thmColor = thmColor + 1
where the line
ThisWorkbook.Theme.ThemeColorScheme.Load GetColorScheme(thmColor)
gets the value of the function (see the first bit of code - the original function), but now I no longer have the thmColor variable defined and don't know how best to implement the code in the function part.
A: Something like this (you'll need to adjust the colors to suit your needs)
http://www.rapidtables.com/web/color/RGB_Color.htm
Sub ApplyColorScheme(cht As Chart, i As Long)
Dim arrColors
Select Case i Mod 2
Case 0
arrColors = Array(RGB(50, 50, 50), _
RGB(100, 100, 100), _
RGB(200, 200, 200))
Case 1
arrColors = Array(RGB(150, 50, 50), _
RGB(150, 100, 100), _
RGB(250, 200, 200))
End Select
With cht.SeriesCollection(1)
.Points(1).Format.Fill.ForeColor.RGB = arrColors(0)
.Points(2).Format.Fill.ForeColor.RGB = arrColors(1)
.Points(3).Format.Fill.ForeColor.RGB = arrColors(2)
End With
End Sub
Example usage:
chtMarker.SeriesCollection(1).Values = rngRow
ApplyColorScheme chtMarker, thmColor
chtMarker.Parent.CopyPicture xlScreen, xlPicture
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17387926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SQL Query to get the Result in One Scan without using analytical function I have a table named Employee which has 3 columns: EmployeeID, AwardDate and Award.
| EMPOYEEID | AwardDate  | Award
| 1         | 10-03-2018 | EOY
| 1         | 14-08-2018 | EBF
| 2         | 10-03-2017 | EOY
| 3         | 10-03-2016 | EOY
| 2         | 31-12-2017 | COINS
| 1         | 31-08-2017 | COINS
Using a SQL query in Oracle, I want to get the first and last award date for each employee in one scan, without using an analytical function.
Below is an example:
ID | LastDate   | FirstAward
1  | 10-03-2018 | 31-08-2017
2  | 31-12-2017 | 10-03-2017
A: use aggregate function
select EMPOYEEID, max(AwardDate) LastDate, min(AwardDate) FirstAward
from table_name t1
group by EMPOYEEID
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54320468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Some images on a page is not shown through Varnish Cache-304 Not modified I am using varnish to speed up a customer's website load time. I have a problem with the images on a page. The Images on a page are not shown on the page. here is the chrome output headers when I hit Ctrl+f5:
Request URL:https://DOMAINNAME/wp-content/uploads/2017/12/telegram-768x255.png
Request Method:GET
Status Code:200 OK
Remote Address:IPADDRESS:443
Referrer Policy:no-referrer-when-downgrade
Response Headers
view source
Accept-Ranges:bytes
Age:28
Connection:keep-alive
Content-Length:96169
Content-Type:image/png
Date:Sat, 30 Dec 2017 14:38:40 GMT
Last-Modified:Sat, 16 Dec 2017 11:06:23 GMT
Server:Litespeed
Strict-Transport-Security:max-age=31536000
X-Cache:HIT
X-Configured-By:ServerSetup.co
Request Headers
view source
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding:gzip, deflate, br
Accept-Language:en-US,en;q=0.9
AlexaToolbar-ALX_NS_PH:AlexaToolbar/alx-4.0.1
Cache-Control:no-cache
Connection:keep-alive
Cookie:_ga=GA1.2.1062445401.1514382767; _gid=GA1.2.498856688.1514639806
Host:HOSTNAME
Pragma:no-cache
Upgrade-Insecure-Requests:1
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/63.0.3239.84 Chrome/63.0.3239.84 Safari/537.36
and here's the output when I hit Enter on the address bar:
Request URL:https://DOMAINNAME/wp-content/uploads/2017/12/telegram-768x255.png
Request Method:GET
Status Code:304 Not Modified
Remote Address:IPADDRESS:443
Referrer Policy:no-referrer-when-downgrade
Response Headers
view source
Age:271
Connection:keep-alive
Content-Type:image/png
Date:Sat, 30 Dec 2017 14:42:43 GMT
Last-Modified:Sat, 16 Dec 2017 11:06:23 GMT
Server:Litespeed
Strict-Transport-Security:max-age=31536000
X-Cache:HIT
X-Configured-By:ServerSetup.co
Request Headers
view source
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding:gzip, deflate, br
Accept-Language:en-US,en;q=0.9
AlexaToolbar-ALX_NS_PH:AlexaToolbar/alx-4.0.1
Cache-Control:max-age=0
Connection:keep-alive
Cookie:_ga=GA1.2.1062445401.1514382767; _gid=GA1.2.498856688.1514639806
Host:HOSTNAME
If-Modified-Since:Sat, 16 Dec 2017 11:06:23 GMT
Upgrade-Insecure-Requests:1
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/63.0.3239.84 Chrome/63.0.3239.84 Safari/537.36
and here is the varnishlog for the image URL:
varnishlog -g request -q "ReqUrl ~ 'wp-content/uploads/2017/12/telegram-768x255.png'"
* << Request >> 870337886
- Begin req 870337885 rxreq
- Timestamp Start: 1514645156.766974 0.000000 0.000000
- Timestamp Req: 1514645156.766974 0.000000 0.000000
- ReqStart 192.168.1.106 42860
- ReqMethod GET
- ReqURL /wp-content/uploads/2017/12/telegram-768x255.png
- ReqProtocol HTTP/1.0
- ReqHeader X-Real-IP: 192.168.1.104
- ReqHeader X-Forwarded-For: 46.225.112.57
- ReqHeader X-Forwarded-Proto: https
- ReqHeader X-Nginx: on
- ReqHeader Host: HOSTNAME
- ReqHeader Connection: close
- ReqHeader Pragma: no-cache
- ReqHeader Cache-Control: no-cache
- ReqHeader User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/63.0.3239.84 Chrome/63.0.3239.84 Safari/537.36
- ReqHeader Accept: image/webp,image/apng,image/*,*/*;q=0.8
- ReqHeader Referer: https://DOMAINNAME/contactus/
- ReqHeader Accept-Encoding: gzip, deflate, br
- ReqHeader Accept-Language: en-US,en;q=0.9
- ReqHeader Cookie: _ga=GA1.2.1062445401.1514382767; _gid=GA1.2.498856688.1514639806; _gat=1
- ReqUnset X-Forwarded-For: 46.225.112.57
- ReqHeader X-Forwarded-For: 46.225.112.57, 192.168.1.106
- VCL_call RECV
- ReqUnset Cookie: _ga=GA1.2.1062445401.1514382767; _gid=GA1.2.498856688.1514639806; _gat=1
- ReqHeader Cookie:
- ReqUnset Cookie:
- ReqHeader Cookie:
- ReqUnset Cookie:
- ReqHeader Cookie:
- ReqUnset Cookie:
- ReqHeader Cookie:
- ReqUnset Cookie:
- ReqUnset Accept-Encoding: gzip, deflate, br
- ReqHeader Accept-Encoding: gzip
- ReqUnset X-Forwarded-For: 46.225.112.57, 192.168.1.106
- ReqHeader X-Forwarded-For: 46.225.112.57, 192.168.1.106
- ReqUnset Accept-Language: en-US,en;q=0.9
- ReqUnset User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/63.0.3239.84 Chrome/63.0.3239.84 Safari/537.36
- ReqHeader cookie:
- ReqUnset cookie:
- ReqHeader cookie:
- VCL_return hash
- VCL_call HASH
- VCL_return lookup
- Hit 870337573
- VCL_call HIT
- VCL_return deliver
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader Date: Sat, 30 Dec 2017 14:37:35 GMT
- RespHeader Server: Apache/2.2.15 (CentOS)
- RespHeader Last-Modified: Sat, 16 Dec 2017 11:06:23 GMT
- RespHeader Content-Length: 96169
- RespHeader Content-Type: image/png
- RespHeader X-Varnish: 870337886 870337573
- RespHeader Age: 464
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- RespHeader X-Cache: HIT
- RespUnset X-Varnish: 870337886 870337573
- RespUnset Via: 1.1 varnish-v4
- RespHeader X-Configured-By: ServerSetup.co
- RespUnset Server: Apache/2.2.15 (CentOS)
- RespHeader Server: Apache
- VCL_return deliver
- Timestamp Process: 1514645156.767049 0.000075 0.000075
- Debug "RES_MODE 2"
- RespHeader Connection: close
- RespHeader Accept-Ranges: bytes
- Timestamp Resp: 1514645156.767130 0.000156 0.000081
- Debug "XXX REF 2"
- ReqAcct 631 0 631 267 96169 96436
- End
The problem is that the image is not shown on the page, but it is shown in the preview section of the Chrome developer panel. Moreover, if I open the image in a new tab in the browser, it is shown properly.
The Varnish version is 4.0.4 and the web server is Apache 2.2.
Edit: When I load the page through varnish I get the following errors on the console tab (chrome):
(index):1820 Uncaught ReferenceError: tinymce is not defined
at HTMLDocument.<anonymous> ((index):1820)
at HTMLDocument.dispatch (jquery.js:3)
at HTMLDocument.r.handle (jquery.js:3)
at Object.trigger (jquery.js:3)
at Object.a.event.trigger (jquery-migrate.min.js:2)
at HTMLDocument.<anonymous> (jquery.js:3)
at Function.each (jquery.js:2)
at a.fn.init.each (jquery.js:2)
at a.fn.init.trigger (jquery.js:3)
at HTMLDocument.<anonymous> ((index):1316)
But when I load the page directly from the backend server, there are no errors and the images are shown properly!!
Edit:
varnish log for 304 Not Modified
* << Request >> 885566068
- Begin req 885566067 rxreq
- Timestamp Start: 1515309410.697993 0.000000 0.000000
- Timestamp Req: 1515309410.697993 0.000000 0.000000
- ReqStart 192.168.1.106 33782
- ReqMethod GET
- ReqURL /wp-content/uploads/2017/12/Untitled-1.png
- ReqProtocol HTTP/1.0
- ReqHeader X-Real-IP: 192.168.1.104
- ReqHeader X-Forwarded-For: 46.225.112.57
- ReqHeader X-Forwarded-Proto: https
- ReqHeader X-Nginx: on
- ReqHeader Host: bigtheme.ir
- ReqHeader Connection: close
- ReqHeader Cache-Control: max-age=0
- ReqHeader User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/63.0.3239.84 Chrome/63.0.3239.84 Safari/537.36
- ReqHeader Upgrade-Insecure-Requests: 1
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
- ReqHeader Accept-Encoding: gzip, deflate, br
- ReqHeader Accept-Language: en-US,en;q=0.9
- ReqHeader Cookie: _ga=GA1.2.1062445401.1514382767; _gid=GA1.2.1237154839.1515307013
- ReqHeader If-None-Match: "9a2989-38271-560731bc13e20"
- ReqHeader If-Modified-Since: Sat, 16 Dec 2017 11:06:26 GMT
- ReqUnset X-Forwarded-For: 46.225.112.57
- ReqHeader X-Forwarded-For: 46.225.112.57, 192.168.1.106
- VCL_call RECV
- ReqUnset Cookie: _ga=GA1.2.1062445401.1514382767; _gid=GA1.2.1237154839.1515307013
- ReqHeader Cookie:
- ReqUnset Cookie:
- ReqHeader Cookie:
- ReqUnset Cookie:
- ReqHeader Cookie:
- ReqUnset Cookie:
- ReqHeader Cookie:
- ReqUnset Cookie:
- ReqUnset Accept-Encoding: gzip, deflate, br
- ReqUnset X-Forwarded-For: 46.225.112.57, 192.168.1.106
- ReqHeader X-Forwarded-For: 46.225.112.57, 192.168.1.106
- ReqUnset Accept-Language: en-US,en;q=0.9
- ReqUnset User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/63.0.3239.84 Chrome/63.0.3239.84 Safari/537.36
- ReqHeader cookie:
- ReqUnset cookie:
- ReqHeader cookie:
- VCL_return hash
- VCL_call HASH
- VCL_return lookup
- Hit 885027311
- VCL_call HIT
- VCL_return deliver
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader Date: Sun, 07 Jan 2018 07:15:40 GMT
- RespHeader Server: Apache/2.2.15 (CentOS)
- RespHeader Last-Modified: Sat, 16 Dec 2017 11:06:26 GMT
- RespHeader Content-Length: 230001
- RespHeader Content-Type: image/png
- RespHeader X-Varnish: 885566068 885027311
- RespHeader Age: 33
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- RespHeader X-Cache: HIT
- RespUnset X-Varnish: 885566068 885027311
- RespUnset Via: 1.1 varnish-v4
- RespHeader X-Configured-By: ServerSetup.co
- RespUnset Server: Apache/2.2.15 (CentOS)
- RespHeader Server: Litespeed
- VCL_return deliver
- Timestamp Process: 1515309410.698046 0.000053 0.000053
- RespProtocol HTTP/1.1
- RespStatus 304
- RespReason Not Modified
- RespReason Not Modified
- RespUnset Content-Length: 230001
- Debug "RES_MODE 0"
- RespHeader Connection: close
- Timestamp Resp: 1515309410.698061 0.000068 0.000015
- Debug "XXX REF 2"
- ReqAcct 731 0 731 231 0 231
- End
A: As your console tab shows, the problem is with an HTML document or JavaScript (jQuery), not with the image itself.
Moreover, why isn't your varnishlog showing the second request? What server is returning the "304 Not Modified"? There's something in between Chrome and Varnish. A Chrome plugin?
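To narrow that down, a VSL query in the style of the one you already used could show whether Varnish itself ever emits the 304 (a sketch, assuming Varnish 4's query syntax; adjust as needed):
varnishlog -g request -q "RespStatus == 304"
If nothing matches there while the browser still receives a 304, the response really is produced by something sitting between Chrome and Varnish.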
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48035001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Add a property in the dam metadata in touch ui AEM 6.2 I have a use case.
While going to the DAM Asset Dialog in AEM 6.2, I want to add a property rootPath: /etc/tags/geometrixx to the tags field.
I am using the concept of overlay and resourceMerger.
My dialog is under this
/libs/dam/content/schemaeditors/forms/default/items/tabs/items/tab1/items/col1/items/tags.
When I overlay it in apps and add the property to it, it doesn't show anything.
Can anyone help me?
A: Just for testing, I have replicated it, and it works fine as expected
A: Actually you can't do that by overlaying the dialog; you'll need to rely on DAM Metadata Schemas to achieve that - https://experienceleague.adobe.com/docs/experience-manager-64/assets/administer/metadata-schemas.html?lang=en
From there, and if you want to persist these definitions in your code baseline, you can refer to the location where it gets stored - /conf/global/settings/dam/adminui-extension/metadataschema
Hope it helps
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41680498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Want to avoid "failed" message trying to install .NET 4.6.2 on Windows 10 Our Installshield package for our product has .NET 4.6.2 as a prerequisite. It installs that correctly when needed on Windows 7. On a Windows 10 machine it cannot install it and shows the user a failure message.
From my understanding, .NET is already part of Windows 10 and thus doesn't need to be installed, but InstallShield seems to try to do it anyway. How can we stop InstallShield from even trying? Or at least hide the error message?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58844088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Javascript only validates the first input field I've been trying to validate this form using Javascript, however for some weird reason only the first input field seems to be doing what it is supposed to do. Here's the relevant snippet of code:
<form method="post" name="myform" action="www.google.com" onsubmit="return validateForm(this);">
<h3 id="secondarytext"><strong>Your details:</strong></h3>
<div class="label1">
<label for="firstName">First Name</label>
<input type="text" id="name" name="name" onblur="validateName(name)" />
<span id="nameError" style="display: none;">Please enter your name, you can only use alphabetic characters</span>
<label for="ssurname" >Surname</label>
<input type="text" id="surname" name="surname" onblur="validateSurname(surname)" /> <br />
</div>
<input type="reset" value="Reset Form">
<input type="submit" value="Submit">
</form>
Javascript:
function validateName(x)
{
var re = /^[A-Za-z]{2,25}$/;
if(re.test(document.getElementById(x).value)){ /* This checks the input's value corresponds with the regular expression (re) */
document.getElementById(x).style.background ='#ccffcc'; /* If it does it changes the background colour of the field to green*/
document.getElementById(x).style.border="4px solid #ccffcc";
document.getElementById(x + 'Error').style.display = "none"; /* This hides the error message because the function returned as true */
return true;
}
else{ /* If the function doesn't return as true this is what happens */
document.getElementById(x).style.background ='#e35152'; /* The background colour of the field changes to red */
document.getElementById(x).style.border="4px solid #e35152";
document.getElementById(x + 'Error').style.display = "block"; /* This displays the error message */
return false;
}
}
function validateSurname(x)
{
var re = /^[A-Za-z]{2,25}$/;
if(re.test(document.getElementById(x).value)){
document.getElementById(x).style.background ='#ccffcc';
document.getElementById(x + 'Error').style.display = "none";
return true;
}
else{
document.getElementById(x).style.background ='#e35152';
document.getElementById(x + 'Error').style.display = "block";
return false;
}
}
function validateForm()
{
var error = 0;
if(!validateName('name'))
{
document.getElementById('nameError').style.display = "block";
error++;
}
if(!validateSurname('surname'))
{
document.getElementById('surnameError').style.display = "block";
error++;
}
if(error > 0)
{
return false;
}
else if(error < 0) {
return true;
}
}
A: Maybe these changes will help you:
1) Add the error message you want to show when surname is wrong:
<label for="ssurname" >Surname</label>
<input type="text" id="surname" name="surname" onblur="validateSurname('surname')" /> <br />
<span id="surnameError" style="display: none;">Please enter your surname, you can only use alphabetic characters</span>
2) Pass a string to the parameter instead of a javascript var (e.g. do validateSurname('surname') instead of validateSurname(surname)), as the function you've defined is expecting a string as a parameter.
Fiddle demo: http://jsfiddle.net/uzyLogc1/1/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26575279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Select suppliers for each contract that have all codes required by the contract I have a table of contracts that have "requirement codes" (COXA) and a table of suppliers that have "approval codes" (VNDAPP). Contracts can have any number of requirements and suppliers can have any number of approvals.
Example data:
Contract Requirement (COXA):
CONTR REQMT
7736 1
7736 10
7737 1
7737 4
7737 6
7738 5
7739 1
Supplier Approval (VNDAPP):
VNDNO REQMT
10019 1
10020 1
10020 2
10020 10
10021 1
10021 4
10021 5
10021 6
Desired Result:
CONTR VNDNO
7736 10020
7737 10021
7738 10021
7739 10019
7739 10020
7739 10021
In another question I have received a response that works when I specify the contract number in the query:
select sa.supplierid
from supplier_approval sa
where sa.approvalid IN (
select cr.requirementid
from contracts_requirement cr
where cr.contractid = 7736
)
group by sa.supplierid
having count(distinct sa.approvalid) = (
select count(*)
from contracts_requirement cr
where cr.contractid = 7736
)
The problem is I need to have matching suppliers for every contract number.
Thanks in advance!
A: You can use CROSS JOIN to generate tuples of (contracts, suppliers, contract requirement), then use LEFT JOIN to match contract requirements with supplier approvals:
SELECT
contract_requirement.contr,
suppliers.vndno,
COUNT(contract_requirement.reqmt) AS req_count,
COUNT(supplier_approval.reqmt) AS app_count
FROM contract_requirement
CROSS JOIN (
SELECT DISTINCT vndno
FROM supplier_approval
) AS suppliers
LEFT JOIN supplier_approval ON suppliers.vndno = supplier_approval.vndno AND contract_requirement.reqmt = supplier_approval.reqmt
GROUP BY contract_requirement.contr, suppliers.vndno
HAVING COUNT(contract_requirement.reqmt) = COUNT(supplier_approval.reqmt)
A: You can use a join and group by and then having to be sure that you have all requirements:
select cr.contr
from (select cr.*, count(*) over (partition by cr.contr) as cnt
from contract_requirement cr
) cr join
supplier_approval sa
on sa.approvalid = cr.requirementid
group by cr.contr, cr.cnt
having cr.cnt = count(*)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54007951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Jquery javascript html label modify value I want to hide and show a field, and also change the text of a label, depending on a checkbox value, using jQuery.
$(document).ready(function(){
$('#CompanyName').hide();
$('#inlineCheckbox1').click(function() {
var $this = $(this);
// $this will contain a reference to the checkbox
if ($this.is(':checked')) {
$('#CompanyName').show();
$("label[for='AdresseName']").text('Adresse Name');
} else {
$('#CompanyName').hide();
$("label[for='AdresseName']").text('Name');
}
});
});
It shows and hides the text field fine, but the label text change does not work for me... it's as if the label change line of code is not being executed. What's going on? My HTML is this:
<form class="form-horizontal" role="form">
<div class="form-group">
<label for="isCompany" class="col-sm-2 control-label">It's a company</label>
<div class="col-sm-10">
<input type="checkbox" id="inlineCheckbox1" value="option1">
</div>
</div>
<div class="form-group" id="CompanyName">
<label for="companyName" class="col-sm-2 control-label">Company Name</label>
<div class="col-sm-10">
<input type="text" class="form-control" id="companyName" placeholder="Company">
</div>
</div>
<div class="form-group">
<label for="AdresseName" class="col-sm-2 control-label" id="AdresseNameNotCompnay">Name</label>
<div class="col-sm-10">
<input type="text" class="form-control" id="AdresseName" placeholder="Name">
</div>
</div>
<div class="form-group">
<div class="col-sm-offset-2 col-sm-10">
<button type="submit" class="btn btn-default">Save</button>
</div>
</div>
</form>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24662275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can't cope with expected identifier I'm making a Tetris game. My glass class (QtGlass.h) creates a figure.
I would like to use a parameter here to specify which shape the figure should
take.
Could you tell me why the parameter causes this error:
QtGlass.h:29:23: error: expected identifier before 'L'
QtGlass.h:29:23: error: expected ',' or '...' before 'L'
I've shown in the comments below where this error occurs.
By the way, if I uncomment the lines which signify the parameterless variant,
it works.
**Figure.h**
class Figure : public QObject {
Q_OBJECT
...
public:
Figure(char Shape);
//Figure();
...
};
**Figure.cpp**
Figure::Figure(char Shape) {
//Figure::Figure() {
previous_shape = 1;
colour = RED;
...
}
**QtGlass.h**
class QtGlass : public QFrame {
Q_OBJECT
...
protected:
Figure the_figure('L'); //QtGlass.h:29:23: error: expected identifier before 'L' QtGlass.h:29:23: error: expected ',' or '...' before 'L'
//Figure the_figure;
...
};
Added later
When I use this:
class QtGlass : public QFrame {
Q_OBJECT
QtGlass() : the_figure('L') {}
I get this:
QtGlass.cpp:164:50: error: no matching function for call to 'Figure::Figure()'
QtGlass.cpp:164:50: note: candidates are:
Figure.h:38:5: note: Figure::Figure(char)
Figure.h:38:5: note: candidate expects 1 argument, 0 provided
Figure.h:20:7: note: Figure::Figure(const Figure&)
Figure.h:20:7: note: candidate expects 1 argument, 0 provided
QtGlass.cpp
QtGlass::QtGlass(QWidget *parent) : QFrame(parent) {
key_pressed = false;
coord_x = 5;
coord_y = 5;
arrow_n = 0;
highest_line = 21;
this->initialize_glass();
QTimer *timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(moveDownByTimer()));
timer->start(1000);
}
A: You can't initialize a member object using that syntax. If your compiler supports C++11's uniform initialization syntax or in-class initialization of member variables you can do this:
class QtGlass : public QFrame {
Q_OBJECT
...
protected:
Figure the_figure{'L'};
// or
Figure the_figure = 'L'; // works because Figure(char) is not explicit
...
};
Otherwise you need to initialize the object within QtGlass' constructor initializer list
class QtGlass : public QFrame {
Q_OBJECT
...
protected:
Figure the_figure;
...
};
// in QtGlass.cpp
QtGlass::QtGlass(QWidget *parent)
: QFrame(parent)
, the_figure('L')
{}
A: You are trying to create an instance of Figure in the definition of QtGlass which is not allowed. You have to instantiate the_figure in the constructor of QtGlass:
class QtGlass : public QFrame {
Q_OBJECT
QtGlass() {
the_figure = Figure('L');
};
...
protected:
Figure the_figure;
...
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16751688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Add label inside textbox control I want to make a control which inherits from TextBox and has a label inside that "sticks" to the right side of the text box and whose text is not user-editable but rather is set by a property. How can this be done? I realize there may be many reasons why this UX is a bad idea, but I have to do it this way.
A: Adapting from Hans Passant's Button inside a winforms textbox answer:
public class TextBoxWithLabel : TextBox {
[DllImport("user32.dll")]
private static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wp, IntPtr lp);
Label label = new Label();
public TextBoxWithLabel() {
label.BackColor = Color.LightGray;
label.Cursor = Cursors.Default;
label.TextAlign = ContentAlignment.MiddleRight;
this.Controls.Add(label);
}
private int LabelWidth() {
return TextRenderer.MeasureText(label.Text, label.Font).Width;
}
public string LabelText {
get { return label.Text; }
set {
label.Text = value;
SendMessage(this.Handle, 0xd3, (IntPtr)2, (IntPtr)(LabelWidth() << 16));
OnResize(EventArgs.Empty);
}
}
protected override void OnResize(EventArgs e) {
base.OnResize(e);
int labelWidth = LabelWidth();
label.Left = this.ClientSize.Width - labelWidth;
label.Top = (this.ClientSize.Height / 2) - (label.Height / 2);
label.Width = labelWidth;
label.Height = this.ClientSize.Height;
}
}
Result:
A: I suggest you create a UserControl with a TextBox and a Label docked right. That should be painless and bug-free.
Since you said you already use TextBox, to avoid much refactoring you can add all the properties you use from TextBox as "proxy properties". Something like this:
class MyTextBox : UserControl
{
public int TextLength { get { return textbox.TextLength; } }
...
}
This can help you to avoid much refactoring.
A: I would actually create a composite control, or simply a UserControl, and put a label and a textbox next to each other. Then you can remove the borders around the textbox and surround them both with a border box to mimic the normal textbox design.
Finally, I would make sure that the user control's properties, like Text, are mapped to the TextBox, so it is easy to use the control as a drop-in replacement.
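A rough C# sketch of that composite-control idea (class and property names are made up for illustration; designer support and fine-grained sizing are left out):
using System.Drawing;
using System.Windows.Forms;

public class LabeledTextBox : UserControl
{
    private readonly TextBox textBox = new TextBox { BorderStyle = BorderStyle.None, Dock = DockStyle.Fill };
    private readonly Label label = new Label { AutoSize = true, Dock = DockStyle.Right, TextAlign = ContentAlignment.MiddleRight };

    public LabeledTextBox()
    {
        BorderStyle = BorderStyle.FixedSingle; // mimic the normal TextBox border
        Padding = new Padding(2);
        Controls.Add(label);
        Controls.Add(textBox);
        textBox.BringToFront(); // let the Fill-docked textbox take the space the label leaves free
        Height = textBox.PreferredHeight + Padding.Vertical;
    }

    // Proxy properties mapped to the inner controls
    public override string Text
    {
        get { return textBox.Text; }
        set { textBox.Text = value; }
    }

    public string LabelText
    {
        get { return label.Text; }
        set { label.Text = value; }
    }
}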
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23875101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: *glibc detected double free or corruption() * message! When I run the program, the following deleteNode function produces this:
* glibc detected free(): invalid next size (normal): 0x000000000103dd90 **
Even if I make the ' free(here); ' call a comment, I get the above message.
I don't think that the other 'free' calls provoke a problem like that. But I can't see why this would be wrong. :/
struct List *deleteNode(int Code,int i,char* Number)
{
struct List *here;
here=Head;
for (here; here!=Tail; here=here->next)
{
if ( (here->number==Number) && (here->code==Code) )//found node on the List
{
if (here->previous==Head) //delete from beginning
{
Head=here->next;
here->next->previous=Head;
}
else if (here->next==Tail) //delete from the end
{
here->previous->next=Tail;
Tail=here->previous;
}
else //delete from the middle of the list
{
here->previous->next=here->next;
here->next->previous=here->previous;
}
break;
}
}
free (here);
}
EDIT:
if I used and understood valgrind well, then the problem is in my main function.
I also have some 'free' calls there, but I changed deleteNode just before this message appeared, so I thought the problem was in the deleteNode function.
Now, there is no free() invalid next size... but unfortunately this:
glibc detected * : double free or corruption (out): 0x00007fff1aae9ae0 *
:(
A part of the main:
FILE *File;
if ( ( File=fopen("File.txt","r")) !=NULL )
{
int li = 0;
char *lin = (char *) malloc(MAX_LINE * sizeof(char));
while(fgets(lin, MAX_LINE, eventFile) != NULL)
{
token = linetok(lin, " ");
if(token != NULL)
{
int i,code,nodeID;
char *number;
char *event;
for(i = 0; token[i] != NULL; i += 1)
{
code=atoi(token[0]);
strcpy(event,token[1]);
nodeID=atoi(token[2]);
strcpy(number,token[3]) ;
int i;
if (!strcmp(event,"add"))
{
add_to_List(code,i,number);
}
else if(!strcmp(event,"delete"))
{
deleteNode(eventNo,i,number);
}
free(event);
free(phoneNumber);
}
free(token);
}
else
{
printf("Error reading line %s\n", lin);
exit(1);
}
}
}
else
{
printf("Error opening file with the events.\nEXIT!");
exit(0);
}
debugging it...
multiple definition of `main'
pro:(.text+0xce0): first defined here
/usr/lib/gcc/x86_64-linux-gnu/4.4.1/crtend.o:(.dtors+0x0): multiple definition of `__DTOR_END__'
pro:(.dtors+0x8): first defined here
/usr/bin/ld: warning: Cannot create .eh_frame_hdr section, --eh-frame-hdr ignored.
/usr/bin/ld: error in pro1(.eh_frame); no .eh_frame_hdr table will be created.
collect2: ld returned 1 exit status
A: "Invalid next size" means that glibc has detected corruption in your memory arena.
You have overwritten valuable accounting information that's stored in between your allocated blocks.
With each block that malloc gives you, there is some accounting information stored close by. When you overwrite this information by, for example, writing 128 characters to a 20-character buffer, glibc may detect this the next time you try to free (or possibly allocate) some memory.
You need to find the root cause of this problem - it's not the free itself, that's just where the problem is being detected. Somewhere, some of your code is trashing memory and a memory analysis tool like valgrind will be invaluable here.
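As a rough sketch (the file and program names here are only placeholders), rebuild with debug info and run the binary under valgrind; it will report the exact line where the invalid write happens:
gcc -g -O0 main.c list.c -o myprogram
valgrind --leak-check=full --track-origins=yes ./myprogram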
A: If the node is not found in the list, you will free the Tail node at the end of the function, without updating Tail to point to anything valid again.
Further using the list and the now deallocated Tail can easily result in memory corruption that might later be detected by glibc with a message like the one you got.
Also note that in (here->number==Number) you are comparing two pointers, not the values those pointers point to. I'm not sure if that's what you want.
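If comparing the string contents is what you want, a sketch of that one condition (assuming number points to a NUL-terminated string and <string.h> is included) would be:
if ((strcmp(here->number, Number) == 0) && (here->code == Code))
{
    /* found the node on the list */
}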
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4063583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Vimscript regex empty line So I'm learning Vimscript and Regex. I'm trying to detect if the current line is empty or not. By empty I mean either "" or " " (any number of spaces/tabs):
function! IsCurrentLineEmpty()
return IsLineEmpty(getline('.'))
endfu
function! IsLineEmpty(line) // returns 1 if the line is empty, 0 otherwise
if match(a:line, "^\s*$") != -1 // also tried \s+ instead of \s*
return 1
endif
endfu
If I put the cursor on an empty line or with spaces/tabs, IsCurrentLineEmpty always returns 0
I've tried other regex like ^[ \t]+|[ \t]+$ (also \s\t and \s instead of ' \t') but none really worked.
Any help would be appreciated! Thanks.
A: As pointed out by @vexe, you just need to escape the backslash. But you can also simplify your function:
function! IsLineEmpty(line)
return match(a:line, "^\\s*$") != -1
endfu
Another solution would be to use single quotes instead of double:
return match(a:line, '^\s*$') != -1
Another solution would be to use Vim's regexp matches operator:
return line =~ '^\s*$'
Alternatively, you could test if the line contains any non-whitespace characters:
return line !~ '[^\s]'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25438985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I add time delay I would like to add a custom class on mouseover, so that when the mouse is hovered over .leftbar, a class is added and the element pops up (I set CSS for this). How do I add a slow effect or time delay for the popup?
<script>
$(document).ready(function(){
$( ".leftbar" ).mouseenter(function() {
$( "body" ).addClass( "myclass" );
});
});
$(document).ready(function(){
$( ".leftbar" ).mouseleave(function() {
$( "body" ).removeClass( "myclass1" );
});
});
</script>
I tried this- $( "body" ).addClass( "myclass" , '300'); with no luck
Thank you!
A: You can use setTimeout
$(document).ready(function(){
$( ".leftbar" ).mouseenter(function() {
window.setTimeout(function(){
$( "body" ).addClass( "myclass" );
}, 300);
});
});
See https://developer.mozilla.org/en-US/docs/Web/API/WindowTimers.setTimeout
A: Use a setTimeout, being sure to clear it when the cursor leaves.
Minor error, but myclass != myclass1.
$(document).ready(function(){
var barTimeout = 0;
$( ".leftbar" ).on({
mouseenter: function(){
barTimeout = setTimeout(function(){
$( "body" ).addClass( "myclass" );
}, 300);
},
mouseleave: function(){
if( typeof barTimeout !== 'undefined' ) clearTimeout( barTimeout );
$( "body" ).removeClass( "myclass" );
}
});
});
JSFiddle
A: You could take a look at the jQuery UI method addClass, which allows you to pass some animation parameters into it. View the example and documentation here: http://api.jqueryui.com/addClass/
For your use, it should be as simple as adding the delay to addClass().
Add a reference to the jQuery UI library, then change your code to:
$("body").addClass("myclass", 300);
A: You can do it like this:
$(document).ready(function () {
$(".leftbar").hover( function () {
$(this).delay(300).queue(function(next){
$(this).addClass("myclass");
next();
});
}, function(){
$(this).delay(300).queue(function(next){
$(this).removeClass("myclass");
next();
});
});
});
Check it out here: JSFiddle
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26468624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: SKCameraNode doesn't keep up with moving node I am currently trying to make a "launch(er) game", i.e. a game in the style of Toss the Turtle, Learn to Fly and Burrito Bison, using SpriteKit. I've gotten the launch to work using the built-in physics of SpriteKit, but when I try to use an SKCameraNode to follow the main character, it seems to always be one (or more) steps behind, which makes the main character "shake" at high velocities.
I've tried both setting the position of the camera using SKAction and with the .position-property, with the same result.
I guess this is because the physics update is done at a faster rate than the actual update(), and I've tried searching for information about this but found zilch.
Function for "launching" the main character:
func touchStopped(touchPoint: CGPoint) {
if !inAir {
arrow.removeFromParent()
inAir = true
mainCharNode.physicsBody = SKPhysicsBody(circleOfRadius: mainCharNode.size.width/2)
mainCharNode.physicsBody?.mass = mainChar.mass
mainCharNode.physicsBody?.restitution = mainChar.restitution
mainCharNode.physicsBody?.linearDamping = mainChar.airResistance
mainCharNode.physicsBody?.velocity = CGVectorMake(touchPoint.x*2, touchPoint.y)
mainCharNode.physicsBody?.categoryBitMask = mainCategory
mainCharNode.physicsBody?.collisionBitMask = groundCategory
} else {
mainCharNode.physicsBody?.applyImpulse(CGVectorMake(10000, 10000))
}
}
Update()-function:
override func update(currentTime: CFTimeInterval) {
if lastUpdateTime > 0 {
dt = currentTime - lastUpdateTime
} else {
dt = 0
}
lastUpdateTime = currentTime
if mainCharNode.position.x > 1000 {
let moveCamera = SKAction.moveTo(CGPoint(x: mainCharNode.position.x, y: cameraNode.position.y), duration: dt)
cameraNode.runAction(moveCamera)
}
if(inAir && !gameOver) {
distance += (mainCharNode.physicsBody?.velocity.dx)!*CGFloat(dt)
if(mainCharNode.physicsBody?.velocity == CGVector(dx: 0,dy: 0)) {
gameOver = true
mainCharNode.physicsBody?.dynamic = false
}
}
}
https://github.com/CalleLundstedt/LauncherGame here is the full project on github.
A: By manually setting the camera position in update, you're delaying the camera movement by at least one frame — physics runs after update, so your camera move happens on the frame after your character moves.
When you use a move action instead of directly setting the position, and giving that action a nonzero duration, you're delaying the camera move even more. (The dt in your code is the time between this frame and the last one before it, and you're applying that time to a future movement.) Because the character is still moving while your action runs, the camera will never catch up — you're always moving the camera to where the character was.
Setting the camera position at all is just making extra work for yourself, though. Use SKConstraint instead, and SpriteKit itself will make sure that the camera position sticks to the character — it solves constraints after physics, but on the same frame.
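A minimal sketch of that approach, reusing the node names from the question (Swift 2-style API names to match the question's code):
// SpriteKit resolves constraints after physics on the same frame,
// so the camera follows the character without lagging behind.
let follow = SKConstraint.distance(SKRange(constantValue: 0), toNode: mainCharNode)
cameraNode.constraints = [follow]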
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36523568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to minify every request on js file with mod_pagespeed I have a huge JavaScript application which uses uncompiled RequireJS. I want every request for a JS file to be served through mod_pagespeed.
How do I configure it so that every requested JS file is minified? Thank you.
I'm using Apache 2.4.6 on CentOS 7.
A: Out of the box mod_pagespeed will optimize all javascript files referenced in the html, but with requirejs most scripts are going to be pulled in dynamically. To optimize those files, turn on InPlaceResourceOptimization (IPRO). IPRO is also enabled by default in versions 1.9 and newer.
You might also want to check out the optimization docs for requirejs.
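A minimal Apache configuration sketch for that (directive names as documented for mod_pagespeed; treat this as a starting point rather than a drop-in config):
ModPagespeed on
# Also optimize .js files that are requested directly (e.g. pulled in by RequireJS),
# not only the ones referenced in the HTML
ModPagespeedInPlaceResourceOptimization on
# Make sure a JavaScript rewriter/minifier is enabled
ModPagespeedEnableFilters rewrite_javascript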
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27375498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Push Notification using REST API in outlook I tried to get a subscription for push notifications.
Here is the code
def subscribe(access_token):
    url = graph_endpoint.format('/subscriptions')
    d = {"changeType": "updated",
         "notificationUrl": "https://webhook.azurewebsites.net/api/send/myNotifyClient",
         "resource": "me/mailFolders('Inbox')/messages",
         "clientState": "secretClientValue",
         "latestSupportedTlsVersion": "v1_2"}
    r = make_api_call('POST', url, access_token, payload=d)
The response code was 400.
The access token is correct.
Another question is how to handle login for the Outlook webhook.
If anybody knows the step-by-step process to create a push notification in Outlook, please share.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60394759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Decode UTF-8 with Javascript I have Javascript in an XHTML web page that is passing UTF-8 encoded strings. It needs to continue to pass the UTF-8 version, as well as decode it. How is it possible to decode a UTF-8 string for display?
<script type="text/javascript">
// <![CDATA[
function updateUser(usernameSent){
var usernameReceived = usernameSent; // Current value: Größe
var usernameDecoded = usernameReceived; // Decode to: Größe
var html2id = '';
html2id += 'Encoded: ' + usernameReceived + '<br />Decoded: ' + usernameDecoded;
document.getElementById('userId').innerHTML = html2id;
}
// ]]>
</script>
A: This is what I found after a more specific Google search than just "UTF-8 encode/decode". So, for those who are looking for a conversion library to convert between encodings, here you go.
https://github.com/inexorabletash/text-encoding
var uint8array = new TextEncoder().encode(str);
var str = new TextDecoder(encoding).decode(uint8array);
Paste from repo readme
All encodings from the Encoding specification are supported:
utf-8 ibm866 iso-8859-2 iso-8859-3 iso-8859-4 iso-8859-5 iso-8859-6 iso-8859-7 iso-8859-8 iso-8859-8-i iso-8859-10 iso-8859-13 iso-8859-14 iso-8859-15 iso-8859-16 koi8-r koi8-u macintosh windows-874 windows-1250 windows-1251 windows-1252 windows-1253 windows-1254 windows-1255 windows-1256 windows-1257 windows-1258 x-mac-cyrillic gb18030 hz-gb-2312 big5 euc-jp iso-2022-jp shift_jis euc-kr replacement utf-16be utf-16le x-user-defined
(Some encodings may be supported under other names, e.g. ascii, iso-8859-1, etc. See Encoding for additional labels for each encoding.)
A: @Albert's solution was the closest, I think, but it can only parse up to 3-byte UTF-8 characters
function utf8ArrayToStr(array) {
var out, i, len, c;
var char2, char3;
out = "";
len = array.length;
i = 0;
// XXX: Invalid bytes are ignored
while(i < len) {
c = array[i++];
if (c >> 7 == 0) {
// 0xxx xxxx
out += String.fromCharCode(c);
continue;
}
// Invalid starting byte
if (c >> 6 == 0x02) {
continue;
}
// #### MULTIBYTE ####
// How many bytes left for thus character?
var extraLength = null;
if (c >> 5 == 0x06) {
extraLength = 1;
} else if (c >> 4 == 0x0e) {
extraLength = 2;
} else if (c >> 3 == 0x1e) {
extraLength = 3;
} else if (c >> 2 == 0x3e) {
extraLength = 4;
} else if (c >> 1 == 0x7e) {
extraLength = 5;
} else {
continue;
}
// Do we have enough bytes in our data?
if (i+extraLength > len) {
var leftovers = array.slice(i-1);
// If there is an invalid byte in the leftovers we might want to
// continue from there.
for (; i < len; i++) if (array[i] >> 6 != 0x02) break;
if (i != len) continue;
// All leftover bytes are valid.
return {result: out, leftovers: leftovers};
}
// Remove the UTF-8 prefix from the char (res)
var mask = (1 << (8 - extraLength - 1)) - 1,
res = c & mask, nextChar, count;
for (count = 0; count < extraLength; count++) {
nextChar = array[i++];
// Is the char valid multibyte part?
if (nextChar >> 6 != 0x02) {break;};
res = (res << 6) | (nextChar & 0x3f);
}
if (count != extraLength) {
i--;
continue;
}
if (res <= 0xffff) {
out += String.fromCharCode(res);
continue;
}
res -= 0x10000;
var high = ((res >> 10) & 0x3ff) + 0xd800,
low = (res & 0x3ff) + 0xdc00;
out += String.fromCharCode(high, low);
}
return {result: out, leftovers: []};
}
This returns {result: "parsed string", leftovers: [list of invalid bytes at the end]} in case you are parsing the string in chunks.
EDIT: fixed the issue that @unhammer found.
A: // String to Utf8 ByteBuffer
function strToUTF8(str){
return Uint8Array.from(encodeURIComponent(str).replace(/%(..)/g,(m,v)=>{return String.fromCodePoint(parseInt(v,16))}), c=>c.codePointAt(0))
}
// Utf8 ByteArray to string
function UTF8toStr(ba){
    // '' is the initial value for reduce; pad each byte to two hex digits
    return decodeURIComponent(ba.reduce((p,c)=>p+'%'+c.toString(16).padStart(2,'0'),''))
}
A: This should work:
// http://www.onicos.com/staff/iz/amuse/javascript/expert/utf.txt
/* utf.js - UTF-8 <=> UTF-16 convertion
*
* Copyright (C) 1999 Masanao Izumo <[email protected]>
* Version: 1.0
* LastModified: Dec 25 1999
* This library is free. You can redistribute it and/or modify it.
*/
function Utf8ArrayToStr(array) {
var out, i, len, c;
var char2, char3;
out = "";
len = array.length;
i = 0;
while(i < len) {
c = array[i++];
switch(c >> 4)
{
case 0: case 1: case 2: case 3: case 4: case 5: case 6: case 7:
// 0xxxxxxx
out += String.fromCharCode(c);
break;
case 12: case 13:
// 110x xxxx 10xx xxxx
char2 = array[i++];
out += String.fromCharCode(((c & 0x1F) << 6) | (char2 & 0x3F));
break;
case 14:
// 1110 xxxx 10xx xxxx 10xx xxxx
char2 = array[i++];
char3 = array[i++];
out += String.fromCharCode(((c & 0x0F) << 12) |
((char2 & 0x3F) << 6) |
((char3 & 0x3F) << 0));
break;
}
}
return out;
}
Check out the JSFiddle demo.
Also see the related questions: here and here
A: Perhaps using the textDecoder will be sufficient.
Not supported in IE though.
var decoder = new TextDecoder('utf-8'),
decodedMessage;
decodedMessage = decoder.decode(message.data);
Handling non-UTF8 text
In this example, we decode the Russian text "Привет, мир!", which means "Hello, world." In our TextDecoder() constructor, we specify the Windows-1251 character encoding, which is appropriate for Cyrillic script.
let win1251decoder = new TextDecoder('windows-1251');
let bytes = new Uint8Array([207, 240, 232, 226, 229, 242, 44, 32, 236, 232, 240, 33]);
console.log(win1251decoder.decode(bytes)); // Привет, мир!
The interface for the TextDecoder is described here.
Retrieving a byte array from a string is equally simpel:
const decoder = new TextDecoder();
const encoder = new TextEncoder();
const byteArray = encoder.encode('Größe');
// converted it to a byte array
// now we can decode it back to a string if desired
console.log(decoder.decode(byteArray));
If you have it in a different encoding then you must compensate for that upon encoding.
The parameter in the constructor for the TextEncoder is any one of the valid encodings listed here.
A: To answer the original question: here is how you decode utf-8 in javascript:
http://ecmanaut.blogspot.ca/2006/07/encoding-decoding-utf8-in-javascript.html
Specifically,
function encode_utf8(s) {
return unescape(encodeURIComponent(s));
}
function decode_utf8(s) {
return decodeURIComponent(escape(s));
}
We have been using this in our production code for 6 years, and it has worked flawlessly.
Note, however, that escape() and unescape() are deprecated. See this.
A: An update to @Albert's answer, adding a condition for emoji (4-byte UTF-8 sequences).
function Utf8ArrayToStr(array) {
var out, i, len, c;
var char2, char3, char4;
out = "";
len = array.length;
i = 0;
while(i < len) {
c = array[i++];
switch(c >> 4)
{
case 0: case 1: case 2: case 3: case 4: case 5: case 6: case 7:
// 0xxxxxxx
out += String.fromCharCode(c);
break;
case 12: case 13:
// 110x xxxx 10xx xxxx
char2 = array[i++];
out += String.fromCharCode(((c & 0x1F) << 6) | (char2 & 0x3F));
break;
case 14:
// 1110 xxxx 10xx xxxx 10xx xxxx
char2 = array[i++];
char3 = array[i++];
out += String.fromCharCode(((c & 0x0F) << 12) |
((char2 & 0x3F) << 6) |
((char3 & 0x3F) << 0));
break;
case 15:
// 1111 0xxx 10xx xxxx 10xx xxxx 10xx xxxx
char2 = array[i++];
char3 = array[i++];
char4 = array[i++];
out += String.fromCodePoint(((c & 0x07) << 18) | ((char2 & 0x3F) << 12) | ((char3 & 0x3F) << 6) | (char4 & 0x3F));
break;
} // end switch
} // end while
return out;
}
A: Here is a solution handling all Unicode code points, including upper (4-byte) values, and supported by all modern browsers (IE and others > 5.5). It uses decodeURIComponent(), but NOT the deprecated escape/unescape functions:
function utf8_to_str(a) {
for(var i=0, s=''; i<a.length; i++) {
var h = a[i].toString(16)
if(h.length < 2) h = '0' + h
s += '%' + h
}
return decodeURIComponent(s)
}
Tested and available on GitHub
To create UTF-8 from a string:
function utf8_from_str(s) {
for(var i=0, enc = encodeURIComponent(s), a = []; i < enc.length;) {
if(enc[i] === '%') {
a.push(parseInt(enc.substr(i+1, 2), 16))
i += 3
} else {
a.push(enc.charCodeAt(i++))
}
}
return a
}
Tested and available on GitHub
A: Using my 1.6KB library, you can do
ToString(FromUTF8(Array.from(usernameReceived)))
A: This is a solution with extensive error reporting.
It would take an UTF-8 encoded byte array (where byte array is represented as
array of numbers and each number is an integer between 0 and 255 inclusive)
and will produce a JavaScript string of Unicode characters.
function getNextByte(value, startByteIndex, startBitsStr,
additional, index)
{
if (index >= value.length) {
var startByte = value[startByteIndex];
throw new Error("Invalid UTF-8 sequence. Byte " + startByteIndex
+ " with value " + startByte + " (" + String.fromCharCode(startByte)
+ "; binary: " + toBinary(startByte)
+ ") starts with " + startBitsStr + " in binary and thus requires "
+ additional + " bytes after it, but we only have "
+ (value.length - startByteIndex) + ".");
}
var byteValue = value[index];
checkNextByteFormat(value, startByteIndex, startBitsStr, additional, index);
return byteValue;
}
function checkNextByteFormat(value, startByteIndex, startBitsStr,
additional, index)
{
if ((value[index] & 0xC0) != 0x80) {
var startByte = value[startByteIndex];
var wrongByte = value[index];
throw new Error("Invalid UTF-8 byte sequence. Byte " + startByteIndex
+ " with value " + startByte + " (" +String.fromCharCode(startByte)
+ "; binary: " + toBinary(startByte) + ") starts with "
+ startBitsStr + " in binary and thus requires " + additional
+ " additional bytes, each of which shouls start with 10 in binary."
+ " However byte " + (index - startByteIndex)
+ " after it with value " + wrongByte + " ("
+ String.fromCharCode(wrongByte) + "; binary: " + toBinary(wrongByte)
+") does not start with 10 in binary.");
}
}
function fromUtf8 (str) {
var value = [];
var destIndex = 0;
for (var index = 0; index < str.length; index++) {
var code = str.charCodeAt(index);
if (code <= 0x7F) {
value[destIndex++] = code;
} else if (code <= 0x7FF) {
value[destIndex++] = ((code >> 6 ) & 0x1F) | 0xC0;
value[destIndex++] = ((code >> 0 ) & 0x3F) | 0x80;
} else if (code <= 0xFFFF) {
value[destIndex++] = ((code >> 12) & 0x0F) | 0xE0;
value[destIndex++] = ((code >> 6 ) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 0 ) & 0x3F) | 0x80;
} else if (code <= 0x1FFFFF) {
value[destIndex++] = ((code >> 18) & 0x07) | 0xF0;
value[destIndex++] = ((code >> 12) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 6 ) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 0 ) & 0x3F) | 0x80;
} else if (code <= 0x03FFFFFF) {
value[destIndex++] = ((code >> 24) & 0x03) | 0xF0;
value[destIndex++] = ((code >> 18) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 12) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 6 ) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 0 ) & 0x3F) | 0x80;
} else if (code <= 0x7FFFFFFF) {
value[destIndex++] = ((code >> 30) & 0x01) | 0xFC;
value[destIndex++] = ((code >> 24) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 18) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 12) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 6 ) & 0x3F) | 0x80;
value[destIndex++] = ((code >> 0 ) & 0x3F) | 0x80;
} else {
throw new Error("Unsupported Unicode character \""
+ str.charAt(index) + "\" with code " + code + " (binary: "
+ toBinary(code) + ") at index " + index
+ ". Cannot represent it as UTF-8 byte sequence.");
}
}
return value;
}
A: You can use decodeURI for it.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURI
As simple as this:
decodeURI('https://developer.mozilla.org/ru/docs/JavaScript_%D1%88%D0%B5%D0%BB%D0%BB%D1%8B');
// "https://developer.mozilla.org/ru/docs/JavaScript_шеллы"
Consider using it inside a try/catch block so you don't miss a URIError.
Also, it has full browser support.
A: const decoder = new TextDecoder();
console.log(decoder.decode(new Uint8Array([97])));
MDN resource link
A: I reckon the easiest way would be to use the built-in JS functions decodeURI() / encodeURI().
function (usernameSent) {
var usernameEncoded = usernameSent; // Current value: utf8
var usernameDecoded = decodeURI(usernameEncoded); // Decoded
// do stuff
}
A: Preferably, as others have suggested, use the Encoding API. But if you need to support IE (for some strange reason) MDN recommends this repo FastestSmallestTextEncoderDecoder
If you need to make use of the polyfill library:
import {encode, decode} from "fastestsmallesttextencoderdecoder";
Then (regardless of the polyfill) for encoding and decoding:
// takes in USVString and returns a Uint8Array object
const encoded = new TextEncoder().encode('€')
console.log(encoded);
// takes in an ArrayBuffer or an ArrayBufferView and returns a DOMString
const decoded = new TextDecoder().decode(encoded);
console.log(decoded);
A: I searched for a simple solution and this works well for me:
//input data
view = new Uint8Array(data);
//output string
serialString = ua2text(view);
//convert UTF8 to string
function ua2text(ua) {
var s = "";
for (var i = 0; i < ua.length; i++) {
s += String.fromCharCode(ua[i]);
}
return s;
}
The only issue I have is that sometimes I get one character at a time. This might be by design with my source of the ArrayBuffer. I'm using https://github.com/xseignard/cordovarduino to read serial data on an Android device.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13356493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
} |
Q: How to pass arguments without concatenating in Powershell? (I'm a PowerShell newbie, and finding it a very frustrating language)
I want to create a function - with arguments and a return value. Not rocket science? That's what I thought anyway. But PowerShell keeps concatenating my arguments - here's the MCVE version:
function hello($a, $b) {
Write-host "a is $a and b is $b"
}
hello("first", "second")
I was expecting this to produce the output:
a is first and b is second
I wasn't expecting....
a is first second and b is
Along with the fact that it is concatenating values in a way I don't expect, it's also failing to flag up the argument which is consequently missing.
How can I pass arguments to a function? (!!!!OMG WTF????!!!)
A: Remove the () from your call to the function
function hello($a, $b) {
Write-host "a is $a and b is $b"
}
hello "first" "second"
or
hello -a "first" -b "second"
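For the return-value half of the question, a small sketch: a PowerShell function returns whatever it writes to the output stream (return essentially just exits early), so you can do:
function Add-Numbers($a, $b) {
    return $a + $b   # same effect as simply writing: $a + $b
}

$sum = Add-Numbers 2 3   # again: space-separated arguments, no parentheses or commas
Write-Host "sum is $sum"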
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49369695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: PHP cUrl returning empty string without var_dump I was using cURL to scrape prices for some products. All worked well for a few months, until now.
Now after the cURL call I get an empty result... apparently... because if I do a var_dump on the returned variable, it works... and I don't understand how a variable can be empty until I print it?!
This is my full cURL function:
function linkcurl($targetURL){
$linkcurl = curl_init();
curl_setopt($linkcurl, CURLOPT_COOKIEJAR, dirname(__FILE__) . "/cookie.tmpz");
curl_setopt($linkcurl, CURLOPT_COOKIEFILE, dirname(__FILE__) . "/cookie.tmpz");
curl_setopt($linkcurl, CURLOPT_VERBOSE, true);
//curl_setopt($linkcurl, CURLOPT_USERAGENT, random_user_agent());
curl_setopt($linkcurl, CURLOPT_FOLLOWLOCATION, TRUE);
curl_setopt($linkcurl, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($linkcurl, CURLOPT_AUTOREFERER, TRUE);
curl_setopt($linkcurl, CURLOPT_HEADER, 0); // debug headers sent - 1
curl_setopt($linkcurl, CURLOPT_URL, $targetURL);
$datax = curl_exec ($linkcurl);
curl_close($linkcurl);
return $datax;
}
$prdhtml = linkcurl($product_page_url); //
No, I did try to add more options to my cURL call, but it does not change a thing:
curl_setopt($linkcurl, CURLOPT_VERBOSE, true);
curl_setopt($linkcurl, CURLOPT_FOLLOWLOCATION, TRUE);
curl_setopt($linkcurl, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($linkcurl, CURLOPT_AUTOREFERER, TRUE);
Same result, nothing changed .
I did try to add a var_dump ob_start to my curl function like this :
ob_start();
return curl_exec ($ch);
ob_end_clean();
Still nothing .
I also tried to capture the cURL output outside the function like this :
ob_start();
var_dump($prdhtml);
$prdhtml = ob_get_clean();
Still .. nothing changed ... I also tried varionts with print_r and var_export... nothing .
I also did try a fixed user agent for cURL and also random user agents .. nothing...
The only time it works (from time to time, not always) is if I do a simple var_dump($prdhtml); of the string as a result on the page, and I don't get how that is different from ob_start.
I don't understand what the problem is and how to fix it ...
EDIT:
sample code and fiddle :
http://codepad.viper-7.com/aePjg7
A: function linkcurl($targetURL){
$linkcurl = curl_init();
curl_setopt($linkcurl, CURLOPT_COOKIEJAR, dirname(__FILE__) . "/cookie.tmpz");
curl_setopt($linkcurl, CURLOPT_COOKIEFILE, dirname(__FILE__) . "/cookie.tmpz");
curl_setopt($linkcurl, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($linkcurl, CURLOPT_CUSTOMREQUEST, 'GET');
curl_setopt($linkcurl, CURLOPT_URL, $targetURL);
$datax = curl_exec ($linkcurl);
if ($datax) {
curl_close($linkcurl);
return $datax;
} else {
return curl_error ( $linkcurl );
}
}
$prdhtml = linkcurl($product_page_url); //
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28011540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Proper way to declare JSON object in Typescript I have the following JSON object in my Angular 2 app and would like to know what is the proper way to declare it in TypeScript.
data = [
{
'id':1,
'title':'something'
'node': [
{
'id':1,
'title':'something'
'node': []
}
]
},
{
'id':2,
'title':'something'
'node': [
{
'id':1,
'title':'something'
'node': []
}
]
}
]
A: The proper way is to use an interface; it doesn't generate extra code when compiled to JavaScript, and it offers you static typing capabilities:
https://www.typescriptlang.org/docs/handbook/interfaces.html
A: Here is an easy and naive implementation of what you're asking for:
interface IDataNode {
id: number;
title: string;
node: Array<IDataNode>;
}
If you want to instantiate said nodes from code:
class DataNode implements IDataNode {
id: number;
title: string;
node: Array<IDataNode>;
constructor(id: number, title: string, node?: Array<IDataNode>) {
this.id = id;
this.title = title;
this.node = node || [];
}
addNode(node: IDataNode): void {
this.node.push(node);
}
}
Using this to hardcode your structure:
let data: Array<IDataNode> = [
new DataNode(1, 'something', [
new DataNode(2, 'something inner'),
new DataNode(3, 'something more')
]),
new DataNode(4, 'sibling 1'),
new DataNode(5, 'sibling 2', [
new DataNode(6, 'child'),
new DataNode(7, 'another child', [
new DataNode(8, 'even deeper nested')
])
])
];
A: Update: 7/26/2021
I revisited the discussion noted in the original answer, and there is now an update with an improved implementation.
type JSONValue =
| string
| number
| boolean
| null
| JSONValue[]
| {[key: string]: JSONValue}
interface JSONObject {
[k: string]: JSONValue
}
interface JSONArray extends Array<JSONValue> {}
This has been working very well for me.
Discussion reference: https://github.com/microsoft/TypeScript/issues/1897#issuecomment-822032151
Original answer: Sep 29 '20
I realize this is an old question, but I just found a solution that worked very well for me. Declare the following
type JsonPrimitive = string | number | boolean | null
interface JsonMap extends Record<string, JsonPrimitive | JsonArray | JsonMap> {}
interface JsonArray extends Array<JsonPrimitive | JsonArray | JsonMap> {}
type Json = JsonPrimitive | JsonMap | JsonArray
then any of the following (including the OP's version slightly modified for syntax errors) will work
let a: Json = {};
a[1] = 5;
a["abc"] = "abc";
a = {
a: {
a: 2,
},
b: [1, 2, 3],
c: true,
};
a = [
{
"id": 1,
"title": "something",
"node": [
{
"id": 1,
"title": "something",
"node": [],
},
],
},
{
"id": 2,
"title": "something",
"node": [
{
"id": 1,
"title": "something",
"node": [],
},
],
},
];
This answer should be credited to Andrew Kaiser who made the suggestion on the discussion about making Json a basic type in Typescript: https://github.com/microsoft/TypeScript/issues/1897#issuecomment-648484759
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38123222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Business logic in MVC I have 2 questions:
Q1. Where exactly does "business logic" lie in the MVC pattern? I am confused between Model and Controller.
Q2. Is "business logic" the same as "business rules"? If not, what is the difference?
It would be great if you could explain with a small example.
A: The term business logic is in my opinion not a precise definition. Evans talks in his book, Domain Driven Design, about two types of business logic:
*
*Domain logic.
*Application logic.
This separation is in my opinion a lot clearer. And with the realization that there are different types of business rules also comes the realization that they don't all necessarily go the same place.
Domain logic is logic that corresponds to the actual domain. So if you are creating an accounting application, then domain rules would be rules regarding accounts, postings, taxation, etc. In an agile software planning tool, the rules would be stuff like calculating release dates based on velocity and story points in the backlog, etc.
For both these types of application, CSV import/export could be relevant, but the rules of CSV import/export has nothing to do with the actual domain. This kind of logic is application logic.
Domain logic most certainly goes into the model layer. The model would also correspond to the domain layer in DDD.
Application logic however does not necessarily have to be placed in the model layer. That could be placed in the controllers directly, or you could create a separate application layer hosting those rules. What is most logical in this case would depend on the actual application.
A: It does not make sense to put your business layer in the Model for an MVC project.
Say that your boss decides to change the presentation layer to something else, you would be screwed! The business layer should be a separate assembly. A Model contains the data that comes from the business layer that passes to the view to display. Then on post for example, the model binds to a Person class that resides in the business layer and calls PersonBusiness.SavePerson(p); where p is the Person class. Here's what I do (BusinessError class is missing but would go in the BusinessLayer too):
A: A1: Business logic goes in the Model part of MVC. The role of the Model is to contain data and business logic. The Controller, on the other hand, is responsible for receiving user input and deciding what to do.
A2: A business rule is part of business logic. They have a has-a relationship: business logic contains business rules.
Take a look at Wikipedia entry for MVC. Go to Overview where it mentions the flow of MVC pattern.
Also look at Wikipedia entry for Business Logic. It is mentioned that Business Logic is comprised of Business Rules and Workflow.
A: As a couple of answers have pointed out, I believe there is some misunderstanding of multi tier vs MVC architecture.
Multi tier architecture involves breaking your application into tiers/layers (e.g. presentation, business logic, data access)
MVC is an architectural style for the presentation layer of an application. For non trivial applications, business logic/business rules/data access should not be placed directly into Models, Views, or Controllers. To do so would be placing business logic in your presentation layer and thus reducing reuse and maintainability of your code.
The model is a very reasonable place to put business logic, but a better/more maintainable approach is to separate your presentation layer from your business logic layer: create a business logic layer and simply call it from your models when needed. The business logic layer will in turn call into the data access layer.
I would like to point out that it is not uncommon to find code that mixes business logic and data access in one of the MVC components, especially if the application was not architected using multiple tiers. However, in most enterprise applications, you will commonly find multi tier architectures with an MVC architecture in place within the presentation layer.
A: First of all:
I believe that you are mixing up the MVC pattern and n-tier-based design principles.
Using an MVC approach does not mean that you shouldn't layer your application.
It might help if you see MVC more like an extension of the presentation layer.
If you put non-presentation code inside the MVC pattern you might very soon end up in a complicated design.
Therefore I would suggest that you put your business logic into a separate business layer.
Just have a look at this: Wikipedia article about multitier architecture
It says:
Today, MVC and similar model-view-presenter (MVP) are Separation of Concerns design patterns that apply exclusively to the presentation layer of a larger system.
Anyway ... when talking about an enterprise web application the calls from the UI to the business logic layer should be placed inside the (presentation) controller.
That is because the controller actually handles the calls to a specific resource, queries the data by making calls to the business logic and links the data (model) to the appropriate view.
Mud told you that the business rules go into the model.
That is also true, but he mixed up the (presentation) model (the 'M' in MVC) and the data layer model of a tier-based application design.
So it is valid to place your database related business rules in the model (data layer) of your application.
But you should not place them in the model of your MVC-structured presentation layer as this only applies to a specific UI.
This technique is independent of whether you use a domain driven design or a transaction script based approach.
Let me visualize that for you:
Presentation layer: Model - View - Controller
Business layer: Domain logic - Application logic
Data layer: Data repositories - Data access layer
The model that you see above means that you have an application that uses MVC, DDD and a database-independent data layer.
This is a common approach to design a larger enterprise web application.
But you can also shrink it down to use a simple non-DDD business layer (a business layer without domain logic) and a simple data layer that writes directly to a specific database.
You could even drop the whole data-layer and access the database directly from the business layer, though I do not recommend it.
[Note:]
You should also be aware of the fact that nowadays there is more than just one "model" in an application.
Commonly, each layer of an application has its own model.
The model of the presentation layer is view specific but often independent of the used controls.
The business layer can also have a model, called the "domain-model". This is typically the case when you decide to take a domain-driven approach.
This "domain-model" contains of data as well as business logic (the main logic of your program) and is usually independent of the presentation layer.
The presentation layer usually calls the business layer on a certain "event" (button pressed etc.) to read data from or write data to the data layer.
The data layer might also have its own model, which is typically database related. It often contains a set of entity classes as well as data-access objects (DAOs).
The question is: how does this fit into the MVC concept?
Answer -> It doesn't!
Well - it kinda does, but not completely. This is because MVC is an approach that was developed in the late 1970s for the Smalltalk-80 programming language. At that time, GUIs and personal computers were quite uncommon and the world wide web was not even invented!
Most of today's programming languages and IDEs were developed in the 1990s.
At that time computers and user interfaces were completely different from those in the 1970s.
You should keep that in mind when you talk about MVC.
Martin Fowler has written a very good article about MVC, MVP and today's GUIs.
A: Q1:
Business logic can be considered in two categories:
*
*Domain logic, like checks on an email address (uniqueness, constraints, etc.), obtaining the price of a product for an invoice, or calculating the shopping cart's total price based on its product objects.
*Broader and more complicated workflows, called business processes, like controlling the registration process for a student (which usually includes several steps, needs different checks, and has more complicated constraints).
The first category goes into the model and the second one belongs to the controller. This is because the cases in the second category are broad application logic, and putting them in the model may muddy the model's abstraction (for example, it is not clear whether those decisions should go in one model class or another, since they are related to both!).
See this answer for a specific distinction between model and controller, this link for very exact definitions and also this link for a nice Android example.
The point is that the notes mentioned by "Mud" and "Frank" above both can be true as well as "Pete"'s (business logic can be put in model, or controller, according to the type of business logic).
Finally, note that MVC differs from context to context. For example, in Android applications, some alternative definitions are suggested that differs from web-based ones (see this post for example).
Q2:
Business logic is more general and (as "decyclone" mentioned above) we have the following relation between them:
business rules ⊂ business logics
A: Why don't you introduce a service layer? Then your controller will be lean and more readable, and all your controller functions will be pure actions. You can decompose business logic as much as you need within the service layer. Code reusability is high, and there is no impact on models and repositories.
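As a rough sketch of what that can look like (the class and method names here are made up purely for illustration, not tied to any particular framework):
public class OrderController
{
    private readonly OrderService orderService = new OrderService();

    // Thin action: translate the incoming request and delegate to the service.
    public void Create(int customerId, string[] itemIds)
    {
        orderService.PlaceOrder(customerId, itemIds);
    }
}

public class OrderService
{
    // Business logic lives here and can be reused from any presentation layer.
    public void PlaceOrder(int customerId, string[] itemIds)
    {
        // validation, pricing rules, calls into repositories, etc. go here
    }
}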
A: Business rules go in the model.
Say you were displaying emails for a mailing list. The user clicks the "delete" button next to one of the emails, the controller notifies the model to delete entry N, then notifies the view the model has changed.
Perhaps the admin's email should never be removed from the list. That's a business rule, that knowledge belongs in the model. The view may ultimately represent this rule somehow -- perhaps the model exposes an "IsDeletable" property which is a function of the business rule, so that the delete button in the view is disabled for certain entries - but the rule itself isn't contained in the view.
The model is ultimately gatekeeper for your data. You should be able to test your business logic without touching the UI at all.
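As a rough illustration of that rule living in the model (class and member names are invented for the example):
public class MailingListEntry
{
    public string Email { get; set; }
    public bool IsAdmin { get; set; }

    // Business rule: the admin's address can never be removed from the list.
    public bool IsDeletable
    {
        get { return !IsAdmin; }
    }
}
A plain unit test can then assert that an admin entry reports IsDeletable == false, with no view or controller involved.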
A: This is an answered question, but I'll give my "one cent":
Business rules belong in the model.
The "model" always consists of (logically or physically separated):
*
*presentation model - a set of classes that is well suited for use in the view (it's tailored toward specific UI/presentation),
*domain model - the UI-independent portion of the model, and
*repository - the storage-aware portion of the "model".
Business rules live in the domain model, are exposed in a presentation-suitable form to the "presentation" model and are sometimes duplicated (or also enforced) in the "data layer".
A: Model = code for CRUD database operations.
Controller = responds to user actions, and passes the user requests for data retrieval or delete/update to the model, subject to the business rules specific to an organization. These business rules could be implemented in helper classes, or if they are not too complex, just directly in the controller actions. The controller finally asks the view to update itself so as to give feedback to the user in the form of a new display, or a message like 'updated, thanks', etc.
View = UI that is generated based on a query on the model.
There are no hard and fast rules regarding where business rules should go. In some designs they go into model, whereas in others they are included with the controller. But I think it is better to keep them with the controller. Let the model worry only about database connectivity.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4415904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "205"
} |
Q: PHP strpos is not working List array:
[lists] => Array
(
[0] => Array
(
[ID] => 1
[Name] => Bunglows
[Property_type] =>
[Status] => Open
)
[1] => Array
(
[ID] => 2
[Name] => Tenament
[Property_type] =>
[Status] => Open
)
)
Amenity Types array:
[amenitytypes] => Array
(
[0] => Array
(
[ID] => 13
[Name] => College
[Amenity_property_ID] => 14
[Property_name] => Bunglows,Tenament
)
)
PHP function
<?php foreach($lists as $list):?>
<?php if(strpos(trim($amenitytype['Property_name']), trim($list['Name'])) == TRUE):?>
<label class="checkbox-inline">
<input type="checkbox" id="checkbox" class="Propertytype" value="<?php echo $list['ID']?>" checked> <?php echo trim($list['Name']);?>
</label>
<?php else:?>
<label class="checkbox-inline">
<input type="checkbox" id="checkbox" class="Propertytype" value="<?php echo $list['ID']?>"> <?php echo trim($list['Name']);?>
</label>
<?php endif;?>
<?php endforeach;?>
The strpos function is not working for me. During code execution, Tenament is checked but Bunglows is not. How do I resolve this issue? The Bunglows value is the same in lists and amenitytypes. I don't know why my function is not working. Can you please help me resolve this error?
A: Change the line below. You are using a multidimensional array, so there is no key named Property_name directly in the $amenitytype array; it lives under the first element.
if(strpos(trim($amenitytype['Property_name']), trim($list['Name'])) == TRUE):?>
To
if(strpos(trim($amenitytype[0]['Property_name']), trim($list['Name'])) !== false):?>
A: You should use !== FALSE instead of == TRUE, as strpos may return 0, which is not considered true even though it points to the first character of the haystack.
Also, your code doesn't make clear what $amenitytype contains, so you may have to replace this line
<?php if(strpos(trim($amenitytype['Property_name']), trim($list['Name'])) == TRUE):?>
with this
<?php if(strpos(trim($amenitytypes[0]['Property_name']), trim($list['Name'])) !== FALSE):?>
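To see why == TRUE fails for you: 'Bunglows' sits at position 0 of 'Bunglows,Tenament', and 0 is falsy in PHP, which is exactly why only Tenament got checked:
var_dump(strpos('Bunglows,Tenament', 'Bunglows')); // int(0) -> 0 == TRUE is false
var_dump(strpos('Bunglows,Tenament', 'Tenament')); // int(9) -> truthy, so == TRUE happens to work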
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43025581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How to load Google API (UDS.JS) on demand (with jQuery)? Tried this:
$('.link').click(function(e) {
$.getScript('http://www.google.com/uds/api?file=uds.js&v=1.0', function() {
$('body').append('<p>GOOGLE API (UDS) is loaded</p>');
});
return false;
});
Yes, it loads the primary "uds.js" file and then locks the page by loading a locale JS file ("default+en.I.js", see line #48 in "uds.js").
workaround (@jsbin)
A: If you want to dynamically load google's libraries, you should check out google's autoloader:
http://code.google.com/apis/ajax/documentation/#AutoLoading
It works quite nicely, but be careful if you use the autoloader wizard.
http://code.google.com/apis/ajax/documentation/autoloader-wizard.html
there's a bug for the c&p code that tripped me up:
http://code.google.com/p/google-ajax-apis/issues/detail?id=244
Also, I found that for some of Google's libraries, if I try to asynchronously load scripts (like yours) without specifying some of the optional parameters (language, callback, etc. -- even with an empty string), I'll see the behavior that you're seeing.
Edit: went ahead and tested it. Your solution here:
http://pastie.org/486925
| {
"language": "en",
"url": "https://stackoverflow.com/questions/897415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: JavaScript not executed on clicking button in Django I am new to web programming and currently working on Django.
I have a html page with 3 buttons
<p style = "color:black;"> <b> Your options: </b></p>
<p style="padding-bottom: 40px;"><button>A</button></p>
<p style="padding-bottom: 40px;"><button>B</button></p>
<p style="padding-bottom: 40px;"><button>C</button></p>
The script that needs to be executed after clicking any one of these is as follows:
<script>
document.getElementById('submit_button').onclick = function(e) {
e.preventDefault();
$('#submit_form').css('display', 'none');
$('#loading').css('display', 'block');
$.ajax({
beforeSend: function (xhr, settings) {
xhr.setRequestHeader('X-CSRFToken', $('[name=csrfmiddlewaretoken]').val());
},
url: window.location.pathname,
type: 'POST',
processData: false,
contentType: false,
success: function () {
window.location.reload();
}
});
return false;
};
</script>
However, when I click on any one of the buttons, nothing happens.
A: None of your buttons has the id submit_button, which you reference in the event handler:
document.getElementById('submit_button').onclick = function(e) {
You need to add it to one of them like so:
<p style="padding-bottom: 40px;"><button id="submit_button">A</button></p>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62942545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: unix command to truncate file contents Can someone help me with a unix command to truncate the contents of the files in a directory? I am using Cygwin on Windows.
A: for file in *
do
>$file
done
A: Just redirect from nowhere:
> somefile.txt
A: If you want to truncate a file to keep the n last lines of a file, you can do something like (500 lines in this example)
mv file file.tmp && tail -n 500 file.tmp > file && rm file.tmp
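If GNU coreutils' truncate is available in your Cygwin install, that is another option (a sketch; the loop is the portable equivalent of the first answer):
truncate -s 0 file.txt          # empty a single file
for f in *; do : > "$f"; done   # empty every file in the directory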
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2379761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Calculating average intensity via histogram plot I have code that detects the objects in the image (code link here) with a rectangle bound.
This will give the output image as shown below
Now I would like to calculate the average intensity of each green rectangle box and plot these against the number of green rectangle boxes. In short, plot a histogram of the average intensity of all rectangle regions. So far I have written the following code for the histogram plot:
from skimage import io
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
img = cv.imread('Fig1.png', 0)
bins = np.arange(250)
ax = plt.hist(img.ravel(), bins=bins)
plt.xlabel('Pixel Values')
plt.ylabel('No of Pixels')
plt.show()
This shows the distribution of pixel intensities, but it does not show the histogram of the average intensity of each rectangle. Any idea how to do this in a simpler way? Thanks in advance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66989020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Airflow fails to write logs to s3 (v1.10.9) I am trying to setup remote logging in Airflow stable/airflow helm chart on v1.10.9 I am using Kubernetes executor and puckel/docker-airflow image. here's my values.yaml file.
airflow:
image:
repository: airflow-docker-local
tag: 1.10.9
executor: Kubernetes
service:
type: LoadBalancer
config:
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: airflow-docker-local
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: 1.10.9
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: Never
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: airflow
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM: airflow
AIRFLOW__KUBERNETES__NAMESPACE: airflow
AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: "s3://xxx"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: "s3://aws_access_key_id:aws_secret_access_key@bucket"
AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
persistence:
enabled: true
existingClaim: ''
postgresql:
enabled: true
workers:
enabled: false
redis:
enabled: false
flower:
enabled: false
but my logs don't get exported to S3, all I get on UI is
*** Log file does not exist: /usr/local/airflow/logs/icp_job_dag/icp-kube-job/2019-02-13T00:00:00+00:00/1.log
*** Fetching from: http://icpjobdagicpkubejob-f4144a374f7a4ac9b18c94f058bc7672:8793/log/icp_job_dag/icp-kube-job/2019-02-13T00:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='icpjobdagicpkubejob-f4144a374f7a4ac9b18c94f058bc7672', port=8793): Max retries exceeded with url: /log/icp_job_dag/icp-kube-job/2019-02-13T00:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f511c883710>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Does anyone have more insight into what I could be missing?
Edit: following @trejas's suggestion below, I created a separate connection and am using that. Here's what my airflow config in values.yaml looks like:
airflow:
image:
repository: airflow-docker-local
tag: 1.10.9
executor: Kubernetes
service:
type: LoadBalancer
connections:
- id: my_aws
type: aws
extra: '{"aws_access_key_id": "xxxx", "aws_secret_access_key": "xxxx", "region_name":"us-west-2"}'
config:
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: airflow-docker-local
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: 1.10.9
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: Never
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: airflow
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM: airflow
AIRFLOW__KUBERNETES__NAMESPACE: airflow
AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://airflow.logs
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: my_aws
AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
I still have the same issue.
A: I was running into the same issue and thought I'd follow up with what ended up working for me. The connection is correct but you need to make sure that the worker pods have the same environment variables:
airflow:
image:
repository: airflow-docker-local
tag: 1.10.9
executor: Kubernetes
service:
type: LoadBalancer
connections:
- id: my_aws
type: aws
extra: '{"aws_access_key_id": "xxxx", "aws_secret_access_key": "xxxx", "region_name":"us-west-2"}'
config:
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: airflow-docker-local
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: 1.10.9
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: Never
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: airflow
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM: airflow
AIRFLOW__KUBERNETES__NAMESPACE: airflow
AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://airflow.logs
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: my_aws
AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_LOG_CONN_ID: my_aws
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://airflow.logs
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
I also had to set the fernet key for the workers (and in general) otherwise I get an invalid token error:
airflow:
fernet_key: "abcdefghijkl1234567890zxcvbnmasdfghyrewsdsddfd="
config:
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__FERNET_KEY: "abcdefghijkl1234567890zxcvbnmasdfghyrewsdsddfd="
A: Your remote log conn id needs to be an ID of a connection in the connections form/list. Not a connection string.
https://airflow.apache.org/docs/stable/howto/write-logs.html
https://airflow.apache.org/docs/stable/howto/connection/index.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60199159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Leaflet update WMS legend on change I have a dropdown that changes the SLD of a WMS layer.
Now, I want the legend to be updated according to this change. Finally, the legend must display the selected SLD of the layer.
The problem is that every time I change my choice with the dropdown, the legend is added to the map. But I want the legend to be refreshed.
For example on this image, I chose 'demo_secteur_statut_' then 'demo_secteur_statut' then 'demo_secteur_statut_'
I think we need to remove layer and/or legend before adding it but my attempts don’t work.
JS
$.ajax({
type: "GET",
url: "sld.json",
dataType: "json",
success: function(data) {
console.log(data);
$('#select-sld').empty();
$('#select-sld').append("<option value='0'>-- Choisir une analyse --</option>");
$.each(data, function(i, item) {
$('#select-sld').append('<option value=' + data[i].id + '>' + data[i].nom + '</option>');
});
////// SLD DROPDOWN (nom = name of SLD on GeoServer)
$("#select-sld").change(function(){
var selectId = $("#select-sld option:selected").val();
var getSLD = [];
for (var i in data){
if(data[i].id == selectId){
getSLD += data[i].nom;
if(data[i].id != selectId){
return false;
}
}
};
secteur.setParams({styles: getSLD});
////// LEGEND
var legend = L.control({position: 'bottomright'});
legend.onAdd = function (map2) {
var div = L.DomUtil.create('div', 'info legend');
var url = 'http://localhost:8080/geoserver/wms?REQUEST=GetLegendGraphic&VERSION=1.0.0&FORMAT=image/png&WIDTH=20&HEIGHT=20&LAYER=cite:demo_secteur&STYLE='+getSLD;
div.innerHTML += '<img src='+url+' alt="legend" width="75" height="55">';
return div;
};
legend.addTo(map2);
});
},
complete: function() {}
});
/////////////////////////////////////////
var map2 = L.map('map2', { zoomControl:false, attributionControl:false }).setView([48.11, -1.67], 14);
L.tileLayer('http://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png').addTo(map2);
var secteur = L.tileLayer.wms("http://localhost:8080/geoserver/cite/wms", {
layers: 'cite:demo_secteur',
format: 'image/png',
transparent: true
});
secteur.addTo(map2);
HTML
<div id="container-map">
<div id="map2"></div>
</div>
Thank you in advance for help
EDIT
Solution found, thanks to: them
You could simply wrap your map.removeControl(legend) in an if
statement which checks if legend is actually an instance of L.Control.
So that gives the following; hoping it helps someone:
var legend;
$("#select-sld").change(function(){
var selectId = $("#select-sld option:selected").val();
var getSLD = [];
for (var i in data){
if(data[i].id == selectId){
getSLD += data[i].nom;
if(data[i].id != selectId){
return false;
}
}
};
secteur.setParams({styles: getSLD});
console.log(secteur.wmsParams.styles);
$('#result-select-sld').html(getSLD);
if(legend instanceof L.Control){map2.removeControl(legend);}
legend = L.control({position: 'bottomright'});
legend.onAdd = function (map2) {
var div = [];
div = L.DomUtil.create('div', 'info legend');
var url = 'http://localhost:8080/geoserver/wms?REQUEST=GetLegendGraphic&VERSION=1.0.0&FORMAT=image/png&WIDTH=20&HEIGHT=20&LAYER=cite:demo_secteur&STYLE='+getSLD;
div.innerHTML += '<img src='+url+' alt="legend" width="75" height="55">';
return div;
};
legend.addTo(map2);
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44888346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Regularly updating CSS inside an iFrame I have a colour picker widget on my page. I'm trying to attach an event handler that will take the selected colour and apply a list of CSS rules to an iFrame on my page. The iFrame is on the same domain as my page, so updating the CSS inline isn't an issue.
The issue is that I need to use pseudo-class selectors (like :hover etc.) in some of my styles, which can't be done inline. I could make a style element and append it to the head of my iFrame, but that would mean a new element gets added each time a new colour is picked. Is there any way to 'replace' the style element each time a colour is picked? Or alternatively, is there an easier solution to my problem?
A: You can use CSS-Variables for this.
Define your colors on ':root' in your CSS file:
:root{
--myColor: red;
--mySecondColor:green;
}
then set the color properties on your elements like this:
button{
background-color: var(--myColor);
padding:10px;
}
div{
color: var(--mySecondColor);
padding:10px;
}
Now in JavaScript you can change these properties globally
document.documentElement.style.setProperty('--myColor','blue');
document.documentElement.style.setProperty('--mySecondColor','orange');
Or change it just locally:
document.querySelector('button').style.setProperty('--myColor','blue');
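Since your styles live inside a same-origin iframe, you can set the same custom properties on the iframe's document from the parent page (a sketch; the selector and the pickedColor variable are illustrative):
var frameDoc = document.querySelector('iframe').contentDocument;
frameDoc.documentElement.style.setProperty('--myColor', pickedColor);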
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52751999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to stream AKS pod logs into Eventhubs? Is there any solution to send AKS pod logs into Eventhub automatically?
A: I'm fairly certain this is not possible directly (because AKS can only stream to OMS), but this link outlines some principles, so you can create a Function/Logic App to do that for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58186640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to quote this on Ruby? I have a string with this value:
'`~!@#:;|$%^>?,)_+-={][&*(<]./"'
How do I declare it in a .rb file without a heredoc?
with heredoc:
bla = <<_
'`~!@#:;|$%^>?,)_+-={][&*(<]./"'
_
bla.chop!
A: You should be using HEREDOC for this, but here you go:
str = '\'`~!@#:;|$%^>?,)_+-={][&*(<]./"\''
Just use double quotes and escape the single quotes in the string. Simple.
A: Using heredoc:
bla = <<_
'`~!@#:;|$%^>?,)_+-={][&*(<]./"'
_
bla.chop!
you can observe the inspection and copy it:
"'`~!@#:;|$%^>?,)_+-={][&*(<]./\"'"
Simple as that.
A: You don't need to use a heredoc to do this, and you can do this simply without using any escapes in your preparation.
>> %q '`~!@#:;|$%^>?,)_+-={][&*(<]./"'
=> "'`~!@#:;|$%^>?,)_+-={][&*(<]./\"'"
The key here is that you are not using a space character in this collection of characters, and so we can use a space to delimit it.
You can use %q or %Q to do this.
Don't generally use a space as the delimiter for this, for obvious reasons, but sometimes it's very useful.
A: In that string you have every character that can be used to escape a string. So even %q/%Q won't help you to use this string verbatim.
%q('`~!@#:;|$%^>?,\)_+-={][&*\(<]./"')
# quoted parentheses
So, your only option is heredoc. With every other method, you'll have to backslash-escape some characters.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18814637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Rstudio construction of table I am learning RStudio for the first time and I have a question about organizing information in a table.
I have three variables: sex (0 = man, 1 = woman), BP (blood pressure) and Obese; in total I have 102 observations. In the Obese variable, values above 1 mean you are obese and values below 1 mean you are healthy.
I want to construct a table that tells me in one column the number of women that are obese and healthy, and in another column the number of men that are obese and healthy.
How do I do that?
A: Using the iris dataset:
library(dplyr) # Load package
iris %>% # Take the data
mutate(above_value = if_else(Sepal.Width > 3, "Above", "Below")) %>% # create a toy variable similar to your Obese variable
count(Species, above_value) # Count by Species (or gender in your case) and the variable I created above
# A tibble: 6 x 3
Species above_value n
<fct> <chr> <int>
1 setosa Above 42
2 setosa Below 8
3 versicolor Above 8
4 versicolor Below 42
5 virginica Above 17
6 virginica Below 33
A: if your object is called data with columns sex, BP and Obese:
library(dplyr)
summary <- data %>%
group_by(sex) %>%
summarise(count_obese = sum(ifelse(Obese > 1, TRUE, FALSE), na.rm = TRUE),
count_healthy = sum(ifelse(Obese <= 1, TRUE, FALSE), na.rm = TRUE))
sum works as R equates the value TRUE to 1 and FALSE to 0. For example:
> TRUE + FALSE + TRUE
[1] 2
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61778093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: how future get() method works with timeout I am a bit confused about how Future.get(timeout) works. As per the definition it should throw an exception after the specified timeout, but that's not happening in my test cases.
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
public class CallableExample {
public static class WordLengthCallable implements Callable<String> {
private String word;
private long waiting;
public WordLengthCallable(String word, long waiting) {
this.word = word;
this.waiting = waiting;
}
public String call() throws InterruptedException {
Thread.sleep(waiting);
return word;
}
}
public static void main(String args[]) throws Exception {
args = new String[] { "i", "am", "in", "love" };
long[] waitArr = new long[] { 3000, 3440, 2500, 3000 };
ExecutorService pool = Executors.newFixedThreadPool(3);
Set<Future<String>> set = new LinkedHashSet<Future<String>>();
int i = 0;
for (String word : args) {
Callable<String> callable = new WordLengthCallable(word, waitArr[i++]);
Future<String> future = pool.submit(callable);
set.add(future);
}
String sum = "";
for (Future<String> future : set) {
try {
sum += future.get(2000, TimeUnit.MILLISECONDS) + ", ";
} catch (Exception e) {
}
}
System.out.print("Result : " + sum);
}
}
output "am, in,"
It behaves differently when changing the waiting times in the array (the waitArr values). When should get with a timeout be used?
A: In your for-loop you wait for the first future to complete. This may take up to 2000 millis. During this time all the other threads keep sleeping, so the remaining wait times of the other futures shrink by those 2000 millis.
In each iteration of your loop you donate up to 2000 millis to the other threads. Only if a future returns successfully earlier do you donate less to the remaining futures. If you would like to observe all futures failing due to the 2000 millis timeout, you would have to wait on them in parallel as well.
If you change some of your code this way:
Set<Callable<String>> tasks = new HashSet<>();
for (String word : args) {
tasks.add(new WordLengthCallable(word, waitArr[i++]));
}
List<Future<String>> futures = Executors.newFixedThreadPool(3)
.invokeAll(tasks, 2000, TimeUnit.MILLISECONDS);
you should observe that none of the tasks will succeed, due to the wait times of:
3000, 3440, 2500, 3000
for each Callable created, which are all greater than 2000.
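invokeAll with a timeout cancels whatever has not completed when the timeout expires, so you can inspect each future afterwards, for example (inside a method that declares throws Exception, like your main):
for (Future<String> future : futures) {
    if (future.isCancelled()) {
        System.out.println("timed out");
    } else {
        System.out.println(future.get()); // already done, returns immediately
    }
}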
A: EDIT: Thanks to @RQube for pointing out the threads' execution order: the 3rd thread finishes before the 1st, so the 4th thread starts after the 3rd finishes rather than after the 1st.
First of all, your thread pool's size is 3. This means your 4th Future will wait for the 3rd to finish.
Let's assume there is no time-consuming work other than the thread waits. Execution will be like this:
*
*1st Future - 3000ms wait time - this will throw a timeout exception, but the task keeps running since you are not terminating it on timeout. So your 4th Future is still waiting for a thread to become free.
Total execution time: 2000ms
*2nd Future - 1440ms wait, since you have already waited 2000ms - this will return, as you see "am" in your output. Also, at the 2500ms mark the 3rd Future finishes and the 4th Future is started.
Total execution time: 3440ms
*3rd Future - no wait time, since we have already waited 3440ms; this will return immediately.
Total execution time: 3440ms.
*4th Future - 2060ms wait time to finish, since it was started at the 2500ms mark and 940ms has passed since its start. This will time out after 2000ms (at 2940ms of its wait).
As you can see, just the 2nd and 3rd Futures return when you call get(), but actually all of them are executed.
Sorry for bad formatting and any typos, as I am writing on mobile.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22863965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Iframe Http error handling I am using an iframe to show my database result. But the very first time, since I am not hitting the database, the iframe shows that the dataTable.jsp page is not available (dataTable.jsp is the page that shows the database table result). I searched on Google and found something called onError and onLoad methods for iframes. If anybody can show me a small example of how to show a different JSP when the required src is not available the first time, it would be a great help for me.
Thanks in advance
<iframe id="dataframe" src="dataTable.jsp" name="dataTable" width="720px" height="620px" align="middle" frameborder="0">
</iframe>
A: Well, with javascript, this could be a way:
HTML:
<iframe id="dataframe" name="dataTable" width="720px" height="620px" align="middle" frameborder="0">
</iframe>
(removed the src attribute)
Then, when you want to load the datatable with your jsp content
JS:
document.getElementById("dataframe").src = 'dataTable.jsp';
Hope this helps, Cheers
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21067666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How may I forbid calls to const member function of an rvalue object in C++ 2011? The following code
#include <vector>
#include <string>
#include <iostream>
std::string const& at(std::vector<std::string> const& n, int i)
{
return n[i];
}
std::vector<std::string> mkvec()
{
std::vector<std::string> n;
n.push_back("kagami");
n.push_back("misao");
return n;
}
int main()
{
std::string const& s = at(mkvec(), 0);
std::cout << s << std::endl; // D'oh!
return 0;
}
may lead to a crash because the original vector has already been destructed there. In C++ 2011 (C++0x), now that rvalue references have been introduced, a deleted function declaration can be used to completely forbid calls to at if the vector argument is an rvalue:
std::string const& at(std::vector<std::string>&&, int) = delete;
That looks good, but the following code still causes a crash
int main()
{
std::string const& s = mkvec()[0];
std::cout << s << std::endl; // D'oh!
return 0;
}
because calling the member function operator [] (size_type) const on an rvalue object is still allowed. Is there any way I can forbid this kind of call?
FIX:
The examples above are not what I did in real projects. I just wonder whether C++ 2011 supports any member function qualification like
class A {
void func() rvalue; // Then a call on an rvalue object goes to this overload
void func() const;
};
FIX:
It's great, but I think the C++ standard goes too far with this feature. Anyway, I have the following code compiled on clang++ 2.9
#include <cstdio>
struct A {
A() {}
void func() &
{
puts("a");
}
void func() &&
{
puts("b");
}
void func() const &
{
puts("c");
}
};
int main()
{
A().func();
A a;
a.func();
A const b;
b.func();
return 0;
}
Thanks a lot!
A: No, and you shouldn't. How am I to do std::cout << at(mkvec(), 0) << std::endl;, a perfectly reasonable thing, if you've banned me from using at() on temporaries?
Storing references to temporaries is just a problem C++ programmers have to deal with, unfortunately.
To answer your new question, yes, you can do this:
class A {
void func() &; // lvalues go to this one
void func() &&; // rvalues go to this one
};
A a;
a.func(); // first overload
A().func(); // second overload
A: Just an idea:
Disable the copy constructor on the vector somehow:
vector ( const vector<T,Allocator>& x );
Implicit copying of arrays is not that good a thing anyway (wondering why the STL authors decided to define such a ctor at all).
It will fix problems like the one you've mentioned and, as a bonus, will force you to use a more efficient version of your function:
void mkvec(std::vector<std::string>& n)
{
n.push_back("kagami");
n.push_back("misao");
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5812631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Saving a string to a txt file on an FTP server I am trying to save a string containing Json syntax to a .txt file on an FTP server.
I tried using this example http://msdn.microsoft.com/en-us/library/ms229715.aspx which worked great.
But this example takes an existing .txt local file and uploads it to the ftp server.
I would like to directly create/update a txt file on the FTP server from a string variable, without having to first create the txt file locally on my PC.
A: Your example link is exactly what you need, but you need to get your information from a MemoryStream instead of an existing file.
You can turn a string directly into a Stream with this:
MemoryStream memStr = new MemoryStream(UTF8Encoding.Default.GetBytes("asdf"));
However, you can shortcut this more by directly turning your string into a byte array, avoiding the need to make a Stream altogether:
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
Byte[] bytes = encoding.GetBytes(yourString);
//and now plug that into your example
Stream requestStream = request.GetRequestStream();
requestStream.Write(bytes, 0, bytes.Length);
requestStream.Close();
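For context, the request object above comes from the linked MSDN example; it is created roughly like this (server address and credentials are placeholders):
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/remote.txt");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("user", "password");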
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19307253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Unable to cast UIImage in swift iOS 8 Extension I have a strange problem. I am trying to build an action extension that will scan a barcode from the provided image. Here is the code.
override func viewDidLoad() {
super.viewDidLoad()
// Get the item[s] we're handling from the extension context.
// For example, look for an image and place it into an image view.
// Replace this with something appropriate for the type[s] your extension supports.
var imageFound = false
for item: AnyObject in self.extensionContext!.inputItems {
let inputItem = item as NSExtensionItem
for provider: AnyObject in inputItem.attachments! {
let itemProvider = provider as NSItemProvider
if itemProvider.hasItemConformingToTypeIdentifier(kUTTypeImage as NSString) {
// This is an image. We'll load it, then place it in our image view.
weak var weakImageView = self.imageView
itemProvider.loadItemForTypeIdentifier(kUTTypeImage as NSString, options: nil, completionHandler: { (image, error) in
if image != nil {
dispatch_async(dispatch_get_main_queue(),{
if let imageView = weakImageView {
var imageToSet: UIImage? = image as? UIImage
imageView.image = image as? UIImage
}
self.imageToScan = self.imageView.image
self.scanFromImage(self.imageToScan!)
})
}
})
imageFound = true
break
}
}
if (imageFound) {
// We only handle one image, so stop looking for more.
break
}
}
}
Now, whenever I try to get a UIImage I always get nil, even though I can see that an image is received. But when I try to get a UIImage from that image it always returns nil. Here is a screenshot from debugging that might help.
Update: Here is the description of the image that is received:
Printing description of image: (NSSecureCoding!) image =
(instance_type = Builtin.RawPointer = 0x15d674c0 -> 0x32b6be5c (void
*)0x32b6be70: NSURL)
I have created the same extension using Objective-C and it works, but not in Swift. Here is the Objective-C code:
- (void)viewDidLoad {
[super viewDidLoad];
// Get the item[s] we're handling from the extension context.
// For example, look for an image and place it into an image view.
// Replace this with something appropriate for the type[s] your extension supports.
BOOL imageFound = NO;
for (NSExtensionItem *item in self.extensionContext.inputItems) {
for (NSItemProvider *itemProvider in item.attachments) {
if ([itemProvider hasItemConformingToTypeIdentifier:(NSString *)kUTTypeImage]) {
// This is an image. We'll load it, then place it in our image view.
__weak UIImageView *imageView = self.imageView;
[itemProvider loadItemForTypeIdentifier:(NSString *)kUTTypeImage options:nil completionHandler:^(UIImage *image, NSError *error) {
if(image) {
dispatch_async(dispatch_get_main_queue(), ^{
[imageView setImage:image];
imageToScan = image;
[self scanImage:imageToScan];
});
}
}];
imageFound = YES;
break;
}
}
if (imageFound) {
// We only handle one image, so stop looking for more.
break;
}
}
}
I tried searching a lot on Google but found nothing. I even tried changed code but it does not work; here is the changed code:
itemProvider.loadItemForTypeIdentifier(kUTTypeImage as NSString, options: nil, completionHandler: { (image, error) in
if image != nil {
NSOperationQueue.mainQueue().addOperationWithBlock {
if let imageView = weakImageView {
var imageToSet: UIImage? = image as? UIImage
imageView.image = image as? UIImage
}
self.imageToScan = self.imageView.image
self.scanFromImage(self.imageToScan)
}
}
})
Update: I have noticed one thing: if I create a new project and add an action extension to it, the same code is auto-generated except for the few lines that I added in the block. There too, without even changing a single line of the auto-generated code, imageView.image is nil. Is this a bug in Swift, or some bug with my Xcode app?
A: the thing is that image is not UIImage, it's NSURL.
Change code to this one:
imageView.image = UIImage(data: NSData(contentsOfURL: image as NSURL)!)!
A: You need to do it like this
if let strongImageView = weakImageView {
if let imageURL = image as? NSURL{
strongImageView.image = UIImage(data:NSData(contentsOfURL: imageURL)!)
}else{
strongImageView.image = image as? UIImage
}
}
For clarification I added the full code below; please refer to it. It worked for me.
override func viewDidLoad() {
super.viewDidLoad()
// Get the item[s] we're handling from the extension context.
// For example, look for an image and place it into an image view.
// Replace this with something appropriate for the type[s] your extension supports.
var imageFound = false
for item: AnyObject in self.extensionContext!.inputItems {
let inputItem = item as! NSExtensionItem
for provider: AnyObject in inputItem.attachments! {
let itemProvider = provider as! NSItemProvider
if itemProvider.hasItemConformingToTypeIdentifier(kUTTypeImage as String) {
// This is an image. We'll load it, then place it in our image view.
weak var weakImageView = self.imageView
itemProvider.loadItemForTypeIdentifier(kUTTypeImage as String, options: nil, completionHandler: { (image, error) in
NSOperationQueue.mainQueue().addOperationWithBlock {
if let strongImageView = weakImageView {
if let imageURL = image as? NSURL{
strongImageView.image = UIImage(data:NSData(contentsOfURL: imageURL)!)
}else{
strongImageView.image = image as? UIImage
}
}
}
})
imageFound = true
break
}
}
if (imageFound) {
// We only handle one image, so stop looking for more.
break
}
}
}
A: I am getting the image as
<UIImage: 0x16ecb410>, {100, 100}
It cannot be cast to NSURL, and I am getting nil in the following expression:
imageView.image = UIImage(data: NSData(contentsOfURL: image as NSURL)!)!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25887242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Library Mozart does not exist in SAS I have written the following to store a file:
libname mozart 'C:\Users\PCPCPC\Documents\sasdeposite\learning';
data mozart.test_scores;
length ID $ 3 Name $ 15;
input ID $ Score1-Score3 Name $;
datalines;
1 90 95 98
2 78 77 75
3 88 91 92
;
But the compiler says that the library MOZART does not exist, even though I can see MOZART under Solution -> Analysis -> Interactive Data Analysis.
A: Check to make sure that folder location exists on the computer.
SAS will not create the folder for you if it doesn't already exist.
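If you are on SAS 9.3 or later, you can alternatively ask SAS to create the directory for you with the DLCREATEDIR system option (worth verifying on your installation):
options dlcreatedir;
libname mozart 'C:\Users\PCPCPC\Documents\sasdeposite\learning';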
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56296867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Interface that Entails the Implementation of Indexer I am looking for a framework-defined interface that declares indexer. In other words, I am looking for something like this:
public interface IYourList<T>
{
T this[int index] { get; set; }
}
I just wonder whether the .NET Framework contains such an interface. If yes, what is it called?
You might ask why I can't just create the interface myself. Well, I could have. But if the .NET Framework already has it, why should I reinvent the wheel?
A: I think you're looking for IList<T>.
Sample pasted from the MSDN site:
T this[
int index
] { get; set; }
EDIT MORE:
This is the entire class I just Reflected to show you exactly how the interface is described in the framework:
[TypeDependency("System.SZArrayHelper")]
public interface IList<T> : ICollection<T>, IEnumerable<T>, IEnumerable
{
// Methods
int IndexOf(T item);
void Insert(int index, T item);
void RemoveAt(int index);
// Properties
T this[int index] { get; set; }
}
A: I am not aware of such an interface in the BCL.
A: There is no interface that only implements an indexer (generic or otherwise).
The closest you can get is use some of the collection interfaces, such as IList.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2482826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Date Function Call Not Working I'm trying to call the constructor so that it calls the checkDate function, but to no avail :(
I'm really new to this.
class Date
{
public:
Date();
Date(int, int, int);
private:
void checkDate(void);
int month, day, year;
};
Date:: Date() // default constructor
{
month = 1;
day = 1;
year = 1960;
}
Date:: Date(int m, int d, int y) // constructor definition
{
m = month, d = day, y = year;
checkDate();
}
void Date:: checkDate() // function to check date
{
if (month < 1 || month > 12)
exit(0);
else if (day < 1 || day > 31)
exit(0);
else if (year < 1960 || year > 2013)
exit(0);
else
cout << "Works." << endl;
}
int main()
{
Date();
Date(1, 1, 1960); //make this work PLEASEEEEEE <333333333333333333333
}
This is what I have so far.
I'm new to this site, not sure if I posted correctly.
A: Your Date(int, int, int) constructor is assigning the variables incorrectly. What you want is month = m; day = d; year = y;
A: Change
Date:: Date(int m, int d, int y) // constructor definition
{
m = month, d = day, y = year;
checkDate();
}
To
Date:: Date(int m, int d, int y) // constructor definition
{
month = m, day = d, year = y ;
checkDate();
}
I would actually change aaaalot, but this is the simplest answer I can give you besides, work, work, work.
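A slightly more idiomatic version of the corrected constructor uses a member initializer list:
Date::Date(int m, int d, int y) : month(m), day(d), year(y) // initialize members directly
{
    checkDate();
}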
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23298621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-7"
} |
Q: Foreign Key in Sequelize I have three tables (models): Users, Preferences, Ideas. Users has a column 'username' as its primary key and I would like to add 'username' as a foreign key to the other tables. How is this done in Sequelize? I am a noob at Sequelize, so please do answer this. hasMany and belongsTo were confusing to me. It would be really helpful if someone answers this.
A: For the two objects: User and Preference, you can specify the relationship as follows:
const User = sequelize.define('User', {
username: Sequelize.STRING,
});
const Preference = sequelize.define('Preference', {
id: Sequelize.INTEGER,
//Below, 'users' refer to the table name and 'username' is the primary key in the 'users' table
user: {
type: Sequelize.STRING,
references: {
model: 'users',
key: 'username',
}
}
});
User.hasMany(Preference); // Add one to many relationship
I would suggest to read the following document to understand better:
Sequelize Foreign Key
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63537595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: macOS - Does anyone know how to scale the view in Xcode when programming for macOS? How can I scale the storyboard view when developing a macOS app? When I do iPhone/iPad development I can scale the storyboard content either by clicking +/- at the bottom of the storyboard pane or by holding down the option key and using the scroll wheel on my mouse. None of this is available when I start a macOS project in Xcode. How come? I could use it even more in macOS development than in iPhone development, since macOS apps usually occupy more screen real estate than iPhone apps. Is there anything hidden that needs to be enabled in order to scale the storyboard view in Xcode?
A: The answer, sadly, is that Xcode simply doesn't support scaling in AppKit storyboards. It only supports scaling in UIKit storyboards.
You should file a feature request at https://bugreport.apple.com.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50525835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Managing Action Controller Parameters I have this parameters:
Parameters: {
"diagram"=>{"name"=>"name123"},
"isit"=>{
"0"=>{"xposition"=>"171", "yposition"=>"451", "titleid"=>"isit0", "description"=>"-description-", "leftrelationsids"=>"", "rightrelationsids"=>""},
"1"=>{"xposition"=>"254", "yposition"=>"554", "titleid"=>"isit1", "description"=>"-description-", "leftrelationsids"=>"", "rightrelationsids"=>""}}}
In the create method that receives the parameters above I want to store a diagram (which for now is just its name) and after that I want to store each of the components.
I'm doing this in the diagrams_controller.rb create method. The diagram has_many components.
My problem is: how do I store the components' data?
I have tried this (for now just trying two of the columns, xposition and yposition):
def create
@diagram = Diagram.new(diagram_params)
@diagram.save
@diagram.components.create(params.require(:isit).permit(:xposition, :yposition))
The diagram is stored, but the components are not. I don't know how to apply this require/permit thing to the components.
Here is the result:
Any help? How should I store the components?
A: Try to use this code:
@diagram = Diagram.new(diagram_params)
@diagram.save
component = Component.create(params.require(:isit).permit(:xposition, :yposition))
@diagram.components << component
@diagram.save
Or use accepts_nested_attributes_for in the Diagram model, and edit the diagram_params method to add the following:
params.require(:diagram).permit(components_attributes: [:xposition, :yposition])
Read about accepts_nested_attributes_for
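For the accepts_nested_attributes_for approach, the model side would look roughly like this (a sketch based on the association you described):
class Diagram < ActiveRecord::Base
  has_many :components
  accepts_nested_attributes_for :components
end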
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25851768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: css How to have margin on element when responsive I have a simple problem I can't remember how to solve.
In my example I have a div with a fixed width that is centered using margin: auto.
When the window is resized I want the div to get a left and right margin once the window touches the div.
At the moment, when the window is resized, the div touches the edge of the window.
I want it to keep this width until the window gets too small, and then have a left and right margin.
*{
margin: 0;
padding: 0;
}
.block{
background: red;
height: 100px;
margin: 0 auto;
width: 600px;
}
<div class='block'>
</div>
A: @media (max-width: 600px) {
.block {
margin: 0 11px;
}
}
A: You just need to add a max-width: 90%, or whatever fits your requirement.
*{
margin: 0;
padding: 0;
}
.block{
background: red;
height: 100px;
margin: 0 auto;
width: 600px;
max-width: 90%;
}
<div class='block'>
</div>
A: You can use media queries to control the styles of your elements.
Example:
The following example changes the background-color to lightgreen if the viewport is 480 pixels wide or wider (if the viewport is less than 480 pixels, the background-color will be pink)
@media screen and (min-width: 480px) {
body {
background-color: lightgreen;
}
}
For more and different device widths, see this link
* {
margin: 0;
padding: 0;
}
.block {
text-align:center;
background: red;
height: 100px;
margin: 0 auto;
width: 600px;
}
@media (max-width: 400px) {
  .block {
    margin: 0 50px;
  }
}
@media (min-width: 401px) and (max-width: 800px) {
  .block {
    margin: 0 30px;
  }
}
@media (min-width: 801px) {
  .block {
    margin: 0 10px;
  }
}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width" />
<title>repl.it</title>
</head>
<body>
<div class="block">1</div>
<script src="script.js"></script>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60778450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: why these two java.util.Pattern are not equal I have two regular expressions using java.util.regex.Pattern.compile. The first one is:
input[\\s\\w=:'\\-"]*type\\s*=\\s*['"]password['"];
the second one is:
input[\\s\\w=:'\\-\\\"]*type\\s*=\\s*['\\\"]password['\\\"];
The only difference between these two regexes is the escaping of the double quotation marks in the latter string. " and \" refer to the same ASCII character ", so they give the same matching results.
However, when I run the following code, it returns false.
Pattern p1=Pattern.compile("input[\\s\\w=:'\\-"]*type\\s*=\\s*['"]password['"]");
Pattern p2=Pattern.compile("input[\\s\\w=:'\\-\\\"]*type\\s*=\\s*['\\\"]password['\\\"]");
System.out.println(p1.equals(p2));
A: In Java 8 the Pattern class doesn't override equals, so it uses the default implementation, which checks whether the two references point to the same object in memory.
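If what you actually need is to compare what was compiled, a common workaround is to compare the source text and flags yourself; just note that a regex source containing \" is a different string from one containing a bare ", even though both match the same character:
boolean samePattern = p1.pattern().equals(p2.pattern()) && p1.flags() == p2.flags();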
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52103427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: paragraph and anchor as a button / how to add a server-side click event Hi my dear friends:
I have a button like below:
<p id="EnterToImagesParag" class="EnterParag">
<a id="EnterToImagesLink" name="EnterToImagesLink" class="EnterLink">
</a>
</p>
and css :
p.EnterParag, p.EnterParag a.EnterLink
{
width: 400px;
height: 45px;
display: block;
}
p#EnterToImagesParag
{
background: url(/Images/Admin/btnConfigImages.png) 0px -45px;
}
p#EnterToImagesParag a#EnterToImagesLink
{
background: url(/Images/Admin/btnConfigImages.png) 0px 0px;
}
and jquery Like this :
$(document.body).ready(function () {
$('.EnterParag a').hover(
function () { $(this).stop().animate({ 'opacity': '0' }, 500); },
function () { $(this).stop().animate({ 'opacity': '1' }, 500); });
});
How can I add a server-side click event to this button?
thanks in advance
A: To clarify, that's not a button, it's an anchor. You can add a server side event by adding runat=server and an event handler for the OnServerClick event.
<a id="EnterToImagesLink" name="EnterToImagesLink" class="EnterLink" runat="server" OnServerClick="MyClickEvent"> </a>
A: You can replace the anchor element "a" with the ASP.NET LinkButton control. It will render the same type of HTML element (an anchor) and it provides you with a Click event as well (server side).
<asp:LinkButton ID="myLinkButton" runat="server" CssClass="EnterLink" Text="My LinkButton" OnClick="OnServerClickMethod" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6237862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Python serial error I'm trying to get Python to connect to an Arduino Uno on serial port 3, so that would be COM3 in the Python code. I am using Python 3.3, the latest version of the Arduino IDE, and pySerial 2.7. This is the code for the Arduino:
void setup() {
Serial.begin(9600); // set the baud rate
Serial.println("Ready"); // print "Ready" once
}
void loop() {
char inByte = ' ';
if(Serial.available()){ // only send data back if data has been sent
char inByte = Serial.read(); // read the incoming data
Serial.println(inByte); // send the data back in a new line so that it is not all one long line
}
delay(100); // delay for 1/10 of a second
}
And this is the python code:
import serial
ser = serial.Serial("COM3", 9600)
then I get this error:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
ser = serial.Serial("COM3", 9600)
File "C:\Python33\lib\site-packages\serial\serialwin32.py", line 38, in __init__
SerialBase.__init__(self, *args, **kwargs)
File "C:\Python33\lib\site-packages\serial\serialutil.py", line 282, in __init__
self.open()
File "C:\Python33\lib\site-packages\serial\serialwin32.py", line 66, in open
raise SerialException("could not open port %r: %r" % (self.portstr, ctypes.WinError()))
serial.serialutil.SerialException: could not open port 'COM3': FileNotFoundError(2, 'The system cannot find the file specified.', None, 2)
This is probably something easy to fix; I have looked pretty much everywhere and I still can't seem to find the answer to the problem.
A: Why are you trying to use "COM3" on a Linux machine? That's a Windows port name. Linux/Unix port names are of the form /dev/ttyUSB0.
But, as the docs show, you can probably just use the port number directly - they start at 0, so you can do ser = serial.Serial(2, 9600).
A: If you have the Arduino IDE open, then Python may not be able to access the port. I've had this problem as well using Processing.
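If the port name itself is in doubt, pySerial can enumerate the serial ports it actually sees; a minimal sketch (the names printed depend entirely on the machine):
import serial
from serial.tools import list_ports

# Print every serial port pySerial can find, so you can confirm whether
# "COM3" (Windows) or "/dev/ttyUSB0" / "/dev/ttyACM0" (Linux) exists here.
for port in list_ports.comports():
    print(port)

# Then open whichever name was actually listed, e.g.:
# ser = serial.Serial("COM3", 9600, timeout=1)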
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20869699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a way to "listen" to image taken events in react native? I'm new to React-Native and I couldn't find any clear answer to this online.
I would like to be able to detect when a photo/video is taken with the native mobile camera, not a customized camera component. Needless to say, I would love that to work on both iOS and Android.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54293159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Validating label content equal to null or string.Empty I'm trying to check if the value of a label is equal to null, " ", or string.Empty, but every time I run through my code, I get the following error:
Object reference not set to an instance of an object.
Here is my coding:
if (lblSupplierEmailAddress.Content.ToString() == "") //Error here
{
MessageBox.Show("A Supplier was selected with no Email Address. Please update the Supplier's Email Address", "Warning", MessageBoxButton.OK, MessageBoxImage.Warning);
return;
}
How can I check if the string value inside my label is equal to null? I might be missing something simple, if so please ignore my incompetence :P
A: Change
if (lblSupplierEmailAddress.Content.ToString() == "")
To
if (String.IsNullOrEmpty((string) lblSupplierEmailAddress.Content))
When lblSupplierEmailAddress.Content is actually null you cannot call ToString on it, as that causes a NullReferenceException. The static IsNullOrEmpty method, however, handles this case and returns true if Content is null.
A: In C#6.0 This will do
if(lblSupplierEmailAddress?.Content?.ToString() == "")
Otherwise, if lblSupplierEmailAddress always exists, you could simply do:
if(lblSupplierEmailAddress.Content?.ToString() == "")
The equivalent code would be:
if(lblSupplierEmailAddress.Content != null)
if (lblSupplierEmailAddress.Content.ToString() == ""){
//do something
}
A: if (null != lblSupplierEmailAddress.Content
    && string.IsNullOrEmpty(lblSupplierEmailAddress.Content.ToString()))
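If this check is needed in several places, it may be worth wrapping it in a small helper; a sketch with a made-up extension method name (this is WPF, so Label comes from System.Windows.Controls):
using System.Windows.Controls;

public static class LabelExtensions
{
    // Hypothetical helper: treats a null Content, an empty string, or pure
    // whitespace (e.g. " ") as "no email address entered".
    public static bool HasNoText(this Label label)
    {
        return string.IsNullOrWhiteSpace(label.Content as string);
    }
}

// Usage in the validation code:
// if (lblSupplierEmailAddress.HasNoText())
// {
//     MessageBox.Show("A Supplier was selected with no Email Address. ...");
//     return;
// }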
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34988044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to turn a collection in Firebase into a List of Strings and make some conditions with it I'm trying to get a collection in Firebase, turn it into a List of Strings, and apply some conditions to it. I'm creating an app for a store, and my intention is to implement a list of favorites for each user, toggled when I press a button on a product. To add a favorite product in Firebase, I'm using this, and it's working:
_saveFavorite(Product product) async {
await _loadCurrentUser(); //So I get the user ID
Firestore db = Firestore.instance;
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(product.id)
.setData(product.toMap());
}
To remove the favorite product, I'm using this:
_removeFavorite(String productID) async {
await _loadCurrentUser();
Firestore db = Firestore.instance;
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(productID)
.delete();
}
So, this is the structure: Collection("my_favorites") > Document (userID) > Collection ("products") > Document (productID) > products saved as favorites.
I'm trying to get all the product IDs saved in Collection("products") to build a condition for a RaisedButton, but I don't know how to do it. When I press this button, I want to apply a condition like: ListOfIDProducts.contains(product.id) ? _removeFavorite : _saveFavorite;
Thanks for your attention and if you could help me, I appreciate it very much!
A: You could store such a list inside of the /my_favorites/USER_ID document as an array of currently favorited product IDs. You could maintain this list using a Cloud Function as each product is added and removed from the /my_favorites/USER_ID/products collection, but it's arguably simpler to just make use of a batched write along with the array field transforms, arrayUnion() and arrayRemove().
_saveFavorite(Product product) async {
await _loadCurrentUser();
Firestore db = Firestore.instance;
WriteBatch batch = db.batch();
// queue adding the product's ID to the products array
batch.update(
db.collection("my_favorites")
.document(_userID),
{
'products': FieldValue.arrayUnion([product.id])
}
);
// queue uploading a copy of the product's data to this user's favorites
batch.set(
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(product.id),
product.toMap()
);
return batch.commit();
}
_removeFavorite(String productID) async {
await _loadCurrentUser();
Firestore db = Firestore.instance;
WriteBatch batch = db.batch();
// queue removing the product ID from the products array
batch.update(
db.collection("my_favorites")
.document(_userID),
{
'products': FieldValue.arrayRemove([productID])
}
);
// queue deleting the copy of /products/PRODUCT_ID in this user's favorites
batch.delete(
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(productID)
);
return batch.commit();
}
To get the list of product IDs, you would use something similar to:
_getFavoriteProductIDs() async {
await _loadCurrentUser();
Firestore db = Firestore.instance;
return db.collection("my_favorites")
.document(_userID)
.get()
.then((querySnapshot) {
return querySnapshot.exists ? querySnapshot.get("products") : [];
});
}
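With that getter in place, the toggle described in the question could look roughly like this (a sketch only; it assumes the single-product _saveFavorite/_removeFavorite methods above, and the widget/state wiring is left out):
// Hypothetical toggle handler for the favorite button on a product.
_toggleFavorite(Product product) async {
  final favoriteIDs = await _getFavoriteProductIDs();
  if (favoriteIDs.contains(product.id)) {
    await _removeFavorite(product.id); // already a favorite: remove it
  } else {
    await _saveFavorite(product);      // not yet a favorite: save it
  }
  // In a StatefulWidget, call setState(() {}) here to refresh the button icon.
}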
You could even convert it to work with lists instead:
_saveFavorite(List<Product> products) async {
if (products.length == 0) {
return; // no action needed
}
await _loadCurrentUser();
Firestore db = Firestore.instance;
WriteBatch batch = db.batch();
// queue adding each product's ID to the products array
batch.update(
db.collection("my_favorites")
.document(_userID),
{
'products': FieldValue.arrayUnion(
products.map((product) => product.id).toList()
)
}
);
// queue uploading a copy of each product to this user's favorites
for (var product in products) {
batch.set(
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(product.id),
product.toMap()
);
}
return batch.commit();
}
_removeFavorite(List<String> productIDs) async {
if (productIDs.length == 0) {
return; // no action needed
}
await _loadCurrentUser();
Firestore db = Firestore.instance;
WriteBatch batch = db.batch();
// queue removing each product ID from the products array
batch.update(
db.collection("my_favorites")
.document(_userID),
{
'products': FieldValue.arrayRemove(productIDs)
}
);
// queue deleting the copy of each product in this user's favorites
for (var productID in productIDs) {
batch.delete(
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(productID)
);
}
return batch.commit();
}
Additional note: With your current implementation, a favourited product is copied from /products/PRODUCT_ID to /my_favorites/USER_ID/products/PRODUCT_ID. Remember that with this structure, if /products/PRODUCT_ID is ever updated, you will have to update every copy of that product. I suggest renaming products to favorited-products so that you can achieve this using a Cloud Function and a Collection Group Query (see this answer for more info).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66825914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Exclude white space from Elasticsearch when searching I've got Elasticsearch working great on my site and I can search pretty decently. My problem is that when I do a search on something like HipHop vs Hip Hop, or cellphone vs cell phone, the results for the first form of the query won't appear. I want to make it so that if a user searches for either word, with or without a space, the results will be the same. Here's what my search code looks like. I'm using Laravel 5.
$q = $request->input('q');
$response = $client->search([
'index' => 'users',
'type' => 'user',
'body' => [
'query' => [
'bool' => [
'should' => [
['match' => [ 'text' => $q ] ],
],
],
],
],
]);
A: First you must understand the indexing process: your string is passed through the default analyzer if you didn't change the default mapping.
HipHop is kept as a single term
Hip Hop is saved as separate terms: Hip, Hop
When you run a match query like your example HipHop vs Hip Hop, the query is passed through the analyzer too and is split into "HipHop, vs, Hip, Hop"; Elasticsearch then builds a boolean query that effectively asks: "does HipHop match?", "does vs match?", "does Hip match?", "does Hop match?"
If you want a scenario where HipHop is indexed and a user searching for Hip Hop, HipHop vs Hip Hop, or HopHip should get HipHop back, you must implement a strategy such as:
Regex or prefix query: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-regexp-query.html
Ngram for partial matching: https://www.elastic.co/guide/en/elasticsearch/guide/current/_ngrams_for_partial_matching.html
Synonyms: https://www.elastic.co/guide/en/elasticsearch/guide/current/using-synonyms.html
And if you want to keep terms containing spaces, like "Hip Hop", indexed as a single token, you need to use a not_analyzed mapping.
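As a concrete illustration of the synonym option, here is a hedged sketch using the same elasticsearch-php client; the index name, synonym list, and ES 2.x-style string mapping are assumptions for illustration, not details taken from the question.
<?php
// Create an index whose analyzer expands "hiphop"/"hip hop" (and similar pairs)
// to the same tokens, so either spelling matches documents indexed with the other.
require 'vendor/autoload.php';

$client = Elasticsearch\ClientBuilder::create()->build();

$client->indices()->create([
    'index' => 'users_v2', // hypothetical new index
    'body'  => [
        'settings' => [
            'analysis' => [
                'filter' => [
                    'my_synonyms' => [
                        'type'     => 'synonym',
                        'synonyms' => ['hiphop, hip hop', 'cellphone, cell phone'],
                    ],
                ],
                'analyzer' => [
                    'synonym_analyzer' => [
                        'tokenizer' => 'standard',
                        'filter'    => ['lowercase', 'my_synonyms'],
                    ],
                ],
            ],
        ],
        'mappings' => [
            'user' => [
                'properties' => [
                    'text' => [
                        'type'     => 'string',   // 'text' on ES 5+
                        'analyzer' => 'synonym_analyzer',
                    ],
                ],
            ],
        ],
    ],
]);
After reindexing the documents into this index, the existing match query on the text field works unchanged for both spellings.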
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39259450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ReactJS Ant Design Table handle before sort event I want to handle an event before sorting happens on the Table component of Ant Design with ReactJS. I know that there is an onChange event which provides information AFTER every change is made, but I need a hook which will catch something like beforeChange or beforeSortChange. Does Ant Design provide one?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75311015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Compute similarity percentage OR Compute correlation between more than 2 objects Suppose I have four objects (a, b, c, d), and I ask five persons to label them (category 1 or 2) according to their physical appearance or something else. The labels provided by the five persons for these objects are shown as
df <- data.frame(a = c(1,2,1,2,1), b=c(1,2,2,1,1), c= c(2,1,2,2,2), d=c(1,2,1,2,1))
In tabular format,
---------
a b c d
---------
1 1 2 1
2 2 1 2
1 2 2 1
2 1 2 2
1 1 2 1
----------
Now I want to calculate the percentage of times a group of objects was given the same label (either 1 or 2). For example, objects a, b and d were given the same label by 3 persons out of 5, so the percentage is 3/5 (=60%), whereas objects a and d were given the same labels by all five people, so the percentage is 5/5 (=100%).
I can calculate this statistic manually, but in my original dataset I have 50 such objects, 30 people, and 4 labels (1, 2, 3, and 4). How can I compute such statistics for this bigger dataset automatically? Are there any existing packages/tools in R which can calculate such statistics?
Note: A group can be of any size. In the first example the group consists of a, b and d, while in the second example the group consists of a and d.
A: If you have numeric ratings, you could use diff to check if you consistently have 0 difference between each rater:
f <- function(cols, data) {
sum(colSums(diff(t(data[cols]))==0)==(length(cols)-1)) / nrow(data)
}
Results are as expected when applying the function to example groups:
f(c("a","b","d"), df)
#[1] 0.6
f(c("a","d"), df)
#[1] 1
A: There are two tasks here: firstly, making a list of all the relevant combinations, and secondly, evaluating and aggregating rowwise similarity. combn can start the first task, but it takes a little massaging to arrange the results into a neat list. The second task could be handled with prop.table, but here it's simpler to calculate directly.
Here I've used tidyverse grammar (primarily purrr, which is helpful for handling lists), but convert into base if you like.
library(tidyverse)
map(2:length(df), ~combn(names(df), .x, simplify = FALSE)) %>% # get combinations
flatten() %>% # eliminate nesting
set_names(map_chr(., paste0, collapse = '')) %>% # add useful names
# subset df with combination, see if each row has only one unique value
map(~apply(df[.x], 1, function(x){n_distinct(x) == 1})) %>%
map_dbl(~sum(.x) / length(.x)) # calculate TRUE proportion
## ab ac ad bc bd cd abc abd acd bcd abcd
## 0.6 0.2 1.0 0.2 0.6 0.2 0.0 0.6 0.2 0.0 0.0
A: With base R functions you could do:
groupVec = c("a","b","d")
transDF = t(as.matrix(DF))
subDF = transDF[rownames(transDF) %in% groupVec,]
subDF
# [,1] [,2] [,3] [,4] [,5]
# a 1 2 1 2 1
# b 1 2 2 1 1
# d 1 2 1 2 1
#if length of unique values is 1, it implies match across all objects, count unique values/total columns = match pct
match_pct = sum(sapply(as.data.frame(subDF), function(x) sum(length(unique(x))==1) ))/ncol(subDF)
match_pct
# [1] 0.6
Wrapping it in a custom function:
fn_matchPercent = function(groupVec = c("a","d") ) {
transDF = t(as.matrix(DF))
subDF = transDF[rownames(transDF) %in% groupVec,]
match_pct = sum(sapply(as.data.frame(subDF), function(x) sum(length(unique(x))==1) ))/ncol(subDF)
outputDF = data.frame(groups = paste0(groupVec,collapse=",") ,match_pct = match_pct)
return(outputDF)
}
fn_matchPercent(c("a","d"))
# groups match_pct
# 1 a,d 1
fn_matchPercent(c("a","b","d"))
# groups match_pct
# 1 a,b,d 0.6
A: Try this:
find.unanimous.percentage <- function(df, at.a.time) {
cols <- as.data.frame(t(combn(names(df), at.a.time)))
names(cols) <- paste('O', 1:at.a.time, sep='')
cols$percent.unanimous <- 100*colMeans(apply(cols, 1, function(x) apply(df[x], 1, function(y) length(unique(y)) == 1)))
return(cols)
}
find.unanimous.percentage(df, 2) # take 2 at a time
O1 O2 percent.unanimous
1 a b 60
2 a c 20
3 a d 100
4 b c 20
5 b d 60
6 c d 20
find.unanimous.percentage(df, 3) # take 3 at a time
O1 O2 O3 percent.unanimous
1 a b c 0
2 a b d 60
3 a c d 20
4 b c d 0
find.unanimous.percentage(df, 4)
O1 O2 O3 O4 percent.unanimous
1 a b c d 0
A: Clustering similarity metrics
It seems that you might want to calculate a substantially different (better?) metric than what you propose now, if your actual problem requires you to evaluate various options for clustering the same data.
This http://cs.utsa.edu/~qitian/seminar/Spring11/03_11_11/IR2009.pdf is a good overview of the problem, but the BCubed precision/recall metrics are commonly used for similar problems in NLP (e.g http://alias-i.com/lingpipe/docs/api/com/aliasi/cluster/ClusterScore.html).
A: Try this code. It works for your example and should hold for the extended case.
df <- data.frame(a = c(1,2,1,2,1), b=c(1,2,2,1,1), c= c(2,1,2,2,2), d=c(1,2,1,2,1))
# Find all unique combinations of the column names
group_pairs <- data.frame(t(combn(colnames(df), 2)))
# For each combination calculate the similarity
group_pairs$similarities <- apply(group_pairs, 1, function(x) {
sum(df[x["X1"]] == df[x["X2"]])/nrow(df)
})
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40713096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Core Data and paging I have a database of 50,000 records. I'm using Core Data to fetch records from a search. A search could easily return 1000 records. What is needed to page through these records using Core Data and UITableView? I would like to show 100 records at a time and have a 'load more' button after viewing 100 records.
A: Take a look at the NSFetchRequest and its controls over batches. You can set the batch size and the offset which will allow you to "page" through the data.
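A minimal sketch of what that looks like in code (Swift shown here for readability; the entity and attribute names are placeholders, not taken from the question):
import CoreData

// Fetch one "page" of 100 search results using fetchLimit/fetchOffset.
// "Record" and "name" are hypothetical entity/attribute names.
func fetchPage(_ page: Int, matching query: String,
               in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Record")
    request.predicate = NSPredicate(format: "name CONTAINS[cd] %@", query)
    request.sortDescriptors = [NSSortDescriptor(key: "name", ascending: true)]
    request.fetchLimit = 100             // one page of rows
    request.fetchOffset = page * 100     // skip the pages already shown
    request.fetchBatchSize = 20          // fault rows in as the table view scrolls
    return try context.fetch(request)
}

// Each tap on "load more" increments `page` and appends the results
// to the table view's data source before reloading it.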
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3067543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Why does an HBITMAP use so little memory? I ran into an interesting question:
* load a big (4500x6000) JPEG into memory (RGBRGBRGB...) with libjpeg (costs about 200 MB of memory)
* call CreateDIBitmap() to create an HBITMAP from the data
* free the memory used
Now I found that the process uses only 5 MB of memory in total. I wonder where the data of the HBITMAP is. (I disabled the pagefile.)
Update:
I wrote the following code for testing:
// initialise
BITMAP bitmap;
BITMAPINFO info;
// ....
void *data = NULL;
HDC hdc = ::GetDC(NULL);
HBITMAP hBitmap = ::CreateDIBSection(hdc, &info, DIB_RGB_COLORS, &data, NULL, 0);
::ReleaseDC(NULL, hdc);
if (hBitmap) {
::GetObject(hBitmap, sizeof(bitmap), &bitmap);
}
Then data is 0x2d0000 (clearly in user space), and bitmap.bmBits is also 0x2d0000. So I am confident that CreateDIBSection uses user-space memory for the bitmap.
A: How about this for a test: create HBITMAPs in a loop, counting the number of bytes theoretically used (based on the bit depth of your video card).
How many bytes' worth of HBITMAPs can you allocate before they start to fail? (Or, alternatively, until you start to see an impact on memory?)
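A rough sketch of such a probe (plain Win32/C++; the bitmap size is arbitrary and error handling is minimal):
#include <windows.h>
#include <cstdio>
#include <vector>

// Allocate 4500x6000 DDBs until CreateCompatibleBitmap fails, to see how much
// bitmap storage the driver will actually hand out, then free everything.
int main() {
    HDC screen = GetDC(NULL);
    const int w = 4500, h = 6000;
    const int bpp = GetDeviceCaps(screen, BITSPIXEL) * GetDeviceCaps(screen, PLANES);
    std::vector<HBITMAP> bitmaps;
    unsigned long long bytes = 0;

    for (;;) {
        HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
        if (!bmp) break;                 // the driver refused: out of bitmap storage
        bitmaps.push_back(bmp);
        bytes += (unsigned long long)w * h * bpp / 8;
    }
    std::printf("Allocated %zu bitmaps, roughly %llu MB\n",
                bitmaps.size(), bytes / (1024 * 1024));

    for (HBITMAP bmp : bitmaps) DeleteObject(bmp);
    ReleaseDC(NULL, screen);
    return 0;
}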
DDBs are managed by device drivers. Hence they tend to be stored in one of two places: kernel-mode paged pool or the video card's memory itself. Neither of these will be reflected in any process memory count.
In theory device drivers can allocate system memory storage for bitmaps, moving them across to VRAM as and when needed... but some video card drivers think that video memory should be enough and simply allocate all HBITMAPs on the card. This means you run out of space for HBITMAPs at either the 2 GB mark (if they're allocated in kernel paged pool; depending on available RAM and assuming 32-bit Windows editions) or the 256 MB mark (or however much memory the video card has).
That discussion covered Device Dependent Bitmaps.
DIBSections are a special case as they're allocated in memory accessible from kernel mode, but available in userspace. As such, any application that uses a lot of bitmaps probably should use DIBSections where possible as there should be much less opportunity to starve the system of space to store DDBs.
I suspect that one still has a system wide limit of up to 2Gb worth of DIBSections (on 32bit Windows versions) as there is no concept of 'current process' in kernel mode where the video device drivers will need access.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2127358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I copy the first i amount of characters in a character array into another character array in C++? read(client_sockfd, &chID, 4);
char newID[4];
for(int i; i<5; i++){
newID[i] = chID[i];
}
I'm reading char chID[4] over a socket. I want to put the first 4 characters into newID. Above is what I've tried, but I get some weird output when I copy newID into a string and print it out. Any guidance?
A: You declare i within the for loop without initialising it. This is the reason you get 'weird values'. In order to rectify, you need to write:
for(int i=0; i<5; i++)
Hope this helps!
A: Just copy the bytes:
memcpy(newID, chID, 4);
A: One more note that it seems some people have overlooked here: if chId is length 4 then the loop bounds are i=0;i<4. That way you get i=0,1,2,3. (General programming tip, unroll loops in your head when possible. At least until you are satisfied that the program really is doing what you meant it to.)
NB: You're not copying chId into a string. You're copying it into a char array. That may seem like semantics, but "string" names a data type in C++ which is distinct from an array of characters. Got it right in the title, wrong in the question description.
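A related pitfall worth a short sketch: when those 4 bytes end up in a std::string for printing, pass the length explicitly, because the buffer is not null-terminated.
#include <cstring>
#include <iostream>
#include <string>

int main() {
    // chID stands in for the 4 bytes read from the socket; it is NOT null-terminated.
    char chID[4] = {'A', 'B', 'C', 'D'};
    char newID[4];

    std::memcpy(newID, chID, sizeof(newID));  // copy exactly 4 bytes
    std::string id(newID, sizeof(newID));     // length-aware constructor, no overrun
    std::cout << id << '\n';                  // prints "ABCD" with no trailing garbage
    return 0;
}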
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36974386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Determine user name during live Docker session in Azure while running RStudio I am running an RStudio Docker container in Azure Apps that requires Single Sign-On to access.
Is there a way to extract this info in a live session?
I tried running Sys.getenv() and viewing the value under USER, but the result is rstudio.
I also tried searching /usr/bin/env but did not find anything there.
Can anyone offer any possible solutions?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73252748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Maple Error: final value in for loop must be numeric or character I have this simple procedure in Maple that I want to plot.
test:=proc(n)
local i,t;
for i from 1 to n do
t:=1;
od;
return t;
end:
The procedure itself works fine.
> test(19)
1
When I want to plot I get the following Error:
> plot(test(x),x=1..10)
Error, (in test) final value in for loop must be numeric or character
Please help
A: Maple's usual evaluation model is that arguments passed to commands are evaluated up front, prior to the computation done within the body of the command's own procedure.
So if you pass test(x) to the plot command then Maple will evaluate that argument test(x) up front, with x being simply a symbolic name.
It's only later in the construction of the plot that the plot command would substitute actual numeric values for that x name.
So, the argument test(x) is evaluated up front. But let's see what happens when we try such an up front evaluation of test(x).
test:=proc(n)
local i,t;
for i from 1 to n do
t:=1;
od;
return t;
end:
test(x);
Error, (in test) final value in for loop
must be numeric or character
We can see that your test procedure is not set up to receive a non-numeric, symbolic name such as x for its own argument.
In other words, the problem lies in what you are passing to the plot command.
This kind of problem is sometimes called "premature evaluation". It's a common Maple usage mistake. There are a few ways to avoid the problem.
One way is to utilize the so-called "operator form" calling sequence of the plot command.
plot(test, 1..10);
Another way is to delay the evaluation of test(x). The following use of so-called unevaluation quotes (aka single right ticks, i.e. apostrophes) delays the evaluation of test(x). That prevents test(x) from being evaluated until the internal plotting routines substitute actual numeric values for the symbolic name x.
plot('test(x)', x=1..10);
Another technique is to rewrite test so that any call to it will return unevaluated unless its argument is numeric.
test:=proc(n)
local i,t;
if not type(n,numeric) then
return 'procname'(args);
end if;
for i from 1 to n do
t:=1;
od;
return t;
end:
# no longer produces an error
test(x);
test(x)
# the passed argument is numeric
test(19);
1
plot(test(x), x=1..10);
I won't bother showing the actual plots here, as your example produces just the plot of the constant 1 (one).
A: @acer already covered the technical problem, but your case may actually have a mathematical problem as well. Your function has the natural numbers as its domain, i.e. the set of positive integers {1, 2, 3, 4, 5, ...}, not the set of real numbers! How do you interpret running a for-loop up to a real number, for example Pi, sqrt(2), or 5/2? Why am I talking about real numbers? Because in your plot line you used plot( test(x), x = 1..10 ). The x=1..10 in plot stands for x ranging over the real interval (1, 10), not the integer set {1, 2, ..., 10}! There are two ways to make it meaningful.
* Did you mean a function with an integer domain? Then your plot should be a set of points. In that case you want a point plot; you can use plots:-pointplot or add the option style=point in plot. See their help pages for more details. Here is the simplest edit to your plot line (keeping the part defining test the same as in your post).
plot( [ seq( [n, test(n)], n = 1..10 ) ], style = point );
And the plot result is the following.
* Or do you want the for-loop in your function to run up to an integer related to a real number, such as its floor? This is what the for-loop in Maple does by default. See the following example.
t := 0:
for i from 1 by 1 to 5/2 do
t := t + 1:
end do;
As you can see, Maple does two steps, one for i=1 and one for i=2; it is treated literally as for i from 1 by 1 to floor(5/2) do. This default behavior does not apply to every real number: if you replace 5/2 by sqrt(2) or Pi, Maple instead raises an error message. But anyway, the fix that @acer provided for your code plots the function "x -> test(floor(x))" in your case when x comes from rational numbers (and float numbers ^_^). If you change your code so that it does not just return a constant number, you will see this in your plot as well. For example, let's try the following.
test := proc(n)
local i,t:
if not type(n, numeric) then
return 'procname'(args):
end if:
t := 0:
for i from 1 to n do
t := t + 1:
end do:
return(t):
end:
plot(test(x), x=1..10);
Here is the plot.
It is indeed "x -> test(floor(x))".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68966814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |