Q: Using react hooks, how would you write an event listener dependent on state that does not need to be added and removed every time the state changes? The code below shows a working, but inefficient implementation of an Android BackHandler within React Native to have the app exit after two presses in two seconds. This is implemented using React hooks within the main functional component of the app.
However, due to dependency on a state variable recentlyPressedHardwareBack, the useEffect hook will cleanup then run each time the state changes, causing the BackHandler event listener to be detached and re-attached whenever the back button is pressed. How do you set up this event listener just once without constant creation and deletion, while allowing it to access a changing component state?
const [recentlyPressedHardwareBack, setRecentlyPressedHardwareBack] =
useState(false);
useEffect(() => {
const backHandler = BackHandler.addEventListener(
'hardwareBackPress',
() => {
// Exit app if user pressed the button within last 2 seconds.
if (recentlyPressedHardwareBack) {
return false;
}
ToastAndroid.show(
'Press back again to exit the app',
ToastAndroid.SHORT,
);
setRecentlyPressedHardwareBack(true);
// Toast shows for approx 2 seconds, so this is the valid period for exiting the app.
setTimeout(() => {
setRecentlyPressedHardwareBack(false);
}, 2000);
// Don't exit yet.
return true;
},
);
return () => backHandler.remove();
}, [recentlyPressedHardwareBack]);
A: You could use useRef for this. The ref object is stable across renders, and the listener reads its .current value at call time, so the effect no longer needs the state in its dependency array.
const recentlyPressedHardwareBackRef = useRef(false);
useEffect(() => {
const backHandler = BackHandler.addEventListener(
'hardwareBackPress',
() => {
// Exit app if user pressed the button within last 2 seconds.
if (recentlyPressedHardwareBackRef.current) {
return false;
}
ToastAndroid.show(
'Press back again to exit the app',
ToastAndroid.SHORT,
);
recentlyPressedHardwareBackRef.current = true;
// Toast shows for approx 2 seconds, so this is the valid period for exiting the app.
setTimeout(() => {
recentlyPressedHardwareBackRef.current = false;
}, 2000);
// Don't exit yet.
return true;
},
);
return () => backHandler.remove();
}, []);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72292879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to execute simultaneous PHP CLI scripts in Mac I have 50 php files that I would like to run simultaneously from the command line. Right now, I am running them in multiple CLI windows using the code:
php Script1.php
I would like to be able to call one single script file that would execute all 50 php files simultaneously. I have been reading about how to make the command line not wait for the output, but I can't seem to make it work.
I am new to both Mac and scripting - maybe I don't need a script? Is there another Mac-based solution that can do this without me having to open 50 separate terminal windows?
A: You can just add an ampersand '&' between the commands:
php script1.php & php script2.php & php script3.php ...
The ampersand tells the shell to run the command in the background.
To check the output, you can redirect it to a file:
php script1.php > script1.log.txt & php script2.php > script2.log.txt
And you can just do a tail on it to read the log:
tail -f script1.log.txt
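If you'd rather not type 50 commands on one line, the same idea works as a loop in a single shell script; a minimal sketch (the run_all helper and the Script$i.php naming are assumptions, not part of the original answer):

```shell
# run_all N CMD...: launch CMD Script1.php .. ScriptN.php in the background,
# each with its own log file, then wait for all of them to finish.
run_all() {
  n=$1; shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" "Script$i.php" > "Script$i.log.txt" 2>&1 &
    i=$((i+1))
  done
  wait  # block until every background job has exited
}

# Usage: run_all 50 php
```

Running run_all 50 php launches all fifty in parallel, and wait keeps the script alive until the slowest one finishes.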
A: If your scripts are nicely numbered from 1 to 50, you can try the following in a .command file (note the double quotes, so the shell expands $i before osascript runs):
i=1
while [ $i -lt 51 ]
do
osascript -e "tell app \"Terminal\"
do script \"php Script$i.php\"
end tell" &
i=$((i+1))
done
This should open 50 separate Terminal windows, each running Script$i.php.
A: You could also run them one after another in the foreground, though note that with semicolons they run sequentially, not simultaneously:
php test1.php; php test2.php;
I don't know why you would want to "interact" with the scripts after they are running.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9882376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: BASH: how to store subshell variables into an array? I have array1, array2 and a function.
I am trying, in a for j=0 to ARRAY_SIZE loop, to get the data from array2[j], pass it to a function, and store the returned output in array1[j].
Below is the part of the code that I am working on:
exec 3>&1
${ppart_block_fstype[$i]}=_ppart_block_fstype < <(
for i in $(eval echo {0..$ARRAY_END})
do
if [[ ppart_block_alloc[$i] -eq "ALLOC" ]]
then
printf "%s\n" "${ppart_block_num[$i]}" >&3
fi
done)
exec 3>&-
_ppart_block_fstype is the function that I have previously defined and will return an output that I will store in array ppart_block_fstype.
The problem with the above function is that it uses some "heavy tools", so it is not really possible to invoke it at every loop cycle.
This was a good starting point, but I am stuck on how to make $i visible outside of the subshell, and I am also not sure if I am invoking < <( ) in the correct way.
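For reference (this is not from the original post): the usual bash pattern for getting subshell output into an array of the current shell is mapfile (bash 4+) fed by process substitution. A sketch with a stub standing in for the real _ppart_block_fstype function:

```shell
#!/usr/bin/env bash
# Stub standing in for the real (expensive) function; illustrative only.
_ppart_block_fstype() { printf 'fstype-of-%s\n' "$1"; }

ppart_block_alloc=(ALLOC FREE ALLOC)
ppart_block_num=(10 20 30)

# mapfile runs in the CURRENT shell, so the array survives; only the
# loop feeding it through process substitution runs in a subshell.
mapfile -t ppart_block_fstype < <(
  for i in "${!ppart_block_alloc[@]}"; do
    [[ ${ppart_block_alloc[$i]} == ALLOC ]] &&
      _ppart_block_fstype "${ppart_block_num[$i]}"
  done
)
echo "${ppart_block_fstype[@]}"
```

The loss of $i that the question mentions is exactly the subshell problem: variables set inside <( ) die with it, but an array filled by mapfile in the parent shell does not.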
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19188610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Issue on creating an application for facebook Currently I am creating a sample Facebook application. The problem is that it asks for my application URL in two forms: Canvas URL and Secure Canvas URL. I have hosted my application on this link (Fb app url), but the host does not provide SSL, i.e. a link with https. Because of this I am unable to fill in the Secure Canvas URL.
How can I overcome this issue? Is there any way to submit the Facebook application without a Secure Canvas URL, or is there any free web hosting site which also provides SSL (https)? Please advise.
A: No, you have to get an SSL certificate to secure the URL.
Check these: What is SSL and what are Certificates? and Https connection without SSL certificate.
There is also a free SSL certificate by Comodo for 90 days, but I have never tried it.
A: Try Heroku. Facebook offers you that option while you are registering your app.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12550932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Java: How to parse string? Just started a programming class a few months ago, and I'm going over some review questions for a test. I'm not sure how to tackle this one. We have to format a few lines of text.
So something that looks like this: 2002/Bourne Identity/Action/1:58
Will have to look like this: 2002 - Bourne Identity Action 118 minutes
There are a few of these, and then we have to add up the total running time for each genre. Would I be right in using the split function here?
Thanks
A: I am assuming that the input data are 4 variables, for example
public String parse(Integer year, String title, String genre, Date duration)
So you just have to combine the values. For example
return year + " - " + title + " " + genre + " " + toMinutes(duration) + " minutes"
where toMinutes(duration) is a function which converts a date/time to minutes.
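Yes, split fits this task. A minimal sketch assuming the whole record arrives as one slash-delimited string (the class and method names here are invented):

```java
public class MovieLineParser {
    // "YEAR/TITLE/GENRE/H:MM" -> "YEAR - TITLE GENRE N minutes"
    static String format(String line) {
        String[] parts = line.split("/");          // year, title, genre, h:mm
        String[] hm = parts[3].split(":");
        int minutes = Integer.parseInt(hm[0]) * 60 + Integer.parseInt(hm[1]);
        return parts[0] + " - " + parts[1] + " " + parts[2] + " "
                + minutes + " minutes";
    }

    public static void main(String[] args) {
        // prints: 2002 - Bourne Identity Action 118 minutes
        System.out.println(format("2002/Bourne Identity/Action/1:58"));
    }
}
```

Summing the per-genre running time is then just a matter of accumulating the computed minutes in a Map keyed by parts[2].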
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36705815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: If column 1 is a match, change value of column 3 using Awk I have to edit a big file where the columns of each record are not delimited by a character but have a fixed length. I would like to search for a value in the first column and, if found, change the value of the 3rd column.
I can't take the file out from where it is, so I can only use the command line with awk, sed and maybe Java 5; otherwise I would try other solutions.
bigfile.dat structure:
Column1Col2Column3Column4Col5
Example:
id12345TEXTVALUE01SOMCODETEXT
id23456TEXTVALUE02SOMCODETEXT
id34567TEXTVALUE02SOMCODETEXT
id45678TEXTVALUE01SOMCODETEXT
id56789TEXTVALUE03SOMCODETEXT
What I need: set VALUE04 for id45678
id12345TEXTVALUE01SOMCODETEXT
id23456TEXTVALUE02SOMCODETEXT
id34567TEXTVALUE02SOMCODETEXT
id45678TEXTVALUE04SOMCODETEXT
id56789TEXTVALUE03SOMCODETEXT
I don't know if this is possible. Here is some pseudo code that I thought maybe could work with awk:
if (match id = subtr(Column1))
print subtr(Column1+Col2) + "mychange" +substr(Column4+Col5)
else
print unchanged line
I'm not asking you to do my work for me; I just don't know whether I'm wasting my time with the tools I have or whether I simply lack the knowledge.
Thanks.
A: This is actually quite easy to do with awk:
pax: awk <input.txt '/^id45678/{$0=substr($0,1,11)"VALUE04"substr($0,19)}1'
id12345TEXTVALUE01SOMCODETEXT
id23456TEXTVALUE02SOMCODETEXT
id34567TEXTVALUE02SOMCODETEXT
id45678TEXTVALUE04SOMCODETEXT
id56789TEXTVALUE03SOMCODETEXT
It just finds lines beginning with id45678 and modifies that part of the line that you want changed.
The 1 at the end is simply a command to print the line whether changed or not (it's a "trick" using a truth value 1 to select the (default) action of printing the line).
A: Using GNU awk's FIELDWIDTHS for fixed width fields:
$ awk '
BEGIN {
FIELDWIDTHS="7 4 7 7 4" # set the field widths
OFS=""
}
$1=="id45678" { # when the first field has the given value
$3="VALUE04" # replace the third field
}1' file # output
Column1Col2Column3Column4Col5
id12345TEXTVALUE01SOMCODETEXT
id23456TEXTVALUE02SOMCODETEXT
id34567TEXTVALUE02SOMCODETEXT
id45678TEXTVALUE04SOMCODETEXT
id56789TEXTVALUE03SOMCODETEXT
A: With GNU sed:
sed -E 's/^(id45678....)......./\1VALUE04/' file
or shorter:
sed -E 's/^(id45678.{4}).{7}/\1VALUE04/' file
and with variables:
s="id45678"
r="VALUE04"
sed -E 's/^('"$s"'.{4}).{7}/\1'"$r"'/' file
Output:
id12345TEXTVALUE01SOMCODETEXT
id23456TEXTVALUE02SOMCODETEXT
id34567TEXTVALUE02SOMCODETEXT
id45678TEXTVALUE04SOMCODETEXT
id56789TEXTVALUE03SOMCODETEXT
If you want to edit your file "in place" use sed's option -i.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47504358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there any difference between normal and static local variables in static methods? class A
{
static void f(void)
{
int a;
static int b;
}
};
Is there any (formal or practical) difference between a and b?
A: Yes, consider the following:
#include <iostream>
using namespace std;
class A
{
public:
static void func()
{
static int a = 10;
int b = 10;
a++;
b++;
std::cout << a << " " << b << endl;
}
};
int main() {
A a, b;
a.func();
b.func();
a.func();
return 0;
}
a is shared across all calls to func, but b is local to each call, so the output is:
11 11
12 11
13 11
http://ideone.com/kwlra3
A: Yes, they are different. a will be created anew for every call, whereas b will be created only once and is the same for all objects of type A. By same I mean that all objects share a single memory location for b.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27048672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-4"
} |
Q: Mule connector config needs dynamic attributes I have develop a new Connector. This connector requires to be configured with two parameters, lets say:
*default_trip_timeout_milis
*default_trip_threshold
The challenge is, I want to read ${myValue_a} and ${myValue_b} from an API, using an HTTP call, not from a file or inline values.
Since this is a connector, I need to make this API call somewhere before connectors are initialized.
FlowVars aren't an option, since they are initialized with the Flows, and this is happening before in the Mule app life Cycle.
My idea is to create an Spring Bean implementing Initialisable, so it will be called before Connectors are init, and here, using any java based libs (Spring RestTemplate?) , call API, get values, and store them somewhere (context? objectStore?) , so the connector can access them.
Make sense? Any other ideas?
Thanks!
A: You could make a class that creates the properties at startup and, in that class, obtain the API properties via an HTTP request. The HTTP call belongs in afterPropertiesSet, which Spring invokes before the connectors are initialised. Example below:
public class PropertyInit implements InitializingBean, FactoryBean {
    private Properties props = new Properties();
    @Override
    public void afterPropertiesSet() throws Exception {
        // Call your API here (e.g. with RestTemplate) and fill props,
        // so the values exist before any connector is initialised.
    }
    @Override
    public Object getObject() throws Exception {
        return props;
    }
    @Override
    public Class getObjectType() {
        return Properties.class;
    }
}
Now you should be able to load this property class with:
<context:property-placeholder properties-ref="propertyInit"/>
Hope you like this idea. I used this approach in a previous project.
A: I want to give you first a strong warning on doing this. If you go down this path then you risk breaking your application in very strange ways because if any other components depend on this component you are having dynamic components on startup, you will break them, and you should think if there are other ways to achieve this behaviour instead of using properties.
That said, the way to do this would be to use a proxy pattern: a proxy for the component that is recreated whenever its properties change. So you will need to create a class which extends Circuit Breaker and encapsulates an instance of Circuit Breaker that is recreated whenever its properties change. These properties must not be used outside of the proxy class, as other components may read them at startup and then never refresh; keep in mind that anything which might directly or indirectly access these properties cannot do so in its initialisation phase, or your application will break.
It's worth taking a look at SpringCloudConfig which allows for you to have a properties server and then all your applications can hot-reload those properties at runtime when they change. Not sure if you can take that path in Mule if SpringCloud is supported yet but it's a nice thing to know exists.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44888623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: background-image doesn't appear with span class <span class="arrow" style="width:20px;height:20px;background-image: url(image/arrow.png);display:inline-block;"></span>
However, the arrow doesn't show up. I double-checked the path; it's correct. What am I doing wrong?
I even tried display:block;
No change.
A: You have to triple-check your path :)
<span class="arrow" style="width:100px;height:50px;background-image: url(http://lorempixel.com/100/50/cats/1/);display:inline-block;"></span>
For cleaner code, you can write your CSS outside of the style attribute, in a separate file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27470249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to convert timezone inside bindings? I am trying to calculate the unfinished downtime duration. The start time is stored in the GMT timezone and the finish time is a local datetime in my timezone (GMT+2 or GMT+3). I want to calculate the difference in seconds.
The null datetime is 1900-01-01 00:00:00, so I have an expression meaning the start time is after the end time, which finds unfinished downtimes. Then I get the current time, convert it into epoch seconds considering the timezone difference, and subtract the start time converted into epoch seconds.
In code it looks like
LoadingDowntimes
(globalobjectid == $CurrentDowntimeGOId &&
downtimeend.toEpochSecond(ZoneOffset.ofHours(0)) < downtimestart.toEpochSecond(ZoneOffset.ofHours(0)),
$DowntimesTimeUnfinished : (LocalDateTime.now().toEpochSecond(ZoneOffset.ofHours($CurrentZoneOffset.offset)) - downtimestart.toEpochSecond(ZoneOffset.ofHours(0))))
All parts of this code work correctly in other places, but here I get the error message
Variables can not be used inside bindings.
Variable [$CurrentZoneOffset] is being used in binding '(LocalDateTime.now().toEpochSecond(ZoneOffset.ofHours($CurrentZoneOffset.offset)) - StringToLocalDateTime(downtimestart).toEpochSecond(ZoneOffset.ofHours(0)))'
I don't know how to do it another way, so I need help solving this problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57515632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: select sum() from multiple table with same column I have a table for product, sales_item and a stock with following structure
Product table:
+----+-----+-------------+
| id |name |description |
+----+-----+-------------+
| 1 |Pr1 |prod1 |
+----+-----+-------------+
| 2 |Pr2 |prod2 |
+----+-----+-------------+
| .. |... |..... |
+----+-----+-------------+
sales_item_details table
+-----+----------+------------+-----+
| id | sales_id | product_id | qty |
+-----+----------+------------+-----+
| 517 | 211 | 1 | 200 |
+-----+----------+------------+-----+
| 518 | 211 | 1 | 120 |
+-----+----------+------------+-----+
and production
+----+------------+-------+
| id | product_id | qty |
+----+------------+-------+
| 1 | 1 | 20 |
| 2 | 2 | 200 |
| 3 | 1 | 20 |
| 4 | 3 | 30 |
| 5 | 9 | 30 |
| 6 | 65 | 10 |
| 7 | 65 | 50 |
| 8 | 71 | 10 |
| 9 | 71 | 10 |
| 10 | 71 | 10 |
+----+------------+-------+
And now I am creating multiple databases with the same table definition and need to maintain stock.
The production table and product table will be maintained in a single database;
only the sales_item_details table will be different, but the product id will be the same.
So what should the query be to get the SUM(qty) of sales item details and view the inventory in stock?
I have tried this:
SELECT
`pr`.`id`,
`pr`.`name`,
sl.size,
IFNULL(SUM(s.qty), 0) AS sales,
IFNULL((SELECT SUM(qty) FROM production st WHERE st.product_id = `pr`.`product-id`), 0) AS stock_added
FROM products pr
LEFT JOIN (
SELECT qty, product_id FROM db1.sales_item_details
UNION ALL
SELECT qty, product_id FROM db2.sales_item_details
) s ON pr.`id` = s.product_id
LEFT JOIN size_list sl ON sl.id = `pr`.`product-size`
GROUP BY s.product_id
ORDER BY sales DESC
but I am only getting the products which were sold.
Any help will be appreciated.
A: First, I created a view holding all sales items grouped by product id in the main database:
CREATE OR REPLACE VIEW unit_sold_all AS
SELECT
p.`product-id` AS product_id,
(
(SELECT IFNULL(SUM(s0.qty), 0) FROM db_1.sales_item_details s0 WHERE s0.product_id = p.`product-id`) +
(SELECT IFNULL(SUM(s1.qty), 0) FROM db_2.sales_item_details s1 WHERE s1.product_id = p.`product-id`) +
(SELECT IFNULL(SUM(s2.qty), 0) FROM db_3.sales_item_details s2 WHERE s2.product_id = p.`product-id`) +
(SELECT IFNULL(SUM(s3.qty), 0) FROM db_4.sales_item_details s3 WHERE s3.product_id = p.`product-id`) +
(SELECT IFNULL(SUM(s4.qty), 0) FROM db_5.sales_item_details s4 WHERE s4.product_id = p.`product-id`)
) as total_unit_sales
FROM products p
Then, in another SQL statement, I selected the sum of the sales.
PS: I answered this question myself because it might be needed by someone else in the future.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44128784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Jquery how to make div visible after valid mail is written I want to make a form which will show its next part after a valid email address is typed into an input. I have some code to validate, but I need some code which will wait until an email address is in the input.
<input style="border-radius: 4px;" name="editsubsemail" type="text" id="editsubsemail" size="40"/><br /><br />
This is the jQuery code, and I need to replace that .click (just a placeholder) with something that will check whether "editsubsemail" is filled with an email:
var visibility_edit = false;
$("#changesubs-wraper").click(function(){
if( visibility_edit == false ){
$('#change-info-wraper').show("slow");
$('#delete-subs-wraper').show("slow");
visibility_edit = true;
} else {
$('#change-info-wraper').hide("slow");
$('#delete-subs-wraper').hide("slow");
visibility_edit = false;
}
});
A: Check out this seminar registration demo form on css-tricks. It looks like it could solve your problem with a little tweaking. Here is the source.
A: For: <input name='email' type='email' id='email'> and <div id='somediv'></div>
This is some untested code:
$('#email').on( 'change', function() {
if( email_regex_must_be_here ) {
//this is a valid email
$('#somediv').show();
}
else {
$('#somediv').hide();
}
} );
Harry's link might be better at helping you though.
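For the email_regex_must_be_here placeholder above, a simple and deliberately loose check (not RFC-complete) could be:

```javascript
// Loose email shape check: something@something.something, no whitespace.
function looksLikeEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// e.g. inside the change handler:
// if (looksLikeEmail($('#email').val())) { $('#somediv').show(); }
```

This only guards the UI; any real validation still belongs on the server.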
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17136339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How can I show the count value as well as % on my Highcharts pie chart?
var json=
[
{
"a": "OOCBER",
"b": "OOCL BERLIN",
"c": "CHINA",
"d": "GREAT BRITAIN",
"e": "*PI",
"f": "NGB",
"g": "CN",
"i": "GB",
"j": "9PIO5090196",
"k": "2PI5090420",
"l": "WL15802C/D",
"m": "NB1500682",
"n": 9,
"o": 6,
"p": "2015-09-14",
"q": "2015-09-14",
"s": 4,
"u": "40HC",
"v": "TLU7564566",
"w": "CN074909",
"x": "LEIGH",
"y": "NINGBO",
"z": 395,
"B": 68,
"C": 7987.5,
"D": "534",
"E": "Chunghy",
"F": "07405",
"G": "PIF",
"H": "FB",
"I": "NIGBO",
"J": "NGB",
"K": "2015-09-12",
"L": "2015-09-29T10:05",
"M": "2015-09-29T10:05",
"Y": "SOUTHAMPTON",
"zp": "SOU",
"N": "2015-10-21",
"O": "2015-09-22T17:40",
"P": "2015-09-22T17:40",
"Q": "2015-10-21T12:54",
"R": "2015-10-22T14:13",
"S": "2015-10-27T10:30",
"T": "2015-10-27T10:30",
"U": "2015-10-27T10:30",
"V": true,
}, etc......
I am looking to display the actual count number as well as the % on my pie chart. As it stands, it just shows the %, but I want the actual count number displayed next to it. In my snippet look for *** Actual count here value ***. I tried point.y but that didn't give me the correct value. Please help.
$(function() {
var ContainerCounts = {};
var ContainerTypes = [];
var totalCount = 0;
//loop through the object
$.each(json, function(key, val) {
//get the container name
var ContainerType = val["u"];
//build array of unique container names
if ($.inArray(ContainerType, ContainerTypes) == -1) {
ContainerTypes.push(ContainerType);
}
//add or increment a count for the container name
if (typeof ContainerCounts[ContainerType] == 'undefined') {
ContainerCounts[ContainerType] = 1;
} else {
ContainerCounts[ContainerType] ++;
}
//increment the total count so we can calculate %
totalCount++;
});
//console.log(ContainerTypes);
var data = [];
//loop through unique countries to build data for chart
$.each(ContainerTypes, function(key, ContainerType) {
data.push({
name: ContainerType,
y: Math.round((ContainerCounts[ContainerType] / totalCount) * 100)
});
});
//console.log(data);
function popchart_shipment_breakout() {
$('#container_shipment_breakout').highcharts({
chart: {
type: 'pie',
options3d: {
enabled: false,
alpha: 45,
beta: 0
}
},
title: {
text: 'Break out of shipments'
},
tooltip: {
pointFormat: '*** Actual count here value *** <b>{point.percentage:.1f}%</b>'
},
plotOptions: {
pie: {
allowPointSelect: true,
cursor: 'pointer',
depth: 35,
innerSize: 150,
dataLabels: {
enabled: true
}
}
},
});
chart = $('#container_shipment_breakout').highcharts();
chart.addSeries({
data: data
});
}
popchart_shipment_breakout();
});
A: Just use this.y instead of point.percentage
OR
Use below code to format the tooltip.
tooltip: {
    formatter: function () {
        return '<b>' + this.y + '</b>';
    }
}
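One caveat not covered above: the question's code pushes the rounded percentage as y, so this.y would still display a percentage. A sketch (the buildData helper name is invented) of building the series with raw counts instead, letting Highcharts derive point.percentage itself:

```javascript
// Build series data with raw counts; Highcharts computes the percentages.
function buildData(containerCounts) {
  return Object.keys(containerCounts).map(function (type) {
    return { name: type, y: containerCounts[type] };
  });
}

// tooltip: { pointFormat: '<b>{point.y}</b> shipments ({point.percentage:.1f}%)' }
```

With counts as y, both {point.y} and {point.percentage} are available in the tooltip without any extra bookkeeping.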
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33915718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Bottleneck of Web Applications I am developing a web application using django, postgreSQL, html5 and javascript. The application is javacript intensive and is both read/write intensive. What are some common bottlenecks? What are some things to keep in mind when designing a web application that scales well? What are some tips to improve performance?
Thanks!
A: Database database database. Make sure you can add more db servers without lots of pain.
Adding app servers is usually fairly straightforward, but DB replication/clustering can get tricky.
A: It depends on how your application is used. You cannot tell until you know how many users will make concurrent requests. Most of the time it will be your database, depending on how many requests per second it can handle.
So if you know your application's access patterns (read-intensive or write-intensive) you can design it for easier scaling. Would it be enough to just do replication?
A: Your problem from day 1 will be the time taken for the client to download content from your server. CDNs, minification and combining assets should be a primary concern.
(Un)fortunately, unless you go Google-scale, you won't become big enough to require multiple application and database servers (no offence).
A: A couple suggestions:
1) Since you said it is read/write intensive, you need to decide how you will structure your database to be more read-friendly or more write-friendly, depending on what happens most often. If it is reads, then indexes are your friend. If it is writes, then be careful of going index-crazy.
2) On the client side, be careful of writing to the live DOM too often. If you need to load a lot of, say, table rows, build them in a parent element in memory and then insert that parent element into the DOM all at once.
A: You may take a look at http://ilian.i-n-i.org/tag/cache/ where there are three articles about caching and how it will help your website.
As for scaling well... are you sure that your application needs it?
Do not get me wrong, it is awesome to have multiple database servers, CDNs, load balancers etc., but do you really need them? If you are starting a new project you should focus on delivering stable features rather than optimizing for a million hits per day, because you are probably not going to get them (at least in the beginning).
Start with caching: it is easy to implement and effective if used correctly. When you get enough visitors to hit the bottlenecks, deal with them then, but not earlier.
The above does not mean that you should write the slowest code possible. It only means that the need for extreme scalability comes once you have a working application, and that if you waste a year and a ton of gold building for it, you will probably miss the moment.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7955237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Convert AMPL Model to ORTools I am trying to convert my AMPL Model to ORTools. This is the model:
set NUTR;
set FOOD;
param cost {FOOD} > 0;
param f_min {FOOD} >= 0;
param f_max {j in FOOD} >= f_min[j];
param n_min {NUTR} >= 0;
param n_max {i in NUTR} >= n_min[i];
param amt {NUTR,FOOD} >= 0;
var Buy {j in FOOD} >= f_min[j], <= f_max[j];
minimize Total_Cost: sum {j in FOOD} cost[j] * Buy[j];
subject to Diet {i in NUTR}:
n_min[i] <= sum {j in FOOD} amt[i,j] * Buy[j] <= n_max[i];
This is my semi-Python code for OR-Tools. I don't know if Buy is correct, or how to add the constraint:
Buy = {}
for f_ in FOOD:
Buy[f_] = solver.IntVar(0, 1000, 'Buy[%s]' % (f_,))
## Objective Function
Total_Cost = solver.Sum([cost[j_] * Buy[j_] for j_ in FOOD])
## Constraints
for i in NUTR:
for j in FOOD:
print(amt[i,j] *Buy[j])
#solver.Add( solver.Sum(amt[i,j] * Buy[j] <= n_max[i] ))
solver.Minimize(Total_Cost)
A: The parentheses are misplaced; the sum over FOOD has to be closed before the comparison:
solver.Add( solver.Sum(amt[i, j] * Buy[j] for j in FOOD) <= n_max[i] )
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69938803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to reference Column data in a rolling window calculation? Error ValueError: window must be an integer 0 or greater I am currently working on a large DataFrame and need to reference the data in a column for a rolling window calculation. Each row has its own rolling window value, so I need to reference the column, but I am getting the output
ValueError: window must be an integer 0 or greater
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0,20,size=(20, 4)), columns=list('abcd'))
df['op'] = (np.random.randint(0,20, size=20))
a b c d op
0 6 17 3 5 9
1 8 3 13 7 2
2 19 12 18 3 8
3 8 8 5 4 17
4 0 5 9 3 19
5 0 5 19 9 11
6 7 7 13 8 10
7 7 5 12 0 4
8 13 17 4 4 17
9 7 0 16 9 7
10 7 8 13 10 13
11 18 3 1 11 16
12 4 4 5 13 4
13 9 8 14 19 9
14 13 10 10 7 10
15 9 16 11 16 3
16 5 7 3 0 11
17 13 14 10 1 16
18 6 14 13 4 18
19 1 9 8 0 19
trying to reference the value in df['op'] for a rolling average.
df['SMA'] = df.a.rolling(window=df.op).mean()
produces ValueError: window must be an integer 0 or greater
As mentioned, I am working on a large DataFrame, so the above is example code.
A: Solution
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0,20,size=(20, 4)),
columns=list('abcd'))
df['op'] = (np.random.randint(0,20, size=20))
def lookback_window(row, values, lookback, method='mean', *args, **kwargs):
loc = values.index.get_loc(row.name)
lb = lookback.loc[row.name]
return getattr(values.iloc[loc - lb: loc + 1], method)(*args, **kwargs)
df['SMA'] = df.apply(lookback_window, values=df['a'], lookback=df['op'], axis=1)
df
a b c d op SMA
0 17 19 11 9 0 17.000000
1 0 10 9 11 19 NaN
2 13 8 11 2 16 NaN
3 9 2 4 4 8 NaN
4 11 10 0 17 18 NaN
5 14 19 17 10 17 NaN
6 6 12 17 1 4 10.600000
7 10 1 3 18 2 10.000000
8 7 6 12 3 19 NaN
9 1 9 7 5 9 8.800000
10 17 1 3 13 1 9.000000
11 19 17 0 2 7 10.625000
12 18 5 2 4 12 10.923077
13 18 5 4 2 1 18.000000
14 5 11 17 11 11 11.250000
15 16 9 2 11 16 NaN
16 15 17 1 8 14 11.933333
17 15 2 0 3 6 15.142857
18 18 3 18 3 10 13.545455
19 7 0 12 15 3 13.750000
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70932728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to convert text from a WITH statement to geometry and geography WITH values(location, userid) as (
VALUES (st_geomfromewkt('0101000000000000C04141464000000000FE174440'),5))
SELECT st_distance(p.location,(SELECT location FROM values where v.userid = "blacklisterId")::geometry)
FROM values as v,live.partners as p
Inner JOIN backend.cars c ON (c."userId" = p."partnerId" AND c.active = true)
Inner JOIN backend."carCompatibleTariffs" cCT ON c."carId" = cCT."carId" and 1 = cCT."tariffId" AND cCT.active = true
LEFT JOIN backend."userBlacklist" bl on 5 = bl."blacklisterId" AND p."partnerId" = bl."blacklistedId"
WHERE bl."blacklistedId" ISNULL
--AND st_dwithin(p.location::geography,(SELECT FROM values where v.userid = "blacklisterId")::geography,1500)
--ORDER BY st_distance(p.location,(SELECT location FROM values where v.userid = "blacklisterId")::geometry) ASC;
How can I treat the text values in the table as geometry or geography? I tried ST_GeographyFromText, ST_GeometryFromText and st_makepoint((SELECT st_x), (SELECT st_y)), but it always returns null. How can I solve this?
A: Your problem might be somewhere else. Your geometry seems to be valid:
WITH values(location, userid) AS
(VALUES ('0101000000000000C04141464000000000FE174440',5),
('0101000000000000200A45464000000000D7174440',16))
SELECT ST_Distance(location::GEOMETRY,ST_GeomFromText('POINT(44 45)')) FROM values;
st_distance
------------------
4.83948955585514
4.84387473202619
(2 Zeilen)
Here a few examples on different types of geometry literals:
ST_GeomFromText
db=# SELECT ST_GeomFromText('POINT(1 2)');
st_geomfromtext
--------------------------------------------
0101000000000000000000F03F0000000000000040
(1 Zeile)
ST_GeomFromGeoJSON:
db=# SELECT ST_GeomFromGeoJSON('{"type":"Point","coordinates":[1,2]}');
st_geomfromgeojson
--------------------------------------------
0101000000000000000000F03F0000000000000040
(1 Zeile)
ST_GeomFromEWKT:
db=# SELECT ST_GeomFromEWKT('SRID=4269;POINT(1 2)');
st_geomfromewkt
----------------------------------------------------
0101000020AD100000000000000000F03F0000000000000040
(1 Zeile)
ST_GeomFromGML:
db=# SELECT ST_GeomFromGML('<gml:Point><gml:coordinates>1,2</gml:coordinates></gml:Point>');
st_geomfromgml
--------------------------------------------
0101000000000000000000F03F0000000000000040
(1 Zeile)
ST_GeomFromKML:
db=# SELECT ST_GeomFromKML('<Point><coordinates>1,2</coordinates></Point>');
st_geomfromkml
----------------------------------------------------
0101000020E6100000000000000000F03F0000000000000040
(1 row)
A: This could have been your intention:
WITH vvv(location, userid) as (
VALUES (st_geomfromewkt('0101000000000000C04141464000000000FE174440'),5)
)
SELECT st_distance(p.location, v.location::geometry) AS THE_DISTANCE
FROM vvv as v
JOIN live.partners as p
ON p."partnerId" = v.userid
WHERE EXISTS (
SELECT *
FROM backend.cars c
JOIN backend."carCompatibleTariffs" cCT
ON c."carId" = cCT."carId"
AND cCT."tariffId" = 1 AND cCT.active = true
WHERE c."userId" = p."partnerId"
AND c.active = true
)
AND NOT EXISTS(
SELECT * FROM backend."userBlacklist" bl
WHERE p."partnerId" = bl."blacklistedId"
AND 5 = bl."blacklisterId"
)
;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50170741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I deduplicate a list based on hourly intervals in Java? First of all, I have this object that I call MyObject;
public class MyObject{
private google.protobuf.Timestamp timestamp;
private String description;
}
Then I have this list:
List<MyObject> myList = new ArrayList<>();
Now let's imagine that myList contains 500 items. What I want, is to eliminate duplicates (identical descriptions) that occur within the same hour.
So two different items with identical descriptions should not both exist in the list within the same hour. If they do, we want to only keep one and delete the other.
Example:
If the list contains the following two items:
06-07-2022T01:30:00, "some random description" and 06-07-2022T01:35:00, "some random description"
Then we want to delete one of them because they have identical description and are within the same hour.
But if we have this:
06-07-2022T01:30:00, "some random description" and 06-07-2022T03:20:00, "some random description"
Then we don't want to delete any of them as they are not within the same hour.
How do I do that?
A: Based on the clarifications you've given in the comments, I've used a LocalDateTime to simplify the sample entries and retrieve the hour, but I'm sure that google.protobuf.Timestamp can be converted to a proper date and its hour extracted.
To keep only one object according to description, date and hour, I've added a helper method to your POJO to get a concatenation of these fields and then group by their result value in order to get a Map where to each key (description, date and hour) there is only one object associated. Lastly, I've collected the Map's values into a List.
List<MyObject> list = new ArrayList<>(List.of(
new MyObject(LocalDateTime.parse("06-07-2022T01:30:00", DateTimeFormatter.ofPattern("dd-MM-yyyy'T'HH:mm:ss")), "some random description"),
new MyObject(LocalDateTime.parse("06-07-2022T01:35:00", DateTimeFormatter.ofPattern("dd-MM-yyyy'T'HH:mm:ss")), "some random description"),
new MyObject(LocalDateTime.parse("06-07-2022T03:20:00", DateTimeFormatter.ofPattern("dd-MM-yyyy'T'HH:mm:ss")), "some random description"),
new MyObject(LocalDateTime.parse("06-07-2022T04:30:00", DateTimeFormatter.ofPattern("dd-MM-yyyy'T'HH:mm:ss")), "some random description2"),
new MyObject(LocalDateTime.parse("06-07-2022T04:35:00", DateTimeFormatter.ofPattern("dd-MM-yyyy'T'HH:mm:ss")), "some random description2"),
new MyObject(LocalDateTime.parse("06-07-2022T06:20:00", DateTimeFormatter.ofPattern("dd-MM-yyyy'T'HH:mm:ss")), "some random description2"),
new MyObject(LocalDateTime.parse("08-07-2022T01:30:00", DateTimeFormatter.ofPattern("dd-MM-yyyy'T'HH:mm:ss")), "some random description")
));
List<MyObject> listRes = list.stream()
.collect(Collectors.toMap(
obj -> obj.getDescrDateHour(),
Function.identity(),
(obj1, obj2) -> obj1
))
.values()
.stream()
.collect(Collectors.toList());
POJO Class
class MyObject {
private LocalDateTime timestamp;
private String description;
public MyObject(LocalDateTime timestamp, String description) {
this.timestamp = timestamp;
this.description = description;
}
public LocalDateTime getTimestamp() {
return timestamp;
}
public String getDescription() {
return description;
}
public String getDescrDateHour() {
return description + timestamp.toLocalDate().toString() + timestamp.getHour();
}
@Override
public String toString() {
return timestamp + " - " + description;
}
}
Here is a link to test the code
https://www.jdoodle.com/iembed/v0/sZV
Output
Input:
2022-07-06T01:30 - some random description
2022-07-06T01:35 - some random description
2022-07-06T03:20 - some random description
2022-07-06T04:30 - some random description2
2022-07-06T04:35 - some random description2
2022-07-06T06:20 - some random description2
2022-07-08T01:30 - some random description
Output:
2022-07-06T04:30 - some random description2
2022-07-08T01:30 - some random description
2022-07-06T06:20 - some random description2
2022-07-06T03:20 - some random description
2022-07-06T01:30 - some random description
A: A quite simple solution would be a HashMap. Use the description as the key and the timestamp as the value, so you always store only the last timestamp for a given description and overwrite it automatically.
If you want to keep your objects, I would just sort the list by date, fill a HashMap, and then transform the HashMap back into a List. It does not have the best performance, but it is easy. You can sort by date in functional style; see: sorting a Collection in functional style.
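The overwrite-on-duplicate-key behaviour is easy to demonstrate (sketched here in Python purely to illustrate the logic; a Java HashMap keyed the same way behaves identically). Note that the key below includes the date and hour as well as the description, matching the question, rather than the description alone:

```python
from datetime import datetime

def dedupe_by_hour(items):
    """Keep one (timestamp, description) pair per description, date and hour.

    Later entries overwrite earlier ones automatically, which is the
    HashMap behaviour described above.
    """
    seen = {}
    for timestamp, description in items:
        ts = datetime.fromisoformat(timestamp)
        seen[(description, ts.date(), ts.hour)] = (timestamp, description)
    return list(seen.values())

items = [
    ("2022-07-06T01:30:00", "some random description"),
    ("2022-07-06T01:35:00", "some random description"),  # same hour: overwrites the 01:30 entry
    ("2022-07-06T03:20:00", "some random description"),  # different hour: kept
]
print(dedupe_by_hour(items))  # two entries survive
```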
A: You could define an equality calculating class (or do it in the MyObject class, depending on what it actually represents) and use it to find unique values based on the equality definition. In this case equality would mean: same description and same timestamp with hourly precision.
Here's an example (might need some tweaking, just a concept presentation):
class UniqueDescriptionWithinHourIdentifier {
// equals and hashCode could also be implemented in MyObject
// if its only purpose is data representation
// but a separate class defines a more concrete abstraction
private static final SimpleDateFormat DATE_FORMAT = new SimpleDateFormat("yyyyMMddHH");
private Date timestamp;
private String description;
UniqueDescriptionWithinHourIdentifier(MyObject object) {
timestamp = object.timestamp;
description = object.description;
}
@Override
public boolean equals(Object object) {
if (this == object) {
return true;
}
if (object == null || getClass() != object.getClass()) {
return false;
}
var other = (UniqueDescriptionWithinHourIdentifier) object;
return description.equals(other.description)
// compare the timestamps however you want - format used for simplicity
&& DATE_FORMAT.format(timestamp)
.equals(DATE_FORMAT.format(other.timestamp));
}
@Override
public int hashCode() {
// cannot contain timestamp - a single hash bucket will contain multiple elements
// with the same definition and the equals method will filter them out
return Objects.hashCode(description);
}
}
class MyObjectService {
// here a new list without duplicates is calculated
List<MyObject> withoutDuplicates(List<MyObject> objects) {
return List.copyOf(objects.stream()
.collect(toMap(UniqueDescriptionWithinHourIdentifier::new,
identity(),
(e1, e2) -> e1,
LinkedHashMap::new))
.values());
}
}
A: Add equals and hashCode methods to your MyObject class, with equals having logic like below:
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
MyObject other = (MyObject) obj;
Calendar calendar = Calendar.getInstance();
calendar.setTime(timestamp);
// Use calendar.get(...): Calendar.HOUR_OF_DAY and Calendar.DATE are static
// field constants, not the actual hour/date values.
int hour1 = calendar.get(Calendar.HOUR_OF_DAY);
int date1 = calendar.get(Calendar.DATE);
calendar.setTime(other.timestamp);
int hour2 = calendar.get(Calendar.HOUR_OF_DAY);
int date2 = calendar.get(Calendar.DATE);
return hour1 == hour2 && date1 == date2;
}
Here, basically, I am checking whether the two objects have the same hour and date; if so, one of them is ignored.
Once you do that, you can just use :
List<MyObject> myList = new ArrayList<>();
myList.stream().distinct().collect(Collectors.toList()); // returns you new distinct objects list.
Please note that distinct() relies on both equals and hashCode, so hashCode must be consistent with this equals: compute it from the same date and hour values, not from fields (such as the full timestamp) that equal objects may not share.
Note: you can extend equals to also check day, month, year, etc. to compare the exact date.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72880991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can i store a pointer to RAM in flash at compile time My problem explained:
On my microcontroller (Atmel AT90CAN128) i have about 2500 bytes of RAM left.
In those 2500 bytes I need to store 5 times 100 data sets (the size could change in the future). The data sets have a predefined but varying length between 1 and 9 bytes; the pure data sets occupy about 2000 bytes in total. I now need to be able to access the data sets in an array-like fashion by passing a uint8 to a function and getting a pointer to the data set in return.
But i only have about 500 bytes left, so an array with pointers to each data set (calculated at start of run time) is simply not possible.
My attempt:
I use one big uint8 array[2000] (in RAM), and the lengths of the data sets are stored in flash as const uint8[] = {1, 5, 9, ...};.
The position of a data set in the big array is the accumulated length of the sets before it, so I would have to iterate through the length array, add the values up, and then use the result as an offset to the pointer of the big data array.
At runtime this gives me bad performance. The positions of the data sets within the big array ARE KNOWN at compile time; I just don't know how to put this information into an array that the compiler can store into flash.
As the amount of data sets could change, i need a solution that automatically calculates the positions.
Goal:
something like that
uint8 index = 57;
uint8 *pointer_to_data = pointer_array[index];
Is this even possible, given that the compiler is a one-pass compiler?
(I am using CodeVision, not avr-gcc.)
My solution
The pure C solution/answer is technically the right answer for my question, but it just seems overly complicated (from my perspective). The idea with the build script seemed better, but CodeVision is not very practical in that way.
So I ended up with a bit of a mix.
I wrote a JavaScript that writes the C code/definitions of the variables for me. The raw definitions are easy to edit; I just copy-paste the whole thing into an HTML text file, open it in a browser, and copy-paste the content back into my C file.
In the beginning I was missing a crucial element, namely the position of the 'flash' keyword in the definition. The following is a simplified output of my JavaScript that compiles just the way I like it.
flash uint8 len[150] = {4, 4, 0, 2, ...};
uint8 data1[241] = {0}; //accumulated from above
uint8 * flash pointers_1[150] = {data1 +0, data1 +4, data1 +0, data1 +8, ...};
The ugly part (lots of manual labor without the script) is adding up the lengths for each pointer, as the compiler only compiles if the pointer is increased by a literal constant, not by a value stored in a constant array.
The raw definitions that are fed to the JavaScript then look like this:
var strings = [
"len[0] = 4;",
"len[1] = 4;",
"len[3] = 2;",
...
Within the JavaScript it is an array of strings; this way I could copy my old definitions into it and just add some quotes. I only need to define the ones that I want to use; index 2 is not defined, and the script uses length 0 for it but does include it. The macro approach would have needed an entry with 0, I guess, which is bad for overview in my case.
It is not a one-click solution, but it is very readable and tidy, which makes up for the copy-paste.
A: One common method of packing variable-length data sets to a single continuous array is using one element to describe the length of the next data sequence, followed by that many data items, with a zero length terminating the array.
In other words, if you have data "strings" 1, 2 3, 4 5 6, and 7 8 9 10, you can pack them into an array of 1+1+1+2+1+3+1+4+1 = 15 bytes as 1 1 2 2 3 3 4 5 6 4 7 8 9 10 0.
The functions to access said sequences are quite simple, too. In OP's case, each data item is a uint8:
uint8 dataset[] = { ..., 0 };
To loop over each set, you use two variables: one for the offset of the current set, and another for its length:
uint16 offset = 0;
while (1) {
const uint8 length = dataset[offset];
if (!length) {
offset = 0;
break;
} else
++offset;
/* You have 'length' uint8's at dataset+offset. */
/* Skip to next set. */
offset += length;
}
To find a specific dataset, you do need to use a loop. For example:
uint8 *find_dataset(const uint16 index)
{
uint16 offset = 0;
uint16 count = 0;
while (1) {
const uint8 length = dataset[offset];
if (length == 0)
return NULL;
else
if (count == index)
return dataset + offset;
offset += 1 + length;
count++;
}
}
The above function will return a pointer to the length item of the index'th set (0 referring to the first set, 1 to the second set, and so on), or NULL if there is no such set.
It is not difficult to write functions to remove, append, prepend, and insert new sets. (When prepending and inserting, you do need to copy the rest of the elements in the dataset array forward (to higher indexes), by 1+length elements, first; this means that you cannot access the array in an interrupt context or from a second core, while the array is being modified.)
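As a quick sanity check of this length-prefixed encoding, here is the same packing and lookup sketched in Python (illustrative only; it returns the set contents, whereas the C version returns a pointer to the set's length byte):

```python
def pack(sets):
    """Encode variable-length sets as a length-prefixed, zero-terminated list."""
    out = []
    for s in sets:
        out.append(len(s))
        out.extend(s)
    out.append(0)  # terminator
    return out

def find_dataset(data, index):
    """Return the contents of the index'th set, or None if there is no such set."""
    offset = 0
    count = 0
    while data[offset] != 0:
        length = data[offset]
        if count == index:
            return data[offset + 1 : offset + 1 + length]
        offset += 1 + length
        count += 1
    return None

packed = pack([[1], [2, 3], [4, 5, 6], [7, 8, 9, 10]])
print(packed)                   # [1, 1, 2, 2, 3, 3, 4, 5, 6, 4, 7, 8, 9, 10, 0]
print(find_dataset(packed, 2))  # [4, 5, 6]
```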
If the data is immutable (for example, generated whenever a new firmware is uploaded to the microcontroller), and you have sufficient flash/rom available, you can use a separate array for each set, an array of pointers to each set, and an array of sizes of each set:
static const uint8 dataset_0[] PROGMEM = { 1 };
static const uint8 dataset_1[] PROGMEM = { 2, 3 };
static const uint8 dataset_2[] PROGMEM = { 4, 5, 6 };
static const uint8 dataset_3[] PROGMEM = { 7, 8, 9, 10 };
#define DATASETS 4
static const uint8 *dataset_ptr[DATASETS] PROGMEM = {
dataset_0,
dataset_1,
dataset_2,
dataset_3,
};
static const uint8 dataset_len[DATASETS] PROGMEM = {
sizeof dataset_0,
sizeof dataset_1,
sizeof dataset_2,
sizeof dataset_3,
};
When this data is generated at firmware compile time, it is common to put this into a separate header file, and simply include it from the main firmware .c source file (or, if the firmware is very complicated, from the specific .c source file that accesses the data sets). If the above is dataset.h, then the source file typically contains say
#include "dataset.h"
const uint8 dataset_length(const uint16 index)
{
return (index < DATASETS) ? dataset_len[index] : 0;
}
const uint8 *dataset_pointer_P(const uint16 index)
{
return (index < DATASETS) ? dataset_ptr[index] : NULL;
}
i.e., it includes the dataset, and then defines the functions that access the data. (Note that I deliberately made the data itself static, so it is only visible in the current compilation unit; but dataset_length() and dataset_pointer_P(), the safe accessor functions, are accessible from other compilation units (C source files), too.)
When the build is controlled via a Makefile, this is trivial. Let's say the generated header file is dataset.h, and you have a shell script, say generate-dataset.sh, that generates the contents for that header. Then, the Makefile recipe is simply
dataset.h: generate-dataset.sh
@$(RM) $@
$(SHELL) -c "$^ > $@"
with the recipes for the compilation of the C source files that need it, containing it as a prerequisite:
main.o: main.c dataset.h
$(CC) $(CFLAGS) -c main.c
Do note that the indentation in Makefiles always uses Tabs, but this forum does not reproduce them in code snippets. (You can always run sed -e 's|^ *|\t|g' -i Makefile to fix copy-pasted Makefiles, though.)
OP mentioned that they are using CodeVision, which does not use Makefiles (but a menu-driven configuration system). If CodeVision does not provide a pre-build hook (to run an executable or script before compiling the source files), then OP can write a script or program run on the host machine, perhaps named pre-build, that regenerates all generated header files, and run it by hand before every build.
In the hybrid case, where you know the length of each data set at compile time, and it is immutable (constant), but the sets themselves vary at run time, you need to use a helper script to generate a rather large C header (or source) file. (It will have 1500 lines or more, and nobody should have to maintain that by hand.)
The idea is that you first declare each data set, but do not initialize them. This makes the C compiler reserve RAM for each:
static uint8 dataset_0_0[3];
static uint8 dataset_0_1[2];
static uint8 dataset_0_2[9];
static uint8 dataset_0_3[4];
/* : : */
static uint8 dataset_0_97[1];
static uint8 dataset_0_98[5];
static uint8 dataset_0_99[7];
static uint8 dataset_1_0[6];
static uint8 dataset_1_1[8];
/* : : */
static uint8 dataset_1_98[2];
static uint8 dataset_1_99[3];
static uint8 dataset_2_0[5];
/* : : : */
static uint8 dataset_4_99[9];
Next, declare an array that specifies the length of each set. Make this constant and PROGMEM, since it is immutable and goes into flash/rom:
static const uint8 dataset_len[5][100] PROGMEM = {
sizeof dataset_0_0, sizeof dataset_0_1, sizeof dataset_0_2,
/* ... */
sizeof dataset_4_97, sizeof dataset_4_98, sizeof dataset_4_99
};
Instead of the sizeof statements, you can also have your script output the lengths of each set as a decimal value.
Finally, create an array of pointers to the datasets. This array itself will be immutable (const and PROGMEM), but the targets, the datasets defined first above, are mutable:
static uint8 *const dataset_ptr[5][100] PROGMEM = {
dataset_0_0, dataset_0_1, dataset_0_2, dataset_0_3,
/* ... */
dataset_4_96, dataset_4_97, dataset_4_98, dataset_4_99
};
On AT90CAN128, the flash memory is at addresses 0x0 .. 0x1FFFF (131072 bytes total). Internal SRAM is at addresses 0x0100 .. 0x10FF (4096 bytes total). Like other AVRs, it uses Harvard architecture, where code resides in a separate address space -- in Flash. It has separate instructions for reading bytes from flash (LPM, ELPM).
Because a 16-bit pointer can only reach half the flash, it is rather important that the dataset_len and dataset_ptr arrays are "near", in the lower 64k. Your compiler should take care of this, though.
To generate correct code for accessing the arrays from flash (progmem), at least AVR-GCC needs some helper code:
#include <avr/pgmspace.h>
uint8 subset_len(const uint8 group, const uint8 set)
{
return pgm_read_byte_near(&(dataset_len[group][set]));
}
uint8 *subset_ptr(const uint8 group, const uint8 set)
{
return (uint8 *)pgm_read_word_near(&(dataset_ptr[group][set]));
}
The assembly code, annotated with the cycle counts, avr-gcc-4.9.2 generates for at90can128 from above, is
subset_len:
ldi r25, 0 ; 1 cycle
movw r30, r24 ; 1 cycle
lsl r30 ; 1 cycle
rol r31 ; 1 cycle
add r30, r24 ; 1 cycle
adc r31, r25 ; 1 cycle
add r30, r22 ; 1 cycle
adc r31, __zero_reg__ ; 1 cycle
subi r30, lo8(-(dataset_len)) ; 1 cycle
sbci r31, hi8(-(dataset_len)) ; 1 cycle
lpm r24, Z ; 3 cycles
ret
subset_ptr:
ldi r25, 0 ; 1 cycle
movw r30, r24 ; 1 cycle
lsl r30 ; 1 cycle
rol r31 ; 1 cycle
add r30, r24 ; 1 cycle
adc r31, r25 ; 1 cycle
add r30, r22 ; 1 cycle
adc r31, __zero_reg__ ; 1 cycle
lsl r30 ; 1 cycle
rol r31 ; 1 cycle
subi r30, lo8(-(dataset_ptr)) ; 1 cycle
sbci r31, hi8(-(dataset_ptr)) ; 1 cycle
lpm r24, Z+ ; 3 cycles
lpm r25, Z ; 3 cycles
ret
Of course, declaring subset_len and subset_ptr as static inline would indicate to the compiler you want them inlined, which increases the code size a bit, but might shave off a couple of cycles per invocation.
Note that I have verified the above (except using unsigned char instead of uint8) for at90can128 using avr-gcc 4.9.2.
A: First, you should put the predefined length array in flash using PROGMEM, if you haven't already.
You could write a script, using the predefined length array as input, to generate a .c (or .cpp) file that contains the PROGMEM array definition. Here is an example in Python:
# Assume the array that defines the data length is in a file named DataLengthArray.c
# and the array is of the format
# const uint16 dataLengthArray[] PROGMEM = {
# 2, 4, 5, 1, 2,
# 4 ... };
START_OF_ARRAY = "const uint16 dataLengthArray[] PROGMEM = {"
outFile = open('PointerArray.c', 'w')
with open("DataLengthArray.c") as f:
fc = f.read().replace('\n', '')
dataLengthArray=fc[fc.find(START_OF_ARRAY)+len(START_OF_ARRAY):]
dataLengthArray=dataLengthArray[:dataLengthArray.find("}")]
offsets = [int(s) for s in dataLengthArray.split(",")]
outFile.write("extern uint8 array[2000];\n")
outFile.write("uint8* pointer_array[] PROGMEM = {\n")
sum = 0
for offset in offsets:
outFile.write("array + {}, ".format(sum))
sum=sum+offset
outFile.write("};")
Which would output PointerArray.c:
extern uint8 array[2000];
uint8* pointer_array[] PROGMEM = {
array + 0, array + 2, array + 6, array + 11, array + 12, array + 14, };
You could run the script as a Pre-build event, if your IDE supports it. Otherwise you will have to remember to run the script every time you update the offsets.
A: You mention that the data set lengths are pre-defined, but not how they are defined - so I'm going to assume that how the lengths are written into code is up for grabs.
If you define your flash array in terms of offsets instead of lengths, you should immediately get a run-time benefit.
With lengths in flash, I expect you have something like this:
const uint8_t lengths[] = {1, 5, 9, ...};
uint8_t get_data_set_length(uint16_t index)
{
return lengths[index];
}
uint8_t * get_data_set_pointer(uint16_t index)
{
uint16_t offset = 0;
uint16_t i = 0;
for ( i = 0; i < index; ++i )
{
offset += lengths[i];
}
return &(array[offset]);
}
With offsets in flash, the const array has gone from uint8_t to uint16_t, which doubles the flash usage, plus an additional element to speed up calculating the length of the last element.
const uint16_t offsets[] = {0, 1, 6, 15, ..., /* last offset + last length */ };
uint8_t get_data_set_length(uint16_t index)
{
return offsets[index+1] - offsets[index];
}
uint8_t * get_data_set_pointer(uint16_t index)
{
uint16_t offset = offsets[index];
return &(array[offset]);
}
If you can't afford that extra flash memory, you could also combine the two by having the lengths for all elements and offsets for a fraction of the indices, e.g. every 16th element in the example below, trading off run-time cost vs flash memory cost.
uint8_t get_data_set_length(uint16_t index)
{
return lengths[index];
}
uint8_t * get_data_set_pointer(uint16_t index)
{
uint16_t i;
uint16_t offset = offsets[index / 16];
for ( i = index & 0xFFF0u; i < index; ++i )
{
offset += lengths[i];
}
return &(array[offset]);
}
To simplify the encoding, you can consider using x-macros, e.g.
#define DATA_SET_X_MACRO(data_set_expansion) \
data_set_expansion( A, 1 ) \
data_set_expansion( B, 5 ) \
data_set_expansion( C, 9 )
uint8_t array[2000];
#define count_struct(tag,len) uint8_t tag;
#define offset_struct(tag,len) uint8_t tag[len];
#define offset_array(tag,len) (uint16_t)(offsetof(data_set_offset_struct,tag)),
#define length_array(tag,len) len,
#define pointer_array(tag,len) (&(array[offsetof(data_set_offset_struct,tag)])),
typedef struct
{
DATA_SET_X_MACRO(count_struct)
} data_set_count_struct;
typedef struct
{
DATA_SET_X_MACRO(offset_struct)
} data_set_offset_struct;
const uint16_t offsets[] =
{
DATA_SET_X_MACRO(offset_array)
};
const uint16_t lengths[] =
{
DATA_SET_X_MACRO(length_array)
};
uint8_t * const pointers[] =
{
DATA_SET_X_MACRO(pointer_array)
};
The preprocessor turns that into:
typedef struct
{
uint8_t A;
uint8_t B;
uint8_t C;
} data_set_count_struct;
typedef struct
{
uint8_t A[1];
uint8_t B[5];
uint8_t C[9];
} data_set_offset_struct;
const uint16_t offsets[] = { 0,1,6, };
const uint16_t lengths[] = { 1,5,9, };
uint8_t * const pointers[] =
{
array+0,
array+1,
array+6,
};
This just shows an example of what the x-macro can expand to. A short main() can show these in action:
int main()
{
printf("There are %d individual data sets\n", (int)sizeof(data_set_count_struct) );
printf("The total size of the data sets is %d\n", (int)sizeof(data_set_offset_struct) );
printf("The data array base address is %x\n", array );
int i;
for ( i = 0; i < sizeof(data_set_count_struct); ++i )
{
printf( "elem %d: %d bytes at offset %d, or address %x\n", i, lengths[i], offsets[i], pointers[i]);
}
return 0;
}
With sample output
There are 3 individual data sets
The total size of the data sets is 15
The data array base address is 601060
elem 0: 1 bytes at offset 0, or address 601060
elem 1: 5 bytes at offset 1, or address 601061
elem 2: 9 bytes at offset 6, or address 601066
The above requires you to give a 'tag' - a valid C identifier for each data set, but if you have 500 of these, pairing each length with a descriptor is probably not a bad thing. With that amount of data, I would also recommend using an include file for the x-macro, rather than a #define, in particular if the data set definitions can be exported somewhere else.
The benefit of this approach is that you have the data sets defined in one place, and everything is generated from this one definition. If you re-order the definition, or add to it, the arrays will be generated at compile-time. It is also purely using the compiler toolchain, in particular the pre-processor, but there's no need for writing external scripts or hooking in pre-build scripts.
A: You said that you want to store the address of each data set but it seems like it would be much simpler if you store the offset of each data set. Storing the offsets instead of the addresses means that you don't need to know the address of big array at compile time.
Right now you have an array of constants containing the length of each data set.
const uint8_t data_set_lengths[] = { 1, 5, 9...};
Just change that to be an array of constants containing the offset of each data set in the big array.
const uint8_t data_set_offsets[] = { 0, 1, 6, 15, ...};
You should be able to calculate these offsets at design time given that you already know the lengths. You said yourself, just accumulate the lengths to get the offsets.
With the offsets precalculated the code won't have the bad performance of accumulating at run time. And you can find the address of any data set at run time simply by adding the data set's offset to the address of the big array. And the address of big array doesn't need to be settled until link time.
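The design-time accumulation is just an exclusive prefix sum over the lengths, which a small code-generation helper can compute. A sketch in Python (the first three lengths are the ones used in this thread; the fourth is arbitrary for illustration):

```python
from itertools import accumulate

lengths = [1, 5, 9, 4]  # predefined data-set lengths
# Exclusive prefix sum: offset of set i = sum of lengths of sets 0 .. i-1.
offsets = [0] + list(accumulate(lengths))[:-1]
print(offsets)  # [0, 1, 6, 15]

# Each data set i occupies big_array[offsets[i] : offsets[i] + lengths[i]],
# so only the offsets need to be stored in flash.
```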
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50177738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Power Query Assistance: How to return the range of three consecutive fields across a row with highest value out of all fields New to Power Query. Have pieced together what I've needed until now.
I need to return the sum of the three consecutive cells with highest value out of a range of cells. See the screenshot I provided as an example. This wouldn't necessarily include the MAX value of a specific cell in the given range.
I can do this all day long in a spreadsheet with MAX function and overlapping arrays. I can't seem to figure it out with Power Query although I am pretty certain List.Max will be involved somehow.
Help is appreciated.
March 23rd Edit:
I've revised and tried both code suggestions but either the code runs on for a very long period for the first suggestion (I killed it after one hour) or the 2nd suggestion errors out unable to find column 'Material 1' (what I renamed the 'Part Number' field in my screenshot).
I failed to mention that my data table has over 20k rows. I'm having trouble interpreting the M code suggestions you both provided, but are they looping through each row one by one? Perhaps this is what is causing the lag. Maybe I would just be better off using VBA to prep the table in advance of Power Query by entering/filling a formula in a new column finding MAX(B1:N^1+C1:N^2+D1:N^3)? Seems like it might actually be faster in this case?
My version of both code suggestions:
1st Method:
let
Source = Excel.CurrentWorkbook(){[Name="tblSource"]}[Content],
#"Promoted Headers" = Table.PromoteHeaders(Source),
#"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers",{{"Material 1", type text}, {"Sold Month 1", Int64.Type}, {"Sold Month 2", Int64.Type}, {"Sold Month 3", Int64.Type}, {"Sold Month 4", Int64.Type}, {"Sold Month 5", Int64.Type}, {"Sold Month 6", Int64.Type}, {"Sold Month 7", Int64.Type}, {"Sold Month 8", Int64.Type}, {"Sold Month 9", Int64.Type}, {"Sold Month 10", Int64.Type}, {"Sold Month 11", Int64.Type}, {"Sold Month 12", Int64.Type}}),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Changed Type",{"Material 1"}, "Attribute", "Value"),
#"Added Index" = Table.AddIndexColumn(#"Unpivoted Other Columns", "Index", 0, 1),
#"Reordered Columns" = Table.ReorderColumns(#"Added Index",{"Index", "Material 1", "Attribute", "Value"}),
#"Added Custom" = Table.AddColumn(#"Reordered Columns" ,"Sum",(i)=>
List.Sum(Table.SelectRows(#"Reordered Columns", each [Material 1]=i[Material 1] and [Index]=i[Index]) [Value])+
List.Sum(Table.SelectRows(#"Reordered Columns", each [Material 1]=i[Material 1] and [Index]=i[Index]+1) [Value])+
List.Sum(Table.SelectRows(#"Reordered Columns", each [Material 1]=i[Material 1] and [Index]=i[Index]+2) [Value])
, type number),
#"Grouped Rows" = Table.Group(#"Reordered Columns", {"Material 1"}, {{"Max", each List.Max([Sum]), type number}}),
// Merge the max into the original table
#"Merged Queries" = Table.NestedJoin(#"Grouped Rows", {"Material 1"}, #"Grouped Rows",{"Material 1"},"Table1",JoinKind.LeftOuter),
#"Expanded Table1" = Table.ExpandTableColumn(#"Merged Queries", "Table1", {"Max"}, {"Max2"})
in #"Expanded Table1"
2nd Method:
let
Source = Excel.CurrentWorkbook(){[Name="tblSource"]}[Content],
#"Promoted Headers" = Table.PromoteHeaders(Source),
#"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers",{{"Material 1", type text}, {"Sold Month 1", Int64.Type}, {"Sold Month 2", Int64.Type}, {"Sold Month 3", Int64.Type}, {"Sold Month 4", Int64.Type}, {"Sold Month 5", Int64.Type}, {"Sold Month 6", Int64.Type}, {"Sold Month 7", Int64.Type}, {"Sold Month 8", Int64.Type}, {"Sold Month 9", Int64.Type}, {"Sold Month 10", Int64.Type}, {"Sold Month 11", Int64.Type}, {"Sold Month 12", Int64.Type}}),
i = {"i"}, k = {"Material 1"},
base = Table.UnpivotOtherColumns(#"Changed Type", k, "Col", "Val"),
f = (n)=>let x = Table.Group(base, k, {"t", each Table.AddIndexColumn(_, "i", n)})
in Table.Combine(x[t]),
a = f(0),
b = f(1),
c = f(2),
join = Table.NestedJoin(Table.NestedJoin(a,i&k,b,i&k,"a"),i&k,c,i&k,"b"),
add = Table.AddColumn(join, "sum", each List.Sum({[Val],[a][Val]{0}?,[b][Val]{0}?})),
group = Table.Group(add, k, {"max", each List.Max([sum])}),
final = Table.Join(Source, k, group, k)
in final
A: Sample code below
The main trick is to unpivot, then use a custom column and an index to sum the current row, the next row, and the row after that, when they belong to the same Part Number.
let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Part Number", type text}}),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Changed Type" , {"Part Number"}, "Attribute", "Value"),
#"Added Index" = Table.AddIndexColumn(#"Unpivoted Other Columns", "Index", 0, 1),
#"Added Custom" = Table.AddColumn(#"Added Index" ,"Sum",(i)=>
List.Sum(Table.SelectRows(#"Added Index" , each [Part Number]=i[Part Number] and [Index]=i[Index]) [Value])+
List.Sum(Table.SelectRows(#"Added Index" , each [Part Number]=i[Part Number] and [Index]=i[Index]+1) [Value])+
List.Sum(Table.SelectRows(#"Added Index" , each [Part Number]=i[Part Number] and [Index]=i[Index]+2) [Value])
, type number),
#"Grouped Rows" = Table.Group(#"Added Custom", {"Part Number"}, {{"Max", each List.Max([Sum]), type number}}),
// Merge the max into the original table
#"Merged Queries" = Table.NestedJoin(#"Changed Type",{"Part Number"}, #"Grouped Rows",{"Part Number"},"Table1",JoinKind.LeftOuter),
#"Expanded Table1" = Table.ExpandTableColumn(#"Merged Queries", "Table1", {"Max"}, {"Max"})
in #"Expanded Table1"
A: Slightly different way:
let
Source = Excel.CurrentWorkbook(){[Name="Table"]}[Content],
i = {"i"}, k = {"Part Number"},
base = Table.UnpivotOtherColumns(Source, k, "Col", "Val"),
f = (n)=>let x = Table.Group(base, k, {"t", each Table.AddIndexColumn(_, "i", n)})
in Table.Combine(x[t]),
a = f(0),
b = f(1),
c = f(2),
join = Table.NestedJoin(Table.NestedJoin(a,i&k,b,i&k,"a"),i&k,c,i&k,"b"),
add = Table.AddColumn(join, "sum", each List.Sum({[Val],[a][Val]{0}?,[b][Val]{0}?})),
group = Table.Group(add, k, {"max", each List.Max([sum])}),
final = Table.Join(Source, k, group, k)
in
final
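The window logic in both answers (each value plus its next one and two neighbours, with missing trailing neighbours ignored, then the maximum taken) can be checked outside Power Query. A minimal Python sketch with made-up sample data:

```python
def max_rolling_sum(values, window=3):
    # For each position, sum up to `window` consecutive values
    # (shorter windows at the end, mirroring List.Sum skipping nulls),
    # then take the maximum over all positions.
    return max(sum(values[i:i + window]) for i in range(len(values)))

# Hypothetical part with 12 monthly sales figures
sold = [1, 2, 3, 10, 10, 10, 0, 0, 1, 2, 3, 4]
print(max_rolling_sum(sold))  # 30 (months 4-6)
```

Running this over each part's unpivoted values gives the same per-part maximum the M queries compute with the grouped joins.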
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66711688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to display Bootstrap Autocloseable alert message on asp.net button click?
Code as follows:
<div>
<asp:Button ID="btnLogin" runat="server" Text="Login" CssClass="btn btn-block org"
Style="margin-top: 0px" OnClick="btnLogin_Click showalertmsg();" ValidationGroup="Login" />
</div>
The closeable alert code using javascript:
<script type="text/javascript">
function showalertmsg(message, alerttype) {
$('#alert_placeholder').append('<div id="alertdiv" class="alert ' + alerttype + '"><a class="close" data-dismiss="alert">×</a><span>' + message + '</span></div>')
setTimeout(function () {
$("#alertdiv").remove();
}, 5000);
}
</script>
I called the function "showalertmsg" on the asp button click and it gives an error.
The error while compiling is as follows:
Compiler Error Message: CS1026: ) expected
A: There are issues in the asp.net button markup; it should be:
<asp:Button ID="btnLogin" runat="server" Text="Login" CssClass="btn btn-block org"
Style="margin-top: 0px" OnClick="btnLogin_Click" ValidationGroup="Login" OnClientClick="showalertmsg(); return false;" />
A: You can't call a javascript function like this:
OnClick="btnLogin_Click showalertmsg();"
Instead,
use OnClientClick="showalertmsg();"
and also return false at the end to stop the postback:
function showalertmsg(message, alerttype) {
$('#alert_placeholder').append('<div id="alertdiv" class="alert ' + alerttype + '"><a class="close" data-dismiss="alert">×</a><span>' + message + '</span></div>')
setTimeout(function () {
$("#alertdiv").remove();
}, 5000);
return false;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29365677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How conv2D function change the input layer In my ResNet32 network coded using Tensorflow, the input size is 32 x 32 x 3 and the output of the
layer is 32 x 32 x 32. Why are 32 channels used?
tf.contrib.layers.conv2d(
inputs,
num_outputs,   // how do I determine the number of channels to use in my layer?
kernel_size,
stride=1,
padding='SAME',
data_format=None,
rate=1,
activation_fn=tf.nn.relu,
normalizer_fn=None,
normalizer_params=None,
weights_initializer=initializers.xavier_initializer(),
weights_regularizer=None,
biases_initializer=tf.zeros_initializer(),
biases_regularizer=None,
reuse=None,
variables_collections=None,
outputs_collections=None,
trainable=True,
scope=None
)
Thanks in advance,
A: The 3 in the input represents that the input image is RGB (a color image), i.e. it has 3 color channels; if it were a black and white image it would have been 1 (monochrome image).
The 32 in the output represents the number of kernels/filters (output channels) you are using, so the 3-channel color image is re-encoded as 32 feature channels.
This helps in learning more complex and different set of features of the image. For example, it can make the network learn better edges.
A: By assigning stride=2 you can reduce the spatial size of input tensor so that the height and width of output tensor becomes half of that input tensor. That means, if your input tensor shape is (batch, 32, 32, 3) (3 is for RGB channel) to a Convolution layer having 32 kernels/filters with stride=2 then the shape of output tensor will be (batch, 16, 16, 32). Alternatively, Pooling is also widely used to reduce the output tensor size.
A: By assigning stride=2 you can reduce the spatial size of the input tensor so that the height and width of the output tensor become half those of the input tensor. That means, if your input tensor shape is (batch, 32, 32, 3) (3 is for the RGB channels) to a convolution layer having 32 kernels/filters with stride=2, then the shape of the output tensor will be (batch, 16, 16, 32). Alternatively, pooling is also widely used to reduce the output tensor size.
The ability to learn hierarchical representations by stacking conv layers is considered the key to the success of CNNs. In a CNN, as we go deeper the spatial size of the tensor reduces whereas the number of channels increases, which helps to handle variations in the appearance of complex target objects. This reduction of spatial size drastically decreases the required number of arithmetic operations and computation time, with the motive of extracting prominent features contributing towards the final output/decision. However, finding the optimal number of filters/kernels/output channels is time consuming, and therefore people follow proven earlier architectures, e.g. VGG.
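The shape arithmetic in the answers above can be sketched in Python. This is the standard rule for 'SAME' padding (spatial size depends only on the stride; the channel count is just the number of filters) — a sketch, not TensorFlow's actual implementation:

```python
import math

def conv2d_same_output_shape(h, w, in_channels, num_filters, stride=1):
    # With 'SAME' padding the output spatial size is ceil(size / stride).
    # The output channel count equals num_filters, regardless of
    # in_channels (in_channels only affects the kernel's weight count).
    return (math.ceil(h / stride), math.ceil(w / stride), num_filters)

print(conv2d_same_output_shape(32, 32, 3, 32, stride=1))  # (32, 32, 32)
print(conv2d_same_output_shape(32, 32, 3, 32, stride=2))  # (16, 16, 32)
```

This reproduces the question's 32 x 32 x 3 → 32 x 32 x 32 case, and the stride-2 halving described in the second answer.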
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58181449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Failed to resolve libraries when changing from jcenter() to mavenCentral() So I am working on an old project and Android Studio warned me to change from jcenter() to mavenCentral() as jcenter() is not being updated.
So this was my initial project build.gradle:
buildscript {
repositories {
google()
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:7.2.1'
classpath 'com.google.gms:google-services:4.3.13'
}
}
allprojects {
repositories {
google()
jcenter()
maven { url "https://jitpack.io" }
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
And I have changed it to the following one:
buildscript {
repositories {
google()
mavenCentral()
}
dependencies {
classpath 'com.android.tools.build:gradle:7.2.1'
classpath 'com.google.gms:google-services:4.3.13'
}
}
allprojects {
repositories {
google()
mavenCentral()
maven { url "https://jitpack.io" }
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
With the initial build.gradle using jcenter() everything was working fine, but using mavenCentral it fails to resolve these 2 github libraries:
"Failed to resolve: com.loopeer.lib:shadow:0.0.3"
"Failed to resolve: com.vatsal.imagezoomer:image-zoomer:1.0.2"
I don't know why I don't have any problem with jcenter() but I do with mavenCentral(). Could there be a solution, or can these libraries only work with jcenter()?
The curious thing is that I have searched for information on one of these libraries and supposedly it should work with mavenCentral: "ImageZoomer is available in the MavenCentral, so getting it as simple as adding it as a dependency" (this is what is said in that github library's readme).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73433875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Python pylab chart plot and loops I'm just learning Python and I'm wondering if someone could help me get the chart to render properly in the following code, i.e. plot the sequence of data points.
I have put print statements so I can see if the calculations are correct which they are.
Thanks
from pylab import *
def some_function(ff, dd):
if dd >=0 and dd <=200:
tt = (22/-90)*ff+24
elif dd >=200 and dd <=1000:
st = (22/-90)*(ff)+24
gg = (st-2)/-800
tt = gg*dd+(gg*-1000+2)
else:
tt = 2.0
return tt
ff = float(25)
for dd in range (0, 1200, 100):
tt1 = some_function(ff, dd)
plot(dd,tt1)
print(dd)
print(tt1)
title("Something")
xlabel("x label")
ylabel("y label")
show()
A: Since you are plotting one point at a time, you need either a scatter plot or a plot with markers
for dd in range (0, 1200, 100):
tt1 = some_function(ff, dd)
scatter(dd, tt1) # Way number 1
# plot(dd,tt1, 'o') # Way number 2
EDIT (answering your second question in the comments below): Save the results in a list and plot outside the for loop
result = []
dd_range = range (0, 1200, 100)
for dd in dd_range:
tt1 = some_function(ff, dd)
result.append(tt1)
plot(dd_range, result, '-o')
A: You can vectorize your function and work with NumPy arrays to avoid the for-loop and better inform matplotlib of what you want to plot
import numpy as np
from pylab import *
def some_function(ff, dd):
if dd >=0 and dd <=200:
tt = (22/-90)*ff+24
elif dd >=200 and dd <=1000:
st = (22/-90)*(ff)+24
gg = (st-2)/-800
tt = gg*dd+(gg*-1000+2)
else:
tt = 2.0
return tt
vectorized_some_function = np.vectorize(some_function)
ff = float(25)
dd = np.linspace(0, 1100, 12)
tt = vectorized_some_function(ff, dd)
plot(dd, tt)
title("Something")
xlabel("x label")
ylabel("y label")
show()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56100837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to generate flexible number of tabs in php? Hello guys I really need your help.
I am very new to php and programming in general, so maybe my question is already solved elsewhere, but I wasn't able to adjust the code in the correct way or see the correct solution. I have the following problem: I need a procedure that can handle a flexible number of tabs. For example, if I have 5 keywords, 5 tabs should be generated; for 3 keywords, 3 tabs, and so on...
I know how to handle tabs when their number is known and doesn't change (for example, always three tabs with 'home', 'news' and 'registration'). But in my case, the number of tabs changes over time.
I tried this code:
My php part:
<?php
/* How it was handled before I tried to make it dynamically:
<button class="tablink" onclick="openPage('Home', this, 'red')" id="defaultOpen">Home</button>
<button class="tablink" onclick="openPage('News', this, 'green')">News</button>
<button class="tablink" onclick="openPage('Contact', this, 'blue')">Contact</button>
<button class="tablink" onclick="openPage('About', this, 'orange')">About</button>
<div id="Home" class="tabcontent">
<h3>Home</h3>
<p> home... </p>
</div>
<div id="News" class="tabcontent">
<h3>News</h3>
<p> news... </p>
</div>
<div id="Contact" class="tabcontent">
<h3>Contact</h3>
<p> contact... </p>
</div>
<div id="About" class="tabcontent">
<h3>About</h3>
<p> about... </p>
</div>
My try making the number of tabs dynamically:
*/
$current_tab_number = 0; //Number of tabs generated
$end_number=3; //this should be the number of tabs at the end
while($current_tab_number < $end_number){
$current_tab_number = current_tab_number + 1;
?>
<button class="tablink" onclick="openPage('$current_tab_number', this, 'red')">Name</button>
<div id="$current_tab_number" class="tabcontent">
<h3>Name</h3>
<p> Some text...</p>
</div>
<?php
}
This is my javascript:
<!-- Javascript part -->
<script>
function openPage(pageName, elmnt, color) {
// Hide all elements with class="tabcontent" by default */
var i, tabcontent, tablinks;
tabcontent = document.getElementsByClassName("tabcontent");
for (i = 0; i < tabcontent.length; i++) {
tabcontent[i].style.display = "none";
}
// Remove the background color of all tablinks/buttons
tablinks = document.getElementsByClassName("tablink");
for (i = 0; i < tablinks.length; i++) {
tablinks[i].style.backgroundColor = "";
}
// Show the specific tab content
document.getElementById(pageName).style.display = "block";
// Add the specific color to the button used to open the tab content
elmnt.style.backgroundColor = color;
}
// Get the element with id="defaultOpen" and click on it
document.getElementById("defaultOpen").click();
</script>
As you can see, I only changed the php part. But since I'm a beginner, the code doesn't work as expected and I really have no clue how to adjust it in the correct way.
In addition it would be fantastic if the tabs could be named flexibly too, like 'tab 1', 'tab 2', ...
I really hope, someone could help me.
So far, have a great day!
A: You need to know a name for each tab and how many to create, so create an array with the names you want, that tells you how many are needed and the name that should be used in the javascript call.
/*
My try making the number of tabs dynamically:
*/
$tabs = ['Home', 'News', 'Contact', 'About'];
foreach ($tabs as $tab){
?>
<button class="tablink" onclick="openPage('<?php echo $tab;?>', this, 'red')"><?php echo $tab;?></button>
<div id="<?php echo $tab;?>" class="tabcontent">
<h3><?php echo $tab;?></h3>
<p> Some text...</p>
</div>
<?php
} // endforeach
A: Key Issue
The reason your code doesn't work is because you've missed a $ symbol from the line:
$current_tab_number = current_tab_number + 1;
So, effectively, your while loop becomes infinite as you don't increment the $current_tab_number variable.
Replace that line with:
$current_tab_number = $current_tab_number + 1;
// OR
$current_tab_number++;
Which then becomes:
while($current_tab_number < $end_number){
$current_tab_number++;
// Print tabs here
}
Alternatively you could use a for loop like:
for($i = 0; $i < $end_number; $i++){
// print tabs here
}
Additionally
Using variables outside of PHP tags
This line:
<button class="tablink" onclick="openPage('$current_tab_number', this, 'red')">Name</button>
will literally print the string $current_tab_number; what it should be is:
<button class="tablink" onclick="openPage('<?php echo $current_tab_number; ?>', this, 'red')">Name</button>
Same here:
<div id="$current_tab_number" class="tabcontent">
<div id="<?php echo $current_tab_number; ?>" class="tabcontent">
We could easily get around this by just using echo to print the html instead. For example:
echo <<<EOT
<button class="tablink" onclick="openPage('{$current_tab_number}', this, 'red')">Name</button>
<div id="{$current_tab_number}" class="tabcontent">
<h3>Name</h3>
<p> Some text...</p>
</div>
EOT;
This method doesn't add data to the divs
The output of this code is expected to be:
<button class="tablink" onclick="openPage('Home', this, 'red')" id="defaultOpen">Home</button>
<button class="tablink" onclick="openPage('News', this, 'green')">News</button>
<div id="Home" class="tabcontent">
<h3>Home</h3>
<p> home... </p>
</div>
<div id="News" class="tabcontent">
<h3>News</h3>
<p> news... </p>
</div>
But will actually be:
<button class="tablink" onclick="openPage('<?php echo $current_tab_number; ?>', this, 'red')">Name</button>
<div id="<?php echo $current_tab_number; ?>" class="tabcontent">
<h3>Name</h3>
<p> Some text...</p>
</div>
<button class="tablink" onclick="openPage('<?php echo $current_tab_number; ?>', this, 'red')">Name</button>
<div id="<?php echo $current_tab_number; ?>" class="tabcontent">
<h3>Name</h3>
<p> Some text...</p>
</div>
To add data to the output we need to source it from somewhere, for example:
$tab_list = [
1 => [
"name" => "Home",
"content" => "Some text for content..."
],
2 => [
"name" => "About",
"content" => "Some text for about..."
]
];
foreach($tab_list as $tab_number => $tab){
echo "
<button class=\"tablink\" onclick=\"openPage('{$tab_number}', this, 'red')\">Name</button>
<div id=\"{$tab_number}\" class=\"tabcontent\">
<h3>{$tab["name"]}</h3>
<p>{$tab["content"]}</p>
</div>
";
}
A: You can iterate over the tab names and concatenate the string to generate the html code for tabs. For example:
$tab_names = ['home', 'about', 'contact'];
$resulting_html_code = "";
foreach ($tab_names as $tab){
$resulting_html_code .= "<div class='tab'>{$tab}</div>";
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65277887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can I write in a user input in Python? I would like to add a "default value" to an input().
Something like this:
>>> x = input("Enter a number: ", value="0")
Enter a number: 0
So that the user can edit it, like the value attribute in html <input>
I tried using pyautogui:
import pyautogui
def input_value():
    x = input("Enter a number: ")
    pyautogui.hotkey("0")
    return x
But naturally it doesn't work because pyautogui.hotkey() is called after input().
Is there any way to do this?
A: Take a look at pyinputplus for enhanced input() functionality. If the online documentation isn't enough, you can also read Automate the Boring Stuff with Python for a good guide on using the module:
https://pypi.org/project/PyInputPlus/
All input functions have the following parameters:
default (str, None): A default value to use should the user time out or exceed the number of tries to enter valid input.
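If pulling in a dependency isn't an option, a common dependency-free convention is to show the default in the prompt and fall back to it on empty input. A small sketch (the function name `input_with_default` is made up; this doesn't pre-fill an editable value in the input line the way HTML's value attribute does):

```python
def input_with_default(prompt, default):
    # Show the default in brackets; an empty answer falls back to it.
    response = input(f"{prompt} [{default}]: ")
    return response.strip() or default

# x = input_with_default("Enter a number", "0")
```

On Unix terminals, the standard-library readline module's `set_startup_hook` can pre-fill actual editable text into the prompt, but that behavior is terminal-dependent and doesn't work on plain Windows consoles.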
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71609779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: system.webServer/security/authorization in web.config how to migrate to aspcore I'm using an asp.net webapp which uses system.webServer in web.config and has a list of user accounts as roles.
<system.webServer>
<validation validateIntegratedModeConfiguration="false" />
<security>
<authorization>
<remove users="*" roles="" verbs="" />
<add accessType="Allow" roles="MOON\USER1" />
<add accessType="Allow" roles="MARS\USER2" />
</authorization>
</security>
</system.webServer>
To achieve this authorization in asp.net core I tried several different approaches and none seems to work. What is the right and best way in asp.net core to implement authorization so that the web app loads only for permitted users?
A: I don't know if you managed to find the solution to your issue but the first problem in that config file is that the auth rules are matched in order. All your requests are matching the deny first and you never get to evaluate the access for USER1 and USER2.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43038447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: StandardOutput.ReadToEnd() always NULL I have a problem with this function. It is supposed to output an install command to a textbox but install.StandardOutput.ReadToEnd() is always null.
I get the error: Expression cannot be evaluated because there is a native frame at the top of the call stack
Can you help me out with this?
Process install = new Process();
install.StartInfo.FileName = "cmd.exe";
install.StartInfo.UseShellExecute = false;
install.StartInfo.Arguments = "/all";
install.StartInfo.CreateNoWindow = true;
install.StartInfo.RedirectStandardInput = true;
install.StartInfo.RedirectStandardOutput = true;
install.Start();
if (CB_MultiADB.Checked == true)
{
install.StandardInput.WriteLine("FOR /F \"skip=1\" %%x IN ('adb devices') DO start cmd.exe @cmd /k" + InstallOption + InstallPfad + "\"");
}
else
{
install.StandardInput.WriteLine("adb install" + InstallOption + InstallPfad + "\"");
InstallAusgabe = install.StandardOutput.ReadToEnd();
string Index = "adb install";
int Indexnr = InstallAusgabe.IndexOf(Index);
string SubInstall = InstallAusgabe.Substring(Indexnr, 100);
TB_Ausgabe.Text = SubInstall;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60643262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Register a Facebook user for an existing web app I have a Flex application that I would like to embed in a Facebook canvas. The app already runs on my website and is accessible there once the user has created an account. I am trying to work out the flow of integrating with Facebook:
*User accesses my app in Facebook;
*The app.facebook.com/myApp page loads;
*The Facebook SDK in myApp checks if the user is already logged into Facebook, and if not asks them to do so;
*Once I know the user is logged into Facebook, myApp will redirect to my server which will check if the Facebook response.username already has an account:
*If the user does not have an account then I create one for them, log them in and redirect them back to the app page with a parameter hasAccount=true so that I know they do have an account. I then show them the app;
*If the user does have an account then I redirect them with hasAccount=true and show them the app.
Can anyone comment on that control-flow, is that normal or have I made it overly complicated?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18492783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: TFS Sub-area name change I would like to know if its possible to edit the name of a sub-area in TFS. I know I can create new areas and delete but I can't seem to find a way to rename. There might be implications around this that would keep someone from performing this task.
A: In TFS 2012, you should be able to go the same screen you create the areas and double click (or right click on select edit) on the area you want to rename. At that point, you should be able to modify the name accordingly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14244281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to abort only the current git cherry-pick and not previous ones? I have cherry picked multiple commits by doing git cherry-pick -x <SHA> and one of them failed (due to conflicts). I haven't pushed anything to the branch yet and I just want to abort the last cherry pick alone (the failed one) and keep everything else. Should I simply do git push to my branch and checkout all the files to discard the changes? Is there a command to abort just this?
git status shows something like this:
# On branch bugs-br
# You are currently cherry-picking.
# (fix conflicts and run "git commit")
#
# Changes to be committed:
#
# modified: oper/svc/oper.cc
#
# Unmerged paths:
# (use "git add <file>..." to mark resolution)
#
# both modified: oper/svc_thread.cc
A: The command git cherry-pick --abort will only abort the last cherry pick you started. Previous complete cherry-pick operations are already committed, so they will not be affected.
So that's all you need to do at this point.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72791909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: VBA: Replace text from txt for each line and save as Could somebody help me with the following? I tried making it on my own, but all I could do is open a txt and replace a static word with static word.
VBA script:
Open and Read first line of ThisVbaPath.WordsToUse.txt
Open and Find USER_INPUT in ThisVbaPath.BaseDoc.docx (or txt)
Replace all occurrence of USER_INPUT with first line from WordsToUse.txt
Save BaseDoc.docx as BaseDoc&First_Line.docx
Close all
Go next line and do same, but don't ask for user input, use previous one
If error go next
When done show if there were any errors (unlikely I guess)
I would use it about weekly for 150-ish lines.
Thanks!
A: I think something like this should work?
Sub test()
Dim text As String, textReplace As String, findMe As String ' each variable needs its own type, or the first two default to Variant
findMe = InputBox("What Text To Find?")
Open "C:\Excel Scripts\test.txt" For Input As #1
Open "C:\Excel Scripts\test2.txt" For Input As #2
Open "C:\Excel Scripts\output.txt" For Output As #3
While Not EOF(1)
Line Input #1, text
While Not EOF(2)
Line Input #2, textReplace
Write #3, Replace(text, findMe, textReplace)
Wend
Wend
Close #1
Close #2
Close #3
End Sub
A: Sub TextFile_FindReplace()
Dim TextFile As Integer
Dim FilePath As String
Dim FileContent As String
FilePath = Application.ActiveWorkbook.Path & "\NEWTEST.txt"
TextFile = FreeFile
Open FilePath For Input As TextFile
FileContent = Input(LOF(TextFile), TextFile)
Close TextFile
FileContent = Replace(FileContent, "FOO", "BAR")
' Reopen for output, since Print # fails on a file opened For Input
TextFile = FreeFile
Open FilePath For Output As TextFile
Print #TextFile, FileContent
Close TextFile
End Sub
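If plain .txt files are acceptable (this won't parse .docx), the whole weekly job described in the question (one output file per line of WordsToUse.txt, named after the base doc plus the word) is also a short Python script. The function name, paths, and the USER_INPUT placeholder below are assumptions taken from the question, not a fixed API:

```python
from pathlib import Path

def generate_files(base_file, words_file, placeholder, out_dir):
    # Read the template once, then write one copy per replacement word,
    # named BaseDoc<word>.txt as the question asks (BaseDoc&First_Line).
    base = Path(base_file).read_text()
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stem = Path(base_file).stem
    for word in Path(words_file).read_text().splitlines():
        word = word.strip()
        if not word:
            continue  # skip blank lines in the word list
        (out / f"{stem}{word}.txt").write_text(base.replace(placeholder, word))
```

For roughly 150 lines a week this runs in well under a second, and errors for a single word can be wrapped in try/except without stopping the batch.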
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39109383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Display an overlay when submitting a form I want to display an overlay when the user is submitting a form. I'm using $('form').submit(function { [...] }) to achieve this.
The problem is that I'm using client-side validation. The immediate solution is of course to manually validate the form: if ($('form').validate().valid()) { .. }
The ultimate solution would be to hook onto some event which is invoked when the POST is made. (To support any other plugins that might prevent the form from submitting and when the validation plugin is not used.)
A: Assuming you're using jQuery validate, you can use the submitHandler property to run code when the validation passes, for example:
$("#myForm").validate({
submitHandler: function(form) {
// display overlay
form.submit();
}
});
Further reading
A: Try to return false; on validation errors while submitting.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9311752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: mysql utf8 turkish characters not correctly recognized In MySQL, with utf8-encoded Turkish data, I can't search for "İ" and "ı". When I search, the results contain "Y" or "y", because in latin1 "İ" is displayed as "Ý" and "ı" as "ý".
With latin1 data I used latin1_general_ci for correct results, but there is no alternative collation for utf8; it is already utf8_general_ci.
Has anyone else had this problem, or do you have a solution?
thanks.
I have tried the Stack Overflow search engine for this problem; since it also uses MySQL with utf8, it reproduces my issue. Try searching for "alİ" and "ali": both searches give different results, yet both are the same word in Turkish. In Turkish, "İ" is the capital of "i", and capital "I" is the capital of "ı".
There is a solution, but it is not complete:
if you use utf8_turkish_ci then the result gives "İ", but also "Y".
A: The problem is temporarily solved. If you use utf8_turkish_ci for all collations you get correct results, but I am wondering why I have to use turkish_ci.
Try collating all columns as utf8_turkish_ci, the tables as utf8_turkish_ci, and the database too.
good luck
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2706266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Does Facebook allow CORS headers? How can I discover if Facebook allowes cross domain requests? Can I make a JavaScript which can handle a Cross Domain Request to Facebook?
A: Are you talking about the Facebook API? Anyway:
http://www.test-cors.org
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16496622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to bulk load JSON with values into Synapse SQL dedicated pool I'm attempting to bulk load json files, along with their filenames and paths into a synapse analytics dedicated sql table but I'm just stumped on how to accomplish it. I can load the json files solo without a problem, but I really need the additional values as well.
This is what I'm trying but it doesn't work:
Copy INTO dbo.PolicyStagingJsonOnly
SELECT jsonContent,
[result].filename() as fn,
[result].filepath() as fp
FROM
OPENROWSET(
BULK 'https://datalakexxxx.blob.core.windows.net/staging/policy/*.json',
FORMAT = 'CSV',
FIELDQUOTE = '0x0b',
FIELDTERMINATOR ='0x0b',
rowterminator = '0x0c'
)
WITH (
jsonContent varchar(MAX)
) AS [result]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74307160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to check for bounced emails in rails? I'm currently creating an email app that is able to send emails to many users. However, I want to know whether there are bounced emails. I'm currently using Amazon SES to notify me if the email is bounced. However, I want the bounced email's data to be automatically entered into my Rails application instead of typing it manually based to the mailer daemons I get from Amazon. Is there are way to do so?
A: If you are willing to pay, this SaaS site called bouncely seems to provide an interface and an api wrapper around SES bounces.
A: send_email() returns a response object which can be interrogated for the response metadata.
Compare the status of this with the code you are interested in, perhaps a 550.
A: I couldn't find any clean existing solution for this, so I wrote a gem (email_events) that allows you to put a email event handler method (including for bounce events) right in your mailer class: https://github.com/85x14/email_events. It supports SES and Sendgrid, so should work for you if you still need it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12077481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Sql view from multiple tables I am stuck in a query, please help.
I want to create view.
Table1
ID | Acode | Bcode | Ccode |
1 | 10 | 101 | 102 |
2 | 11 | 100 | 101 |
3 | 10 | 100 | 102 |
Table2
Acode | Adescription |
10 | English |
11 | Math |
Table3
Bcode | Bdescription |
100 | Grade A |
101 | Grade B |
Table4
Ccode | Cdescription |
100 | Level A |
101 | Level B |
102 | Level C |
I want to print all rows in Table1 with description from other tables based on code in table1.
Output should be:
NewView
ID | Acode |Adescription | Bcode | Bdescription | Ccode | Cdescription |
1 | 10 | English | 101 | Grade B | 102 | Level C |
2 | 11 | Math | 100 | Grade A | 101 | Level B |
3 | 10 | English | 100 | Grade A | 102 | Level C |
I created a LEFT JOIN but it returns more rows than table1 actually has. I want only the records from table1, with the descriptions from the other tables.
Please help
A: Below is an example. Since you didn't post your original query attempt, we can't really say why you were getting multiple rows. No need for a LEFT JOIN unless you are missing codes in the joined tables.
SELECT Table1.ID
, Table1.Acode
, Table2.Adescription
, Table1.Bcode
, Table3.Bdescription
, Table1.Ccode
, Table4.Cdescription
FROM dbo.Table1
JOIN dbo.Table2 ON Table2.Acode = Table1.Acode
JOIN dbo.Table3 ON Table3.Bcode = Table1.Bcode
JOIN dbo.Table4 ON Table4.Ccode = Table1.Ccode;
A: Thanks for the help.
The LEFT JOIN worked well. I narrowed down the tables one by one and found the table that was producing duplicate records. It turned out I had forgotten to add a unique key, and one record (a description) was entered twice, which duplicated rows and increased the total row count.
Thanks to everyone for helping me out, and to Dan Guzman for pointing me towards duplicate codes.
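The cause found above (a duplicated key in a lookup table) is easy to demonstrate. A tiny nested-loop join in Python, with hypothetical rows, shows how one accidental duplicate description row inflates the output row count:

```python
def inner_join(left, right, key):
    # Naive nested-loop inner join: every matching right-hand row
    # produces one output row per left-hand row it matches.
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

t1 = [{"ID": 1, "Acode": 10}, {"ID": 2, "Acode": 11}]
lookup = [{"Acode": 10, "Adescription": "English"},
          {"Acode": 10, "Adescription": "English"},  # accidental duplicate
          {"Acode": 11, "Adescription": "Math"}]

print(len(inner_join(t1, lookup, "Acode")))  # 3 rows instead of 2
```

With the duplicate removed (or prevented by a unique key on Acode), the join returns exactly one row per row of t1, which is what the view should produce.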
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29452123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to get artist and title information using python-vlc? I'm trying to record and get song information (title and artist) from web radios using python-vlc lib.
The recording functionality is working well but the media parse to get the song information doesn't work!
This is my code:
inst = vlc.Instance() # Create a VLC instance
p = inst.media_player_new() # Create a player instance
cmd1 = "sout=file/ts:%s" % outfile
media = inst.media_new("http://playerservices.streamtheworld.com/api/livestream-redirect/JBFMAAC1.aac", cmd1)
media.get_mrl()
p.set_media(media)
p.play()
media.parse()
for i in range(13):
print("{} - {}".format(i, media.get_meta(i)))
It's always returning "MediaParsedStatus.skipped" status. And all song information returns "None". I tested the same radio in VLC App and there it works fine.
Can anyone help me?
thanks in advance
A: Since this is a stream, libvlc will not parse it by default (only local files).
You need to use a flag to tell libvlc to parse the media even if it is a network stream.
Use libvlc_media_parse_with_options with MediaParseFlag set to network (1).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54524434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: badimageformatexception dynamic code generation I'm trying to dynamically generate an executable with the CSharpCodeProvider whose only purpose is to invoke one single method from a specific dll. When I execute the generated file I get a BadImageFormatException.
I already set the platform to x86. When I manually write the code which invokes the method and debug it in Visual Studio, it works perfectly fine.
This is the code for the executable:
using DLLName;
namespace ExampleNamespace
{
class Program
{
public static void Main(string[] args)
{
MyClass obj = new MyClass();
obj.MyMethod();
}
}
}
Before dynamically compiling the code I add the assembly via
compilerParameters.ReferencedAssemblies.Add("PathToDLL");
I write the executable to the same directory as the dll.
EDIT
This is the code that I use to invoke the compiler:
CSharpCodeProvider provider = new CSharpCodeProvider();
CompilerParameters parameters = new CompilerParameters();
parameters.GenerateExecutable = true;
parameters.GenerateInMemory = false;
parameters.OutputAssembly = @"DirectoryOfDLL\MyMethod.exe";
parameters.ReferencedAssemblies.Add("PathToDLL");
provider.CompileAssemblyFromSource(parameters, code);
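One thing worth double-checking (a sketch, not a confirmed fix): the platform of the dynamically generated executable is controlled by the compiler options passed to CSharpCodeProvider, not by the host project's build settings, so the generated exe may come out as AnyCPU/x64 even though the referenced DLL is x86:

```csharp
CSharpCodeProvider provider = new CSharpCodeProvider();
CompilerParameters parameters = new CompilerParameters();
parameters.GenerateExecutable = true;
parameters.GenerateInMemory = false;
// Force the generated executable to x86 so it matches a 32-bit DLL.
parameters.CompilerOptions = "/platform:x86";
parameters.OutputAssembly = @"DirectoryOfDLL\MyMethod.exe";
parameters.ReferencedAssemblies.Add("PathToDLL");
provider.CompileAssemblyFromSource(parameters, code);
```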
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28500100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Distinguish swipe and click in Angular I use ng-click to show details of an item in a list and also I attach jQuery mobile swipe event to the list item to show delete button when a user swipes to the left.
The problem is that when I swipe on the element, it doesn't only emit swipe event, but click event as well. So, when I want to swipe to delete an element, it shows the delete button and opens details view.
What can I do about this? It would be cool to have something like ng-swipe.
A: The most recent unstable angularjs (1.1.4+) builds include both an ngSwipeLeft and ngSwipeRight directive. You must take care to reference the angular-mobile.js library.
https://github.com/angular/angular.js/blob/master/src/ngMobile/directive/ngSwipe.js
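For illustration, usage looks roughly like this (a sketch based on the directive's documented form; the expression and handler names here are made up):

```html
<!-- ng-swipe-left reveals the delete button instead of triggering the click -->
<li ng-repeat="item in items"
    ng-click="showDetails(item)"
    ng-swipe-left="item.showDelete = true">
  {{ item.name }}
  <button ng-show="item.showDelete" ng-click="remove(item)">Delete</button>
</li>
```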
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16694554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: iOS - How to set nil in UITextField (It should be visible) I know this is a tricky question.
Is there any way to make the nil value visible in a UITextField?
I tried with this code but the TextField shows blank.
myTextField.text = nil;
For this task we can't use myTextField.text = @"nil"; (I need to complete this task without using the literal string @"nil".)
But it should look like this.
A: Your requirement isn't valid. The only way to get the text nil to appear in the text field is to assign the actual text @"nil" to the text field.
It makes no sense to want to see nil without using @"nil" to do so.
Perhaps setting the placeholder to @"nil" is a compromise.
A: I think you can only use myTextField.text = @"nil";. Instead of showing nil, you could show a proper message inside the UITextField, or below it, such as "input is not valid". Showing nil doesn't help the user. You could also use an alert to indicate there is no input.
A: As others have said, this seems like a strange requirement, but you could create a subclass of UITextField that overrides the getter of the text property so that it returns @"nil" if text was actually nil.
A: Try this: self.textFieldName.placeholder = @"nil";
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39707135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: MacOS Application Reopens After the User Quits I have an app on the Mac App Store and over the last few weeks some people have complained that when they quit the app, it will reopen again.
Since the app is sandboxed, it should be technically impossible for my app to reopen itself (even if I wanted it to) after the user chose to quit it. So, I have no clue what went wrong and I can't reproduce this issue.
Any ideas?
A: We figured out what went wrong and how to fix it.
First off, since the app is sandboxed it's technically impossible that we caused this with our code. However, according to a user, there was a plist file (named after our app) in the LaunchAgents directory that caused the restarting of our app. After deleting that file, everything was fine again. As to why this entry existed in the first place and how it got there: ¯\ _(ツ)_/¯
Hope this helps anyone who has the same problem.
A: We haven't seen this exact issue, but something similar where we overrode the - (NSApplicationTerminateReply)applicationShouldTerminate:(NSApplication *)sender method.
And in certain circumstances we were returning NSTerminateLater or NSTerminateCancel instead of NSTerminateNow. In turn, the application would continue running even after the user told us to quit.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48136265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Make nested API call to fetch all list In my scenario, I have to hit one API call and, based on its response, call a final API to get the list of records.
The first API returns a flag, and based on that flag I have to make a conditional API call to get the records.
The problem is that GET_LIST is called by react-admin, and within it I cannot handle two asynchronous API calls.
I tried with Query and custom DataSource.
This is the API type called by the react-admin to fetch list.
GET_LIST: {
request(params: GetListParams): Ticket {
return {
method: 'GET',
url: `${base}/q`,
headers: makeHeaders(token),
query: mapQueryParams(params, universalFilters)
}
},
response(res: Object): {total: number, data: Array<Contact>} {
const {hits, rows} = res
return {
total: hits,
data: rows.map(row => {
return {...row, id: getId(row)}
})
}
}
},
This is the component that is called for the list.
export const ContactList = (props) => {
return (
<List {...props} filters={<Search />} bulkActionButtons={<PostBulkActions />}>
<Datagrid className="data-grid">
<GravatarField source="email" label="" sortable={false} />
<LinkField
source="first"
label="First Name"
text={ ({first}) => first }
deriveUrl={ ({id}) => `/#/contacts/${id}` }
target='' />
<LinkField
source="last"
label="Last Name"
text={ ({last}) => last || '(unknown name)' }
deriveUrl={ ({id}) => `/#/contacts/${id}` }
target='' />
<DateField source="lastActivity" />
<TextField source="email" />
<PhoneNumberField source="primaryPhone" label="Phone" sortable={false} />
<MultiTagField source="tags" sortable={false} deletable={false} />
{/* <DeleteButton /> */}
</Datagrid>
</List>
)
}
A: From what I understand of your code, this should be the job of your data provider. Since it should return a promise, you can make the two API calls in it and resolve only after you get the second call's response.
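For illustration, a minimal sketch of that idea with stand-in promises (the endpoint shapes below are made up, not react-admin's actual API):

```javascript
// Stand-ins for the two API calls; each returns a promise.
const fetchFlag = () => Promise.resolve({ useAlternate: true });
const fetchRecords = ({ useAlternate }) =>
  Promise.resolve(useAlternate ? ['a1', 'a2'] : ['b1']);

// GET_LIST handler: chain the calls and resolve once, only after the
// second (conditional) request has completed.
const getList = () =>
  fetchFlag()
    .then(fetchRecords)
    .then(rows => ({ data: rows, total: rows.length }));

getList().then(result => console.log(result.total)); // logs 2
```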
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57107799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: "The SQL statement is not valid. There are no columns detected in the statement" I am merging data from one file that contains six different sheets or tabs. All sheets contain the same headers. For the task I am using PowerPivot.
1st I created a connection using Excel and selected the file/imported Sheet1
2nd I activated my PowerPivot > Design > Existing Connections
3rd Table properties Table Name [Sheet1] > Switch to [Query Editor]
Here's my SQL that resulted the error in the title:
Select [Sheet1$].* From [Sheet1$]
UNION ALL
Select [Sheet2$].* FROM `C:\_TEST\HG.xlsx`.[Sheet2$]
UNION ALL
Select [Sheet3$].* FROM `C:\_TEST\HG.xlsx`.[Sheet3$]
UNION ALL
Select [Sheet4$].* FROM `C:\_TEST\HG.xlsx`.[Sheet4$]
UNION ALL
Select [Sheet5$].* FROM `C:\_TEST\HG.xlsx`.[Sheet5$]
UNION ALL
Select [Sheet6$].* FROM `C:\_TEST\HG.xlsx`.[Sheet6$]
When I validated the statement had the error:
The SQL statement is not valid. There are no columns detected in the statement.
A: To answer and share how I resolved my question: I found that when I created the connection I could still select sheets 2 to 6 in the dropdown from existing connections. So I tried the following code, which resolved the error :).
Select [Sheet1$].* From [Sheet1$]
UNION ALL
Select [Sheet2$].* FROM [Sheet2$]
UNION ALL
Select [Sheet3$].* FROM [Sheet3$]
UNION ALL
Select [Sheet4$].* FROM [Sheet4$]
UNION ALL
Select [Sheet5$].* FROM [Sheet5$]
UNION ALL
Select [Sheet6$].* FROM [Sheet6$]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33183234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Pandas boolean dataframe search returns False but should be True The Problem
I'm attempting to search through a pandas dataframe to find a single value. The dataframe columns I'm searching through are of type float64.
Working Example
Here is a working example of what I'd like, with a dataframe of type int64.
myseries = pd.Series([1,4,0,7,5], index=[0,1,2,3,4])
myseries
The output is the following:
0 1
1 4
2 0
3 7
4 5
dtype: int64
Now for the search:
myseries == 4
Results:
0 False
1 True
2 False
3 False
4 False
dtype: bool
Not Working Example
Here is a sample of my data.
df['difference']
Result
0 -2.979296
1 -0.423903
2 0.396515
...
48 0.450493
49 -1.216324
Name: priceDiff1, dtype: float64
As you can see, it is of type float64.
Now here's the issue. If I copy the value in row 2 and create a conditional statement like before, it doesn't return True.
df['difference'] == 0.396515
Output
0 False
1 False
2 False
...
48 False
49 False
Name: priceDiff1, dtype: bool
Row 2 should be True.
Any assistance with this issue would be great.
What I believe is happening is that my query isn't treating the value as float64 and might be assuming it's a different type. I've tested this by downcasting the column type from float64 to float32, with no luck.
A: You want to use Numpy's isclose
np.isclose(s, 0.396515)
array([False, False, True, False, False, False], dtype=bool)
A: Your python series stores, or points to, numeric data represented as floats, not decimals.
Here is a trivial example:-
import pandas as pd
s = pd.Series([1/3, 1/7, 2, 1/11, 1/3])
# 0 0.333333
# 1 0.142857
# 2 2.000000
# 3 0.090909
# 4 0.333333
# dtype: float64
s.iloc[0] == 0.333333 # False
s.iloc[0] == 1/3 # True
As @piRSquared explains, use np.isclose for such comparisons. Or, alternatively, round your data to a fixed number of decimal places.
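The same pitfall can be reproduced with plain Python floats, and the standard library offers math.isclose as a NumPy-free alternative:

```python
import math

stored = 1 / 3            # displayed as 0.333333 at six decimal places
print(stored == 0.333333) # False: the printed value is only a rounding
# Compare with a tolerance instead of exact equality.
print(math.isclose(stored, 0.333333, abs_tol=1e-6))  # True
```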
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48818340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Infinite loop through array of strings in React New to programming, and I'd really appreciate some help in rendering a React component. I have an array of strings, and I'd like to display each string in that array every 5 seconds on an infinite loop. The error I'm running into when trying to set the state is 'this.setState is not a function'. I'm inclined to think I'm using the wrong lifecycle method or there is a binding issue, but I'm lost. Below is the code. I'd really appreciate some help.
import React, {Component} from 'react';
class Home extends Component{
constructor(props){
super(props);
this.state = {
service: ''
}
}
componentDidMount(){
var services = ['Delivering professional and personalized care to your loved ones','Home visits with a personalized health plan', 'Transition Assistace', 'Advocacy and Guidance', 'Respite Care']
let i=0;
setInterval(function(){
console.log('set interval working');
const currentService = services[i];
this.setState({
service: currentService
})
i++;
if(i>=services.length){
i = 0;
}
}, 5000);
}
render(){
console.log('this', this.state.service);
return(
<div className="home">
<div className="profile-img"></div>
<div className="mission">
<div className="overlay">
<p>{this.state.service}</p>
</div>
</div>
</div>
)
}
}
export default Home;
A: The this in your function in the setInterval isn't the this from componentDidMount; that's why it fails. Do something like:
var that = this;
before you call setInterval and then
that.setState()
You can read more about the this keyword here.
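Alternatively, an arrow function avoids the problem entirely, because arrow functions capture this lexically. A small standalone sketch of the difference:

```javascript
class Ticker {
  constructor() {
    this.count = 0;
  }
  start() {
    // An arrow function captures `this` lexically, so `this.count`
    // refers to the Ticker instance even inside the callback.
    const tick = () => { this.count += 1; };
    tick();
  }
}

const t = new Ticker();
t.start();
console.log(t.count); // 1
```

The same pattern works inside setInterval: pass an arrow function as the callback and this.setState remains available.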
| {
"language": "en",
"url": "https://stackoverflow.com/questions/42542103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Error when plotting using ggplot2: Error in validate_facets(x) : object 'set' not found I want to plot data in R but I keep getting an error related to facet_grid(-set) which is: Error in validate_facets(x) : object 'set' not found
install.packages('Tmisc')
library(Tmisc)
data(quartet)
View(quartet)
quartet %>%
group_by(set) %>%
summarize(mean(x), sd(x), mean(y), sd(y), cor(x,y))
ggplot(quartet,aes(x,y))
+ geom_point()
+ geom_smooth(method=lm,se=FALSE)
+ facet_wrap(-set)
> Error in validate_facets(x) : object 'set' not found
A: Try this code:
install.packages('Tmisc')
library(Tmisc)
data(quartet)
View(quartet)
quartet %>%
 group_by(set) %>%
 summarize(mean(x), sd(x), mean(y), sd(y), cor(x,y))

ggplot(quartet, aes(x, y)) +
 geom_point() +
 geom_smooth(method = lm, se = FALSE) +
 facet_wrap(~set)
You have to use a tilde, not a dash, in front of set, because set is a variable in the quartet dataset; a dash is used to remove a column from a selection.
A: It works with the code:
ggplot(quartet,
aes(x,y)) +
geom_point() +
geom_smooth(method=lm,
se=FALSE) +
facet_wrap(~set)
A: Try installing the tidyverse package; then the command should work, since the ggplot function is part of the tidyverse package.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69810152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to delete data from the list view after checked in android Hi, the issue is that the selected items appear in the list view, but we would like to delete the checked items from the list view, i.e., delete them by position.
Can anybody help us resolve the problem?
Code:
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
super.onActivityResult(requestCode, resultCode, data);
if(data!=null){
Bundle bundle = data.getExtras();
if(requestCode ==1){
selectedConatcts = bundle.getStringArrayList("sel_contacts");
Log.v("", "Selected contacts-->"+selectedConatcts);
if(selectedConatcts.size()<0){
}else{
for(int i =0;i<selectedConatcts.size();i++){
RelativeLayout lnr_inflate = (RelativeLayout)View.inflate(thisActivity, R.layout.contacts_inflate, null);
// EditText edt = (EditText)lnr_inflate.findViewById(R.id.edt_contact);
String selectednames =selectedConatcts.get(i) ;
List<String> stringList = new ArrayList<String>(Arrays.asList(selectednames));
final ListView edt = (ListView)lnr_inflate.findViewById(R.id.edt_contact);
ArrayAdapter<String> adaptercon = new ArrayAdapter<String>(thisActivity, android.R.layout.select_dialog_multichoice,stringList);
edt.setAdapter(adaptercon);
edt.setChoiceMode(ListView.CHOICE_MODE_MULTIPLE);
edt.setOnItemClickListener(new OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> arg0,
View arg1, int arg2, long arg3) {
// TODO Auto-generated method stub
final int len = edt.getCount();
final SparseBooleanArray checked = edt.getCheckedItemPositions();
for(int i =0;i<len;i++){
if (checked.get(i)) {
// selectedContacts.add(names[i]);
// selectedConatcts.get(i);
// selectedConatcts.remove(i);
selectedConatcts.remove(phone_nos[i]);
contactdisp.removeViewAt(i);
//you can you this array list to next activity
// do whatever you want with the checked item
}
// selectedConatcts.get(i);
System.out.println("i m in check button cheked"+selectedConatcts.get(i));
}
selectedConatcts.remove(phone_nos[arg2]);
contactdisp.removeViewAt(arg2);
}
});
contactdisp.addView(lnr_inflate);
}}}}
A: *
*Better to check whether the checkbox is checked or not; write that code under the button click.
*To remove items, use: selectedConatcts.remove(phone_nos[arg2]); contactdisp.removeViewAt(arg2);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29821463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How to do restful ajax routes to methods in Laravel 5? So I have a route that looks like this:
Route::any('some/page', ['as' => 'some-page', 'uses' => 'SomePageController@index']);
However, I also have ajax calls at the same URL (using a request parameter called ajax like: some/page/?ajax=my_action) that I want to hit methods on my controller:
index already routes: 'SomePageController@index'
ajax = my_action needs to route: 'SomePageController@ajaxMyAction'
ajax = my_other_action needs to route: 'SomePageController@ajaxMyOtherAction'
ajax = blah_blah needs to route: 'SomePageController@ajaxBlahBlah
...
What's the elegant solution to setting this up in my routes.php file?
A: After inspection of Laravel's Http Request and Route classes, I found the route() and setAction() methods could be useful.
So I created a middleware to handle this:
<?php namespace App\Http\Middleware;
class Ajax {
public function handle($request, Closure $next)
{
// Looks for the value of request parameter called "ajax"
// to determine controller's method call
if ($request->ajax()) {
$routeAction = $request->route()->getAction();
$ajaxValue = studly_case($request->input("ajax"));
$routeAction['uses'] = str_replace("@index", "@ajax".$ajaxValue, $routeAction['uses']);
$routeAction['controller'] = str_replace("@index", "@ajax".$ajaxValue, $routeAction['controller']);
$request->route()->setAction($routeAction);
}
return $next($request);
}
}
Now my route looks like:
Route::any('some/page/', ['as' => 'some-page', 'middleware'=>'ajax', 'uses' => 'SomePageController@index']);
And correctly hits my controller methods (without disturbing Laravel's normal flow):
<?php namespace App\Http\Controllers;
class SomePageController extends Controller {
public function index()
{
return view('some.page.index');
}
public function ajaxMyAction(Requests\SomeFormRequest $request){
die('Do my action here!');
}
public function ajaxMyOtherAction(Requests\SomeFormRequest $request){
die('Do my other action here!');
}
...
I think this is a fairly clean solution.
A: You can't do this dispatch in the routing layer if you keep the same URL. You have two options:
*
*Use different routes for your AJAX calls. For example, you can prefix all your ajax calls by /api. This is a common way :
Route::group(['prefix' => 'api'], function()
{
Route::get('items', function()
{
//
});
});
*If the only different thing is your response format. You can use a condition in your controller. Laravel provides methods for that, for example :
public function index()
{
$items = ...;
if (Request::ajax()) {
return Response::json($items);
} else {
return View::make('items.index');
}
}
You can read this http://laravel.com/api/5.0/Illuminate/Http/Request.html#method_ajax and this http://laravel.com/docs/5.0/routing#route-groups if you want more details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28970350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How can I scaffold with bootstrap different number of columns? Is there a way, on a fluid layout, to scaffold two rows, one having 3 columns and the other 4, and have them centered on my page?
_________________ __________________ ______________________
______________ _______________ ______________ _____________
The problem is that I use the first and the last grid column to center, so I have 10 columns left and haven't been
able to separate them properly.
A: Have you tried using the offset* classes?
http://twitter.github.io/bootstrap/scaffolding.html
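For illustration, a Bootstrap 2-style sketch (span/offset class names from that era's 12-column fluid grid; exact markup depends on your version). Three span4 columns already fill a row; four span2 columns total 8, so an offset2 on the first one leaves equal 2-column gutters on each side and centers the row:

```html
<div class="row-fluid">
  <div class="span4">col 1</div>
  <div class="span4">col 2</div>
  <div class="span4">col 3</div>
</div>
<div class="row-fluid">
  <!-- offset2 + four span2 columns = 10, leaving 2 on the right: centered -->
  <div class="span2 offset2">col 1</div>
  <div class="span2">col 2</div>
  <div class="span2">col 3</div>
  <div class="span2">col 4</div>
</div>
```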
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16308557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to sort by texture? Now and then the phrase "sort by texture" appears in OpenGL-related posts. While I see how this sort could be implemented for a single texture bound to a certain texture unit, I am not certain how this might be done for multiple textures bound to arbitrary texture units. Perhaps by capturing the state of the TUs into an integer id, then use the id as an index into either an array or a map? How to best do the "sort by texture" in an OpenGL application?
A: Treating texture units as levels in a tree and do a sorted depth-first traversal is a good starting point.
Ideally what you want to do is minimize the number of texture unit state changes, which may mean that the same texture gets selected and deselected multiple times in the optimal case. Your suggestion of sorting the tuples of texture IDs is a good idea as well.
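A language-agnostic sketch of the tuple-sort idea (plain Python, not OpenGL): build a key from each draw call's tuple of bound texture IDs and sort on it, so calls with identical bindings end up adjacent and the bindings change less often.

```python
# Each draw call records the texture IDs bound to its texture units.
draw_calls = [
    {"mesh": "crate",  "textures": (5, 9)},   # (unit0 id, unit1 id)
    {"mesh": "ground", "textures": (2, 9)},
    {"mesh": "barrel", "textures": (5, 9)},
]

# Sorting by the binding tuple groups calls that share the same state.
draw_calls.sort(key=lambda call: call["textures"])
print([c["mesh"] for c in draw_calls])  # ['ground', 'crate', 'barrel']
```

Rendering the sorted list in order means the (5, 9) binding is set once for both crate and barrel instead of twice.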
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23338048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Spread two rows to one with tidyr I've seen this, this and this but still don't know how to solve the following problem within the tidyr::spread() function.
Here's my example data frame:
library(tidyverse)
df <- structure(list(Jaar = c(2014L, 2018L), Gemeente = c("Stichtse Vecht",
"Stichtse Vecht"), GMcode = c("GM1904", "GM1904"), Partij = c("VVD",
"VVD"), Aantal_stemmen = c(4347L, 0L)), .Names = c("Jaar", "Gemeente",
"GMcode", "Partij", "Aantal_stemmen"), row.names = c(NA, -2L), class = c("tbl_df",
"tbl", "data.frame"))
result:
# A tibble: 2 x 5
Jaar Gemeente GMcode Partij Aantal_stemmen
<int> <chr> <chr> <chr> <int>
1 2014 Stichtse Vecht GM1904 VVD 4347
2 2018 Stichtse Vecht GM1904 VVD 0
When I run the following code I don't get the desired one row but two with NA's:
df %>%
rowid_to_column() %>% # Without this in my original dataframe I'll get an error: Error: Duplicate identifiers for rows
spread(Jaar, Aantal_stemmen)
result:
# A tibble: 5,938 x 6
rowid Gemeente GMcode Partij `2014` `2018`
<int> <chr> <chr> <chr> <int> <int>
1 1 Stichtse Vecht GM1904 VVD 4347 NA
2 2 Stichtse Vecht GM1904 VVD NA 0
A: I am not exactly sure what you want, as you didn't provide the desired output. I hope the following is of help to you.
The call to rowid_to_column generates a column with 2 rows. That is what it is intended to do. Dropping it solves your problem:
df %>%
# rowid_to_column() %>%
spread(Jaar, Aantal_stemmen)
which gives
# A tibble: 1 x 5
Gemeente GMcode Partij `2014` `2018`
<chr> <chr> <chr> <int> <int>
1 Stichtse Vecht GM1904 VVD 4347 0
Please let me know whether this is what you want.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49409325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: PyInstaller and EasyOCR I have an OCR GUI application which uses easyocr. When I run the project from PyCharm, it works without any problem. However, when I build the project with PyInstaller into an exe, the other OCR algorithms work but easyOCR terminates without showing any error.
python -m PyInstaller --paths "fullpath-to-custom-libraries" --add-data "C:\Program Files\Tesseract-OCR;Tesseract-OCR" --collect-all easyocr --onedir -w main.py
The following warning messages appear after the PyInstaller command:
13048 INFO: Determining a mapping of distributions to packages...
40136 WARNING: Unable to find package for requirement opencv-python-headless from package easyocr.
40136 WARNING: Unable to find package for requirement Pillow from package easyocr.
40137 WARNING: Unable to find package for requirement scikit-image from package easyocr.
40137 WARNING: Unable to find package for requirement python-bidi from package easyocr.
40137 WARNING: Unable to find package for requirement PyYAML from package easyocr.
40137 INFO: Packages required by easyocr:
['torch', 'torchvision', 'scipy', 'numpy']
The EXE file is generated successfully. But when I distribute the bundle, the application terminates without any error during the easyOCR operation. The other OCRs work.
I noticed that the dist-info folder name and the library folder name differ for the packages below in the "venv" environment. I have many other packages installed, and their library and dist-info folder names match. Could this be causing the warnings in PyInstaller?
cv2
opencv_python_headless-4.5.1.48.dist-info
PIL
Pillow-8.2.0.dist-info
skimage
scikit_image-0.19.2.dist-info
bidi
python_bidi-0.4.2.dist-info
yaml
PyYAML-6.0.dist-info
I cannot get easyOCR through PyInstaller. How do I properly add the libraries required by easyOCR?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71644061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Adding views with controllers to a view with controller - proper way? I have to add several views (each having its own controller) to a main view (with its own controller). I am following MVC. Should the code to add these subviews be written in the view class or the controller class? Also, is this the proper way:
MyViewController1 *myViewController1 = [[MyViewController1 alloc] init];
[myMainViewController.view addSubview:myViewController1.view];
Or, some other way?
There is another option - container view controller (with addChildViewController method), but that is tough to manage, so I need the simple way.
A: If you're adding view controllers to the view of another view controller, then you need to use view controller containment. You can do that in IB with container views. That makes it easier than writing custom container controllers in code.
A: The absolute best way is to organize view controllers according to their functionality (e.g. one might be a dashboard view, another a settings view). To move from one view controller to another, use a navigationController.
The practice I follow is to declare one navigationController in the appDelegate when the app starts and then keep reusing it. Example -
YourAppDelegate *delegate=(YourAppDelegate *)[[UIApplication sharedApplication] delegate];
MyViewController1 *myVC = [[MyViewController1 alloc] initWithNibName:@"MyViewController1" bundle:[NSBundle mainBundle]];
[delegate.navigationController pushViewController:myVC animated:NO];
This is the absolute best way when dealing with view controllers. The navigationController handles a whole lot of stuff, like memory management and caching views to make them snappy. You can keep pushing view controllers and popping them when you exit from them...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14576447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What's wrong with my camera? I'm using VS2010 together with C# and XNA 4.0. I'm following the book "Learning XNA 4.0" by Aaron Reed. I added the camera game component to my project but this camera behaves really weirdly: moving the mouse makes it "jump" and then crash. I don't know what is causing this. The code looks almost identical to that in the book. It worked earlier, but now it doesn't, although I can't remember whether I changed anything in it (I think not). Note that I am only registering this component in the game, nothing more. It should work, but it does not. Any help would be great. Here's the code:
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;
namespace MyGame
{
public class Camera : Microsoft.Xna.Framework.GameComponent
{
public Matrix View { get; protected set; }
public Matrix Projection { get; protected set; }
public Vector3 CameraPosition { get; protected set; }
public Vector3 CameraDirection { get; protected set; }
public Vector3 CameraUp { get; protected set; }
private MouseState prevMouseState;
public Camera(Game game, Vector3 pos, Vector3 target, Vector3 up)
: base(game)
{
// Some initialization of camera.
CameraPosition = pos;
CameraDirection = target - pos;
CameraDirection.Normalize();
CameraUp = up;
CreateLookAt();
Projection = Matrix.CreatePerspectiveFieldOfView(
MathHelper.PiOver4,
(float)Game.Window.ClientBounds.Width /
(float)Game.Window.ClientBounds.Height,
1f, 3000f);
}
public override void Initialize()
{
// Set mouse position and do initial get state.
Mouse.SetPosition(Game.Window.ClientBounds.Width / 2,
Game.Window.ClientBounds.Height / 2);
prevMouseState = Mouse.GetState();
base.Initialize();
}
public override void Update(GameTime gameTime)
{
// Standard WSAD keyboard movement.
if (Keyboard.GetState().IsKeyDown(Keys.W))
CameraPosition += CameraDirection;
if (Keyboard.GetState().IsKeyDown(Keys.S))
CameraPosition -= CameraDirection;
if (Keyboard.GetState( ).IsKeyDown(Keys.A))
CameraPosition += Vector3.Cross(CameraUp, CameraDirection);
if (Keyboard.GetState().IsKeyDown(Keys.D))
CameraPosition -= Vector3.Cross(CameraUp, CameraDirection);
// Yaw rotation (via mouse).
CameraDirection = Vector3.Transform(CameraDirection,
Matrix.CreateFromAxisAngle(CameraUp, (-MathHelper.PiOver4 / 150) *
(Mouse.GetState().X - prevMouseState.X)
));
// Pitch rotation (via mouse).
CameraDirection = Vector3.Transform(CameraDirection,
Matrix.CreateFromAxisAngle(Vector3.Cross(CameraUp, CameraDirection),
(MathHelper.PiOver4 / 100) *
(Mouse.GetState().Y - prevMouseState.Y)
));
CameraUp = Vector3.Transform(CameraUp,
Matrix.CreateFromAxisAngle(Vector3.Cross(CameraUp, CameraDirection),
(MathHelper.PiOver4 / 100) *
(Mouse.GetState().Y - prevMouseState.Y)
));
prevMouseState = Mouse.GetState();
CreateLookAt();
base.Update(gameTime);
}
private void CreateLookAt()
{
View = Matrix.CreateLookAt(CameraPosition,
CameraPosition + CameraDirection, CameraUp);
}
}
}
Can someone please check whether this code works for them? Any ideas would be great!
//EDIT 1:
In game's initialization I use code:
Camera cam = new Camera(this, new Vector3(0, 0, 100), Vector3.Zero, Vector3.Up);
Components.Add(cam);
and then I draw model. I use cam's view and projection matrices in model's BasicEffect.
//EDIT 2:
Something's seriously wrong. I did some debugging and it seems that at some point those Vector3 objects have their X, Y, Z properties set to NaN. I couldn't find out how or when. Also (perhaps this is nothing), after running for some time, gameTime shows that the elapsed game time is ONLY 16 milliseconds (like it skips frames?) and that the game is running slowly. This 16 milliseconds is always the same, no matter when I debug.
Is it possible that I somehow messed up XNA/C# settings? If so, can someone give me any hint what to do now?
A: // Pitch rotation (via mouse).
CameraDirection = Vector3.Transform(CameraDirection,
Matrix.CreateFromAxisAngle(Vector3.Cross(CameraUp, CameraDirection),
(MathHelper.PiOver4 / 100) *
(Mouse.GetState().Y - prevMouseState.Y)
));
CameraUp = Vector3.Transform(CameraUp,
Matrix.CreateFromAxisAngle(Vector3.Cross(CameraUp, CameraDirection),
(MathHelper.PiOver4 / 100) *
(Mouse.GetState().Y - prevMouseState.Y)
));
Whenever you use CreateFromAxisAngle, the first param (the axis) needs to be a unit length vector. If it isn't a unit length vector, it will result in a distorted (skewed) Matrix and one that is over (or under) rotated.
Whenever you cross two vectors, the result is rarely a unit length vector. To make the result of a cross unit length, try this:
Vector3.Transform(CameraDirection, Matrix.CreateFromAxisAngle(Vector3.Normalize(Vector3.Cross(CameraUp, CameraDirection)), ...
by adding the 'Normalize', it makes the result of the cross a unit length vector. Don't forget to do the same to the yaw line as well.
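The underlying math is easy to verify in a few lines (plain Python here, not XNA code): the cross product of two unit vectors has length sin(θ), so it is only unit length when the vectors are perpendicular.

```python
import math

def cross(a, b):
    """Right-handed 3D cross product of two (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def length(v):
    return math.sqrt(sum(c * c for c in v))

up = (0.0, 1.0, 0.0)
# Unit direction 45 degrees between +x and +y: |up x dir| = sin(45 deg)
direction = (math.sqrt(0.5), math.sqrt(0.5), 0.0)
axis = cross(up, direction)
print(length(axis))  # ~0.707 -- not unit length, so normalize before use
```

Feeding that un-normalized axis into CreateFromAxisAngle is exactly what produces the skewed matrices and, over repeated frames, the NaN values.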
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6822037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to use SharedPreferences value to fill EditText? I'd like to know how to fill an EditText field with values from SharedPreferences. My code is as follows:
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab);
fab.setOnClickListener(this);
sharedPreferences = getSharedPreferences(getString(R.string.preference_file_key),MODE_PRIVATE);
textViewCheck();
}
public void textViewCheck()
{
EditText sil_key = (EditText)findViewById(R.id.silent_key);
String silent_mode_key = sil_key.getText().toString();
EditText gen_key = (EditText)findViewById(R.id.general_key);
String general_mode_key = gen_key.getText().toString();
EditText vib_key = (EditText)findViewById(R.id.vibrate_key);
String vibrate_mode_key = vib_key.getText().toString();
SharedPreferences.Editor editor = sharedPreferences.edit();
if((sharedPreferences.contains("silent")) && (sharedPreferences.contains("general")) && (sharedPreferences.contains("vibrate")))
{
sil_key.setText(sharedPreferences.getString("silent","silent1"));
gen_key.setText(sharedPreferences.getString("general","general1"));
vib_key.setText(sharedPreferences.getString("vibrate","vibrate1"));
}
editor.putString("silent",silent_mode_key);
editor.putString("general",general_mode_key);
editor.putString("vibrate",vibrate_mode_key);
editor.apply();
}
I keep getting the same values even after changing them in the EditText fields. I would like to know why this happens and how to overcome it. (I am so sorry for the previous error with the getString() method, I didn't notice that. Please clarify this one.)
A: I think you may be making a mistake when creating the strings
EditText sil_key = (EditText)findViewById(R.id.silent_key);
String silent_mode_key = sil_key.toString();
I think you meant to make a string from the content of the EditText, like this:
String silent_mode_key = sil_key.getText().toString();
try this
EditText sil_key = (EditText)findViewById(R.id.silent_key);
String silent_mode_key = sil_key.getText().toString();
EditText gen_key = (EditText)findViewById(R.id.general_key);
String general_mode_key = gen_key.getText().toString();
EditText vib_key = (EditText)findViewById(R.id.vibrate_key);
String vibrate_mode_key = vib_key.getText().toString();
SharedPreferences.Editor editor = sharedPreferences.edit();
if((sharedPreferences.contains("silent")) && (sharedPreferences.contains("general")) && (sharedPreferences.contains("vibrate")))
{
sil_key.setText(sharedPreferences.getString("silent","silent"));
gen_key.setText(sharedPreferences.getString("general","general"));
vib_key.setText(sharedPreferences.getString("vibrate","vibrate"));
}
editor.putString("silent",silent_mode_key);
editor.putString("general",general_mode_key);
editor.putString("vibrate",vibrate_mode_key);
editor.apply();
A: try to add getText() like below for all the strings:
EditText sil_key = (EditText)findViewById(R.id.silent_key);
String silent_mode_key = sil_key.getText().toString();
A: I think that you have to add this before sharedPreferences.getString
sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37551489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: OCI with long double I'm developing a C++ library with the Oracle OCI library. I'm loading from an Oracle Database 11g database. Here I need to load large values with decimal places. In this case I need to use long double instead of double. I'm not sure whether OCI supports long double.
According to the documentation SQLT_FLT is for float and double only. Can someone let me know whether OCI supports long double and if so how to retrieve such values?
A: Yes and no.
According to the server docs on floats, the 'native' C types support up to double precision (BINARY_DOUBLE). However, the NUMBER type can store:
*
*Positive numbers in the range 1 × 10^-130 to 9.99...9 × 10^125 with up to 38 significant digits
*Negative numbers from -1 × 10^-130 to -9.99...99 × 10^125 with up to 38 significant digits
which is more precision than long double on x86/amd64.
So, you'd need to use that type instead of BINARY_*. Sadly, the developer docs say:
You should not need to use NUMBER as an external datatype. If you do use it, Oracle returns numeric values in its internal 21-byte binary format and will expect this format on input. The following discussion is included for completeness only.
On the other hand, there are also docs on:
*
*using the OCINumber type;
*OCI NUMBER Functions.
AFAICS OCINumberFromReal() supports long double; and if you want even more precision, then you could use OCINumberFromText() with decimal string.
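The precision gap is easy to demonstrate outside OCI; a small sketch using Python's decimal module (illustration only, not OCI code) shows that a 38-significant-digit NUMBER-style value cannot survive a round trip through a C double:

```python
from decimal import Decimal, getcontext

getcontext().prec = 38  # match Oracle NUMBER's 38 significant digits

# A value with 38 significant digits, like Oracle NUMBER can store exactly.
exact = Decimal("12345678901234567890.123456789012345678")

# Round-trip through an IEEE 754 double (roughly 15-17 significant digits).
through_double = Decimal(float(exact))

print(exact)
print(through_double)
print(exact == through_double)  # False: the double lost the low-order digits
```

This is why, past long double territory, the decimal-string route (OCINumberFromText) is the only lossless option.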
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12156403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Android Resolve Host I am getting the following error while trying to access the web-service.
Error in http connection java.net.UnknownHostException: Unable to resolve host "192.168.1.109"
Web-service is working fine, when checked using browser.
I am using AsyncTask to access web-service, so it cannot be a background thread issue.
Missed this permission:
<uses-permission android:name="android.permission.INTERNET" />
as identified in the answer below.
A: Most likely it has to do with the permission you need to grant while running the app. Are you making sure you are setting the network permission in the manifest?
More information or code could help you get better answers though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23204260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Azure Table Storage Not saving Property I have an abstract class that inherits from TableEntity.
public abstract class AzureEntityBase : TableEntity
{
public AzureEntityBase()
{
}
public virtual string TableName
{
get
{
return string.Empty;
}
}
private string ObjectHash { get; set; }
public bool IsBackedUp { get; set; }
}
I then have a class that implements this abstract class
public class DepartmentTotalEntity : AzureEntityBase
{
public override string TableName
{
get
{
return "DepartmentTotals";
}
}
public Int64 SessionDateTimeInteger { get; set; }
public string StoreID { get { return PartitionKey; } set { PartitionKey = value; } }
public string DetailKey { get; set; }
public string RangeString { get; set; }
public string DateStart { get; set; }
public string DateEnd { get; set; }
public Int64 DateStartInt { get; set; }
public Int64 DateEndInt { get; set; }
public string Dept_ID { get; set; }
public string DepartmentDescription { get; set; }
public decimal Quantity { get; set; }
public decimal TotalPrice { get; set; }
}
I am submitting a revised entity to Azure table storage with the IsBackedUp value set to false.
I then have a service that runs on an Azure Compute instance that copies the row from one table storage account to another. The other Azure table storage account is at a different Azure datacenter. After all the rows are copied, I want to limit what I grab when I copy the next round and the IsBackedUp field is supposed to do this.
I run a function that loops over the rows already inserted and checks whether they exist at the destination; if they do exist, it updates the source table row to reflect that it was backed up and writes that to Azure via the following code.
foreach (DepartmentTotalEntity row in CopiedRows)
{
DepartmentTotalEntity find = dst.BrowseSingle(row.PartitionKey, row.RowKey);
if (find != null)
{
row.IsBackedUp = true;
int tmp = src.InsertOrReplace(row);
}
}
The return integer from InsertOrReplace is the HttpStatusCode of the table operation, and it always reads 204. This is expected for a successful write to ATS.
For completeness here is the InsertOrReplaceRow function.
public int InsertOrReplace(DepartmentTotalEntity input)
{
if (input.PartitionKey.IsNull())
{
throw new ArgumentNullException("PartitionKey");
}
if (input.RowKey.IsNull())
{
throw new ArgumentNullException("RowKey");
}
TableOperation replaceOperation = TableOperation.InsertOrReplace(input);
TableResult result = table.Execute(replaceOperation);
return result.HttpStatusCode;
}
The main problem is that the IsBackedUp field is not being updated when the InsertOrReplace command is being called in the third block of code.
Banging my head against a wall here trying to figure out why ATS will not accept my revision.
I can successfully change the value of IsBackedUp using Azure Table Storage Explorer. I have confirmed that the datatype of the column is Boolean.
Any help would be greatly appreciated. Let me know if I have posted enough of the code to be of assistance. The only class that is not posted is the rest of the class that surrounds the last code block. It is over 2000 lines so I omitted it for brevity. That class has the CloudTable, CloudTableClient and CloudStorageAccount variables.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47567651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Weird error when initializing SKView
I have this block of code on my project to practice SpriteKit. But, for some unexpected reason, it gives me this error(compiles fine).
Just for reference, I don't have anything in my HelloScene.h file. I hope this doesn't matter. There's nothing else other than this code, so I can't see why this is happening. Can anyone tell me why? Thanks.
A: Check your storyboard, the view inside your ViewController should be SKView instead of UIView.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24533291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Go create struct from XML Please see this playground: https://play.golang.org/p/FOMWqhjdneg
As you can see, I have an XML response which I want to unmarshal into a Product struct. The XML has an "assetBlock" which contains nested nodes, and I want to extract its data into the Product struct. Any help would be appreciated.
A: You need to make a struct for AssetBlock and all of the types below it, I've done it up to group to show you what I mean:
https://play.golang.org/p/vj_CkneHuLd
type Product struct {
GlobalID string `xml:"globalId"`
Title string `xml:"title"`
ChunkID int `xml:"gpcChunkId"`
AssetBlock assetBlock `xml:"assetBlock"`
}
type assetBlock struct {
Images images `xml:"images"`
}
type images struct {
GroupList groupList `xml:"groupList"`
}
type groupList struct {
Groups []group `xml:"group"`
}
type group struct {
Usage string `xml:"usage"`
Size string `xml:"size"`
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48945851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Forcing UTF8 Format with PHP's XMLReader, DOM and SimpleXML We have a script that parses XML feeds from user generated sources which from time to time contain improperly formated entries with special characters.
While I would normally just run utf8_encode() on the line, I'm not certain how to do this as DOM is progressively reading the file and the error is thrown as the expand command takes place.
Since simple_xml chokes on the code, subsequent lines are also off.
Here's the code.
$z = new XMLReader;
$z->open($filename); $doc = new DOMDocument('1.0','UTF-8');
while ($z->read() && $z->name !== 'product');
while ($z->nodeType == XMLReader::ELEMENT AND $z->name === 'product'){
$producti = simplexml_import_dom($doc->importNode($z->expand(), true));
print_r($producti);
}
Errors:
Message: XMLReader::expand(): foo.xml:29081: parser error : Input is not proper UTF-8, indicate encoding ! Bytes: 0x05 0x20 0x2D 0x35
Severity: Warning
Message: XMLReader::expand(): An Error Occured while expanding
Filename: controllers/feeds.php
Line Number: 106
Message: Argument 1 passed to DOMDocument::importNode() must be an instance of DOMNode, boolean given
Filename: controllers/feeds.php
Line Number: 106
A: Use HTML Tidy library first to clean your string.
Also, I'd rather use DOMDocument instead of XMLReader.
Something like that:
$tidy = new Tidy;
$config = array(
'drop-font-tags' => true,
'drop-proprietary-attributes' => true,
'hide-comments' => true,
'indent' => true,
'logical-emphasis' => true,
'numeric-entities' => true,
'output-xhtml' => true,
'wrap' => 0
);
$tidy->parseString($html, $config, 'utf8');
$tidy->cleanRepair();
$xml = $tidy->value; // Get clear string
$dom = new DOMDocument;
$dom->loadXML($xml);
...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/10169211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Values are showing different results Can anybody explain why the code below is showing different values?
class ABC: UIViewController
{
var Distance : Int!
override func viewDidLoad()
{
super.viewDidLoad()
var obj_A = ABC()
obj_A.Distance = 10
var obj_B = obj_A
obj_A.Distance = 30
print(obj_A.Distance) // 30
print(obj_B.Distance) // 30
var x = 10
let y = x
x = 30
print(x) //30
print(y) //10
}}
Why is the value of obj_B 30 while the value of y is 10?
Thanks.
A: From Apple documentation:
// Reference type example
class C { var data: Int = -1 }
var x = C()
var y = x // x is copied to y
x.data = 42 // changes the instance referred to by x (and y)
println("\(x.data), \(y.data)") // prints "42, 42"
Copying a reference implicitly creates a shared instance. After a copy, two variables then refer to a single instance of the data.
A: This is because primitives are value-based types, and classes are reference-based. You can find a detailed explanation on Apple's blog.
// Value type example
struct S { var data: Int = -1 }
var a = S()
var b = a // a is copied to b
a.data = 42 // Changes a, not b
println("\(a.data), \(b.data)") // prints "42, -1"
// Reference type example
class C { var data: Int = -1 }
var x = C()
var y = x // x is copied to y
x.data = 42 // changes the instance referred to by x (and y)
println("\(x.data), \(y.data)") // prints "42, 42"
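The same split shows up in other languages; a rough Python analogue (not from the Apple docs) mirrors the two printouts above, since assigning a mutable object shares one instance while rebinding a name never touches other names:

```python
# Reference-like behaviour: obj_a and obj_b name the same object,
# so a mutation through one name is visible through the other.
obj_a = {"distance": 10}
obj_b = obj_a           # no copy -- both names refer to one dict
obj_a["distance"] = 30
print(obj_a["distance"], obj_b["distance"])  # 30 30

# Value-like behaviour: rebinding x to a new object leaves y alone.
x = 10
y = x
x = 30
print(x, y)  # 30 10
```

In the Swift question, ABC is a class, so obj_B shares obj_A's instance; the plain Int variables behave like the second half.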
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48009500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Lit-html not found when using polymer build I have a fairly basic lit-html app that works locally when it's not build.
However when I build it using polymer build using the following config:
{
"entrypoint": "index.html",
"shell": "src/school-home.js",
"sources": [
"src/**.js",
"package.json"
],
"extraDependencies": [
"node_modules/@webcomponents/webcomponentsjs/bundles/**"
],
"builds": [
{"preset": "es6-bundled"}
]
}
This results in a successful build but for some reason I keep getting an error:
I just don't get why it doesn't work. This is like the basics of the basics, yet it doesn't get found?
Aside: I use nginx for windows since I want to test E2E with my developed APIs.
An additional issue is that I keep getting CORS error for my API calls even though they are on the exact same location?!!
Please help.
Edit:
My NGINX config:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include cors-settings.conf;
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 8000;
server_name localhost;
location /school {
root /html/ing-school;
try_files $uri $uri/ $uri.html /index.html;
}
location ~ ^/(api|login|logout) {
proxy_pass http://localhost:8080;
proxy_set_header Connection "";
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
}
location /ws {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62473035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: 'Can't find' repo error - missing a repo and can't move past it
I'm new to git/github. A few months ago, I put an old wordpress site as a repo on there, which I've since deleted from my computer. I've had a lot of trouble with github since then. When I try to put a new repo on github, it won't let me, and says it's missing the original repo (see attached pic). I thought this might be a github issue for my account, so I tried creating a brand new github profile from scratch to bypass the error. But even logged into the new github account (with no repos), I still get this error. It's like my computer is stuck on this git issue for this one old repo. I'm using an M1 macbook pro if that matters. Can anyone help?
I have a copy of the wordpress site that it was originally linked to, but it's a different version. I don't know if I need to put this in a specific location and that will make the error go away?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71607997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Radio buttons are circle on PC, but square on mobile I have coded some simple radio buttons (with no styling)
<input onclick="calcAdult()" type="radio" name="age">Adult
<input onclick="calcPup()" type="radio" name="age">Puppy (younger than 1 year)
On pc they show as normal circles, but on mobile they are check boxes, how can I change this?
A: You can customize it.
Here is a simple example:
<!DOCTYPE html>
<html>
<style>
/* The container */
.container {
display: block;
position: relative;
padding-left: 35px;
margin-bottom: 12px;
cursor: pointer;
font-size: 22px;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
/* Hide the browser's default radio button */
.container input {
position: absolute;
opacity: 0;
cursor: pointer;
}
/* Create a custom radio button */
.checkmark {
position: absolute;
top: 0;
left: 0;
height: 25px;
width: 25px;
background-color: #eee;
border-radius: 50%;
}
/* On mouse-over, add a grey background color */
.container:hover input ~ .checkmark {
background-color: #ccc;
}
/* When the radio button is checked, add a blue background */
.container input:checked ~ .checkmark {
background-color: #2196F3;
}
/* Create the indicator (the dot/circle - hidden when not checked) */
.checkmark:after {
content: "";
position: absolute;
display: none;
}
/* Show the indicator (dot/circle) when checked */
.container input:checked ~ .checkmark:after {
display: block;
}
/* Style the indicator (dot/circle) */
.container .checkmark:after {
top: 9px;
left: 9px;
width: 8px;
height: 8px;
border-radius: 50%;
background: white;
}
</style>
<body>
<h1>Custom Radio Buttons</h1>
<label class="container">One
<input type="radio" checked="checked" name="radio">
<span class="checkmark"></span>
</label>
<label class="container">Two
<input type="radio" name="radio">
<span class="checkmark"></span>
</label>
<label class="container">Three
<input type="radio" name="radio">
<span class="checkmark"></span>
</label>
<label class="container">Four
<input type="radio" name="radio">
<span class="checkmark"></span>
</label>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64821737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How to make YouTube videos play from code without having the user switch to that tab I am making a Chrome extension where users can make playlists of songs. When I click on the play button, a new tab should open and the video should start playing, but the YouTube video doesn't play without switching over to that tab. I tried to use the 'canplay' event listener, but it is also only called when I switch over to the tab in which the video is playing. Can somebody help?
console.log("Hello from content script!");
var video = document.getElementsByTagName("video")[0];
if(video) {
video.addEventListener("ended", function() {
console.log("Video ended");
//alert('Ended');
chrome.runtime.sendMessage(
"ended"
);
})
video.addEventListener("canplay", function(event) {
alert("noted");
//video.play();
let playbtn = document.getElementsByClassName('ytp-play-button ytp-button')[0];
playbtn.click();
})
} else {
//console.error("Video element not found");
}
A: Have you tried adding ?autoplay=1 (or &autoplay=1 if the URL already has query parameters) to the end of the YouTube links?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55273352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a way to channel some HTTP server events into a single javascript function? I have a C# webserver which provides some html/js/css and images to a browser-based client.
I create the page dynamically using javascript, filling images and divs while receiving messages from a websocket C# server.
Sometimes, I receive a message from the websocket and I try to access a resource in the webserver which might be protected.
I have some custom http server events for this (say, 406) because otherwise the browser pops up a dialog box for digest auth login.
I was wondering whether there is a way to channel all 406 events into the same function, say
RegisterHttpEventHandler(406,myFunction);
I guess I could just wrap any request/dynamic load in try and catches, although I'd love a cleaner way of doing this.
EDIT
Here is an example of the workflow which I implemented so far, and which works fine.
// Websocket definition
var conn = new WebSocket('ws://' + addressIP);
// Websocket receiver callback
conn.onmessage = function (event) {
// my messages are json-formatted
var mObj = JSON.parse(event.data);
// check if we have something nice
if(mObj.message == "I have got a nice image for you"){
showImageInANiceDiv(mObj.ImageUrl);
}
};
// Dynamic load image
function showImageInANiceDiv(imgUrl )
{
var imgHtml = wrapUrlInErrorHandler(imgUrl);
$("#imageBox").html(imgHtml);
}
// function to wrap everything in an error handler
function wrapUrlInErrorHandler(Url)
{
return '<img src="' + Url + '" onerror="errorHandler(\''+ Url +'\');"/>';
}
// function to handle errors
function errorHandler(imgUrl)
{
console.log("this guy triggered an error : " + imgUrl);
}
//
A:
the onerror does not tell me what failed, so I have to make an XHR just to find it out. That's a minor thing
You could first try the XHR. If it fails, you know what happened, if it succeeds you can display the image from cache (see also here). And of course you also could make it call some globally-defined custom hooks for special status codes - yet you will need to call that manually, there is no pre-defined global event source.
I'd like to have something like a document.onerror handler to avoid using the wrapUrlInErrorHandler function every time I have to put an image in the page
That's impossible. The error event (MDN, MSDN) does not bubble, it fires directly and only on the <img> element. However, you could get around that ugly wrapUrlInErrorHandler if you didn't use inline event attributes, but (traditional) advanced event handling:
function showImageInANiceDiv(imgUrl) {
var img = new Image();
img.src = imgUrl;
img.onerror = errorHandler; // here
$("#imageBox").empty().append(img);
}
// function to handle errors
function errorHandler(event) {
var img = event.srcElement; // or use the "this" keyword
console.log("this guy triggered an "+event.type+": " + img.src);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13934240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Selecting records based on maximum value of another field I am trying to pull a list of distinct city names with their latitudes and longitudes based on the maximum value of the land area field. The best I get is a "SQL: Error correlating fields" error.
SELECT a.primary_city as city ;
, a.state as state_id ;
, SPACE(30) as state_name ;
, a.approximate_latitude as latitude ;
, a.approximate_longitude as longitude ;
FROM citystate a ;
WHERE a.area_land = ;
(SELECT MAX(VAL(b.area_land)) ;
FROM citystate b ;
WHERE (b.primary_city = a.primary_city ;
AND b.state = a.state)) ;
GROUP BY a.primary_city ;
, a.state ;
, a.approximate_latitude ;
, a.approximate_longitude
Not sure this will even work so hoping for some help.
Thanks.
A: Such SQL is not supported in VFP. You can write that as:
SELECT csa.primary_city as city ;
, csa.state as state_id ;
, SPACE(30) as state_name ;
, csa.approximate_latitude as latitude ;
, csa.approximate_longitude as longitude ;
FROM citystate csa ;
INNER JOIN ;
(SELECT primary_city, state, MAX(VAL(area_land)) as maxALand ;
FROM citystate ;
GROUP BY primary_city, state ) csb ;
ON csb.primary_city = csa.primary_city ;
AND csb.state = csa.state ;
WHERE VAL(csa.area_land) = csb.maxALand
A: VFP has no problem at all with 'such SQL', contrary to the pronouncement of master Basoz on the matter.
Normally you should get a type mismatch error because of the comparison of land_area to max(val(land_area)). Fox is nowhere near as eager with implicit conversions as, for example, MS SQL Server, and hence you have to match data types correctly.
However, not knowing what your table structures are it is difficult to offer specific pointers as to where the problem may be. Here are two similar queries using sample data available in VFP (using the freight cost as a stand-in for land_area value):
_VFP.DoCmd("set path to " + set("PATH") + ";" + _SAMPLES)
use Northwind/orders alia nw_orders in sele("nw_orders") shar noup agai
sele o.shipcountry, o.shipcity, o.shipname, o.orderid, o.freight ;
from nw_orders o ;
wher o.freight == ( ;
sele max(x.freight) from nw_orders x wher x.shipcountry == o.shipcountry ) ;
orde by 1 ;
into curs nw_result
brow last nowa
use Tastrade/orders alia tt_orders in sele("tt_orders") shar noup agai
sele o.ship_to_country, o.ship_to_city, o.ship_to_name, o.order_number, o.freight ;
from tt_orders o ;
wher o.freight == ( ;
sele max(x.freight) from tt_orders x wher x.ship_to_country == o.ship_to_country ) ;
orde by 1 ;
into curs tt_result
brow last nowa
Tastrade and Northwind are very similar in structure and contain very similar data; one or the other should be available in your VFP installation. However, neither sample database seems to in the public domain, at least not in the VFP incarnation. That is why I put Northwind first, since it is publicly available in versions for MS SQL Server etc. pp., and dumping its orders table to Fox format should not present insurmountable difficulties.
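The correlated-subquery pattern in the question is standard SQL, so it can also be checked against any other engine; a small sketch against SQLite (with made-up freight data, not the Northwind tables) returns one max-freight row per country:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (shipcountry TEXT, shipcity TEXT, freight REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("UK", "London", 10.5),
    ("UK", "Leeds", 32.0),
    ("FR", "Paris", 21.0),
    ("FR", "Lyon", 7.25),
])

# For each country, keep the order whose freight equals that country's maximum.
rows = con.execute("""
    SELECT o.shipcountry, o.shipcity, o.freight
    FROM orders o
    WHERE o.freight = (SELECT MAX(x.freight)
                       FROM orders x
                       WHERE x.shipcountry = o.shipcountry)
    ORDER BY o.shipcountry
""").fetchall()
print(rows)  # [('FR', 'Paris', 21.0), ('UK', 'Leeds', 32.0)]
```

The only VFP-specific wrinkle in the original question is the type mismatch from comparing a character area_land to MAX(VAL(area_land)); the pattern itself is fine.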
Here's a C# LINQPad script for running the Northwind query via OLEDB:
const string DIR = @"d:\dev\sub\FoxIDA\data\Northwind\";
const string SQL = @"
sele o.shipcountry, o.shipcity, o.shipname, o.orderid, o.freight
from orders o
wher o.freight == (sele max(x.freight) from orders x wher x.shipcountry == o.shipcountry)
orde by 1";
using (var con = new System.Data.OleDb.OleDbConnection("provider=VFPOLEDB;data source=" + DIR))
{
using (var cmd = new System.Data.OleDb.OleDbCommand(SQL, con))
{
con.Open();
cmd.ExecuteReader().Dump();
}
}
Obviously, you'd need to adapt the DIR definition to where the data is located on your machine. Equally obviously you'd have to use a 32-bit version of LINQPad, since there is no 64-bit OLEDB driver for VFP. Here's the result:
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58511479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: jQuery closest() or children() not working jQuery code:
$('.cycle-status-update')
.on('mouseenter', (function(){
$(this).closest('.delete-status-update-link').show();
}))
.on('mouseleave', (function(){
$(this).closest('.delete-status-update-link').hide();
}))
html code :-
.status-partial
[email protected]_updates.latest.limit(5).each do |status_update|
.row-fluid.cycle-status-update{:class => cycle("dark", "light")}
.row-fluid
.span11
= status_update.status
.span1
.delete-status-update-link
%strong= link_to "X", [@user,status_update], :remote => true, :method => :delete, :class => "delete-status-update"
.row-fluid
.time-ago-in-words
="#{time_ago_in_words(status_update.created_at)} ago"
what could be the issue here ?
A: use find()
$(this).find('.delete-status-update-link').show();
A: You have issues with your code: an extra ( before function and an extra ) after it:
$('.cycle-status-update')
.on('mouseenter', (function(){
$(this).closest('.delete-status-update-link').show();
}))//<----here
.on('mouseleave', (function(){ //<--before function
$(this).closest('.delete-status-update-link').hide();
}))
//^----here
try this:
$('.cycle-status-update').on('mouseenter', function(){
$('.delete-status-update-link').show();
}).on('mouseleave', function(){
$('.delete-status-update-link').hide();
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14767379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
} |
Q: Dynamically Populate the linear layout depending upon Markers on the Map I am working on an Android project in which I need to draw a UI something like the one below. I need to divide the Android screen into two halves. In the top half I need to show Google Maps, and in the bottom half I need to show the user information as soon as anyone clicks on the map. The top half is done and working for me.
Problem Statement:-
I need to dynamically keep creating linear layouts in the bottom half depending on how many markers (users) are on the map. Below is a screenshot in which I have created two layouts in the bottom half, assuming two users are on the map. But how can I make this work dynamically, so that if there are three users on the map, three layouts get plotted automatically in the bottom half? I hope I am clear enough.
A: Yes, you can: use a ListView and create a custom adapter. Set your map as the header of the ListView and use the ListView rows for data representation.
Another thing you can do is create the ListView and the map separately, and align your ListView just below the map.
Here I am giving you a simple example; the code below will create the ListView. I have created a custom adapter for your purpose.
public class my_map_activity extends Activity {
ArrayList<String> name = null;
ArrayList<String> gender = null;
ArrayList<String> latitude = null;
ArrayList<String> longitude = null;
Context activity_context = null;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
ListView user_listview = (ListView) findViewById(R.id.user_listview);
name = new ArrayList<String>();
gender = new ArrayList<String>();
latitude = new ArrayList<String>();
longitude = new ArrayList<String>();
for (int i = 0; i < 10; i++) {
name.add ("test user " + i);
gender.add ("Male");
latitude.add (""+ i);
longitude.add (""+ i);
}
custom_adapter list_adapter = new custom_adapter (this, android.R.layout.simple_list_item_1, name, gender, latitude, longitude);
user_listview.setAdapter(list_adapter);
}
}
Here is my main.xml file, which shows how to create that layout. I am using RelativeLayout as my parent layout.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/RelativeLayout1"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="vertical" >
<ImageView
android:id="@+id/imageView1"
android:layout_width="wrap_content"
android:layout_height="250dp"
android:layout_alignParentLeft="true"
android:layout_alignParentRight="true"
android:layout_alignParentTop="true"
android:src="@drawable/ic_launcher" />
<ListView
android:id="@+id/user_listview"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_below="@+id/imageView1" >
</ListView>
</RelativeLayout>
Here I am posting my custom adapter code, which you can use for your ListView.
public class custom_adapter extends ArrayAdapter<String>{
ArrayList<String> name = null;
ArrayList<String> gender = null;
ArrayList<String> latitude = null;
ArrayList<String> longitude = null;
Context activity_context = null;
public custom_adapter(Context context,
int resource,
ArrayList<String> name ,
ArrayList<String> gender,
ArrayList<String> latitude,
ArrayList<String> longitude) {
super(context, resource, name );
this.name = name;
this.gender = gender;
this.latitude = latitude;
this.longitude = longitude;
activity_context = context;
}
static class ViewHolder
{
public TextView txt_name;
public TextView txt_gender;
public TextView txt_latitude;
public TextView txt_longitude;
public ImageView img_view;
}
@Override
public View getView (final int position,
View convertView,
ViewGroup parent)
{
View rowView = convertView;
ViewHolder holder = null;
if (rowView == null) {
LayoutInflater inflater = (LayoutInflater)activity_context.getSystemService(
Context.LAYOUT_INFLATER_SERVICE);
rowView = inflater.inflate(R.layout.row_file, null, false);
holder = new ViewHolder();
holder.txt_name = (TextView) rowView.findViewById(R.id.textView1);
holder.txt_gender = (TextView) rowView.findViewById(R.id.textView2);
holder.txt_latitude = (TextView) rowView.findViewById(R.id.textView3);
holder.txt_longitude = (TextView) rowView.findViewById(R.id.textView4);
holder.img_view = (ImageView) rowView.findViewById(R.id.imageView1);
rowView.setTag(holder);
} else {
holder = (ViewHolder) rowView.getTag();
}
if (holder != null) {
holder.txt_name.setText(name.get(position));
holder.txt_gender.setText(gender.get(position));
holder.txt_latitude.setText(latitude.get(position));
holder.txt_longitude.setText(longitude.get(position));
}
return rowView;
}
}
Here is the row layout, which you inflate in your custom adapter and set on the ListView.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/RelativeLayout1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<ImageView
android:id="@+id/imageView1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:src="@drawable/ic_launcher" android:layout_marginTop="10dp" android:layout_marginLeft="10dp" android:contentDescription="TODO"/>
<TextView
android:id="@+id/textView1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentRight="true"
android:layout_alignParentTop="true"
android:layout_toRightOf="@+id/imageView1"
android:text="Medium Text"
android:textAppearance="?android:attr/textAppearanceMedium" />
<TextView
android:id="@+id/textView2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentRight="true"
android:layout_below="@+id/textView1"
android:layout_toRightOf="@+id/imageView1"
android:text="Medium Text"
android:textAppearance="?android:attr/textAppearanceMedium" />
<TextView
android:id="@+id/textView3"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignLeft="@+id/textView2"
android:layout_alignParentRight="true"
android:layout_below="@+id/textView2"
android:text="Medium Text"
android:textAppearance="?android:attr/textAppearanceMedium" />
<TextView
android:id="@+id/textView4"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignLeft="@+id/textView3"
android:layout_alignParentRight="true"
android:layout_below="@+id/textView3"
android:text="Medium Text"
android:textAppearance="?android:attr/textAppearanceMedium" />
</RelativeLayout>
So I have given you all four files. Here I am explaining what each file does.
*
*my_map_activity: This is the main activity, which contains both the ListView and the map. I am using a big ImageView; instead of that ImageView you can place your map.
*main.xml: This is the layout file, which contains both the ListView and the map.
*custom_adapter: This is the custom adapter I have created for you. It extends ArrayAdapter; it takes data from the ArrayLists, inflates the row layout, and then sets the respective values.
*row_layout: This is the row layout, which contains the user's ImageView, name, gender, and everything else.
So I have created several things for you. If you need any help, you can ask; try to get the concept of each file and how I am using it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11927967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Is this a bug in Power BI? I have several .pbix files where I mostly consume data live from SSAS. Last week I was able to see the model; today I downloaded and installed the December 2022 version, and now I can't see the models anymore. Why? Is this the expected behavior? It seems like a bug...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75218905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I detect in Android if the soft keyboard's delete key has been long-pressed while a particular TextInputLayout field is in focus I have many custom TextInputLayout views with custom TextInputEditText views in the XML.
For some text input fields I need to provide the feature below:
*
*When the field is in focus,
*the soft keyboard should launch;
*when the user presses the delete button for long (approx. 2 seconds),
*it should clear the content in the TextInputEditText.
I tried implementing onKeyPreIme(int keyCode, KeyEvent event) in the Fragment.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74883903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Conditional Left Join I'm trying to figure out how to return columns only when needed.
Data will only exist in the 1_finance_add_details and addresses e tables when t.trans_type = 2.
SELECT t.*,
a.company,
a.house_number,
a.flat_number,
a.house_name,
a.address1,
a.address2,
a.address3,
a.town,
a.county,
a.post_code,
a.country,
f.*,
e.company,
e.house_number,
e.flat_number,
e.house_name,
e.address1,
e.address2,
e.address3,
e.town,
e.county,
e.post_code,
e.country
FROM 1_transactions t
LEFT JOIN addresses a ON a.id=t.fk_addresses_id
LEFT JOIN 1_finance_add_details f ON t.id=f.fk_transactions_id
LEFT JOIN addresses e ON e.id=f.employ_fk_addresses_id
WHERE trans_location=1 AND t.id=19;
A: Not sure if this is the result you want, but you can simply add another condition to your LEFT JOIN, like this:
FROM 1_transactions t
LEFT JOIN addresses a ON a.id=t.fk_addresses_id
LEFT JOIN 1_finance_add_details f ON t.id=f.fk_transactions_id
LEFT JOIN addresses e ON e.id=f.employ_fk_addresses_id AND t.trans_type=2
WHERE trans_location=1 AND t.id=19;
Note: this would render the fields of addresses e NULL on every t with trans_type != 2.
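To make the difference concrete, here is a small, self-contained sketch using Python's sqlite3 module (table and column names are simplified stand-ins for the ones in the question): moving the condition into the ON clause keeps every left-side row and only NULLs the joined columns when the condition fails.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (id INTEGER, trans_type INTEGER, fk_addresses_id INTEGER);
    CREATE TABLE addresses (id INTEGER, town TEXT);
    INSERT INTO transactions VALUES (1, 2, 10), (2, 1, 10);
    INSERT INTO addresses VALUES (10, 'London');
""")

# The extra condition lives in the ON clause, so transaction 2
# (trans_type != 2) still appears -- its address columns are just NULL.
rows = conn.execute("""
    SELECT t.id, a.town
    FROM transactions t
    LEFT JOIN addresses a
        ON a.id = t.fk_addresses_id AND t.trans_type = 2
    ORDER BY t.id
""").fetchall()
# rows -> [(1, 'London'), (2, None)]
```

Had the same condition been placed in the WHERE clause instead, transaction 2 would have been filtered out entirely.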
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21361315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Hibernate enableFilter not working when loading entity by id I have 3 classes:
*
*Event
*PublicEvent extends Event
*PersonalEvent extends Event
My Hibernate mapping file is something like the one below. I want to add one filter for PersonalEvent, and I enable that filter before loading the object, but it is not working. My Hibernate version is 4.3.11.Final.
Event.hbm.xml
<hibernate-mapping>
<class name="org.calendar.Event">
...
<filter name="personEventAuthorize" condition="person_ID = :personId" />
</class>
<joined-subclass name="org.calendar.PersonalEvent" extends="org.calendar.Event">
<key column="id" property-ref="id" />
...
<many-to-one name="person" column="person_ID" entity-name="org.person.Person" />
</joined-subclass>
<joined-subclass name="org.calendar.PublicEvent" extends="org.calendar.Event">
<key column="id" property-ref="id" />
...
</joined-subclass>
<filter-def name="personEventAuthorize">
<filter-param name="personId" type="integer" />
</filter-def>
</hibernate-mapping>
PersonalEventRepository
@Override
public PersonalEvent load(Long id) {
Filter filter = getSession().enableFilter("personEventAuthorize");
filter.setParameter("personId", getAuthenticatedPersonId());
return super.loadById(id);
}
Hibernate generates the SQL query without my filter. What is my problem? Why can Hibernate not enable the filter for the joined-subclass?
Thanks to all...
A: This is not a bug, it's the intended behavior. I updated the Hibernate User Guide to make it more obvious.
The @Filter does not apply when you load the entity directly:
Account account1 = entityManager.find( Account.class, 1L );
Account account2 = entityManager.find( Account.class, 2L );
assertNotNull( account1 );
assertNotNull( account2 );
While it applies if you use an entity query (JPQL, HQL, Criteria API):
Account account1 = entityManager.createQuery(
"select a from Account a where a.id = :id",
Account.class)
.setParameter( "id", 1L )
.getSingleResult();
assertNotNull( account1 );
try {
Account account2 = entityManager.createQuery(
"select a from Account a where a.id = :id",
Account.class)
.setParameter( "id", 2L )
.getSingleResult();
}
catch (NoResultException expected) {
expected.fillInStackTrace();
}
So, as a workaround, use the entity query (JPQL, HQL, Criteria API) to load the entity.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/42173894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Node.js Firestore: accessing a database value with a space I'm new to Node.js and expecting this to be very easy!
In Node.js with Firestore I want to access "display name": James, "Age": 22.
Age has no space, so I can just write
const newValue = doc.data();
const age = newValue.Age;
but display name contains a space.
How do you write this?
const displayName = newValue.displayName;
A: You'll need to use [] notation to get at the property.
So:
const displayName = newValue["display name"];
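As an aside, the same rule holds in Python: a dictionary key that contains a space cannot be reached with attribute-style access and must be read with subscript (bracket) notation. A minimal sketch with invented data:

```python
# A document's fields as a plain dict (hypothetical data).
doc = {"display name": "James", "Age": 22}

age = doc["Age"]                    # works with or without a space in the key
display_name = doc["display name"]  # bracket notation handles the space
```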
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50743471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Control Android LED from shell I have seen several questions (and blog posts, elsewhere) with Java code for controlling the notification LED on an Android device. That's not what I'm looking for.
I'm wondering if there is any way to access the appropriate commands / controls / frameworks from the shell (Perl, Ruby).
What I want, ultimately, is a very simple "heartbeat" pulse - when the device is on and the display is off, blink at me.
Alternatively, if anyone has written a really simple "toy" app that blinks the LED, I'd love to play with it.
A: You can find all the LEDs of your device under
/sys/class/leds/
In my case, I have the following LEDs:
amber
button-backlight
flashlight
green
lcd-backlight
If I have a look inside "green", I see
> cd green
> ls
blink
brightness
currents
device
lut_coefficient
max_brightness
off_timer
power
pwm_coefficient
subsystem
trigger
uevent
These files are interfaces to the kernel module that controls the LEDs. In this case, I think they are char devices. You can use the commands "echo" and "cat" to communicate with the kernel module. Here is an example:
echo 1 > brightness # Turn on led
echo 0 > brightness # Turn off led
For implementing a "heartbeat" pulse as you mentioned, I would have a look at "blink". If you don't want to do reverse engineering, a good entry point to check what happens in the kernel is leds-gpio.c.
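A minimal heartbeat driver along these lines can be sketched in Python. This is an illustration only: the LED path and timing are assumptions, the script must run as root on a real device, and in the sketch a plain file stands in for the sysfs brightness node.

```python
import time
from pathlib import Path

def heartbeat(brightness_path, pulses=3, interval=0.5):
    """Blink an LED by writing 1/0 to a sysfs-style brightness file."""
    led = Path(brightness_path)
    for _ in range(pulses):
        led.write_text("1")   # LED on
        time.sleep(interval)
        led.write_text("0")   # LED off
        time.sleep(interval)

# On a real device (as root), something like:
# heartbeat("/sys/class/leds/green/brightness")
```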
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17201867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How to use palettes in SDL 2 I'm updating a program from SDL 1 to SDL 2 and need to use color palettes. Originally, I used SDL_SetColors(screen, color, 0, intColors); but that doesn't work in SDL 2. I'm trying to use:
SDL_Palette *palette = (SDL_Palette *)malloc(sizeof(color)*intColors);
SDL_SetPaletteColors(palette, color, 0, intColors);
SDL_SetSurfacePalette(surface, palette);
But SDL_SetPaletteColors() returns -1 and fails. SDL_GetError() gives me no information.
How can I make a palette from a SDL_Color and then set it as my surface's palette?
A: It's hard to tell what your variables are and how you intend to use them without seeing your declarations.
Here's how I set up a grayscale palette in SDL_gpu:
SDL_Color colors[256];
int i;
for(i = 0; i < 256; i++)
{
colors[i].r = colors[i].g = colors[i].b = (Uint8)i;
}
#ifdef SDL_GPU_USE_SDL2
SDL_SetPaletteColors(result->format->palette, colors, 0, 256);
#else
SDL_SetPalette(result, SDL_LOGPAL, colors, 0, 256);
#endif
The result SDL_Surface already has a palette because it has an 8-bit pixel depth (see the note in https://wiki.libsdl.org/SDL_Palette).
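The grayscale ramp built in the C loop above can be expressed as a quick Python sketch (pure illustration, no SDL involved): each palette index i maps to the gray level (i, i, i).

```python
# Build a 256-entry grayscale palette: index i -> gray level (i, i, i).
colors = [(i, i, i) for i in range(256)]
```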
A: It has been a while since the OP posted the question, and there has been no accepted answer. I ran into the same issue while trying to migrate an SDL 1.2 based game to 2.0. Here is what I did, hoping it may help others facing a similar issue:
Replace:
SDL_SetColors(screen, color, 0, intColors);
With:
SDL_SetPaletteColors(screen->format->palette, color, 0, intColors);
David
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29609544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Trouble predicting with tensorflow model I've trained a Deep Neural Network on the MNIST dataset. Here is the code for training.
n_classes = 10
batch_size = 100
x = tf.placeholder(tf.float32, [None, 784],name='Xx')
y = tf.placeholder(tf.float32,[None,10],name='Yy')
input = 784
n_nodes_1 = 300
n_nodes_2 = 300
def neural_network_model(data):
variables = {'w1':tf.Variable(tf.random_normal([input,n_nodes_1])),
'w2':tf.Variable(tf.random_normal([n_nodes_1,n_nodes_2])),
'w3':tf.Variable(tf.random_normal([n_nodes_2,n_classes])),
'b1':tf.Variable(tf.random_normal([n_nodes_1])),
'b2':tf.Variable(tf.random_normal([n_nodes_2])),
'b3':tf.Variable(tf.random_normal([n_classes]))}
output1 = tf.add(tf.matmul(data,variables['w1']),variables['b1'])
output2 = tf.nn.relu(output1)
output3 = tf.add(tf.matmul(output2, variables['w2']), variables['b2'])
output4 = tf.nn.relu(output3)
output5 = tf.add(tf.matmul(output4, variables['w3']), variables['b3'],name='last')
return output5
def train_neural_network(x):
prediction = neural_network_model(x)
name_of_final_layer = 'fin'
final = tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction,
labels=y,name=name_of_final_layer)
cost = tf.reduce_mean(final)
optimizer = tf.train.AdamOptimizer().minimize(cost)
hm_epochs = 3
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(hm_epochs):
epoch_loss = 0
for _ in range(int(mnist.train.num_examples/batch_size)):
epoch_x, epoch_y = mnist.train.next_batch(batch_size)
_,c=sess.run([optimizer,cost],feed_dict={x:epoch_x,y:epoch_y})
epoch_loss += c
print("Epoch",epoch+1,"Completed Total Loss:",epoch_loss)
correct = tf.equal(tf.argmax(prediction,1),tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct,'float'))
print('Accuracy on val_set:',accuracy.eval({x:mnist.test.images,y:mnist.test.labels}))
path = saver.save(sess,"net/network")
print("Saved to",path)
Here is my code for evaluating a single datapoint
def eval_neural_network():
with tf.Session() as sess:
new_saver = tf.train.import_meta_graph('net/network.meta')
new_saver.restore(sess, "net/network")
sing = np.reshape(mnist.test.images[0],(-1,784))
output = sess.run([y],feed_dict={x:sing})
print(output)
eval_neural_network()
The error that popped up is :
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Yy' with dtype float and shape [?,10]
[[Node: Yy = Placeholder[dtype=DT_FLOAT, shape=[?,10], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
I've researched this online for multiple days now and still could not get it to work. Any advice?
A: This complete example, based on the TensorFlow GitHub examples, worked for me.
(I modified a few lines of code by removing the name scopes for x and keep_prob and changing to tf.placeholder_with_default. There's probably a better way to do this somewhere.)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import pandas as pd
import argparse
import sys
import tempfile
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
FLAGS = None
def deepnn(x):
"""deepnn builds the graph for a deep net for classifying digits.
Args:
x: an input tensor with the dimensions (N_examples, 784), where 784 is the
number of pixels in a standard MNIST image.
Returns:
A tuple (y, keep_prob). y is a tensor of shape (N_examples, 10), with values
equal to the logits of classifying the digit into one of 10 classes (the
digits 0-9). keep_prob is a scalar placeholder for the probability of
dropout.
"""
# Reshape to use within a convolutional neural net.
# Last dimension is for "features" - there is only one here, since images are
# grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc.
with tf.name_scope('reshape'):
x_image = tf.reshape(x, [-1, 28, 28, 1])
# First convolutional layer - maps one grayscale image to 32 feature maps.
with tf.name_scope('conv1'):
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
# Pooling layer - downsamples by 2X.
with tf.name_scope('pool1'):
h_pool1 = max_pool_2x2(h_conv1)
# Second convolutional layer -- maps 32 feature maps to 64.
with tf.name_scope('conv2'):
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
# Second pooling layer.
with tf.name_scope('pool2'):
h_pool2 = max_pool_2x2(h_conv2)
# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image
# is down to 7x7x64 feature maps -- maps this to 1024 features.
with tf.name_scope('fc1'):
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# Dropout - controls the complexity of the model, prevents co-adaptation of
# features.
keep_prob = tf.placeholder_with_default(1.0,())
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# Map the 1024 features to 10 classes, one for each digit
with tf.name_scope('fc2'):
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
return y_conv, keep_prob
def conv2d(x, W):
"""conv2d returns a 2d convolution layer with full stride."""
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
"""max_pool_2x2 downsamples a feature map by 2X."""
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
def weight_variable(shape):
"""weight_variable generates a weight variable of a given shape."""
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
"""bias_variable generates a bias variable of a given shape."""
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
# Import data
mnist = input_data.read_data_sets("/tmp")
# Create the model
x = tf.placeholder(tf.float32, [None, 784],name='x')
# Define loss and optimizer
y_ = tf.placeholder(tf.int64, [None])
# Build the graph for the deep net
y_conv, keep_prob = deepnn(x)
with tf.name_scope('loss'):
cross_entropy = tf.losses.sparse_softmax_cross_entropy(
labels=y_, logits=y_conv)
cross_entropy = tf.reduce_mean(cross_entropy)
with tf.name_scope('adam_optimizer'):
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
with tf.name_scope('accuracy'):
correct_prediction = tf.equal(tf.argmax(y_conv, 1), y_)
correct_prediction = tf.cast(correct_prediction, tf.float32)
accuracy = tf.reduce_mean(correct_prediction)
graph_location = tempfile.mkdtemp()
print('Saving graph to: %s' % graph_location)
train_writer = tf.summary.FileWriter(graph_location)
train_writer.add_graph(tf.get_default_graph())
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(1000):
batch = mnist.train.next_batch(50)
if i % 100 == 0:
train_accuracy = accuracy.eval(feed_dict={
x: batch[0], y_: batch[1], keep_prob: 1.0})
print('step %d, training accuracy %g' % (i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print('test accuracy %g' % accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
sing = np.reshape(mnist.test.images[0],(-1,784))
output = sess.run(y_conv,feed_dict={x:sing,keep_prob:1.0})
print(tf.argmax(output,1).eval())
saver = tf.train.Saver()
saver.save(sess,"/tmp/network")
Extracting /tmp/train-images-idx3-ubyte.gz
Extracting /tmp/train-labels-idx1-ubyte.gz
Extracting /tmp/t10k-images-idx3-ubyte.gz
Extracting /tmp/t10k-labels-idx1-ubyte.gz
Saving graph to: /tmp/tmp17hf_6c7
step 0, training accuracy 0.2
step 100, training accuracy 0.86
step 200, training accuracy 0.8
step 300, training accuracy 0.94
step 400, training accuracy 0.94
step 500, training accuracy 0.96
step 600, training accuracy 0.88
step 700, training accuracy 0.98
step 800, training accuracy 0.98
step 900, training accuracy 0.98
test accuracy 0.9663
[7]
If you want to restore from a new python run:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import argparse
import sys
import tempfile
from tensorflow.examples.tutorials.mnist import input_data
sess = tf.Session()
saver = tf.train.import_meta_graph('/tmp/network.meta')
saver.restore(sess,tf.train.latest_checkpoint('/tmp'))
graph = tf.get_default_graph()
mnist = input_data.read_data_sets("/tmp")
simg = np.reshape(mnist.test.images[0],(-1,784))
op_to_restore = graph.get_tensor_by_name("fc2/MatMul:0")
x = graph.get_tensor_by_name("x:0")
output = sess.run(op_to_restore,feed_dict= {x:simg})
print("Result = ", np.argmax(output))
A: The losses oscillate like this, but the predictions don't seem to be bad. It works.
It also extracts the MNIST archive repeatedly. Accuracy can also reach 0.98 with a simpler network.
Epoch 1 Completed Total Loss: 47.47844
Accuracy on val_set: 0.8685
Epoch 2 Completed Total Loss: 10.217445
Accuracy on val_set: 0.9
Epoch 3 Completed Total Loss: 14.013474
Accuracy on val_set: 0.9104
[2]
import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data
import numpy as np
import matplotlib.pyplot as plt
n_classes = 10
batch_size = 100
x = tf.placeholder(tf.float32, [None, 784],name='Xx')
y = tf.placeholder(tf.float32,[None,10],name='Yy')
input = 784
n_nodes_1 = 300
n_nodes_2 = 300
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
def neural_network_model(data):
variables = {'w1':tf.Variable(tf.random_normal([input,n_nodes_1])),
'w2':tf.Variable(tf.random_normal([n_nodes_1,n_nodes_2])),
'w3':tf.Variable(tf.random_normal([n_nodes_2,n_classes])),
'b1':tf.Variable(tf.random_normal([n_nodes_1])),
'b2':tf.Variable(tf.random_normal([n_nodes_2])),
'b3':tf.Variable(tf.random_normal([n_classes]))}
output1 = tf.add(tf.matmul(data,variables['w1']),variables['b1'])
output2 = tf.nn.relu(output1)
output3 = tf.add(tf.matmul(output2, variables['w2']), variables['b2'])
output4 = tf.nn.relu(output3)
output5 = tf.add(tf.matmul(output4, variables['w3']), variables['b3'],name='last')
return output5
def train_neural_network(x):
prediction = neural_network_model(x)
name_of_final_layer = 'fin'
final = tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction,
labels=y,name=name_of_final_layer)
cost = tf.reduce_mean(final)
optimizer = tf.train.AdamOptimizer().minimize(cost)
hm_epochs = 3
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(hm_epochs):
for _ in range(int(mnist.train.num_examples/batch_size)):
epoch_x, epoch_y = mnist.train.next_batch(batch_size)
_,c=sess.run([optimizer,cost],feed_dict={x:epoch_x,y:epoch_y})
print("Epoch",epoch+1,"Completed Total Loss:",c)
correct = tf.equal(tf.argmax(prediction,1),tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct,'float'))
print('Accuracy on val_set:',accuracy.eval({x:mnist.test.images,y:mnist.test.labels}))
#path = saver.save(sess,"net/network")
#print("Saved to",path)
return prediction
def eval_neural_network(prediction):
with tf.Session() as sess:
new_saver = tf.train.import_meta_graph('net/network.meta')
new_saver.restore(sess, "net/network")
singleprediction = tf.argmax(prediction, 1)
sing = np.reshape(mnist.test.images[1], (-1, 784))
output = singleprediction.eval(feed_dict={x:sing},session=sess)
digit = mnist.test.images[1].reshape((28, 28))
plt.imshow(digit, cmap='gray')
plt.show()
print(output)
prediction = train_neural_network(x)
eval_neural_network(prediction)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51335512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Azure Table Storage - Entity Design Best Practices Question I'm writing a 'proof of concept' application to investigate the possibility of moving a bespoke ASP.NET ecommerce system over to Windows Azure during a necessary re-write of the entire application.
I'm tempted to look at using Azure Table Storage as an alternative to SQL Azure, as the entities being stored are likely to change their schema (properties) over time as the application matures, and I won't need to make endless database schema changes. In addition, we can build referential integrity into the application code - so the case for considering Azure Table Storage is a strong one.
The only potential issue I can see at this time is that we do a small amount of simple reporting - i.e. value of sales between two dates, number of items sold for a particular product, etc.
I know that Table Storage doesn't support aggregate functions, and I believe we can achieve what we want with clever use of partitions, multiple entity types to store subsets of the same data, and possibly pre-aggregation, but I'm not 100% sure how to go about it.
Does anyone know of any in-depth documents about Azure Table Storage design principles, so that we make proper and efficient use of tables, PartitionKeys, entity design, etc.?
There are a few simplistic documents around, and the current books available tend not to go into this subject in much depth.
FYI - the ecommerce site has about 25,000 customers and takes about 100,000 orders per year.
A: I think there are three potential issues in porting your app to Table Storage.
*
*The lack of reporting - including aggregate functions - which you've already identified
*The limited availability of transaction support - with 100,000 orders per year I think you'll end up missing this support.
*Some problems with costs - $1 per million operations is only a small cost, but you need to factor it in if you get a lot of page views.
Honestly, I think a hybrid approach is best - perhaps EF or NH to SQL Azure for critical data, with large objects stored in Table/Blob storage?
Enough of my opinion! For "in depth":
*
*try the storage team's blog http://blogs.msdn.com/b/windowsazurestorage/ - I've found this very good
*try the PDC sessions from Jai Haridas (couldn't spot a link - but I'm sure they're still there)
*try articles inside Eric's book - http://geekswithblogs.net/iupdateable/archive/2010/06/23/free-96-page-book---windows-azure-platform-articles-from.aspx
*there is some very good best-practice advice on http://azurescope.cloudapp.net/ - but it is somewhat performance oriented
A: Have you seen this post?
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
Pretty thorough coverage of tables.
A: If you have started looking at Azure storage such as Table, it would do no harm to look at other NoSQL offerings on the market (especially document databases). This would give you insight into the NoSQL space and how solutions around such storage are designed.
You can also think about a hybrid approach of SQL DB + a NoSQL solution. Parts of the system may lend themselves very well to the Azure Table storage model.
NoSQL solutions such as Azure Table have their own challenges, such as:
*
*Schema changes for data. Check here and here
*Transactional support
*ACID constraints. Check here
A: All table design papers I have seen are pretty much exclusively focused on the topics of scalability and search performance. I have not seen anything related to design considerations for reporting or BI.
Now, Azure tables are accessible through REST APIs and via the Azure SDK. Depending on what reporting you need, you might be able to pull out the information you require with minimal effort. If your reporting requirements are very sophisticated, then perhaps SQL Azure together with Windows Azure SQL Reporting Services might be a better option to consider.
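For what it's worth, the pre-aggregation idea the question mentions can be sketched in a few lines of Python. The entity shape and key names here are invented for illustration; the point is simply that, without aggregate queries, you maintain running totals at write time, keyed the way a PartitionKey would be:

```python
from collections import defaultdict
from datetime import date

# Running per-day totals, updated when each order is written. The date
# string plays the role of a PartitionKey: one "aggregate entity" per day.
daily_totals = defaultdict(lambda: {"sales": 0.0, "items": 0})

def record_order(order_date: date, amount: float, item_count: int) -> None:
    key = order_date.isoformat()
    daily_totals[key]["sales"] += amount
    daily_totals[key]["items"] += item_count

record_order(date(2011, 4, 14), 25.00, 2)
record_order(date(2011, 4, 14), 10.00, 1)
# daily_totals["2011-04-14"] -> {"sales": 35.0, "items": 3}
```

Reporting "value of sales between two dates" then becomes a cheap range scan over these pre-built daily entities instead of an aggregate query.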
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5662393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: C++ overloading functions including bool and integer arguments A simple and, I guess, easy-to-answer question (if I have not already answered it myself). The functions below are overloaded:
void BR(const bool set) { backwardReaction_[nReac_] = set; }
bool BR(const int reactionNumber) const { return backwardReaction_[reactionNumber]; }
The first function is a setter and the second a getter. backwardReaction_ is of type std::vector<bool>. The problem occurs whenever I want to call the second function: I get a compiler error saying the call of the overloaded function BR(...) is ambiguous.
int main()
.
.
const int i = 3;
bool a = chem.BR(i);
The compiler error is:
chemistryTestProg.cpp: In function ‘int main()’:
chemistryTestProg.cpp:74:34: error: call of overloaded ‘BR(const int&)’ is ambiguous
const bool a = chem.BR(i);
^
In file included from ../../src/gcc/lnInclude/chemistryCalc.hpp:38:0,
from ../../src/gcc/lnInclude/chemistry.hpp:38,
from chemistryTestProg.cpp:35:
../../src/gcc/lnInclude/chemistryData.hpp:266:18: note: candidate: void AFC::ChemistryData::BR(bool)
void BR(const bool);
^~
../../src/gcc/lnInclude/chemistryData.hpp:322:22: note: candidate: bool AFC::ChemistryData::BR(int) const
bool BR(const int) const;
^~
I guess that I get the problem because the types bool and int are treated as closely related (true => int(1), false => int(0)). When I change the getter name to, e.g., bool getBR(const int reactionNumber) {...}, everything works fine. So I guess the problem is about the similar treatment of bool and int within C++. I also tried a variety of different calls, such as:
const bool a = chem.BR(4)
const bool a = chem.BR(int(5))
const bool a = chem.BR(static_cast<const int>(2))
bool a = chem.BR(...)
Thus, I think it is really related to the bool and int overload arguments. Nevertheless, I made a quick search and did not find much about these two overload types and the resulting problems. Tobi
A: This is because you declared BR(int), but not BR(bool), to be const. When you call BR(int) on a non-const object, the compiler has two conflicting matching rules: parameter matching favours BR(int), but const-ness matching favours BR(bool).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60681839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Convert Images to gif using ios I am trying to convert 10 UIImages into .gif format programmatically using iOS Xcode 4.5 and save the result in the gallery, but I'm having no luck. I even tried a question at stackoverflow:
Create and export an animated gif via iOS? I don't understand what variables like __bridge id are used for, or how to pass these images to the method. Please help.
A: Instead of converting the images into a GIF, Apple provides the UIImageView.animationImages property, which animates a number of images one by one, like a GIF. See the code below:
UIImage *statusImage = [UIImage imageNamed:@"status1.png"];
YourImageView = [[UIImageView alloc]
initWithImage:statusImage];
//Add more images which will be used for the animation
YourImageView.animationImages = [NSArray arrayWithObjects:
[UIImage imageNamed:@"status1.png"],
[UIImage imageNamed:@"status2.png"],
[UIImage imageNamed:@"status3.png"],
[UIImage imageNamed:@"status4.png"],
[UIImage imageNamed:@"status5.png"],
[UIImage imageNamed:@"status6.png"],
[UIImage imageNamed:@"status7.png"],
[UIImage imageNamed:@"status8.png"],
[UIImage imageNamed:@"status9.png"],
nil];
//setFrame of postion on imageView
YourImageView.frame = CGRectMake(
self.view.frame.size.width/2
-statusImage.size.width/2,
self.view.frame.size.height/2
-statusImage.size.height/2,
statusImage.size.width,
statusImage.size.height);
YourImageView.center=CGPointMake(self.view.frame.size.width /2, (self.view.frame.size.height)/2);
YourImageView.animationDuration = 1;
[self.view addSubview:YourImageView];
A: First of all, you should check the Apple documentation for your problem; everything is covered there. What you need to do is research and read.
You should try the ImageIO framework. It can convert .png files into CGImageRef objects and then export the CGImageRefs to .gif files.
To save Images to Photo Library :UIImageWriteToSavedPhotosAlbum(imageToSave, nil, nil, nil);
Also Documented in UIKit.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16996690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: PHP use variable to populate associative array I am currently populating an associative array like this:
$plates_data['data'][] = array("drilldown"=>$sImageUrl, "type"=>$Type, "job_no"=>$Job_No, "customer"=>$Customer);
I'd like to be able to do something like this:
$myvar = '"drilldown"=>$sImageUrl, "type"=>$Type, "job_no"=>$Job_No, "customer"=>$Customer';
$plates_data['data'][] = array($myvar);
This doesn't work. Can someone tell me what I'm doing wrong?
A: A string is never evaluated as array syntax, so PHP treats $myvar as one literal value. Build the array itself in the variable instead, like this:
$array_holder = array("drilldown"=>$sImageUrl, "type"=>$Type, "job_no"=>$Job_No,"customer"=>$Customer);
$plates_data['data'][] = $array_holder;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24748318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Discord.PY JSON So I was working on something and weird things are happening. My amounts.json is resetting, and the code isn't adding to amounts.json (e.g. I type the command that should add to the number, and it does nothing to amounts.json).
Code:
@bot.command(pass_context=True)
async def redeem(ctx, key):
with open('amounts.json') as f:
amounts = json.load(f)
id = int(ctx.message.author.id)
if key in amounts:
if amounts[key] < int(2):
await ctx.send("You have been given Buyer role!")
amounts[key] += int(1)
member = ctx.message.author
this_guild = member.guild
role = get(member.guild.roles, name='Buyer')
await member.add_roles(role)
Message = ctx.message
await Message.delete()
await ctx.send("You have been given Buyer role!")
_save()
else:
await ctx.send("Invalid Key!")
else:
await ctx.send("Invalid Key!")
JSON
{"196430670": 0}
A: You are never writing the edited amounts dict back to amounts.json, so the file on disk keeps its old contents.
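A minimal, self-contained sketch of the missing step (the file name and starting contents are taken from the question; the temporary directory exists only so the demo doesn't touch real files): after mutating `amounts`, re-open the file in write mode and `json.dump` it back.

```python
import json
import os
import tempfile

# Demo file standing in for amounts.json, seeded with the question's contents.
path = os.path.join(tempfile.mkdtemp(), "amounts.json")
with open(path, "w") as f:
    json.dump({"196430670": 0}, f)

with open(path) as f:
    amounts = json.load(f)

key = "196430670"
if key in amounts and amounts[key] < 2:
    amounts[key] += 1
    # This write-back is the step the redeem command is missing:
    with open(path, "w") as f:
        json.dump(amounts, f)

with open(path) as f:
    print(json.load(f))  # {'196430670': 1}
```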
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60067229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Run Code Before First View Controller is Initialized (Storyboard-based App) My app needs to perform some cleanup tasks (delete old data stored locally) every time on startup, before the initial view controller is displayed.
This is because that initial view controller loads the existing data when initialized, and displays it inside a table view.
I set up several breakpoints and found out that the initial view controller's initializer (init?(coder aDecoder: NSCoder)) is run before application(_:didFinishLaunchingWithOptions) - even before application(_:willFinishLaunchingWithOptions), to be precise.
I could place the cleanup code at the very top of the view controller's initializer, but I want to decouple the cleanup logic from any particular screen.
That view controller may end up not being the initial view controller one day.
Overriding the app delegate's init() method and putting my cleanup code there does get the job done, but...
Question:
Isn't there a better / more elegant way?
Note: For reference, the execution order of the relevant methods is as
follows:
1. AppDelegate.init()
2. ViewController.init()
3. AppDelegate.application(_:willFinishLaunchingWithOptions:)
4. AppDelegate.application(_:didFinishLaunchingWithOptions:)
5. ViewController.viewDidLoad()
Clarification:
The cleanup task is not lengthy and does not need to run asynchronously: on the contrary, I'd prefer if my initial view controller isn't even instantiated until it completes (I am aware that the system will kill my app if it takes too long to launch; however, it's just deleting some local files, no big deal).
I am asking this question mainly because, before the days of storyboards, app launch code looked like this:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// Override point for customization after application launch.
self.window.backgroundColor = [UIColor whiteColor];
// INITIAL VIEW CONTROLLER GETS INSTANTIATED HERE
// (NOT BEFORE):
MyViewController* controller = [[MyViewController alloc] init];
self.window.rootViewController = controller;
[self.window makeKeyAndVisible];
return YES;
}
...but now, because main storyboard loading happens behind the scenes, the view controller is initialized before any of the app delegate methods run.
I am not looking for a solution that requires to add custom code to my initial view controller.
A: I am not sure if there is a more elegant way, but there are definitely some other ways...
I'd prefer if my initial view
controller isn't even instantiated until it completes
This is not a problem. All you have to do is delete the UIMainStoryboardFile or NSMainNibFile key from the Info.plist, which tells UIApplicationMain what UI should be loaded. Subsequently you run your "cleanup logic" in the AppDelegate, and once you are done you initiate the UI yourself, as you already showed in the example.
An alternative solution would be to create a subclass of UIApplicationMain and run the cleanup in there before the UI is loaded.
A:
1. You can add a UIImageView on your initial ViewController which will contain the splash image of your app.
2. In viewDidLoad(), set the imageview.hidden property to false, do your cleanup operation, and on completion of the cleanup task set the imageview.hidden property to true.
This way the user will be unaware of what job you are doing; this approach is used in many well-known apps.
A: I faced a very similar situation, where I needed to run code that was only ready after didFinishLaunchingNotification. I came up with this pattern, which also works with state restoration:
var finishInitToken: Any?
required init?(coder: NSCoder) {
super.init(coder: coder)
finishInitToken = NotificationCenter.default.addObserver(forName: UIApplication.didFinishLaunchingNotification, object: nil, queue: .main) { [weak self] (_) in
self?.finishInitToken = nil
self?.finishInit()
}
}
func finishInit() {
...
}
override func decodeRestorableState(with coder: NSCoder) {
// This is very important, so the finishInit() that is intended for "clean start"
// is not called
if let t = finishInitToken {
NotificationCenter.default.removeObserver(t)
}
finishInitToken = nil
}
Alternatively, you can observe the corresponding notification for willFinishLaunchingWithOptions.
A: Alternatively, if you want a piece of code to be run before everything, you can override the AppDelegate's init() like this:
@main
class AppDelegate: UIResponder, UIApplicationDelegate {
override init() {
DoSomethingBeforeEverything() // Your code goes here
super.init()
}
...
}
A few points to remember:
1. You might want to check what's allowed/can be done here, i.e. before the app delegate is initialized.
2. A cleaner way would be to subclass the AppDelegate and add new delegate methods that you call from init, say applicationWillInitialize and, if needed, applicationDidInitialize.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35054284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to work with maybe to detect sudoku values I want to form a list of Bools indicating whether values suit a Sudoku format, i.e. either Nothing or Just x where 1 <= x <= 9. Here is my code below:
import Data.Ix
import Data.Maybe
isSudokuValues :: (Ix a, Num a) => [Maybe a] -> [Bool]
isSudokuValues list = map (maybe True inRange(1, 9).fromJust) list
A: It is much easier to break out the actual value check into a helper function that works on a single element. That way you can easily see where you unpack the Maybe value:
isSudokuValues :: (Ix a, Num a) => [Maybe a] -> [Bool]
isSudokuValues =
let
isSudokuValue :: (Ix a, Num a) => Maybe a -> Bool
isSudokuValue Nothing = True
isSudokuValue (Just x) = inRange (1, 9) x
in map isSudokuValue
A: I think you made two errors:
maybe has the following signature:
b -> (a -> b) -> Maybe a -> b
So you should use:
map (maybe True $ inRange (1,9)) list
the fromJust cannot be used, since then maybe would work on an a (instead of a Maybe a); furthermore, it is one of the tasks of maybe to allow safe data processing (such that you don't have to worry about whether the value is Nothing).
Some Haskell users furthermore consider fromJust to be harmful: there is no guarantee that a value is a Just, so even if you manage to make it work with fromJust, it will error on Nothing, since fromJust cannot handle those. Total programming is one of the things most Haskell programmers aim for.
Demo (with ghci):
Prelude Data.Maybe Data.Ix> (map (maybe True $ inRange (1,9))) [Just 1, Just 15, Just 0, Nothing]
[True,False,False,True]
Prelude Data.Maybe Data.Ix> :t (map (maybe True $ inRange (1,9)))
(map (maybe True $ inRange (1,9))) :: (Num a, Ix a) => [Maybe a] -> [Bool]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33032501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to send message to django form Originally I made a script in PHP, and because I put HTML in the .php file it was easy to show a message when adding data to the database was done.
Now I'd like to do the same in Django. I have a simple form and a view that adds the data. When adding is done I want to display a div with some information below the "send" button, so I don't want to load any other template; I want to show the message in the same template as my form.
Is it possible to do this without Ajax? How?
A: Django has inbuilt messaging support for this type of flash messages.
You can add a success message like this in the view:
messages.success(request, 'Profile details updated.')
In your template, you can render it as follows:
{% if messages %}
<ul class="messages">
{% for message in messages %}
<li{% if message.tags %} class="{{ message.tags }}"{% endif %}>{{ message }}</li>
{% endfor %}
</ul>
{% endif %}
This is documented in this link:
https://docs.djangoproject.com/en/1.8/ref/contrib/messages/
A: If you are using class based views, then use the already implemented mixin
from django.contrib.messages.views import SuccessMessageMixin
from django.views.generic.edit import CreateView
from myapp.models import Author
class AuthorCreate(SuccessMessageMixin, CreateView):
model = Author
success_url = '/success/'
success_message = "%(name)s was created successfully"
A: How about passing a variable to the template?
(@ views.py)
return render(request, "some_app/some_template.html", context={
'sent': True
})
and then somewhere in
(@ some_template.html)
{% if sent %}
<div> blah blah</div>
{% endif %}
Edit: fixed typos :/
A: Here's a basic version using a class based view for a contact page. It shows a success message above the original form.
mywebapp/forms.py
### forms.py
from django import forms
class ContactForm(forms.Form):
contact_name = forms.CharField(label='Your Name', max_length=40, required=True)
contact_email = forms.EmailField(label='Your E-Mail Address', required=True)
content = forms.CharField(label='Message', widget=forms.Textarea, required=True)
mywebapp/views.py
### views.py
from django.contrib.messages.views import SuccessMessageMixin
from django.core.mail import EmailMessage
from django.core.urlresolvers import reverse_lazy
from django.template.loader import get_template
from django.views.generic import TemplateView, FormView
from mywebapp.forms import ContactForm
class ContactView(SuccessMessageMixin, FormView):
form_class = ContactForm
success_url = reverse_lazy('contact')
success_message = "Your message was sent successfully."
template_name = 'contact.html'
def form_valid(self, form):
contact_name = form.cleaned_data['contact_name']
contact_email = form.cleaned_data['contact_email']
form_content = form.cleaned_data['content']
to_recipient = ['Jane Doe <[email protected]>']
from_email = 'John Smith <[email protected]>'
subject = 'Website Contact'
template = get_template('contact_template.txt')
context = {
'contact_name': contact_name,
'contact_email': contact_email,
'form_content': form_content
}
content = template.render(context)
email = EmailMessage(
subject,
content,
from_email,
to_recipient,
reply_to=[contact_name + ' <' +contact_email + '>'],
)
email.send()
return super(ContactView, self).form_valid(form)
templates/contact.html
### templates/contact.html
{% extends 'base.html' %}
{% block title %}Contact{% endblock %}
{% block content %}
{% if messages %}
<ul class="messages">
{% for message in messages %}
<li{% if message.tags %} class="{{ message.tags }}"{% endif %}>{{ message }}</li>
{% endfor %}
</ul>
{% endif %}
<form role="form" action="" method="post">
{% csrf_token %}
{{ form.as_p }}
<button type="submit">Submit</button>
</form>
{% endblock %}
templates/contact_template.txt
### templates/contact_template.txt
### used to render the body of the email
Contact Name: {{contact_name}}
Email: {{contact_email}}
Content: {{form_content}}
config/urls.py
### config/urls.py
from mywebapp.views import *
urlpatterns = [
url(r'^about/', AboutView.as_view(), name='about'),
url(r'^contact/$', ContactView.as_view(), name='contact'),
url(r'^$', IndexView.as_view()),
]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31629785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Tablesorter widgets break the pagination Using tablesorter without the widgets, I can set the size attribute in the tablesorterPager and it works well: I can change the table size dynamically using a dropdown with several values, and if I refresh the page it correctly displays the number of rows set with the size attribute. But this behaviour changes if I use the widgets: if I dynamically select a different page size and refresh the page, it doesn't show the default number of rows. This happens simply by including the jquery.tablesorter.widgets.js file, without really using any widgets.
The followings are 2 links that show the behaviour:
http://latoclient.it/file1.html
http://latoclient.it/file2.html
The 2 files are identical: the only difference is that in the file2.html I include also the jquery.tablesorter.widgets.js file.
I'm using the TableSorter (FORK) 2.18.4, pager plugin v2.21.0 and tableSorter 2.16+ widgets - updated 5/28/2014 (v2.17.1), but I have also tried different versions and the result is the same.
The javascript is the following:
<script type="text/javascript">
$(function() {
$("table")
.tablesorter().tablesorterPager({
container: $("#pager"),
size:5
});
});
</script>
so I really don't use the widgets, but their mere inclusion raises the issue.
If you select a value different from 5 in the dropdown below the table and refresh the page with F5, file1.html correctly displays 5 rows, while file2.html seems to cache the previously selected value somewhere. This is unacceptable behaviour for my purposes.
Can you help me to fix this issue, please?
Thank you very much.
A: The issue you are seeing is because the pager savePages option is set to true by default.
In order for that option to work as expected, you need to include the storage widget, contained in the jquery.tablesorter.widgets.js file. Without the storage widget, the pager is not able to save the last user set page into local storage and/or cookies depending on browser support.
If you don't want the page to "remember" the last user set page size, then set that option to false:
$(function() {
$("table")
.tablesorter().tablesorterPager({
container: $("#pager"),
savePages: false,
size:5
});
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33246430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Angular 2 popup form on-click I need a pop-up screen on click in Angular 2. The pop-up screen will include different tabs and a form. I tried angular2-modal, but it only has prompt and alert, and I'm not sure whether it supports custom form creation.
If angular2-modal supports custom forms, how do I do that? Otherwise, is there another package that can achieve this?
My code:
users.component.html
<input type="checkbox" name="scheduler2" (click)="helpWindow($event)">
users.component.ts
helpWindow(event) {
this.modal.alert()
.size('lg')
.isBlocking(true)
.keyboard(27)
.showClose(true)
.title('A simple Alert style modal window')
.body(`
<h4>Alert is a classic (title/body/footer) 1 button modal window that
does not block.</h4>
<b>Configuration:</b>
<ul>
<input type="text" name="text" />
</ul>`)
.open();
}
A: This is angular2-modal's open function:
/**
* Opens a modal window inside an existing component.
* @param content The content to display, either string, template ref
or a component.
* @param config Additional settings.
* @returns {Promise<DialogRef>}
*/
open(content: ContainerContent, config?: OverlayConfig):
Promise<DialogRef<any>> {... }
As you can see, you can provide a reference to a template or a custom modal component.
See the bootstrap demo page for details.
A: You can use Angular 2 Material. It has a nice dialog into which you can inject any component, and that component will show up in the modal. It also comes with lots of nice features like afterClose(subscriber), and you can pass data to its child component.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43584551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Applying user patterns in pytesseract I'm using pytesseract to try to detect a certain pattern of strings in images.
As far as I understand, the correct use of user patterns will help pytesseract scan better for a certain string pattern. However, I can't figure out how to put that to work. This question clarifies that I must use the config argument (pytesseract.pytesseract.image_to_string(image, config='...')), but I didn't get how to apply that to my case.
I'm trying to find this regex pattern: \d{5}\.?\d{5} \.?\d{6} ?\d{5}\.?\d{6} ?\d ?\d{14}. How should I apply that in user patterns to help tesseract produce a better OCR scan?
A: It is a little hard to find.
Yes, user patterns didn't work well in old versions of tesseract.
Refer to this pull request on GitHub.
I finally found an example of how to use user patterns in tesseract. In your circumstances, you could try:
1. First, make sure your tesseract version is >= 4.0. (I recommend installing tesseract 5.x, because that is what I used on my PC.)
2. Create a file called xxx.patterns. The content (with UNIX line endings (line-feed characters) and a blank line at the end):
\d{5}\.?\d{5} \.?\d{6} ?\d{5}\.?\d{6} ?\d ?\d{14}
3. Then try to use:
pytesseract.image_to_string("test.png", config="--user-patterns yourpath/xxx.patterns")
Finally, it worked for me (this is an example from the documentation).
You could also refer to this documentation.
A: This might not be the answer you are looking for, but I faced a similar problem with tesseract a few months ago. You might want to take a look at whitelisting, more specifically, whitelisting all digits. Like this,
pytesseract.image_to_string(question_img, config="-c tessedit_char_whitelist=0123456789. -psm 6")
This however did not work for me, so I ended up using OpenCV kNN. This does mean you need to know where each char is located, though... First I stored some images of the characters I wanted to recognize, and added those detections to a temporary file:
frame[y:y + h, x:x + w].copy().flatten()
After labeling all those detections I trained them using the previously mentioned knn.
network = cv2.ml.KNearest_create()
network.train(data, cv2.ml.ROW_SAMPLE, labels)
network.save('pattern')
Now all chars can be analysed using.
chars = [
frame[y1:y1 + h, x1:x1 + w].copy().flatten(), #char 1
frame[y2:y2 + h, x2:x2 + w].copy().flatten(), #char 2
frame[yn:yn + h, xn:xn + w].copy().flatten(), #char n
]
output = ''
network = cv2.ml.KNearest_create()
network.load('pattern')
for char in chars:
    ret, results, neighbours, dist = network.findNearest(np.array([char.astype(np.float32)]), 3)
    output += '{0}'.format(int(results[0][0]))  # append each recognized label
After this you can just do your regex on your string. Total training and labeling only took me something like 2 hours so should be quite doable.
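Whichever OCR route you take, the regex from the question can also be applied as a plain post-filter on whatever string the OCR returns. A minimal sketch (the OCR text below is made up for illustration; in practice it would come from pytesseract.image_to_string or the kNN output above):

```python
import re

# The pattern from the question, compiled once.
pattern = re.compile(r"\d{5}\.?\d{5} \.?\d{6} ?\d{5}\.?\d{6} ?\d ?\d{14}")

# Hypothetical OCR output with surrounding noise.
ocr_text = "header 23790.12345 .678901 12345.678901 2 12345678901234 footer"

match = pattern.search(ocr_text)
print(match.group(0) if match else "no match")
# -> 23790.12345 .678901 12345.678901 2 12345678901234
```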
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62560122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Meteor load static data once for several templates I'm working with Meteor and FlowRouter. I have a collection of a country's administrative divisions, about 2000 documents. I read this data in several routes, so at the moment I subscribe to the same collection every time I visit one of the routes that uses this data.
This causes slow performance and wastes resources. Given that this collection doesn't change, is there any way to load or subscribe to this data once and have it available for the whole app, or for specific routes?
Maybe saving the data in settings.json and having it available as an object would be better?
Thanks in advance for any help.
A: You need to keep the subscriptions active between routes. You can do this using this package (written by the same author as FlowRouter so it all works nicely together):
https://github.com/kadirahq/subs-manager
Alternatively, create a Meteor method to return the data and save it in your Session. In this case it won't be reactive, so it depends on your needs.
A: Any subscription you make that's external to the routing will be in global scope, which will then mean that data from that subscription is available everywhere. All you need to do is set up the subscription say in the root layout file for your site and then that data will be kept in your local minimongo store at all times.
The Todo list collection in the Todo app example here is an example of this; this is the code from that example:
Tasks = new Mongo.Collection("tasks");
if (Meteor.isServer) {
// This code only runs on the server
Meteor.publish("tasks", function () {
return Tasks.find();
});
}
if (Meteor.isClient) {
// This code only runs on the client
Meteor.subscribe("tasks");
You can then query that local data as you would normally.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35202452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why does the following program show the error that Q and D are not declared? I wrote this program for an FIR filter and used a D flip-flop for the delay. I needed to implement a filter with impulse response h(n) = {1, -1}.
Despite various efforts, it shows the same error: that D and Q aren't defined/declared properly. There was another error saying Q has been illegally redeclared; to fix that, I deleted the line where I declared the Q output register in the second module. Please point out the error and tell me how to fix it.
module firfilter( dout, din, clock);
input din, clock;
output dout;
parameter b0 = 1'd1;
parameter b1 = 1'd1;
assign dout = b0 - b1 * Q;
always@ (posedge clock)
D <= din;
endmodule
module dff ( D, clock, Q);
input D, clock;
output Q;
always@ (posedge clock)
Q <= #(1) D;
endmodule
A: In Verilog, all signals declared in a module are only visible inside that module. You have ports D and Q declared as input and output ports of module dff, which is fine, but you are trying to use D and Q in module firfilter, which doesn't know anything about the D and Q from module dff. What you should do is put an instance of module dff in module firfilter and connect its ports to signals like this:
module firfilter(
input din,
input clock,
output dout
);
parameter b0 = 1'd1;
parameter b1 = 1'd1;
wire Q;
reg D;
// instance of dff module:
dff dff_inst(.D(D), .clock(clock), .Q(Q) );
assign dout = b0 - b1 * Q;
always@ (posedge clock)
D <= din;
endmodule
module dff (
input D,
input clock,
output reg Q
);
always@ (posedge clock)
Q <= #(1) D;
endmodule
Also you need to know that you cannot drive wire signals inside always blocks so I changed them to regs.
Pay more attention to code formatting as your snippet is barely readable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40255099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How can I pass a line from a file as argument to a c program via bash? I have a C program that accepts a string as an argument. The string passed as the argument is a single line of a file; for example, one of my files contains the following:
0410000340000230
1111111111111111
1800400700032050
So far I've written a bash script to automate the work:
#!/bin/bash
read -p "make clean"
make clean
read -p "make"
make
file=$1
while read line; do
echo $line
done < $file
So far, so good. Right now, I want to use the output from the echo $line command as an argument to my program. I've tried:
echo $line | ./program
echo $line > ./program
echo $line < ./program
./program < echo $line
Sometimes it will just give me an error that there are no arguments for my program and at other times it will arrive at a segmentation fault.
Can someone tell me what I'm doing wrong?
The C code of the main program:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#include "functions.h"
int main(int argc, char * argv[])
{
if (argc != 2)
{
printf("Usage: ./Sudoku <Sudoku en une seule ligne. Represantation des cases vides avec 0>\n");
return 0;
}
FILE *f, *g;
int n, n_clauses, ** Sudoku;
n = (int) sqrt((double) strlen(argv[1]));
if (ceil(sqrt((double) strlen(argv[1]))) == sqrt((double) strlen(argv[1])))
{
Sudoku = (int **)malloc(n * sizeof(int *));
for (int i = 0; i < n; i++)
{
Sudoku[i] = (int *)malloc(n * sizeof(int));
}
remplir_sudoku(argv[1], n, Sudoku);
}
else
{
n = (int) sqrt((double) find_sudoku_length(argv[1]));
printf("\n %d\n", n);
Sudoku = (int **)malloc(n * sizeof(int *));
for (int i = 0; i < n; i++)
{
Sudoku[i] = (int *)malloc(n * sizeof(int));
}
remplir_sudoku_v2(argv[1], n, Sudoku);
}
afficher_sudoku(Sudoku, n);
n_clauses = nb_clauses(n);
n_clauses = n_clauses + nb_remplis(Sudoku, n);
f = fopen("CNF", "w");
fprintf(f, "c CNF\n");
fprintf(f, "p cnf %d %d\n", n*n*n, n_clauses);
transformer_en_cnf(f, Sudoku, n);
fclose(f);
return 0;
}
A: The segfault was caused by the input file containing carriage return literals, without the C program being written to handle the case.
The "ambiguous redirect" was caused by passing a filename with spaces without correct quoting. Always quote your expansions: <"$file", not <$file.
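A minimal sketch of both fixes together (the sample file is created on the fly, and printf stands in for the compiled ./program binary): strip the carriage returns, and quote every expansion so each line reaches the program as exactly one argument.

```shell
file=$(mktemp)
# Sample input with a DOS-style line ending on the first line.
printf '0410000340000230\r\n1111111111111111\n' > "$file"

# Strip CRs, then pass each line as a single quoted argument.
tr -d '\r' < "$file" | while IFS= read -r line; do
    printf 'arg=[%s]\n' "$line"    # real use: ./program "$line"
done
rm -f "$file"
```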
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55997237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How to call function expression of javascript from angular7 How to call function expression of JavaScript from angular 7?
My problem is similar to this one.
Here is my test.js:
'use strict';
var val = (function() {
function launch() {
console.log("val");
};
return {
launch: launch
};
})();
I want to call the launch method from Angular TypeScript.
I am trying:
declare var val2: any;
ngOnInit() {
val.launch();
}
A: It's quite hard to understand your question, but I'll have a shot:
The source code in Angular projects is composed of ES modules, i.e. functions and variables are passed from one file to another via imports and exports.
If you want to use val from test.js in some.component.ts, you can do this:
In test.js:
var val = /* ... */
export { val }
In some.component.ts:
import { val } from './test.js';
@Component({...})
export class SomeComponent {
ngOnInit() {
val.launch();
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53946828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: pg_restore not restoring a certain table? In a Django project, I have a model Question in the lucy_web app, but the corresponding lucy_web_question table does not exist, as seen from a \dt command in the database shell:
(lucy-web-CVxkrCFK) bash-3.2$ python manage.py dbshell
psql (10.4)
Type "help" for help.
lucy=> \dt
List of relations
Schema | Name | Type | Owner
--------+------------------------------+-------+---------
public | auditlog_logentry | table | lucyapp
public | auth_group | table | lucyapp
public | auth_group_permissions | table | lucyapp
public | auth_permission | table | lucyapp
public | auth_user | table | lucyapp
public | auth_user_groups | table | lucyapp
public | auth_user_user_permissions | table | lucyapp
public | defender_accessattempt | table | lucyapp
public | django_admin_log | table | lucyapp
public | django_content_type | table | lucyapp
public | django_migrations | table | lucyapp
public | django_session | table | lucyapp
public | lucy_web_checkin | table | lucyapp
public | lucy_web_checkintype | table | lucyapp
public | lucy_web_company | table | lucyapp
public | lucy_web_expert | table | lucyapp
public | lucy_web_expertsessiontype | table | lucyapp
public | lucy_web_family | table | lucyapp
public | lucy_web_lucyguide | table | lucyapp
public | lucy_web_notification | table | lucyapp
public | lucy_web_package | table | lucyapp
public | lucy_web_packagesessiontype | table | lucyapp
public | lucy_web_preactivationfamily | table | lucyapp
public | lucy_web_profile | table | lucyapp
public | lucy_web_questionanswer | table | lucyapp
public | lucy_web_questioncategory | table | lucyapp
public | lucy_web_session | table | lucyapp
public | lucy_web_sessioncategory | table | lucyapp
public | lucy_web_sessiontype | table | lucyapp
public | lucy_web_userapn | table | lucyapp
public | oauth2_provider_accesstoken | table | lucyapp
public | oauth2_provider_application | table | lucyapp
public | oauth2_provider_grant | table | lucyapp
public | oauth2_provider_refreshtoken | table | lucyapp
public | otp_static_staticdevice | table | lucyapp
public | otp_static_statictoken | table | lucyapp
public | otp_totp_totpdevice | table | lucyapp
public | two_factor_phonedevice | table | lucyapp
(38 rows)
We also have a staging environment, deployed on Aptible, which does appear to have these tables. Using the Aptible CLI to create a database tunnel, if I psql <connection_url> and \dt, I do see the lucy_web_question table:
db=# \dt
List of relations
Schema | Name | Type | Owner
--------+------------------------------+-------+---------
public | auditlog_logentry | table | aptible
public | auth_group | table | aptible
public | auth_group_permissions | table | aptible
public | auth_permission | table | aptible
public | auth_user | table | aptible
public | auth_user_groups | table | aptible
public | auth_user_user_permissions | table | aptible
public | defender_accessattempt | table | aptible
public | django_admin_log | table | aptible
public | django_content_type | table | aptible
public | django_migrations | table | aptible
public | django_session | table | aptible
public | lucy_web_checkin | table | aptible
public | lucy_web_checkintype | table | aptible
public | lucy_web_company | table | aptible
public | lucy_web_expert | table | aptible
public | lucy_web_expertsessiontype | table | aptible
public | lucy_web_family | table | aptible
public | lucy_web_lucyguide | table | aptible
public | lucy_web_notification | table | aptible
public | lucy_web_package | table | aptible
public | lucy_web_packagesessiontype | table | aptible
public | lucy_web_preactivationfamily | table | aptible
public | lucy_web_profile | table | aptible
public | lucy_web_question | table | aptible
public | lucy_web_questionanswer | table | aptible
public | lucy_web_questioncategory | table | aptible
public | lucy_web_questionprompt | table | aptible
public | lucy_web_session | table | aptible
public | lucy_web_sessioncategory | table | aptible
public | lucy_web_sessiontype | table | aptible
public | lucy_web_userapn | table | aptible
public | oauth2_provider_accesstoken | table | aptible
public | oauth2_provider_application | table | aptible
public | oauth2_provider_grant | table | aptible
public | oauth2_provider_refreshtoken | table | aptible
public | otp_static_staticdevice | table | aptible
public | otp_static_statictoken | table | aptible
public | otp_totp_totpdevice | table | aptible
public | two_factor_phonedevice | table | aptible
(40 rows)
Because the data on these test environments is not important, I'd like to pg_dump the Aptible database and pg_restore it on my local machine.
My local DATABASE_URL is postgres://lucyapp:<my_password>@localhost/lucy, so firstly, I did a pg_dump with --format=custom and specifying a --file as follows:
Kurts-MacBook-Pro-2:lucy2 kurtpeek$ touch staging_db_12_July.dump
Kurts-MacBook-Pro-2:lucy2 kurtpeek$ pg_dump postgresql://aptible:<some_aptible_hash>@localhost.aptible.in:62288/db --format=custom --file=staging_db_12_July.dump
Kurts-MacBook-Pro-2:lucy2 kurtpeek$ ls -lhtr | tail -1
-rw-r--r-- 1 kurtpeek staff 1.5M Jul 12 18:09 staging_db_12_July.dump
This results in a 1.5 MB .dump file, which I tried to restore from using pg_restore with the --no-owner option and --role=lucyapp (in order to change the owner from aptible to lucyapp). However, this gives rise to a large number of 'already exists' errors, of which one is shown below:
Kurts-MacBook-Pro-2:lucy2 kurtpeek$ pg_restore staging_db_12_July.dump --dbname=lucy --no-owner --role=lucyapp
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3522; 0 0 COMMENT EXTENSION plpgsql
pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension plpgsql
Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
pg_restore: [archiver (db)] Error from TOC entry 2; 3079 16392 EXTENSION hstore
pg_restore: [archiver (db)] could not execute query: ERROR: permission denied to create extension "hstore"
HINT: Must be superuser to create this extension.
Command was: CREATE EXTENSION IF NOT EXISTS hstore WITH SCHEMA public;
pg_restore: [archiver (db)] Error from TOC entry 3523; 0 0 COMMENT EXTENSION hstore
pg_restore: [archiver (db)] could not execute query: ERROR: extension "hstore" does not exist
Command was: COMMENT ON EXTENSION hstore IS 'data type for storing sets of (key, value) pairs';
pg_restore: [archiver (db)] Error from TOC entry 197; 1259 16515 TABLE auditlog_logentry aptible
pg_restore: [archiver (db)] could not execute query: ERROR: relation "auditlog_logentry" already exists
Command was: CREATE TABLE public.auditlog_logentry (
id integer NOT NULL,
object_pk character varying(255) NOT NULL,
object_id bigint,
object_repr text NOT NULL,
action smallint NOT NULL,
changes text NOT NULL,
"timestamp" timestamp with time zone NOT NULL,
actor_id integer,
content_type_id integer NOT NULL,
remote_addr inet,
additional_data jsonb,
CONSTRAINT auditlog_logentry_action_check CHECK ((action >= 0))
);
WARNING: errors ignored on restore: 294
The problem is that if I run \dt again in python manage.py dbshell, I still don't see the lucy_web_question table.
I've come across this solution for my situation (Django : Table doesn't exist), but in my case the Question model is imported and used as a foreign key in so many places that I thought it would be easier just to restore a database. Why is it not restoring the lucy_web_question table, though?
A: It seems the problem was that the lucyapp user did not have sufficient privileges to create the table. I basically had to ensure that the \dn+ command produced this result:
lucy=# \dn+
List of schemas
Name | Owner | Access privileges | Description
--------+----------+----------------------+------------------------
public | postgres | postgres=UC/postgres+| standard public schema
| | =UC/postgres +|
| | lucyapp=UC/postgres |
(1 row)
where lucyapp has both USAGE (U) and CREATE (C) privileges. Following https://www.postgresql.org/docs/9.0/static/sql-grant.html, this can be achieved with the commands
GRANT USAGE ON SCHEMA public TO lucyapp;
GRANT CREATE ON SCHEMA public TO lucyapp;
I also made lucyapp a superuser prior to running these commands, although that is not recommended for production.
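Putting the pieces together, the whole grant-then-restore cycle might look like the sketch below (role, database, and file names are taken from the question; treat this as a sketch, not a verified procedure). The --clean --if-exists flags make pg_restore drop each object before recreating it, which avoids the 'already exists' errors shown above:

```shell
# Privileges to grant first, connected as a superuser (e.g. postgres):
SQL='GRANT USAGE ON SCHEMA public TO lucyapp;
GRANT CREATE ON SCHEMA public TO lucyapp;'
echo "$SQL"
# psql -U postgres -d lucy -c "$SQL"

# Then restore, dropping pre-existing objects instead of erroring on them:
# pg_restore staging_db_12_July.dump --dbname=lucy --no-owner \
#   --role=lucyapp --clean --if-exists
```

Note that --if-exists only works together with --clean; it turns the generated DROP statements into DROP ... IF EXISTS so the restore is quiet on a partially populated database.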
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51316479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: A puzzling result after calling pandas.get_dummies() I get data like this:
train.MSZoning.value_counts()
Out:
RL 1151
RM 218
FV 65
RH 16
C (all) 10
Name: MSZoning, dtype: int64
And I try to label-encode it in this way:
C (all) => 0
Fv => 1
RH => 2
RL => 3
RM => 4
So, if I print value_counts() again, it will look like this:
Out:
0 10
1 65
2 16
3 1151
4 218
And I try to use pandas.get_dummies() like this:
t = pd.get_dummies(train.MSZoning)
print(t)
Out:
C (all) FV RH RL RM
0 0 0 0 1 0
1 0 0 0 1 0
2 0 0 0 1 0
3 0 0 0 1 0
4 0 0 0 1 0
5 0 0 0 1 0
...
And I print pd.DataFrame(t).describe() to get a description of it.
C (all) FV RH RL RM
count 1460.000000 1460.000000 1460.000000 1460.000000 1460.000000
mean 0.006849 0.044521 0.010959 0.788356 0.149315
std 0.082505 0.206319 0.104145 0.408614 0.356521
min 0.000000 0.000000 0.000000 0.000000 0.000000
25% 0.000000 0.000000 0.000000 1.000000 0.000000
50% 0.000000 0.000000 0.000000 1.000000 0.000000
75% 0.000000 0.000000 0.000000 1.000000 0.000000
max 1.000000 1.000000 1.000000 1.000000 1.000000
But when I use pd.get_dummies() in this way, I get something different, which puzzles me:
train.MSZoning = pd.get_dummies(train.MSZoning)
Out:
print(train.MSZoning)
0 1
1 1
2 1
3 1
4 1
5 1
...
train.MSZoning.describe()
Out:
count 1460.000000
mean 0.993151
std 0.082505
min 0.000000
25% 1.000000
50% 1.000000
75% 1.000000
max 1.000000
Name: MSZoning, dtype: float64
I am wondering why I get two different results after calling get_dummies() and then assigning its result?
So if you don't mind, could anyone help me?
Sincerely appreciated.
A: I think you should reconsider this line:
train.MSZoning = pd.get_dummies(train.MSZoning)
You are assigning a DataFrame to a Series.
Not sure what's going on there, but my guess is that this is not your intention.
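To illustrate the point: get_dummies on a Series returns a whole DataFrame (one column per category), so it should be joined back onto the frame rather than assigned to a single column. A minimal sketch with a tiny invented sample (modern pandas versions may raise on the multi-column assignment rather than silently producing odd values):

```python
import pandas as pd

# Tiny stand-in for the question's train DataFrame.
train = pd.DataFrame({"MSZoning": ["RL", "RM", "FV", "RL", "C (all)"]})

# get_dummies on a Series returns a DataFrame with one column per category.
dummies = pd.get_dummies(train["MSZoning"], prefix="MSZoning")

# Join the dummy columns back instead of assigning the whole frame
# to the single MSZoning column.
train = pd.concat([train.drop(columns="MSZoning"), dummies], axis=1)

print(sorted(train.columns))
# ['MSZoning_C (all)', 'MSZoning_FV', 'MSZoning_RL', 'MSZoning_RM']
```

The key difference: `t = pd.get_dummies(...)` keeps the full DataFrame, while `train.MSZoning = pd.get_dummies(...)` tries to squeeze that DataFrame into one pre-existing column, which is why the two printouts disagree.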
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53763162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to access Vue Component from plain vanilla DOM Situation
I'm building a SPA with Vue.
There are pop-up notifications, made with Vue-Noty ( https://github.com/renoguyon/vuejs-noty ).
I have a vuex action, checking session status, if it finds out the session is expired, there will be a noty displayed ("Your Session has expired").
Now I want to add a link ("Login here"), but I'm facing a
Problem
It is possible to add <a> tags to the notification. They work fine too.
But Vue SPAs don't like plain <a> tags, of course...
I need a <router-link>. But that doesn't "just" work; it has to be compiled out of a .vue file to generate an <a> that works for single-page navigation.
Noty CAN handle tags, but if I just add it to the Noty like
this._vm.$noty.error('Your Session has expired, please <router-link to="/login">log in again</router-link>.')
it will of course come out as a literal <router-link> tag, which will not work.
So my next thought was to put an <a> there with an onClick="this.$router.push('/login')"
Of course, this also doesn't work, because there is no this in a plain DOM element.
My next thought was: maybe I can access the Vue router via window.document, but I didn't find a way to do it.
Question
How can I call the $router.push method from outside the Vue component or instance, from the "plain DOM"?
A: You can try:
app.__vue__.$router.push({'name' : 'home'})
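Note that `__vue__` is an undocumented internal back-reference and may break between Vue versions. A more explicit alternative is to expose the router yourself at startup. A minimal plain-Node sketch of the pattern (the `appRouter` name and the stub router object are assumptions for illustration, not Vue API; in a real app you would assign the actual Vue Router instance in main.js):

```javascript
// Stand-in for the Vue Router instance.
// Real code: import router from './router';
const router = {
  current: null,
  push(path) { this.current = path; },  // mimics VueRouter's push(path)
};

// In main.js you would do: window.appRouter = router;
globalThis.appRouter = router;

// Markup injected by Noty can then use a plain onclick handler,
// since appRouter is reachable from the global scope:
const html =
  'Your session has expired, please ' +
  '<a onclick="appRouter.push(\'/login\')">log in again</a>.';

// Simulate the click:
globalThis.appRouter.push('/login');
console.log(globalThis.appRouter.current); // '/login'
```

This keeps the dependency explicit instead of relying on Vue internals, at the cost of one global name.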
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62813688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Setting internal page links including anchors in Sitecore 8 The new design for inserting an internal link (using the General Link field type) in Sitecore 8 doesn't include an Anchor attribute field, but in an older version (i.e. Sitecore 7) this field is available.
Is there a way by which authors can add an anchor to an internal link in Sitecore 8?
Note: I need to add an internal link from one page to another page (with an anchor), not an anchor within the same page.
I have attached screenshots from sitecore 7 and sitecore 8.
Insert link pop up in sitecore 8
Insert link pop up in sitecore 7
A: There is the blue Anchor button up in the left corner and also an Insert anchor button on the field itself:
A: This is a known issue with the Sitecore 8 SPEAK dialog. The temporary workaround is to revert to the previous dialog by commenting out (or better yet <patch:delete />) the following line in the /App_Config/Include/Sitecore.Speak.Applications.config:
<override dialogUrl="/sitecore/shell/Applications/Dialogs/Internal%20link.aspx" with="/sitecore/client/applications/dialogs/InsertLinkViaTreeDialog" />
The corresponding public reference number for this issue is #407189
A: There is a solution to customise the Speak dialog to add this back in here:
https://exlrtglobal.atlassian.net/wiki/display/~Drazen.Janjicek/2016/03/18/Extending+Sitecore+8.1+-+Adding+support+for+anchors+in+internal+links
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33006910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Changing the Folder security permissions using Excel VBA I am trying to write some code to put a text file into a secure folder. The folder has attributes already set to read-only so that the files within are secure and cannot be altered but can still be read.
The FileSystemObject will allow me to use the attribute property which I can set to 1 (read-only) but this is easily overridden.
My next port of call was to GetAclInformation etc.
I downloaded some code and got through a large portion of it, but at GetAclInformation it crashes Excel.
I then continued to look and so used the ADsSecurity dll. This returns an error stating
the ActiveX cannot create the object.
I have downloaded a copy of the dll and put it into the windows\syswow64 directory and then registered it with RegSvr32 which returned a success.
I can add the references required and see the object in the object viewer. But trying both late and early binding has no effect, and it still errors saying the ActiveX cannot create the object.
Does anyone have any other ideas or a suggestion on what to try?
Sub TestApproval()
Dim oSec As New ADsSecurity
Dim oSd As Object, oDac1 As Object, oAce As Object
Set oSec = New ADsSecurity
Set oSd = oSec.GetSecurityDescriptor(CStr("FILE://C:\Test"))
Set oDac1 = oSd.DiscretionaryAcl
For Each oAce In oDac1
Debug.Print oAce.trustee & "|" & oAce.AceType & "|" & oAce.AccessMask & "|" & oAce.AceFlags & "|" & oAce.Flags & "|" & oAce.ObjectType & "|" & oAce.InheritedObjectType
Next oAce
Set oSec = Nothing
Set oSd = Nothing
Set oDac1 = Nothing
End Sub
Thanks in advance :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40403321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |