Q: C# Newbie: Change Entity Framework DataBound DataGrid created with WPF drag-and-drop I'm new to C# and I created a WPF GUI using Visual Studio 2010 Ultimate (Windows 7 x64).
I used drag-and-drop technology to databind a datagrid to an SQL Server 2008 database table using Entity Framework technology.
This all works very well, despite my lack of C# skills.
I was able to hide columns using statements like:
dataGridInputEmails.Columns[0].Visibility = Visibility.Hidden;
And I cleared the datagrid using:
dataGridInputEmails.Columns.Clear();
Now I want to populate the datagrid with a string that I created by reading in a text file and using:
string[] parts = slurp.Split(delimiters,StringSplitOptions.RemoveEmptyEntries);
But, I cannot figure out how to populate the datagrid.
TIA
A: Try this:
dataGridInputEmails.ItemsSource = parts;
You shouldn't have to do dataGridInputEmails.Columns.Clear(); first.
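For reference, here is a fuller sketch of the flow (the file path, delimiters, and explicit column below are assumptions, not from the question). Note that with auto-generated columns a plain string item only exposes its Length property, so defining one column bound to the string itself is one way to show the actual text:
// Minimal sketch -- needs using System.IO, System.Windows.Controls and System.Windows.Data
string slurp = File.ReadAllText(@"C:\temp\emails.txt");   // hypothetical path
char[] delimiters = { '\r', '\n', ';' };                   // hypothetical delimiters
string[] parts = slurp.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);

dataGridInputEmails.AutoGenerateColumns = false;
dataGridInputEmails.Columns.Add(new DataGridTextColumn
{
    Header = "Email",
    IsReadOnly = true,
    Binding = new Binding(".") { Mode = BindingMode.OneWay }  // bind each row to the string itself
});
dataGridInputEmails.ItemsSource = parts;                      // one row per string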
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6556771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Pandas maximum difference of values with day-to-day intervals Continuation of a previous question
df
Timestamp Spot DK1 Spot DK2 Ubalance DK1 Ubalance DK2
0 2020-01-01T00:00:00+01:00 33.42 33.42 34.00 34.00
1 2020-01-01T01:00:00+01:00 31.77 31.77 34.00 34.00
2 2020-01-01T02:00:00+01:00 31.57 31.57 34.00 34.00
3 2020-01-01T03:00:00+01:00 31.28 31.28 34.00 34.00
4 2020-01-01T04:00:00+01:00 30.85 30.85 26.00 26.00
5 2020-01-01T05:00:00+01:00 30.14 30.14 25.00 25.00
6 2020-01-01T06:00:00+01:00 30.17 30.17 24.00 24.00
7 2020-01-01T07:00:00+01:00 30.00 30.00 24.00 24.00
8 2020-01-01T08:00:00+01:00 30.63 30.63 24.00 24.00
9 2020-01-01T09:00:00+01:00 30.59 30.59 25.00 25.00
10 2020-01-01T10:00:00+01:00 30.27 30.27 25.00 25.00
11 2020-01-01T11:00:00+01:00 30.34 30.34 25.00 25.00
12 2020-01-01T12:00:00+01:00 30.59 30.59 30.59 30.59
13 2020-01-01T13:00:00+01:00 30.04 30.04 30.04 30.04
14 2020-01-01T14:00:00+01:00 30.60 30.60 30.60 30.60
15 2020-01-01T15:00:00+01:00 31.09 31.09 31.09 31.09
16 2020-01-01T16:00:00+01:00 31.53 31.53 31.53 31.53
17 2020-01-01T17:00:00+01:00 31.78 31.78 33.50 33.50
18 2020-01-01T18:00:00+01:00 31.64 31.64 33.50 33.50
19 2020-01-01T19:00:00+01:00 31.44 31.44 31.44 31.44
20 2020-01-01T20:00:00+01:00 31.35 31.35 31.35 31.35
21 2020-01-01T21:00:00+01:00 31.07 31.07 31.07 31.07
22 2020-01-01T22:00:00+01:00 30.96 30.96 25.00 25.00
23 2020-01-01T23:00:00+01:00 30.61 30.61 21.00 21.00
24 2020-01-02T00:00:00+01:00 30.78 30.78 20.00 20.00
25 2020-01-02T01:00:00+01:00 30.64 30.64 20.00 20.00
26 2020-01-02T02:00:00+01:00 30.43 30.43 20.00 20.00
27 2020-01-02T03:00:00+01:00 28.79 28.79 23.00 23.00
28 2020-01-02T04:00:00+01:00 28.42 28.42 22.73 22.73
29 2020-01-02T05:00:00+01:00 28.75 28.75 23.00 23.00
30 2020-01-02T06:00:00+01:00 33.38 34.16 22.50 22.50
31 2020-01-02T07:00:00+01:00 31.79 42.07 22.28 22.28
32 2020-01-02T08:00:00+01:00 31.83 44.89 22.50 22.50
33 2020-01-02T09:00:00+01:00 31.74 45.26 23.00 23.00
34 2020-01-02T10:00:00+01:00 31.63 45.57 24.00 24.00
35 2020-01-02T11:00:00+01:00 31.32 45.09 25.00 25.00
36 2020-01-02T12:00:00+01:00 31.07 45.16 25.00 25.00
37 2020-01-02T13:00:00+01:00 31.06 44.90 25.00 25.00
38 2020-01-02T14:00:00+01:00 31.07 44.06 26.00 26.00
39 2020-01-02T15:00:00+01:00 31.26 44.84 26.00 26.00
40 2020-01-02T16:00:00+01:00 31.41 44.40 27.50 27.50
41 2020-01-02T17:00:00+01:00 31.40 46.05 26.00 46.05
42 2020-01-02T18:00:00+01:00 31.10 46.72 26.00 26.00
43 2020-01-02T19:00:00+01:00 30.75 45.26 25.32 25.32
44 2020-01-02T20:00:00+01:00 30.47 39.32 20.25 20.25
45 2020-01-02T21:00:00+01:00 30.10 30.10 16.50 16.50
46 2020-01-02T22:00:00+01:00 29.71 29.71 16.50 16.50
47 2020-01-02T23:00:00+01:00 24.99 24.99 15.00 15.00
48 2020-01-03T00:00:00+01:00 18.93 18.93 15.00 15.00
49 2020-01-03T01:00:00+01:00 9.98 9.98 9.98 9.98
I want to generate a separate series containing the maximum difference of each day's values, preferably with the ability to choose which columns to include.
Therefore, the first value in the series should be 12.5, since the largest and lowest values of the first day are 33.5 and 21.00, respectively.
Expected outcome:
Day Max diff
0 2020-01-01 12.50
1 2020-01-02 31.72
So far, I've tried this - but it gives me a rolling 24-hour interval that crosses days, which is what I'm trying to avoid:
cols = ['Spot DK1','Spot DK2','Ubalance DK1','Ubalance DK2']
N = 24
battery['dif'] = (battery[cols].stack(dropna=False)
.rolling(len(cols) * N)
.agg(lambda x: x.max() - x.min())
.groupby(level=0)
.max())
A: The idea is to aggregate the minimum and maximum values per day, then take the overall max and min from the 'max' and 'min' levels of df1, and finally subtract them to get the new df2:
df['Timestamp'] = pd.to_datetime(df['Timestamp'])
cols = ['Spot DK1','Spot DK2','Ubalance DK1','Ubalance DK2']
df1 = (df.set_index('Timestamp')[cols]
.groupby(pd.Grouper(freq='D', level='Timestamp'))
.agg(['min','max']))
s1 = df1.xs('max', level=1, axis=1).max(axis=1)
s2 = df1.xs('min', level=1, axis=1).min(axis=1)
df2 = s1.sub(s2).rename_axis('Day').reset_index(name='Max diff')
print (df2)
Day Max diff
0 2020-01-01 00:00:00+01:00 13.00
1 2020-01-02 00:00:00+01:00 31.72
2 2020-01-03 00:00:00+01:00 8.95
Details:
print (df1)
Spot DK1 Spot DK2 Ubalance DK1 \
min max min max min max
Timestamp
2020-01-01 00:00:00+01:00 30.00 33.42 30.00 33.42 21.00 34.0
2020-01-02 00:00:00+01:00 24.99 33.38 24.99 46.72 15.00 27.5
2020-01-03 00:00:00+01:00 9.98 18.93 9.98 18.93 9.98 15.0
Ubalance DK2
min max
Timestamp
2020-01-01 00:00:00+01:00 21.00 34.00
2020-01-02 00:00:00+01:00 15.00 46.05
2020-01-03 00:00:00+01:00 9.98 15.00
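To restrict the calculation to a subset of columns, as the question asks, the same idea works with a smaller column list - a minimal sketch, assuming the same df as above:
subset = ['Spot DK1', 'Spot DK2']
g = df.set_index('Timestamp')[subset].groupby(pd.Grouper(freq='D'))
# per-day max across the chosen columns minus per-day min across them
df3 = (g.max().max(axis=1) - g.min().min(axis=1)).rename_axis('Day').reset_index(name='Max diff')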
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69297420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Generic JQuery Function? I'm using Jeditable to edit dozens of fields such as first_name, last_name, birthday, etc.
My problem is that I'm drowning in detail: I keep having to create a new selector such as $('#edit_first_name').editable or $('#edit_birthday').editable to make the field editable, as well as create a bunch of SQL commands specific to each field to insert the values once they're edited.
My question is: Is there a way I can create something generic or OO in jQuery so that I don't have to endlessly create code that essentially does the same thing?
My guess is I can create some "generic" function that will create $('#edit_someField').editable on the fly by feeding some JSON array, which was created by doing a SELECT on all field names I'm interested in. I imagine that's exactly what JQuery plugins do.
Any direction on how I can accomplish this would be much appreciated.
EDIT
One solution I have come up with is to put the table, column name and id in the id value of whatever I want to edit.
For example, if I want to edit the first_name of id=6 on table Person, then in the id I will put <span class="editable" id="Person:first_name:6">myFirstName</span>. When I send the id to my save.php file, I use a preg_split to insert my data into the table.
A: Mark all your editable input fields with the class "editable". (Change to suit.)
$('.editable').each(function() {
$(this).editable('mysaveurl.php');
});
That's all you need for the basic functionality. Obvious improvements can be made, depending on what else you need. For example, if you are using tooltips, stick them in the title attribute in your HTML, and use them when you call editable().
Your actual PHP/Ruby/whatever code on the server is just going to look at the id parameter coming in, and use that to save to the appropriate field in the database.
A: Just use a class to make them all editable, and using their id (or an attribute that you have attached to the tag) you can specify the field you are updating.
Then just pass the field and the value to your DB.
I don't know the plugin you are using,
but, assuming it has some sort of event handling in it...
something like this, maybe?
$(".editable").editable({
complete : function() {
var _what = $(this).attr("id");
var _with = $(this).val();
$.post("yourPostPage.php",{what:_what, where:_where});
}
});
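If the plugin is Jeditable, as in the question, its submitdata option is the hook for sending extra POST parameters - a sketch, assuming ids of the form "Person:first_name:6" from the question's edit and a hypothetical save.php endpoint:
$(".editable").editable("save.php", {
    // submitdata adds extra POST parameters; "this" is the edited element
    submitdata: function () {
        var parts = this.id.split(":");   // ["Person", "first_name", "6"]
        return { table: parts[0], column: parts[1], row_id: parts[2] };
    }
});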
Without knowing more about your environment I wouldn't be able to help further.
A: The jQuery way of assigning some common code to a bunch of objects is to give all the objects the same class so you can just do this:
$(".myEditableFields").editable("enable");
This will call the editable add-on method for all objects in your page with class="myEditableFields".
If you just have a list of IDs, you can put them all in a single selector like this and the editable jQuery method will be called for each of them:
$("#edit_first_name, #edit_birthday, #edit_last_name, #edit_address").editable("enable");
Sometimes the easy way is to create a selector that just identifies all editable fields within a parent div:
$("#editForm input").editable("enable");
This would automatically get all input tags inside the container div with id=editForm.
You can, of course, collect common code into your own function in a traditional procedural programming pattern and just call that when needed, like this:
function initEditables(sel) {
$(sel).editable("enable");
// any other initialization code you want here or event handlers you want to assign
}
And, then just call it like this as needed:
initEditables("#edit_first_name");
To help further, we would need to know more about what you're trying to do in your code and what your HTML looks like.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7521745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: jQuery - Exclude an li from this function? I have a ul list of 8 li's; the last li has the id #search, and I don't want the dropdown applied to it. How can I exclude it? Here's my code:
$(document).ready(function () {
$('#navigation li').hover(function () {
// Show the sub menu
$('ul', this).stop(true,true).slideDown(300);
},
function () {
//hide its submenu
$('ul', this).stop(true,true).slideUp(200);
});
});
Thanks
A: Use jQuery's .not()
$('#navigation li').not("#search").hover(function () {
// Show the sub menu
$('ul', this).stop(true,true).slideDown(300);
},
function () {
//hide its submenu
$('ul', this).stop(true,true).slideUp(200);
});
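The same exclusion can also be written into the selector itself with :not(), which is equivalent here:
$('#navigation li:not(#search)').hover(function () {
// Show the sub menu
$('ul', this).stop(true,true).slideDown(300);
},
function () {
//hide its submenu
$('ul', this).stop(true,true).slideUp(200);
});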
| {
"language": "en",
"url": "https://stackoverflow.com/questions/10010575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Shift a column in R Suppose that I have a dataframe like the following:
library(tidyverse)
df <- tibble(x = c(1,2,3), y = c(4,5,6))
# A tibble: 3 x 2
x y
<dbl> <dbl>
1 1 4
2 2 5
3 3 6
And I would like to shift a column, adding a column like:
# A tibble: 3 x 3
x y shifted_x
<dbl> <dbl> <dbl>
1 1 4 NA
2 2 5 1
3 3 6 2
Basically, I want to work with time series, so I would like to get previous values and use them as features. In Python I know I can do:
for i in range(1,11):
df[f'feature_{i}']=df['sepal_length'].shift(i)
Where df is a pandas Dataframe.
Any equivalent code for doing this in R?
A: An equivalent in the R tidyverse is dplyr::lag. Create the column in mutate and update the object by assigning (<-) back to the same object 'df':
library(dplyr)
df <- df %>%
mutate(shifted_x = lag(x))
or, if we want to use shift, there is shift in data.table:
library(data.table)
setDT(df)[, shifted_x := shift(x)]
Also, if we need to create more than one column, shift can take a vector of values in n
setDT(df)[, paste0('shifted', 1:3) := shift(x, n = 1:3)]
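A direct, loop-based translation of the Python snippet in the question is also possible - a minimal sketch, assuming the original tibble df, column x, and three lags:
for (i in 1:3) {
  # each iteration adds one lagged feature column, like df[f'feature_{i}'] in pandas
  df[[paste0("feature_", i)]] <- dplyr::lag(df$x, i)
}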
A: We can also use the flag function from the collapse package.
library(collapse)
df %>%
ftransform(shifted_x = flag(x))
# # A tibble: 3 x 3
# x y shifted_x
# * <dbl> <dbl> <dbl>
# 1 1 4 NA
# 2 2 5 1
# 3 3 6 2
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69967450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: BCP error "Unable to open BCP host data-file" I've just created a new table in my SQL Server named exporttable.
Now I'm trying to push it out using the bcp command-line tool, but I'm getting the following error:
SQLState = S1000, NativeError = 0 Error = [Microsoft][ODBC Driver 13
for SQL Server]Unable to open BCP host data-file
Here is my command:
C:\Users\Serge>BCP Testing.bdo.Exporttable out "C:\Users\Serge\Desktop" -C -T
Can anyone help?
After trying Shnugo's suggestion to add a filename, I got this error:
SQLState = S0002, NativeError = 208 Error = [Microsoft][ODBC Driver 13
for SQL Server][SQL Server]Invalid object name
'Testing.bdo.ExportTable'. SQLState = 37000, NativeError = 11529 Error
= [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]The metadata could not be determined because every code path results in an error;
see previous errors for some of these. –
A: From the error I take it that the data file cannot be opened:
C:\Users\Serge>BCP Testing.bdo.Exporttable out "C:\Users\Serge\Desktop\MyFile.txt" -C -T
I think you have to add a filename after \Desktop. Desktop is an existing directory and cannot be opened as a file ...
And - btw - it might be necessary to add -S Servername...
UPDATE
Found this here
Whenever I get this message, it's because of one of three things:
1) The path/filename is incorrect (check your typing / spelling)
2) The file does not exist. (make sure the file is where you expect it
to be)
3) The file is already open by some other app. (close the other app to
release the file)
For 1) and 2) - remember that paths are relative to where bcp is
executing. Make sure that bcp.exe can access the file/path from it's
context.
/Kenneth
A: If you are running BCP through xp_cmdshell, run the following-->
xp_cmdshell 'whoami';
GO
--Make sure whatever user value you get back has full access to the file in question
A: Run: EXEC master..xp_cmdshell 'DIR C:\Users\Serge\Desktop', this will show if you have access to the path.
Remember if you are accessing SQL remotely or over a network, the output ie. "C:\Users\Serge\Desktop" will be the C drive on the SQL Server, not your remote PC you are working on.
A: I know this is old, but you also appear to have the schema spelled wrong.
C:\Users\Serge>BCP Testing.bdo.Exporttable out "C:\Users\Serge\Desktop" -C -T
s/b
C:\Users\Serge>BCP Testing.dbo.Exporttable out "C:\Users\Serge\Desktop" -C -T
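Putting the schema and filename fixes together gives something like the line below (the output file and server name are placeholders; note also that bcp's character-format switch is a lowercase -c, while uppercase -C specifies a code page):
C:\Users\Serge>BCP Testing.dbo.Exporttable out "C:\Users\Serge\Desktop\Exporttable.txt" -c -T -S YourServerName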
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39465354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Logtalk : Load a file with camelcase naming on Windows With Logtalk 3.1.2, under OS X and Linux, there is no problem loading a file with a camel-case name, but an exception is thrown on Windows (ERROR: file does not exist).
logtalk_load(mypath(myFileNameInCameCase))
What's wrong?
A: Some backend Prolog compilers, such as SWI-Prolog when running on Windows, down-case file names when expanding file paths into absolute file paths. This caused a failure in the Logtalk compiler when going from the file argument in the compilation and loading predicates to an absolute file path and its components (directory, name, and extension). A workaround has been found and committed to the current git version. Thanks for the bug report.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33387111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Appending javascript value to gridview hyperlink. Any ideas why my code doesn't work? We have a javascript that takes start date and end date, calculates the difference between them and stores the value in a variable called hourDiff.
Here is the relevant part of the JS.
To avoid a scope issue, a global variable called hourDiff is defined at the top of the JavaScript, outside any function:
var hourDiff = 0;
//Calculate the time difference and store the value in hourDiff
hourDiff = endDate - startDate;
The idea is to eventually append hourDiff to the gridview hyperlink control below and pass the name-value pairs as a querystring to another page called Reserve.aspx
<asp:HyperLink ID="siteId" class="js_siteid" style="color:#111" runat="server" navigateurl='<%# String.Format("Reserve.aspx?id={0}&groupsize={1}&facilityFees={2}&extrahour={3}&depoitAmt={4}&cancelAmt={5}&keydeptAmt={6}", Eval("siteId"), Eval("capacity"),Eval("RentalFeeAmount"),Eval("ExtraHourAmount"),Eval("DepositAmount"),Eval("CancellationAmount"),Eval("KeyDepositAmount")) %>' Text='Select' />
Since the hyperlink is not exposed to the JavaScript until after the search button is clicked, I have another script below that is used to append hourDiff to the hyperlink so it gets passed to the other page.
This javascript is placed at the bottom of the html page:
<script type="text/javascript">
var links = document.getElementsByClassName("js_siteid");
if ( links.length > 0 )
{
links[0].onclick = function() {
this.href += ( "&hoursdiff=" + hourDiff );
return true;
}
}
</script>
</body>
</html>
<script type="text/javascript">
var hourDiff = 0;
$(window).load(function () {
$("#txtFromDate").datepicker();
$('#timeStart').timepicker({ showPeriod: true,
onHourShow: OnHourShowCallback,
onMinuteShow: OnMinuteShowCallback
});
$("#txtToDate").datepicker();
$('#timeEnd').timepicker({ showPeriod: true,
onHourShow: OnHourShowCallback,
onMinuteShow: OnMinuteShowCallback
});
function OnHourShowCallback(hour) {
if ((hour > 20) || (hour < 6)) {
return false; // not valid
}
return true; // valid
}
function OnMinuteShowCallback(hour, minute) {
if ((hour == 20) && (minute >= 30)) { return false; } // not valid
if ((hour == 6) && (minute < 30)) { return false; } // not valid
return true; // valid
}
$('#btnSearch').on('click', function () {
var sDate = $("#txtFromDate").val();
var sTime = $("#timeStart").val();
var eDate = $("#txtToDate").val();
var eTime = $("#timeEnd").val();
var startDate = new Date(sDate + " " + sTime).getHours();
var endDate = new Date(eDate + " " + eTime).getHours();
//Calculate the time difference
hourDiff = endDate - startDate;
//alert(hourDiff);
//Check if hour difference is less than 4 hours and show the message accordingly
if (hourDiff < 4) {
var r = false; $($("<div>A mininum of 4 hours is required!</div>")).dialog({ closeOnEscape: false, resizable: false, modal: true, open: function (event, ui) { $(".ui-dialog-titlebar-close").hide(); }, buttons: { Close: function () { r = false; $(this).dialog("close"); } }, close: function () { return r; } });
return false;
}
//Add the check condition if the user is above the 4 hours time frame
if (hourDiff > 4) {
var r = confirm("There may be additional fees for going over the 4 hours!");
if (r == true) { // pressed OK
return true;
} else { // pressed Cancel
return false;
}
}
});
});
</script>
Each time I compile my code and click the Select link, hourDiff always displays the value of 0 rather than the difference between the start date and end date.
Any ideas what I am doing wrong?
Protected Sub ValidateDuration(ByVal sender As Object, ByVal args As ServerValidateEventArgs)
Dim validator As Control = DirectCast(sender, Control)
Dim row As Control = validator.NamingContainer
Dim startHour As Integer = Integer.Parse(DirectCast(row.FindControl("startHour"), DropDownList).SelectedValue)
Dim startMinutes As Integer = Integer.Parse(DirectCast(row.FindControl("startMinutes"), DropDownList).SelectedValue)
Dim startAmPm As String = DirectCast(row.FindControl("startAmPm"), DropDownList).SelectedValue
Select Case startAmPm
Case "AM"
If startHour = 12 Then
startHour = 0
End If
Case "PM"
If startHour <> 12 Then
startHour += 12
End If
Case Else
args.IsValid = True
Return
End Select
Dim endHour As Integer = Integer.Parse(DirectCast(row.FindControl("endHour"), DropDownList).SelectedValue)
Dim endMinutes As Integer = Integer.Parse(DirectCast(row.FindControl("endMinutes"), DropDownList).SelectedValue)
Dim endAmPm As String = DirectCast(row.FindControl("endAmPm"), DropDownList).SelectedValue
Select Case endAmPm
Case "AM"
If endHour = 12 Then
endHour = 0
End If
Case "PM"
If endHour <> 12 Then
endHour += 12
End If
Case Else
args.IsValid = True
Return
End Select
Dim hoursDiff As Integer = endHour - startHour
If endMinutes < startMinutes Then
hoursDiff -= 1
End If
args.IsValid = hoursDiff >= 2
End Sub
A: Update 2015-02-27 ~13:40 EDT: Appending hiddenfield to hyperlink serverside
I'm using GridView1 as I do not know the name of your gridview.
In the GridView1 RowDataBound event (note the addition of "&hourDiff={7}" to the end of the format string and the addition of the hiddenfield value in the parameter list):
Protected Sub GridView1_RowDataBound(sender As Object, e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles GridView1.RowDataBound
    If e.Row.RowType = DataControlRowType.DataRow Then
        ' TryCast returns Nothing if the control is missing or not a HyperLink
        Dim hl As HyperLink = TryCast(e.Row.FindControl("siteId"), HyperLink)
        If hl IsNot Nothing Then
            hl.NavigateUrl = String.Format("Reserve.aspx?id={0}&groupsize={1}" & _
                "&facilityFees={2}&extrahour={3}&depoitAmt={4}&cancelAmt={5}" & _
                "&keydeptAmt={6}&hourDiff={7}",
                DataBinder.Eval(e.Row.DataItem, "siteId"),
                DataBinder.Eval(e.Row.DataItem, "capacity"),
                DataBinder.Eval(e.Row.DataItem, "RentalFeeAmount"),
                DataBinder.Eval(e.Row.DataItem, "ExtraHourAmount"),
                DataBinder.Eval(e.Row.DataItem, "DepositAmount"),
                DataBinder.Eval(e.Row.DataItem, "CancellationAmount"),
                DataBinder.Eval(e.Row.DataItem, "KeyDepositAmount"),
                hf1.Value)
        End If
    End If
End Sub
Problem 1: Each time I compile my code and click the Select link
the Select link in a Gridview is going to cause a postback, and therefore hourDiff is going to be 0 every time, since a postback is going to force all js to be re-evaluated
Problem 2:
Be aware that every control that causes a postback is going to reset your page javascript. One way to get around that is to save to and restore from hidden field controls (<asp:HiddenField ID="hf1" runat="server" ClientIDMode="Static"...>). then you can access like this: $('#hf1').val(3.14); and the value is preserved between postbacks.
Cookies or local storage are other options
Also, is there any reason that the calculation must occur clientside? because you have another problem once you redirect to a new page. Be aware that redirecting, even to the same page, is not a postback.
Update 2015-02-26 14:30 EDT
I cannot see your search code so I'm going to make an assumption that no matter how the user picks a start/end date/time there is a Search button that causes a postback with four fields (assuming asp:TextBox's) of Search data, I'll call them: StartDate, StartTime, EndDate, EndTime.
The Search Button is going to cause a postback which should make the search fields available for use in the code behind.
Build 2 DateTime variables (S, E) and use the DateDiff() function to determine the difference you need.
In your case it would be something like this:
' At class level define the variable:
Dim hourDiff As Long = 0
' In the gridview databinding event do your calculation
Private Sub GridView1_DataBinding(sender As Object, e As System.EventArgs) Handles GridView1.DataBinding
Dim S as New DateTime( <replace with relevant parameters> )
Dim E as New DateTime( <replace with relevant parameters> )
hourDiff = DateDiff(DateInterval.Hour, S, E)
End Sub
' Then in the row databound event append the difference to the hyperlink
Private Sub GridView1_RowDataBound(sender As Object, e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles GridView1.RowDataBound
If e.Row.RowType = DataControlRowType.DataRow Then
Dim hl As HyperLink = TryCast(e.Row.FindControl("siteId"), HyperLink)
If hl IsNot Nothing then
hl.NavigateURL = <build your Hyperlink and Append hourDiff>
End If
End If
End Sub
Amended 2015-02-26 18:06EDT
i would have preferred the option of using hiddenField if you could
explain how to tie hf1 to hourdiff.
Add a hidden field to your page like this:
<asp:HiddenField ID="hf1" runat="server" ClientIDMode="Static">
to js add this:
// define hourDiff
var hourDiff;
// Using jquery place an object reference to the hidden field into hourDiff
hourDiff = $("#hf1");
// Do your time calculations and assign results to hourDiff
hourDiff.val( time_calc_results );
// The above places the results in the hidden field Value property
// which will be available in the code behind as as `hf1.Value` after postback
I am spending more time trying to fix errors on the code you just posted.
Well that's to be expected as it's mostly from memory and is partially pseudo code.
With regards to your latest code, does that mean that I try the javascript
I posted at the top or use both?
The intent was that the search criteria could be selected Clientside or Serverside, the choice is yours. But you have to be careful of Problem 1: you need a way to preserve the calculation between posts. Hence my suggestion for the Hidden Field.
Worst case Scenario: 5+ postbacks
In the worst case scenario you post back every time a user selects a time and date, so at minimum you have 5 postbacks, one for each start and end date and time and then clicking Search. Posting back like this makes it pointless to do the calculations client side. I hope you understand why. Ideally, in this situation the user has made all the selections prior to selecting Search, and then you do the calculation once, server side.
Scenarios 2: 1 Postback
In this scenario you have a js or jQuery picker that allows the user to select times and dates WITHOUT posting back to the server. This is a great way to collect data.
* If Clientside, you need to make sure the calculation is complete and stored in the Hidden Field before you post back
* If Serverside, you need to grab the four fields and do the calculation in the Search button click event or in the Gridview DataBinding event.
In either case you need to modify the Hyperlink href. This could be done client side...maybe. It depends on whether you cause a postback before getting a chance to update the href of the Select link.
To me it looks like you added your own hyperlink to each row. If that is so, then you should be able to modify all the hyperlinks with a bit of jQuery code after you calc hourDiff:
// Assuming hourDiff is defined as
var hourDiff = $('#hf1');
... (other stuff)
// and you calc and assign to hourDiff as this:
hourDiff.val( endDate - startDate);
... (maybe more other stuff)
// then you modify your hyperlinks like this
$( "#GridView1 a.js_siteid" ).each( function( i, e ) {
this.href += "&diff=" + hourDiff.val();
} );
Also, I don't understand:
Dim E as New DateTime( <replace with relevant parameters> )
It's the VB equivalent of what you are doing in the js code with the Date objects. There are over a dozen ways to instantiate one so I just left it up to you to pick one that works for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28727148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Custom legend text on ASP.NET Chart bound by DataBindCrossTable I'm binding an ASP.NET Chart control with DataBindCrossTable and everything works well, except the legend text that is applied.
My table looks like this:
Year Week Value
2015 1 530
2015 2 680
...
2016 1 887
2016 2 991
...
2017 1 990
2017 2 1021
...
I'm binding my Chart control this way:
chrtValuesByWeekByYear.DataBindCrossTable(myTable.Rows, "Year", "Week", "Value", "")
My problem is that the legend text is displayed as "Year - YYYY" (for example "Year - 2015"). How can I just display "YYYY" in the legend?
A: There's plenty of opportunity to configure your Legend and Series, but when you call DataBindCrossTable, you're delegating everything to this method. The only thing you're left with is to overwrite whatever you want after the fact.
So, right after you call DataBindCrossTable, you can for instance, simply do:
foreach (Series s in chrtValuesByWeekByYear.Series)
s.Name = s.Name.Remove(0, 7);
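If renaming the series is undesirable (for example because the names are used elsewhere as keys), setting the LegendText property instead should have the same visual effect - a sketch:
foreach (Series s in chrtValuesByWeekByYear.Series)
    s.LegendText = s.Name.Replace("Year - ", "");   // legend shows "2015", series keeps its name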
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44683785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: A function like: ng-click="alert('Clicked')" For a mockup I need a simple mechanism like
ng-click="alert('Clicked')"
but the code above is not working. Can someone help me? I don't want to touch the controller.
A: Referring to the previous answer, ng-click = "alert('Hello World!')" will work only if $scope.alert points to window.alert, i.e.
$scope.alert = window.alert;
But even that can cause an invocation problem, so the correct approach is:
HTML
<div ng-click = "alert('Hello World!')">Click me</div>
Controller
$scope.alert = function(arg){
alert(arg);
}
A: As far as I know, ng-click works only within your $scope, so you would only be able to call functions defined in your $scope itself. To use alert, you may try accessing the window object, just by using window.alert('Clicked').
EDIT: Thanks to Ibrahim, I forgot. You won't be able to use window.alert unless you define:
$scope.alert = window.alert;
In this case, using alert('Clicked') in your ng-click directive should work. But in the end, this method would not solve your problem of not touching your controller.
A: You can use
onclick="alert('Clicked')"
for debugging purposes
A: As Neeraj mentioned, the accepted answer does not work. Add something like this to your controller to avoid an Illegal Invocation error:
$scope.alert = alert.bind(window);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34899173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Powerbuilder 12 classic version Scc option Unavailable I can't perform any of the source control options in PowerBuilder Classic version 12; they are just greyed out. At startup I get the standard message "Connection to source control established". Objects under the pbl show signs of being out of sync, but I can't use the Get Latest Version option as it's greyed out. We are using Microsoft Team Foundation Server for source control. The operating system is Windows 7 32-bit.
A: Verify that you have this registry key:
HKLM\SOFTWARE\SourceCodeControlProvider\InstalledSCCProviders
I've seen some source control tools either not use it, or remove it, and PowerBuilder looks there for the SCC vendors. If there are none there, then PB won't show the SCC options as available.
A: Another thing to check - Make sure the workspace file (*.pbw) is not read-only. Right click on the file and verify that the *.pbw file does not have the Read-Only attribute checkbox selected.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12327749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Windows Clipboard, Store Image Using MS Forms Data Object - VBA I want to put an image on the Windows clipboard using VBA. I currently do this for text essentially as described here using the DataObject in the MS Forms 2.0 Library.
I want to do the same thing using an image file, putting the image on the clipboard instead of text.
My working code used for text is below. I have tried using the image file path as the variable, but that just stores the path as text. I also tried "SetImage" as a method, but it throws an error, as shown below.
Dim DataObj As MSForms.DataObject
Set DataObj = New MSForms.DataObject
DataObj.SetText str1                 ' works for a string variable
DataObj.SetImage strImageFilePath    ' error: method not available
DataObj.PutInClipboard
| {
"language": "en",
"url": "https://stackoverflow.com/questions/42676272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: AWS Connect - Getting permission error on Wisdom:getContent I am implementing Connect Wisdom in my CCP UI. I am able to search for content and suggestions appear for documents I had previously uploaded but when i click on a suggestion it fails to load and upon checking the api call I see the error:
{"errorCode":"AccessDeniedException","message":"User: arn:aws:sts::xxxxxxxxx:assumed-role/AWSServiceRoleForAmazonConnect_Of0Wxxxxxxba/_<connect-username> is not authorized to perform: wisdom:GetContent on resource: arn:aws:wisdom:us-east-1:xxxxxxxxx:content/xxxxx-xxxxx-xxxxx-xxxx-xxxxx/xxxxx-xxxxx-xxxxx-xxxx-xxxxx"}
It being an assumed role and set by AWS means I cannot change what is already there. I have enabled Wisdom in the agent Security Profile this account is using.
I am thinking it could be an issue with how I registered content. I didn't create a Salesforce or ServiceNow integration but rather registered a knowledge base and uploaded content to it via .txt files. The fact that suggestions appear when I search indicates I've done something right; it's just retrieving the full document that is the problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72803796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error in Loading UITableView
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
return self.nameList.count;
}
-(UITableViewCell *)tableview:(UITableView *)tableview cellForRowAtIndexPath:(NSIndexPath *)indexPath{
static NSString *CellIdentifier=@"Cell";
UITableViewCell *cell=[tableview dequeueReusableCellWithIdentifier:CellIdentifier];
if (cell==nil) {
cell=[[UITableViewCell alloc]initWithFrame:CGRectZero reuseIdentifier:CellIdentifier];
//cell = [[UITableViewCell alloc]
// initWithStyle:UITableViewCellStyleDefault
// reuseIdentifier:CellIdentifier];
}
//Setup the cell
NSString *cellValue=[ListofItems objectAtIndex:indexPath.row];
cell.text=cellValue;
cell.textLabel.text = [self.nameList objectAtIndex: [indexPath row]];
return cell;
}
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
return 1;
}
Hi guys,
I have this code and I can't see my values in my table view, and it says: 'program received signal SIGABRT'.
Can you help me find my mistake?
A: return self.nameList.count;
What is the value of nameList.count when tableView:numberOfRowsInSection: is called? Try setting it to a fixed value just to test. Maybe you are not setting nameList properly.
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
return 1;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9113263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Android TextView width I already searched the internet for the solution, but none of them works for me...
I have 3 EditText's and 2 TextViews. I can't figure out how I can trim those TextViews (space after '+' and '=').
there is my .xml file:
<TableLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent" >
<TableRow android:layout_marginTop="40dp" >
<EditText
android:id="@+id/num1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:ems="4"
android:inputType="number" />
<TextView
android:id="@+id/textView1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="+"
android:textAppearance="?android:attr/textAppearanceLarge" />
<EditText
android:id="@+id/num2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:inputType="number"
android:ems="4" />
<TextView
android:id="@+id/textView3"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="="
android:textAppearance="?android:attr/textAppearanceLarge" />
<EditText
android:id="@+id/wynik"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:ems="4"
android:inputType="text" >
<requestFocus />
</EditText>
</TableRow>
</TableLayout>
A: android:paddingRight="0dp" should work
A: Change the layout by setting weight like this. This will do the trick for you.
<TableRow android:layout_marginTop="40dp"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:weightSum="8">
<EditText
android:id="@+id/num1"
android:layout_width="0dp"
android:layout_weight="2"
android:layout_height="wrap_content"
android:ems="4"
android:inputType="number" />
<TextView
android:id="@+id/textView1"
android:layout_width="0dp"
android:layout_weight="1"
android:layout_height="wrap_content"
android:text="+"
android:gravity="center"
android:textAppearance="?android:attr/textAppearanceLarge" />
<EditText
android:id="@+id/num2"
android:layout_width="0dp"
android:layout_weight="2"
android:layout_height="wrap_content"
android:inputType="number"
android:ems="4" />
<TextView
android:id="@+id/textView3"
android:layout_width="0dp"
android:layout_weight="1"
android:layout_height="wrap_content"
android:text="="
android:gravity="center"
android:textAppearance="?android:attr/textAppearanceLarge" />
<EditText
android:id="@+id/wynik"
android:layout_width="0dp"
android:layout_weight="2"
android:layout_height="wrap_content"
android:ems="4"
android:inputType="text" >
<requestFocus />
</EditText>
</TableRow>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28502615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: comparing strings to pointers? Comparing strings in C So this is my code for an assignment for school, and right now my problem is in my inputID function. Where the comment says "If the same!!!!!!!!!!!!!!!!!!!!!!!!!!!", I try to compare a string given by the user with a string stored in my array of strings "IDArray". I try using the strcmp function; however, I keep getting an error. Any help with this would be much appreciated. The contents of the text file it reads from are shown below the code. Thank you.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define MAX_RECORDS 1000
#define MAX_INPUT 40
void writeFile();
void inputPass();
void inputID();
void userInput();
void printDB();
void readFile();
void inputInit();
void DBInit();
void init();
FILE *fp;
char **IDArray;
char **passwordArray;
char *IDInput;
char *passInput;
int main()
{
init();
readFile();
printf("\n\n\tWelcome to CPS_633, Lab 1\t\n\n");
userInput();
writeFile();
printDB();
return 0;
}
void writeFile()
{
fp = fopen("Database_Table.txt", "w");
int i;
for (i = 0; i < MAX_RECORDS; i++)
{
if (IDArray[i][0] != '\0')
{
fprintf(fp, "%s\t%s\n", IDArray[i], passwordArray[i]);
}
else
{
break;
}
}
fclose(fp);
}
void printDB()
{
printf("\nUsername\tPassword\n");
int i;
int databaseLength = 0;
for (i = 0; i <= MAX_RECORDS; i++)
{
if (IDArray[i][0] != '\0')
{
printf("%s\t\t%s\n", IDArray[i], passwordArray[i]);
}
else
{
break;
}
}
}
void inputPass()
{
int correct = 1, strLength;
printf("Please Enter Your Password (no special characters): ");
fgets(passInput, MAX_INPUT, stdin);
strLength = strlen(passInput) - 1;
if (strLength > 0 && passInput[strLength] == '\n')
{
passInput[strLength] = '\0';
}
while (correct)
{
int k;
int specialCase = 1;
for (k = 0; k <= strLength; k++) //Searches for special characters
{
switch (passInput[k])
{
case '~':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '!':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '`':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '@':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '#':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '$':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '%':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '^':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '&':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '*':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '(':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case ')':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '-':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '_':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '=':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '+':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '[':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case ']':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '{':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '}':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '|':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '\\':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case ':':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case ';':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '"':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case ',':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '<':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '.':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '>':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '?':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
case '/':
printf("Special Character(s) Found\n");
specialCase = 0;
break;
}
if (specialCase == 0)
break;
}
if (specialCase != 0)
{
if (strLength > 12) //Password longer than 12 characters
{
printf("This password is too long, the characters after the 12th has been cut off\n");
int i;
for (i = 0; i < MAX_RECORDS; i++)
{
if (passwordArray[i][0] == '\0')
{
strncpy(passwordArray[i], passInput, 12);
break;
}
else
{
continue;
}
}
correct = 0;
}
else //Password shorter than 12 characters
{
int i, j;
for (j = strLength; j <= 12; j++) //Pads password with "-"
{
passInput[j] = '-';
}
for (i = 0; i < MAX_RECORDS; i++) // Traverses array for first empty slot
{
if (passwordArray[i][0] == '\0') //If empty insert
{
strncpy(passwordArray[i], passInput, 12);
break;
}
else // If not empty continue
{
continue;
}
}
correct = 0;
}
}
else
{
printf("Please Enter Your Password (no special characters): ");
fgets(passInput, MAX_INPUT, stdin);
strLength = strlen(passInput) - 1;
if (strLength > 0 && passInput[strLength] == '\n')
{
passInput[strLength] = '\0';
}
}
}
}
void inputID()
{
int correct = 1, strLength;
printf("Please Enter Your Username ID: ");
fgets(IDInput, MAX_INPUT, stdin);
strLength = strlen(IDInput) - 1;
if (strLength > 0 && IDInput[strLength] == '\n')
IDInput[strLength] = '\0';
while (correct)
{
if (strLength > 32) //If longer than 32 characters
{
printf("This Username ID is longer than 32 characters\n");
printf("Please Enter Your Username ID: ");
fgets(IDInput, MAX_INPUT, stdin);
strLength = strlen(IDInput) - 1;
if (strLength > 0 && IDInput[strLength] == '\n')
IDInput[strLength] = '\0';
}
else if (strLength < 4) //If shorter than 4 characters
{
printf("This Username ID is shorter than 4 characters\n");
printf("Please Enter Your Username ID: ");
fgets(IDInput, MAX_INPUT, stdin);
strLength = strlen(IDInput) - 1;
if (strLength > 0 && IDInput[strLength] == '\n')
IDInput[strLength] = '\0';
}
else //If acceptable length
{
int i;
for (i = 0; i < MAX_RECORDS; i++)
{
if (IDArray[i][0] != '\0') //If element occupied, compare
{
if (strcmp(IDArray[i], inputID) == 0) // If the same!!!!!!!!!!!!!!!!!!
{
printf("Found Match");
correct = 0;
break;
}
else //If not the same
{
continue;
}
}
else //If element empty, insert
{
strcpy(IDArray[i], IDInput);
break;
}
}
correct = 0;
}
}
}
void userInput()
{
inputID();
inputPass();
}
void readFile()
{
fp = fopen("Database_Table.txt", "r");
char line[MAX_INPUT];
if (fp == NULL)
{
perror("Error in opening file");
}
else
{
int i = 0;
while (!feof(fp))
{
if (fgets(line, sizeof(line), fp) == NULL)
{
break;
}
else
{
sscanf(line, "%s\t%s", IDInput, passInput);
strcpy(IDArray[i], IDInput);
strcpy(passwordArray[i], passInput);
i++;
}
}
}
fclose(fp);
}
void inputInit()
{
IDInput = (char *)malloc(sizeof(char) * MAX_INPUT);
passInput = (char *)malloc(sizeof(char) * MAX_INPUT);
}
void DBInit()
{
IDArray = (char **)malloc(sizeof(char *) * MAX_RECORDS);
passwordArray = (char **)malloc(sizeof(char *) * MAX_RECORDS);
int i, j;
for (i = 0; i < MAX_RECORDS; i++)
{
IDArray[i] = (char *)malloc(sizeof(char) * MAX_INPUT);
passwordArray[i] = (char *)malloc(sizeof(char) * MAX_INPUT);
for (j = 0; j < MAX_INPUT; j++)
{
IDArray[i][j] = '\0';
passwordArray[i][j] = '\0';
}
}
}
void init()
{
DBInit();
inputInit();
}
ID11 PASSWORD1
ID22 PASSWORD2
ID33 PASSWORD3
ID44 PASSWORD4
ID55 PASSWORD5
ID55 PASSWORD5
A: You have a typo in line 325 (the line with that comment): it's IDInput you want to compare with, not inputID which is the name of the function it's in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39822866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Matlab filter2 and opencv sobel produce different image gradients I need to implement a histogram of oriented gradients for patches in an image (one HOG feature vector per patch, not one HOG for the whole image). I have been using the Matlab code at this link and translating it to OpenCV Python. I have made some changes to fit it to my purpose; one of the main differences between the Matlab and Python code is in the way I get the gradient of each cell: in Matlab I use filter2, as in the link above, while in OpenCV I use the Sobel operator. My problem is that the gradients these two methods produce are different, and I have had a hard time fixing it. I tried changing both the image and kernel numerical representations. I also tried using filter2D in OpenCV and imfilter in Matlab, but basically none of them worked. Here is the Matlab code for calculating the gradient using filter2:
blockSize=26;
cellSize=floor(blockSize/2);
cellPerBlock=4;
numBins=9;
dim = [444,262];
RGB = imread('testImage.jpg');
img= rgb2gray(RGB);
img = imresize(img, [262,444], 'bilinear', 'Antialiasing',false);
%operators
hx = [-1,0,1];
hy = [-1;0;1] ;
%derivatives
dx = filter2(hx, double(img));
dy = filter2(hy, double(img));
% Remove the 1 pixel border.
dx = dx(2 : (size(dx, 1) - 1), 2 : (size(dx, 2) - 1));
dy = dy(2 : (size(dy, 1) - 1), 2 : (size(dy, 2) - 1));
% Convert the gradient vectors to polar coordinates (angle and magnitude).
ang = atan2(dy, dx);
ang(ang < 0) = ang(ang < 0)+ pi;
mag = ((dy.^2) + (dx.^2)).^.5;
And this is the Python OpenCV version that I wrote using Sobel operator:
blockSize=26
cellSize=int(blockSize/2)
cellPerBlock=4
numBins=9
dim = (444,262)
angDiff=10**-6
img = cv2.imread('3132 2016-04-25 12-35-43-53991.jpg',0)
img = cv2.resize(img, dim, interpolation = cv2.INTER_LINEAR)
sobelx = cv2.Sobel(img.astype(float),cv2.CV_64F,1,0,ksize=1)
sobelx = sobelx[1 : np.shape(sobelx)[0] - 1, 1 : np.shape(sobelx)[1] - 1]
sobely = cv2.Sobel(img.astype(float),cv2.CV_64F,0,1,ksize=1)
sobely = sobely[1 : np.shape(sobely)[0] - 1, 1 : np.shape(sobely)[1] - 1]
mag, ang = cv2.cartToPolar(sobelx, sobely)
ang[ang>np.pi+angDiff]= ang[ang>np.pi+angDiff] - np.pi
Edit: I have followed the post HERE, using the bilinear method in Matlab and cv2.INTER_LINEAR in OpenCV, as well as deactivating Antialiasing in Matlab, but still the two resized images do not exactly match. Here is a part of the resized image for a test image in Matlab:
And this is the same part from OpenCV:
2nd Edit: It turns out the way rounding happens causes this difference. So, I changed my OpenCV code to:
img = cv2.resize(img.astype(float), dim, interpolation = cv2.INTER_LINEAR)
and the Matlab:
imresize(double(img), [262,444], 'bilinear', 'Antialiasing',false);
And now both give me the same result.
I think the problem is caused by the derivative methods. I have checked cv2.filter2D in OpenCV, but still the results are different. I hope someone can give me a hint on what possibly could have caused the problem.
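For reference, a direct cv2.filter2D translation of the Matlab kernels would look like the sketch below (both filter2 and filter2D perform correlation, so the kernels are not flipped, and BORDER_CONSTANT mimics filter2's zero padding, whereas filter2D's default border is reflected). Whether this removes the remaining discrepancy is not verified here:
import cv2
import numpy as np

# hx = [-1,0,1] (row vector) and hy = [-1;0;1] (column vector), as in the Matlab code
hx = np.array([[-1, 0, 1]], dtype=np.float64)
hy = np.array([[-1], [0], [1]], dtype=np.float64)

img_f = img.astype(np.float64)
dx = cv2.filter2D(img_f, cv2.CV_64F, hx, borderType=cv2.BORDER_CONSTANT)
dy = cv2.filter2D(img_f, cv2.CV_64F, hy, borderType=cv2.BORDER_CONSTANT)

# same 1-pixel border removal as in the question
dx = dx[1:-1, 1:-1]
dy = dy[1:-1, 1:-1]
mag, ang = cv2.cartToPolar(dx, dy)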
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41384784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Do you have any personal visual clue in your code for other users and not the compiler? I use certain suffixes in my variable names, a combination of an underscore and a property, such as:
$variable_html = variable that will be parse in html code.
$variable_str = string variable
$variable_int = integer variable
$variable_flo = float variable.
Do you have other visual clues? Maybe something you write for variables, function names, class structure, or other stuff that helps others to read and is not only for the compiler?
A: You're describing Hungarian Notation: Do people use the Hungarian Naming Conventions in the real world?
There's lots of discussion on Stack Overflow about people's feelings on the topic.
A: It nearly is Hungarian Notation, but when you use Hungarian Notation it is more common to use a prefix instead of a suffix.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3441118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: MethodInfo.Invoke Works only in Debug Mode for Prism EventAggregator I have a extension method for Prism's EventAgregator to publish an event using reflection. The implementation is as follows:
MethodInfo raiseMethod = typeof(Extensions).GetMethod("Raise", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(obj.GetType());
raiseMethod.Invoke(null, new object[] {eventAggregator, obj, eventType});
This method calls an extension method which requires a typed parameter. This code, and the eventing work fine, but only in Debug mode. When switching to a Release build the event never arrives at the subscriber.
I have tried using the optional parameter during subscription keepSubscriberReferenceAlive but that does not fix the problem.
Any idea on how to fix this issue?
Update
I found that the issue is not related to the above. It seems there is a filter in place which only allowed events from within the same assembly. But this really doesn't explain why the code worked while in Debug mode.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5870692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to fix a value error when i fit my Machine learning model Hi there, I am learning how to write code for the first time. I am following a video on YouTube which asks me to write the following code:
tf.random.set_seed(42)
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(loss=tf.keras.losses.mae,
              optimizer=tf.keras.optimizers.SGD(),  # sgd is short for stochastic gradient descent
              metrics=["mae"])
model.fit(X, y)
And below is what I see when I try to run it:
ValueError                                Traceback (most recent call last)
<ipython-input-42-c251d8a2cc59> in <module>
     13
     14 # 3. Fit
---> 15 model.fit(X, y)

1 frames
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py in tf__train_function(iterator)
     13 try:
     14     do_return = True
---> 15     retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
     16 except:
     17     do_return = False
ValueError: in user code:
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1249, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1233, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1222, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1023, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.8/dist-packages/keras/engine/input_spec.py", line 250, in assert_input_compatibility
raise ValueError(
ValueError: Exception encountered when calling layer 'sequential_9' (type Sequential).
Input 0 of layer "dense_9" is incompatible with the layer: expected min_ndim=2, found ndim=1. Full shape received: (None,)
Call arguments received by layer 'sequential_9' (type Sequential):
• inputs=tf.Tensor(shape=(None,), dtype=float64)
• training=True
• mask=None
Can someone please help me figure out what the problem is? Thank you.
I didn't try any solution as I didn't know what to do.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75506305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How to integrate cognito identity pool with another AWS account for API Gateway access I have a working project in AWS Account A which authenticates users using a Cognito user pool. I have successfully limited access to certain API Gateway endpoints (using AWS_IAM authorizers) by using fine-grained roles, policies, and an identity pool. This is all working fine. Now I am trying to figure out how to get an API Gateway endpoint in another AWS account (Account B) to accept these same credentials (AccessKeyId, SecretAccessKey and SessionToken) from Account A, so that a user can hit the API Gateway endpoint in Account B without creating an identity pool etc. in Account B.
I tried one approach where I added another resource to an existing policy in Account A, one of the policies attached to a role to which a user is assigned. Like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "execute-api:Invoke",
"Resource": [
"arn:aws:execute-api:us-east-1:<Account A id>:<api gateway resourceId account A>/*/*/*",
"arn:aws:execute-api:us-east-1:<Account B id>:<api gateway resourceId account B>/*/*/*"
]
}
]
}
So by adding the second resource arn:aws:execute-api:us-east-1:<Account B id>:<api gateway resourceId account B>/*/*/*, my endpoints in Account B seem to work: a user who authenticates in Account A gets the credentials (AccessKeyId, SecretAccessKey and SessionToken) and, using those same credentials, can access the endpoints in Account B. To make this work, I had to enable the AWS_IAM authorizer on the Account B API Gateway as well.
So I was wondering whether this is a valid approach for cross-account authorization? Are there any other ways where we don't have to manually update these policies? Any thoughts?
A: IMO the approach is valid; make sure that the API's resource policy allows only the assumed identity role to perform actions (assuming this is your use case).
You can also change the authorization type to Cognito and use the Cognito user access token and scopes to authorize access. Then you do not need to manage policies, see https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-cross-account-cognito-authorizer.html.
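A sketch of what such a resource policy on the Account B API might look like (all ARNs below are placeholders, not values from the question):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<Account A id>:role/<identity-pool-authenticated-role>" },
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:<Account B id>:<api gateway resourceId account B>/*/*/*"
    }
  ]
}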
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71812280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: GIT: For http/https cloned repos only: git push failure shows password in clear text We are using https to clone a repo as follows in a Jenkins build:
git clone https://${repo_username}:${repo_password}@internalgit.com/scm/project/repo.git -b ${branch_name} $tmp
In the above command, ${repo_username} and ${repo_password} are Jenkins variables passed as secrets, so they are not logged in clear text.
However, this adds the user credentials to the git remote URL and in case of any push failure it shows the credentials in clear text in the following error:
[ERROR] To https://user:[email protected]/scm/project/repo.git
[ERROR] ! [remote rejected] master -> master (pre-receive hook declined)
[ERROR] error: failed to push some refs to 'https://user:[email protected]/scm/project/repo.git'
There can be a number of valid reasons for a push failure, however, printing credentials on screen is not acceptable.
Is there a way either to:
* mask the above error message, or
* update the remote URL so it loses the password, without being prompted for the password again during push?
The following workarounds work but are NOT acceptable in our use case:
* Store the password in a credential cache (using credential.helper)
* Use ssh clones instead of https
A: Assuming that output is visible but input is not:
git clone https://${repo_username}:${repo_password}@internalgit.com/scm/project/repo.git -b ${branch_name} $tmp | sed "s/${repo_password}/<redacted>/g"
should do what you want.
I misread the question; for this answer to work you'd have to run it on each push (i.e. git push 2>&1 | sed "s/${repo_password}/<redacted>/g"). I also missed that git prints this to stderr, so unless you want to use process substitution, it will be difficult to redirect the output.
You should escape the password in case it contains any special regex characters (otherwise sed may match more or less than you mean to). This answer has some ready solutions for escaping strings to use with sed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50633677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: ReactJS apolloClient Could not find "client" in the context or passed in as an option I am not sure why it is giving me this problem; it worked the same way before in another application. I have tried for the last 3 days and couldn't figure out this issue yet.
I found this solution on stackoverflow: React Apollo Error: Invariant Violation: Could not find "client" in the context or passed in as an option
But it did not solve my problem.
Can anyone help me in this case?
This is my App.js:
import EmpTable from './components/empTable';
import { ApolloProvider } from '@apollo/react-hooks';
import { ApolloClient, InMemoryCache } from '@apollo/client';
const client = new ApolloClient({
uri: 'http://localhost:8000/graphql/',
cache: new InMemoryCache(),
});
function App() {
return (
<ApolloProvider client={client}>
<EmpTable />
</ApolloProvider>
);
}
export default App;
And this is my EmployeeTable:
import { gql, useQuery } from "@apollo/client";
function EmpTable() {
const GET_EMPLOYEE = gql`
query getEmp($id: String) {
employeeById(id: $id) {
id
name
role
}
}
`;
const {refetch} = useQuery(GET_EMPLOYEE)
return (
<div className="row">
{/* some div */}
</div>
);
}
export default EmpTable;
I am getting the following error with this code:
Could not find "client" in the context or passed in as an option. Wrap the root component in an <ApolloProvider>, or pass an ApolloClient instance in via options.
new InvariantError
src/invariant.ts:12
9 | export class InvariantError extends Error {
10 | framesToPop = 1;
11 | name = genericMessage;
> 12 | constructor(message: string | number = genericMessage) {
13 | super(
14 | typeof message === "number"
15 | ? `${genericMessage}: ${message} (see https://github.com/apollographql/invariant-packages)`
View compiled
invariant
src/invariant.ts:27
24 | message?: string | number,
25 | ): asserts condition {
26 | if (!condition) {
> 27 | throw new InvariantError(message);
28 | }
29 | }
30 |
The error is too long; I just put a few lines of it here. Can anyone please let me know what exactly the issue is?
A: Try importing ApolloProvider from @apollo/client
import { ApolloProvider } from '@apollo/client';
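With Apollo Client 3, ApolloProvider, ApolloClient and InMemoryCache are all exported from the same package, so the imports in App.js can be collapsed to a single line:
import { ApolloProvider, ApolloClient, InMemoryCache } from '@apollo/client';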
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65634034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I get rid of this TypeError, and why is it occurring given that I am simply reassigning a value in a list? Code:
http://pastie.org/1961455
Traceback:
Traceback (most recent call last):
File "C:\Users\COMPAQ\Desktop\NoughtsCrosses.py", line 149, in <module>
main ()
File "C:\Users\COMPAQ\Desktop\NoughtsCrosses.py", line 144, in main
move = computer_move(computer, board, human)
File "C:\Users\COMPAQ\Desktop\NoughtsCrosses.py", line 117, in computer_move
board[i] = computer
TypeError: 'str' object does not support item assignment
As you can see, in my tic-tac-toe program the board[i] = computer line in the computer_move function is the one (if I am reading this right) causing the error. But if I understand this correctly, item assignment is allowed on lists, and I create a local copy of "board" for my function so that I can reassign values and whatnot within the function...
Any input at all would be greatly appreciated. This is my first serious piece of code, so apologies if the function in question looks too mangled.
A: The problem is here:
def computer_move (computer, board, human):
best = (4,0,8,2,6,1,3,5,7)
board = board [:]
for i in legal_moves(board):
board[i] = computer
if winner(board) == computer:
return i
board = EMPTY
At the end of the function, you assign EMPTY to board, but EMPTY is an empty string, as defined on line 4. I assume you must have meant board[i] = EMPTY.
A: In line 120, you reassign board to EMPTY (ie an empty string). So from that point on, board is no longer a list, so you can't assign board[i]. Not quite sure what you meant to do there.
Generally, your code would greatly benefit from using object-orientation - with Board as a class, keeping track of its member squares.
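A minimal sketch of that idea (class and method names are only suggestions, not taken from your code):
class Board:
    EMPTY = ' '

    def __init__(self):
        self.squares = [Board.EMPTY] * 9

    def legal_moves(self):
        return [i for i, square in enumerate(self.squares) if square == Board.EMPTY]

    def place(self, index, mark):
        self.squares[index] = mark

    def clear(self, index):
        self.squares[index] = Board.EMPTY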
A: Looks like board is a string. I get the same error when I do this:
>>> s = ''
>>> s[1] = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6099215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why can Scala compiler not infer Stream type operations? Lets say I want to have a Stream of squares. A simple way to declare it would be:
scala> def squares(n: Int): Stream[Int] = n * n #:: squares(n + 1)
But doing so, yields an error:
<console>:8: error: overloaded method value * with alternatives:
(x: Double)Double <and>
(x: Float)Float <and>
(x: Long)Long <and>
(x: Int)Int <and>
(x: Char)Int <and>
(x: Short)Int <and>
(x: Byte)Int
cannot be applied to (scala.collection.immutable.Stream[Int])
def squares(n: Int): Stream[Int] = n * n #:: squares(n + 1)
^
So, why can't Scala infer the type of n, which is obviously an Int? Can someone please explain what's going on?
A: It's just a precedence issue. Your expression is being interpreted as n * (n #:: squares(n + 1)), which is clearly not well-typed (hence the error).
You need to add parentheses:
def squares(n: Int): Stream[Int] = (n * n) #:: squares(n + 1)
Incidentally, this isn't an inference problem, because the types are known (i.e., n is known to be of type Int, so it need not be inferred).
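For example, with the parenthesised definition:
scala> squares(1).take(5).toList
res0: List[Int] = List(1, 4, 9, 16, 25)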
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13553020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to create a dynamic model for day-of-week names in a schedule? For the gym schedule, I need to create a model from which I will get a list of the seven day names. The list will be updated daily, so the first element will always be the current day of the week. Every day the list will be shifted by one day, so, for example, in the cell in which the word “Wednesday” stands today, tomorrow the word “Thursday” will appear, the day after tomorrow “Friday”, and so on. The day names should then be retrieved in the view and passed to the template.
I tried to do it like this:
import calendar
import datetime
from datetime import *
from django.db import models
from django.utils.translation import ugettext as _
class DayOfWeekSchedule(models.Model):
"""General dynamic schedule for a week"""
DOW_CHOICES = (
(1, _("Monday")),
(2, _("Tuesday")),
(3, _("Wednesday")),
(4, _("Thursday")),
(5, _("Friday")),
(6, _("Saturday")),
(7, _("Sunday")),
)
day_of_week = models.PositiveSmallIntegerField(
choices=DOW_CHOICES,
verbose_name=_('Day of week')
)
def days_of_week(self):
my_date = date.today()
current_day_name = calendar.day_name[my_date.weekday()]
index1 = DOW_CHOICES.index(current_day_name) #from Monday=0 (i.e. Friday=4)
list_daynames = list(DOW_CHOICES[index1:] + DOW_CHOICES)[:7] #list of names, current is first
list_index = (2, 3, 4, 5, 6, 7, 1) #to compensate systems difference
list2 = list(list_index[index1:] + list_index)[:7]
context_days = dict(zip(list_keys, list_daynames))
return context_days
But
DOW_CHOICES.index(current_day_name)
and
list_daynames = list(DOW_CHOICES[index1:] + DOW_CHOICES)[:7]
do not work (as expected), because .index() cannot look up a bare day name in a tuple of (number, name) pairs.
And I do not know how to do it.
A: Maybe you're overthinking this. If I understand what you want correctly, you could just do something like this:
from datetime import datetime, timedelta
days = {
'1': _('Monday'),
'2': _('Tuesday'),
'3': _('Wednesday'),
'4': _('Thursday'),
'5': _('Friday'),
'6': _('Saturday'),
'7': _('Sunday')
}
DOW_CHOICES = []
today = datetime.today()
for i in range(7):
day_number = (today + timedelta(days=i)).isoweekday()
day = days[str(day_number)]
DOW_CHOICES.append((day_number, day))
This gets today's date and creates a list of (day number, day name) tuples. The result will be:
[
(3, 'Wednesday'),
(4, 'Thursday'),
(5, 'Friday'),
(6, 'Saturday'),
(7, 'Sunday'),
(1, 'Monday'),
(2, 'Tuesday')
]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53304438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Shape 1 of numpy array Consider
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
In Python's view, x has shape (4, 3) and v has shape (3,). Why didn't Python view v as having shape (, 3)? Also, why do v and v.T have the same shape (3,)? IMHO, if v has shape (3,) then v.T should have shape (, 3).
A: (3,) does not mean the 3 is first. It is simply the Python way of writing a 1-element tuple. If the shape were a list instead, it would be [3].
(, 3) is not valid Python. The syntax for a 1-element tuple is (element,).
The reason it can't be just (3) is that Python simply views the parentheses as a grouping construct, meaning (3) is interpreted as 3, an integer.
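A quick interpreter check (using the v from the question) makes the point:
>>> import numpy as np
>>> v = np.array([1, 0, 1])
>>> v.shape
(3,)
>>> v.T.shape
(3,)
>>> v[None, :].shape   # an explicit row vector really does have two dimensions
(1, 3)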
A: As you know, numpy arrays are n-dimensional. The shape lists the dimensions in order. If the array is 1-D you will see only one dimension, if 2-D only two dimensions, and so on.
Here x is a 2-D array while v is a 1-D array (aka vector). That is why when you do shape on v you see (3,) meaning it has only one dimension whereas x.shape gives (4,3). When you transpose v, then that is also a 1-D array. To understand this better, try another example. Create a 3-D array.
z = np.ones((5,6,7))
z.shape      # (5, 6, 7)
print(z)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55272504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error 1004 at the line d = Worksheets(A(i)).Cells(B(j), l) + d I am getting an error at this command:
d = Worksheets(A(i)).Cells(B(j), l) + d
I have stored the names of the sheets in array A(i), and array B(j) holds integer values. I have already declared both arrays, along with d, as Integer.
Thanks in advance,
Please find the code below
Sub checksum()
Dim A(50) As String
Dim B(5) As Integer
Dim i As Integer, j As Integer, d As Integer, k As Integer, p As Integer, l As Integer, s As Integer
s = 1
A(1) = "TREND M S&G"
A(2) = "TREND M RAZORS"
A(3) = "TREND M RZ ACC"
A(4) = "TREND M GROOM"
A(5) = "TREND BODY GROOM"
A(6) = "TREND Multi"
A(7) = "TREND BRDM"
A(8) = "TREND PRCSN"
A(9) = "TREND M H CLIP"
A(10) = "TREND PTB&A"
A(11) = "TREND rch"
A(12) = "TREND batt"
A(13) = "TREND refills"
A(14) = "TREND BABY"
A(15) = "TREND BREAST FEED"
A(16) = "TREND breast pad"
A(17) = "TREND breast pumps"
A(18) = "TREND REUSABLE"
A(19) = "TREND DISPOSABLE"
A(20) = "TREND TODDLER"
A(21) = "TREND TODDLER C&P"
A(22) = "TREND FEED ACCESS"
A(23) = "TREND SOOTHING"
A(24) = "TREND P&H"
A(25) = "TREND TEETHERS"
B(1) = 3
B(2) = 6
B(3) = 9
B(4) = 12
ThisWorkbook.Sheets.Add After:=Sheets(Worksheets.Count), Count:=1, Type:=xlWorksheet
For i = 1 To 25
Worksheets(Worksheets.Count).Cells(i, 1) = A(i)
For j = 1 To 4
d = 0
If i > 10 Then k = 54 And p = 70
If i < 11 Then k = 56 And p = 66
For l = k To p
d = Worksheets(A(i)).Cells(B(j), l) + d
Next l
If d = 100 Then Worksheets(Worksheets.Count).Cells(i, j + 1) = "Fine"
If d <> 100 Then Worksheets(Worksheets.Count).Cells(i, j + 1) = "Error"
Next j
Next i
End Sub
A: This does not work; the And Operator cannot be used this way:
If i > 10 Then k = 54 And p = 70
If i < 11 Then k = 56 And p = 66
Change it to:
If i > 10 Then
k = 54
p = 70
Else
k = 56
p = 66
End If
A: I don't know what's in the cell that you're referencing, but based on what I can see here, I'm guessing it contains an integer. If so, then you need to access the value of the cell:
d = Worksheets(A(i)).Cells(B(j), l).Value2 + d
You'll need to do the same with your last couple of lines
If d = 100 Then Worksheets(Worksheets.Count).Cells(i, j + 1).Value2 = "Fine"
If d <> 100 Then Worksheets(Worksheets.Count).Cells(i, j + 1).Value2 = "Error"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22313901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Calculate in dynamically created textbox How can I add the values of my dynamically created textboxes into another textbox? For example, I have textboxes named TextBox0 to TextBox6, and I want to add up the values of TextBox0 to TextBox5 and then put that sum in TextBox6. How can I do that?
This is my code for create dynamic textbox:
static int column = 0;
private void button1_Click(object sender, EventArgs e)
{
int i = 0;
if (column > 0)
{
do
{
TextBox tb = new TextBox();
tb.Text = "";
tb.Name = "TextBox" + (i + column * 6);
TextBox t = (TextBox)Controls["TextBox" + i.ToString()];
Point p = new Point(15 + (column * 125), 5 + (i * 25));
tb.Location = p;
this.Controls.Add(tb);
i++;
} while (i <= 5);
}
else
{
do
{
TextBox tb = new TextBox();
tb.Text = "";
tb.Name = "TextBox" + i;
TextBox t = (TextBox)Controls["TextBox" + i.ToString()];
Point p = new Point(15, 5 + (i * 25));
tb.Location = p;
this.Controls.Add(tb);
i++;
} while (i <= 5);
}
column++;
}
A: private const string _textBoxName = "TextBox";
The method sums the textbox values over a given range of textbox ids. Be aware that this will throw an exception if the textbox texts or name ids are not valid integers (e.g. empty or non-numeric).
private int Count(int from, int to)
{
int GetIdFromTextBox(TextBox textBox) => int.Parse(new string(textBox.Name.Skip(_textBoxName.Length).ToArray()));
var textBoxes = Controls.OfType<TextBox>().ToList();
var textBoxesWithIds = textBoxes.Select(textBox => (textBox: textBox, id: GetIdFromTextBox(textBox))).ToList();
var sum = textBoxesWithIds.Where(x => x.id >= from && x.id <= to).Sum(x => int.Parse(x.textBox.Text));
return sum;
}
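A hypothetical usage (assuming a textbox named "TextBox6" exists on the form to hold the result):
private void buttonSum_Click(object sender, EventArgs e)
{
    // Sum the values of TextBox0..TextBox5 and show the result in TextBox6.
    var sum = Count(0, 5);
    ((TextBox)Controls[_textBoxName + "6"]).Text = sum.ToString();
}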
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53389086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I set a search bar in a list view using a custom adapter? I want to add a search bar to a list view using an adapter class. Can anybody help me? Below is my code.
This is the adapter class that I am using.
Cat_Headers_Home_Adapter.java
public class Cat_Headers_Home_Adapter extends SectionAdapter {
private Activity cntx;
private List<Categories> categoriesList;
private List<CatHeaders> categoriesHeadersList;
public Cat_Headers_Home_Adapter(Activity cntx,List<Categories>categoriesList, List<CatHeaders> categoriesHeadersList) {
this.cntx = cntx;
this.categoriesList = categoriesList;
this.categoriesHeadersList = categoriesHeadersList;
}
@Override
public int numberOfSections() {
return categoriesHeadersList.size();
}
@Override
public int numberOfRows(int section) {
return categoriesList.size();
}
@Override
public Object getRowItem(int section, int row) {
return null;
}
@Override
public boolean hasSectionHeaderView(int section) {
return true;
}
@Override
public View getRowView(int section, int row, View convertView, ViewGroup parent) {
View view = convertView;
Holder holder;
if (view == null) {
LayoutInflater inflater = (LayoutInflater) cntx
.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
view = inflater.inflate(R.layout.adapter_home_service_cat, null);
holder = new Holder();
view.setTag(holder);
} else {
holder = (Holder) view.getTag();
}
Categories categories = categoriesList.get(section);
holder.service_name = (TextView) view.findViewById(R.id.service_name);
holder.count = (TextView) view.findViewById(R.id.total);
holder.service_icon = (ImageView) view.findViewById(R.id.service_icon);
holder.service_layout = (LinearLayout)view.findViewById(R.id.service_layout);
holder.service_name.setText(categories.catName);
holder.count.setText(categories.count);
holder.service_icon.setImageResource(categories.catImage);
holder.service_layout.setVisibility(View.VISIBLE);
return view;
}
private class Holder {
TextView service_name;
TextView count;
ImageView service_icon;
LinearLayout service_layout;
}
@Override
public int getSectionHeaderViewTypeCount() {
return 2;
}
@Override
public int getSectionHeaderItemViewType(int section) {
return section % 2;
}
@Override
public View getSectionHeaderView(int section, View convertView, ViewGroup parent) {
if (convertView == null) {
convertView = (TextView) cntx.getLayoutInflater().inflate(cntx.getResources().getLayout(android.R.layout.simple_list_item_1), null);
}
((TextView) convertView).setText(categoriesHeadersList.get(section).catName);
((TextView) convertView).setTextColor(cntx.getResources().getColor(android.R.color.white));
convertView.setBackgroundColor(cntx.getResources().getColor(R.color.red));
return convertView;
}
@Override
public void onRowItemClick(AdapterView<?> parent, View view, int section, int row, long id) {
super.onRowItemClick(parent, view, section, row, id);
Toast.makeText(cntx, "Section " + section + " Row " + row, Toast.LENGTH_SHORT).show();
}
}
This is the class I want to add a search bar to:
Home_fragment.java
public class Home_Fragment extends Fragment {
private HeaderListView list;
private EditText searchBar;
private int[] images;
private String[] array;
private String[] headers;
private String[] fake_count;
ArrayList<HashMap<String, String>> listView;
private List<Categories> categoriesList = new ArrayList<>();
private List<CatHeaders> categoriesListHeader = new ArrayList<>();
public static Home_Fragment newInstance(int index) {
Home_Fragment f = new Home_Fragment();
// Supply index input as an argument.
Bundle args = new Bundle();
args.putInt("position", index);
f.setArguments(args);
return f;
}
public int getShownIndex() {
return getArguments().getInt("index", 0);
}
public Home_Fragment() {
// Required empty public constructor
}
@Override
public void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
listView = new ArrayList<>();
images = Utils.getResourcesImages();
// images = getActivity().getResources().getIntArray(R.array.service_images_for_list_headers);
array = getActivity().getResources().getStringArray(R.array.services);
headers = getActivity().getResources().getStringArray(R.array.header_title);
fake_count = getActivity().getResources().getStringArray(R.array.fake_count);
for (int a = 0; a < images.length; a++) {
int imageId = images[a];
String count = fake_count[a];
String name = array[a];
Categories categoriesObj;
if (a == 0 || a == 29 || a == 36 || a == 42 || a == 53) {
categoriesObj = new Categories(name, imageId, count, true);
} else {
categoriesObj = new Categories(name, imageId, count, false);
}
categoriesList.add(categoriesObj);
}
for (int a = 0; a < headers.length; a++) {
CatHeaders categoriesObj;
categoriesObj = new CatHeaders(headers[a]);
categoriesListHeader.add(categoriesObj);
}
Log.d("HOME Message Fragment", "Home Message Fragment");
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
// Inflate the layout for this fragment
View rootView = inflater.inflate(R.layout.fragment_home, container, false);
list = (HeaderListView) rootView.findViewById(R.id.list_view);
searchBar = (EditText) rootView.findViewById(R.id.search_field);
searchBar.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence s, int start, int count, int after) {
}
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
}
@Override
public void afterTextChanged(Editable s) {
if (s.length() > 2) {
searchServiceFromList(s.toString());
} else {
setOldView();
}
}
});
Arrays.sort(array);
listView.clear();
for (int i = 0; i < array.length; i++) {
HashMap<String, String> map = new HashMap<>();
map.put("service_name", array[i]);
map.put("icon_position", Integer.toString(i));
map.put("count", fake_count[i]);
listView.add(map);
}
Cat_Headers_Home_Adapter adapter = new Cat_Headers_Home_Adapter(getActivity(),categoriesList,categoriesListHeader);
list.setAdapter(adapter);
// list.setOnItemClickListener(new AdapterView.OnItemClickListener() {
// @Override
// public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
// HashMap<String, String> map = listView.get(position);
// // Toast.makeText(getActivity(), map.get("service_name").toString(), Toast.LENGTH_SHORT).show();
// // objBook.show();
// // objBook.setService(map.get("service_name").toString());
// /* Post_New_Job_Fragment fragment = new Post_New_Job_Fragment();
// Bundle bundle = new Bundle();
// bundle.putString("service_name", map.get("service_name").toString());
// fragment.setArguments(bundle);
// final FragmentTransaction ft = getFragmentManager().beginTransaction();
// ft.replace(R.id.container, fragment, "NewFragmentTag");
// ft.addToBackStack(null);
// ft.commit();*/
// Intent i = new Intent(getActivity(), Post_New_Job_Activity.class);
// i.putExtra("service_name", map.get("service_name").toString());
// i.putExtra("icon_position", map.get("icon_position").toString());
// getActivity().startActivityForResult(i, 200);
//
//
// }
// });
return rootView;
}
@Override
public void startActivityForResult(Intent intent, int requestCode) {
super.startActivityForResult(intent, requestCode);
}
public void searchServiceFromList(String s) {
// ArrayList<HashMap<String, String>> tempList = new ArrayList<>();
// for (int i = 0; i < listView.size(); i++) {
// HashMap<String, String> map = listView.get(i);
// if ((map.get("service_name").toString().toLowerCase()).contains(s.toLowerCase())) {
// tempList.add(map);
// }
// }
//
// Home_Adapter adapter = new Home_Adapter(getActivity(), tempList, images);
// list.setAdapter(adapter);
}
public void setOldView() {
// Home_Adapter adapter = new Home_Adapter(getActivity(), listView, images);
// list.setAdapter(adapter);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36130081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Rate limit exceeded in Face API What should I do when I encounter a "rate limit exceeded" error for the Face API, other than using Task.Delay(1000)?
I have about 50 records and I detect/identify/verify them within 2 seconds. For IdentifyAsync, I set the confidence threshold to 0.0f and the max number of candidates returned to 50. I tried using Task.Delay(1000) and reducing the number of candidates, but it doesn't solve my problem.
Please give me advice on how to resolve this issue, as I'm new to this.
A: I wrote a library, RateLimiter, to handle this kind of constraint. It is composable, asynchronous and cancellable.
It seems that the Face API has a quota limit of 10 calls per second, so you can write:
var timeconstraint = TimeLimiter.GetFromMaxCountByInterval(10, TimeSpan.FromSeconds(1));
for(int i=0; i<1000; i++)
{
await timeconstraint.Perform(DoFaceAPIRequest);
}
private Task DoFaceAPIRequest()
{
    // send the request to the Face API here and return the resulting Task
    return Task.CompletedTask; // placeholder so the sketch compiles
}
It is also available as a nuget package.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49359508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: My form answers are not attaching to email I have written a simple question list, and I need the responses to be returned in a txt file via email when the person hits submit. The front end of the file works and the email is created, but the form data does not post. Can anyone help with this scripting please?
The code is listed below:
<form action="mailto:[email protected]?subject=Test" id="form" method="post" name="form" >
<!-- PAGE HEADER -->
<table bgcolor=#D1DEE5>
<tr>
<td width="833px"align="center">
<input class="title" name="Title" value="Customer Satisfaction Survey">
</td>
</tr>
</table>
<!-- QUESTIONS -->
<p>
<table>
<tr>
<td>
<p> Welcome message
<p>
</ul>
</td>
</tr>
</table>
<br>
<table>
<tr class="shaded">
<td align="left">
<p><b>Please tell us based on your experience, how satisfied you are with the following services:</b>
</td>
<td align="center" width="50px">Very satisfied</td>
<td align="center" width="50px">Satisfied</td>
<td align="center" width="50px">Dissatisfied</td>
<td align="center" width="50px">Very Dissatisfied</td>
<td align="center" width="50px">N/A</td>
</tr>
<tr>
<td>A</td>
<td align="center" width="50px"><input type="radio" name="q1" value="Very satisfied"></td>
<td align="center" width="50px"><input type="radio" name="q1" value="Satisfied"></td>
<td align="center" width="50px"><input type="radio" name="q1" value="Dissatisfied"></td>
<td align="center" width="50px"><input type="radio" name="q1" value="Very Dissatisfied"></td>
<td align="center" width="50px"><input type="radio" name="q1" value="N/A"></td>
</tr>
</table>
<br>
<table class="outlineTable" bgcolor=#D1DEE5>
<tr>
<td align="left" rowspan=5 width=500 style="vertical-align:top" style="padding-top:5px">
<p><b>Please add comments to explain your answers</b>
<br><textarea name="Comments10" id="Comments10" rows="7" cols="55"></textarea>
</td>
<td align="left">
Month being scored
</td>
<td align="left" class="submitButton">
<input class="name" name="Month">
</td>
</tr>
<tr>
<td align="left">
Name
</td>
<td align="left" class="submitButton">
<input class="name" name="Name">
</td>
</tr>
<tr>
<td align="left">
Date
</td>
<td align="left" class="submitButton">
<input class="name" name="Date">
</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left" class="submitButton">
<input class="button2" type="submit" value="Click here to submit results">
</td>
</tr>
</table>
<br>
<table bgcolor=#D1DEE5>
<tr>
<td align="center">
<h1> Many thanks for taking the time to complete this survey
</td>
</tr>
</table>
<p>
</form>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37750925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Call many Observable if operator I have a problem. I have a flow that does the following:
*
*Create a file
*If there is a configuration variable set to true (for example,
weekday = true), call a service on my server and sign a document
*Upload the file to a server through a service.
As I said in step 2, you must call that service only if the variable is true; if it is false, you should go straight to step 3. The problem is that I do not know how to call several Observables and also make them conditional.
I have seen that with mergeMap I can make calls in parallel, but that alone does not work in my case, since I must check the variable before deciding which service to call.
A: Hopefully this pipe extract will be of some help. If the flag is true, it makes an HTTP request and pipes the response to the map operator; if there's an HTTP error it pipes null instead. If the flag is false, it pipes the value in the of statement to the map operator.
mergeMap((flag) => {
    if (flag === true) {
        return this.myService.doSomeHttpRequest().pipe(
            catchError(() => {
                return of(null);
            })
        );
    } else {
        return of('something');
    }
}),
filter(res => res != null),
map(res => {
    if (res === 'something') {
        // the flag was false, handle the fallback value here
    } else {
        // the flag was true, handle the HTTP response here
    }
})
This is coded freehand, sorry for any missing brackets and things like that but it should get you on the right route. If you want the pipe to not pass anything to map then you could consider using the filter operator. This will block the pipe from executing anything further.
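Applied to the flow in the question, the wiring could look roughly like this (fileService, signService, uploadService and config.weekday are made-up names for illustration):
this.fileService.createFile().pipe(
    mergeMap(file => this.config.weekday
        ? this.signService.signDocument(file)   // step 2, only when the flag is true
        : of(file)),                            // otherwise skip straight to the upload
    mergeMap(fileToUpload => this.uploadService.upload(fileToUpload))
).subscribe();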
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52485367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: MYSQL Query - When col1 has only 'Special' use that, else use one of the others Given the following dataset:
ID | Description | Type
-------------------------------
12204 | ABC | Special
12204 | DEF | Connector
12541 | GHI | Special
12541 | JKL | Special
12541 | MNO | Hybrid
13292 | PQR | Resistor
13292 | STU | Connector
13292 | VWX | Hybrid
14011 | YZa | Special
14012 | bcd | Resistor
What I want from it is the following:
ID | Description | Type
-------------------------------
12204 | DEF | Connector
12541 | MNO | Hybrid
13292 | PQR | Resistor or Connector or Hybrid <-- doesn't matter
14011 | YZa | Special
14012 | bcd | Resistor
So, all I need is the whole dataset according to the Type. If "Special" is the only Type available, then I need to use it, but if not, I want to use one of the others.
I figured out how to group everything, but then it uses the first row, which mostly contains Special...
My Query so far:
SELECT ID, Description, Type FROM anyDatabase GROUP BY ID HAVING count(Type) > 1
Hope anyone can help :)
Adi
A: You can try it like this
SELECT
l.id,
l.description,
IF(r.type IS NULL, l.type, r.type) AS `Type`
FROM newtable as l
LEFT JOIN (SELECT *
FROM newtable
WHERE type <> 'Special') as r
on r.id = l.id
GROUP BY l.id
SQL Fiddle Demo
A: This could be a solution:
SELECT
newtable.id,
newtable.description,
newtable.type
FROM newtable
INNER JOIN(
SELECT
id,
MAX(CASE WHEN Type!='Special' THEN Type END) type
FROM newtable
GROUP BY id
) mx
ON newtable.id=mx.id
AND newtable.type=COALESCE(mx.type, 'Special')
Fiddle here (thanks to raheel shan for the fiddle!)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16077435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to modify string to char array in C/C++? I'm writing a program called 'Zuma'. The program works like this.
Input:
ACCBA // a string make up of char from 'A' to 'Z'
5 // number of inputs
1 B // insert char 'B' to position '1' of the string
0 A // and so on...
2 B
4 C
0 A
When 3 identical chars are next to each other, we erase/remove/delete them from the string.
For example, when we insert char 'C' at position 2 of the string 'ABCC', we get 'AB' because
'CCC' is removed from the string.
Output:
ABCCBA
AABCCBA
AABBCCBA // the process is AABBCCCBA -> AABBBA -> AAA -> -
- // if the string is empty, we output "-"
A
This is my code with string:
#include <iostream>
using namespace std;
int main()
{
int n, pos;
int k = 0;
int length = 0;
string zuma, marble; // i use string
cin >> zuma;
cin >> n;
for (int i = 0; i < n; ++i)
{
cin >> pos >> marble;
zuma.insert(pos, marble);
length = zuma.length(); // length of current string
// compare each char from pos[i] with pos[i+1] and pos[i+2]
// and then ++i until end of string
while (k != length && length >= 3)
{
if (zuma[k] == zuma[k + 1] && zuma[k] == zuma[k + 2])
{
zuma.erase(k, 3); // erase 3 same char in the string
k = 0; // set k to zero to start from pos[0] again
}
else
k++;
}
// if string is not empty
if (!zuma.empty())
{
cout << zuma << endl; // output the current char in the string
k = 0;
}
else
cout << "-" << endl;
}
return 0;
}
This is my code with char array:
#include <iostream>
#include <cstdio>
#include <cstring>
using namespace std;
void append (char subject[], const char insert[], int pos) {
char buf[100] = {};
strncpy(buf, subject, pos);
int len = strlen(buf);
strcpy(buf+len, insert);
len += strlen(insert);
strcpy(buf+len, subject+pos);
strcpy(subject, buf);
}
int main()
{
int n, pos;
int k = 0;
int length = 0;
char zuma[100], marble[100];
scanf("%s", zuma);
scanf("%d", &n);
for (int i = 0; i < n; ++i)
{
scanf("%d %s", &pos, marble);
append(zuma, marble, pos); // acts like string::insert
length = strlen(zuma);
while (k != length && length >= 3)
{
if (zuma[k] == zuma[k + 1] && zuma[k] == zuma[k + 2])
{
//zuma.erase(k, 3); // need help with this part to remove 3 same chars like string::erase
k = 0;
}
else
k++;
}
if (strlen(zuma) != 0)
{
printf("%s\n", zuma);
k = 0;
}
else
printf("%s\n","-");
}
return 0;
}
My problem is how to write a function that removes 3 identical chars, just like string::erase does.
Thanks for your help!!
A: You can use memmove to copy the remainder of the string to the position of the characters to remove. Use strlen to determine how many bytes to move. Note that you cannot use strcpy because the source and destination buffers overlap.
if (zuma[k] == zuma[k + 1] && zuma[k] == zuma[k + 2])
{
int len = strlen(zuma+k+3) + 1; // +1 to copy '\0' too
memmove(zuma+k, zuma+k+3, len);
k = 0;
}
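If you want a reusable helper that mirrors string::erase(pos, count) (the name erase_at is just a suggestion), something like this should work:
void erase_at(char *s, size_t pos, size_t count)
{
    size_t len = strlen(s);
    if (pos >= len) return;                    /* nothing to erase */
    if (count > len - pos) count = len - pos;
    memmove(s + pos, s + pos + count, len - pos - count + 1); /* +1 also moves the '\0' */
}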
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26312273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Logback config, puppet and application versions I am busy testing a new approach to managing a java application that uses logback on a puppet-managed host, and was wondering if anyone had some advice on the best approach for this. I am stuck with a catch 22 situation.
The java application is deployed to a host by an automated system (CI). The deployment writes an application version number to a file (e.g. /etc/app.version may contain "0001")
The logback config file (logback.xml) is managed by puppet.
I am trying to configure the application to include it's version number in the logging layout (e.g. <pattern>VERSION: %version%</pattern> . However, I am not sure on the approach, as there isn't an "include" function for the logback config file (to include a file with the version number into the logback config). At the same time, I don't see a way to get puppet to do a client-side template build, using the host-side file (I've tried using a template approach, but the template is compiled on the puppet server side).
Any ideas on how to get this working?
A: I would write a custom fact. Facts are executed on the client.
Eg:
logback/manifests/init.pp
file { '/etc/logback.xml':
content => template('logback/logback.xml.erb')
}
logback/templates/logback.xml.erb
...
<pattern>VERSION: <%= scope.lookupvar('::my_app_version') %></pattern>
...
logback/lib/facter/my_app_version.rb
Facter.add('my_app_version') do
setcode do
begin
File.read('/etc/app.version')
rescue
nil
end
end
end
Hope that helps. I think in Puppet < 3.0 you will have to set "pluginsync = true" in puppet.conf to get this to work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17967612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Mysql multiple select query to show total result by month I want to use a select query for every month from 1 to 12, like this. I searched the internet and found a subquery format a little like this, but it is not the correct format for my query. When I run this query for a single month it shows the correct result. I want to show the result in an HTML table like this:
SQL Fiddle
Html table Format
SELECT
(SELECT SUM(`current_sales`) FROM `orders` WHERE YEAR(order_date) ='2017' AND MONTH(order_date) ='10' GROUP BY pro_id) AS October,
(SELECT SUM(`current_sales`) FROM `orders` WHERE YEAR(order_date) ='2017' AND MONTH(order_date) ='11' GROUP BY pro_id) AS November,
(SELECT SUM(`current_sales`) FROM `orders`WHERE YEAR(order_date) ='2017' AND MONTH(order_date) ='12' GROUP BY pro_id) AS December
A: You're on the right path. But as you want to get sum() of current_sales for each month, you shouldn't use GROUP BY in your subqueries, as it will return multiple rows. Instead, just add a WHERE condition to fetch rows for the same pro_id as the row the outer GROUP BY query is currently processing.
Following query will work:
select tmp.pro_id,
tmp.product_name,
tmp.nsp,
tmp.Jan,
tmp.Feb,
tmp.Mar,
tmp.Apr,
(tmp.Jan+tmp.Feb+tmp.Mar+tmp.Apr) as Q1,
tmp.May,
tmp.Jun,
tmp.Jul,
tmp.Aug,
(tmp.May+tmp.Jun+tmp.Jul+tmp.Aug) as Q2,
tmp.Sep,
tmp.Oct,
tmp.Nov,
tmp.`Dec`,
(tmp.Sep+tmp.Oct+tmp.Nov+tmp.`Dec`) as Q3
From
(
SELECT o.pro_id,
p.product_name,
p.nsp,
(case when coalesce(sum(month(order_date) = 1),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='1')
else 0
end
) as Jan,
(case when coalesce(sum(month(order_date) = 2),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='2')
else 0
end
) as Feb,
(case when coalesce(sum(month(order_date) = 3),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='3')
else 0
end
) as Mar,
(case when coalesce(sum(month(order_date) = 4),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='4')
else 0
end
) as Apr,
(case when coalesce(sum(month(order_date) = 5),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='5')
else 0
end
) as May,
(case when coalesce(sum(month(order_date) = 6),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='6')
else 0
end
) as Jun,
(case when coalesce(sum(month(order_date) = 7),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='7')
else 0
end
) as Jul,
(case when coalesce(sum(month(order_date) = 8),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='8')
else 0
end
) as Aug,
(case when coalesce(sum(month(order_date) = 9),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='9')
else 0
end
) as Sep,
(case when coalesce(sum(month(order_date) = 10),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='10')
else 0
end
) as Oct,
(case when coalesce(sum(month(order_date) = 11),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='11')
else 0
end
) as Nov,
(case when coalesce(sum(month(order_date) = 12),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='12')
else 0
end
) as `Dec`
from products p
inner join orders o
on p.pro_id = o.pro_id
group by o.pro_id
)tmp
group by tmp.pro_id
;
Click here for DEMO
I also have another approach for your task, which uses a huge query with many built-in MySQL functions like GROUP_CONCAT(), SUBSTRING_INDEX(), etc.
Have a look at another approach:
select tmp2.pro_id,
tmp2.product_name,
tmp2.nsp,
tmp2.Jan,
tmp2.Feb,
tmp2.Mar,
tmp2.Apr,
(tmp2.Jan+tmp2.Feb+tmp2.Mar+tmp2.Apr) as Q1,
tmp2.May,
tmp2.Jun,
tmp2.Jul,
tmp2.Aug,
(tmp2.May+tmp2.Jun+tmp2.Jul+tmp2.Aug) as Q2,
tmp2.Sep,
tmp2.Oct,
tmp2.Nov,
tmp2.`Dec`,
(tmp2.Sep+tmp2.Oct+tmp2.Nov+tmp2.`Dec`) as Q3
from
(
select tmp.pro_id,
tmp.product_name,
tmp.nsp,
(case when coalesce(sum(tmp.month=1),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(1,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Jan,
(case when coalesce(sum(tmp.month=2),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(2,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Feb,
(case when coalesce(sum(tmp.month=3),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(3,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Mar,
(case when coalesce(sum(tmp.month=4),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(4,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Apr,
(case when coalesce(sum(tmp.month=5),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(5,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as May,
(case when coalesce(sum(tmp.month=6),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(6,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Jun,
(case when coalesce(sum(tmp.month=7),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(7,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Jul,
(case when coalesce(sum(tmp.month=8),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(8,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Aug,
(case when coalesce(sum(tmp.month=9),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(9,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Sep,
(case when coalesce(sum(tmp.month=10),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(10,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Oct,
(case when coalesce(sum(tmp.month=11),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(11,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Nov,
(case when coalesce(sum(tmp.month=12),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(12,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as `Dec`
from
(
select o.pro_id,
p.product_name,
p.nsp,
sum(o.current_sales) as total,
month(order_date) as month
from
products p
inner join orders o
on p.pro_id = o.pro_id
group by o.pro_id,month(order_date)
)tmp
group by tmp.pro_id
)tmp2
group by tmp2.pro_id
;
Click here for Demo
Now you can run both queries against your actual data and select the one with the lower execution time.
Hope it helps!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47489341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to prevent an OrderedCollection from removing its elements in a #do: loop? Consider the following code:
oc do: [:elem | self doSomethingWith: elem]
As we all know, the potential problem here is to somehow have #doSomethingWith: reach out oc (an OrderedCollection) and remove some of its elements.
The recommended solution is to write the above as
oc copy do: [:elem | self doSomethingWith: elem].
Well, yes, but we do not copy all collections every time we enumerate them. Do we?
The actual problem is that the processing of every element can be so difficult to follow that it could end up removing elements without us knowing. In our case above, if some element of oc gets somehow removed in the context of #doSomethingWith: we will get an Error. Won't we?
Not really. The problem will go unnoticed. Look at this example:
oc := #(1 2 4) asOrderedCollection.
oc do: [:i | i even ifTrue: [oc remove: i]]
In this case we will not get and error and also element 4 will not get processed (i.e., in this case it will not get removed). So we will be silently skipping elements from the enumeration.
Why is this? Well, because of the way #do: is implemented. Look at Squeak for instance:
do: aBlock
"Override the superclass for performance reasons."
| index |
index := firstIndex.
[index <= lastIndex]
whileTrue:
[aBlock value: (array at: index).
index := index + 1]
See? lastIndex is dynamically checked and that is why we don't go beyond the current size and no Error is signaled.
My question is whether this is on purpose or there is a better solution. One that could work would be to save lastIndex in a temporary before iterating, but I'm not sure if that would be preferred.
A: 1) as aka.nice already pointed out, it is not a good idea to fetch and remember the initial lastIndex. This will probably make things worse and lead to more trouble.
2) OrderedCollection as provided is not really prepared and does not like the receiver being modified while iterating over it.
3) A better solution would be to collect the elements to remove first, and then remove them in a second step after the do:-processing. However, I understand that you cannot do this.
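A minimal sketch of that two-pass idea (shouldRemove: stands in for whatever condition your real code uses):
| toRemove |
toRemove := OrderedCollection new.
oc do: [:elem | (self shouldRemove: elem) ifTrue: [toRemove add: elem]].
oc removeAll: toRemove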
Possible solutions for you:
a) create a subclass of OrderedCollection, with redefined do:- and redefined removeXXX- and addXXX- methods. The latter ones need to tell the iterator (i.e. the do: method) about what is going on.
(being careful if the index being removed/added is before the current do-index...).
The notification could be implemented via a proceedable signal/exception, which is signalled in the modifying methods and caught in the do-loop code.
b) create a wrapper class as subclass of Seq.Collection, which has the original collection as instvar and forwards selected messages to its (wrapped) original collection.
Similar to above, redefine do: and the remove/add methods in this wrapper and do the appropriate actions (.e. again signalling what changed).
Be careful where to keep the state, if the code needs to be reentrant (i.e. if another one does a loop on the wrapped collection); then you would have to keep the state in the do-method and use signals to communicate the changes.
Then enumerate the collection with sth like:
(SaveLoopWrapper on:myCollection) do:[: ...
].
and make sure that the code which does the remove also sees the wrapper-instance; not myCollection, so that the add/remove are really caught.
If you cannot do the latter, another hack comes to mind: using MethodWrappers, you can change an individual instance's behavior and introduce hooks.
For example, create a subclass of OrderedCollection with those hooks in it; then you could do:
myColl changeClassTo: TheSubclassWithHooks
before iterating.
Then (protected by an ensure:) undo the wrapping after the loop.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39708432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to rename multiple folders in batch I have several folders with the following pattern as name:
123 - 1234 - string1 - string2
and would like to rename them all like
string1 - string2
using a batch file.
I was searching for something like:
@echo off
setlocal EnableDelayedExpansion
for /D %%f in (C:\Users\*) do (
set string=%%f
for /f "tokens=1,2,3,4 delims=-" %%a in (%%f) do (set part1=%%a)&(set part2=%%b)&(set part3=%%c)&(set part4=%%d)
set newstring=part3 - part4
rename "string" "newstring"
)
Unfortunately it isn't working and I've no idea what's wrong... Do you have better ideas?
A: You must enclose the variable names in exclamation points to cause them to be expanded, as in !part3!. This must be done every place you want the value of a variable. The exclamation points are used for delayed expansion within a FOR loop. You can use percents for normal expansion, but not within a loop that also sets the value.
Also, your inner FOR /F loop must use double quotes within the IN() clause. As currently written, it is attempting to open a file with the name of your folder.
But there is a simpler way in your case:
@echo off
for /d %%F in (c:\users\*-*-*-*) do for /f "tokens=2* delims=-" %%A in ("%%~nxF") do ren "%%F" "%%B"
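If you want to preview the renames first, a dry-run variant of the same command (assuming the folders live under C:\Users as in the question) could look like this; drop the echo once the output looks right:
@echo off
rem Print the rename commands instead of executing them.
for /d %%F in ("C:\Users\*-*-*-*") do (
    for /f "tokens=2* delims=-" %%A in ("%%~nxF") do echo ren "%%F" "%%B"
)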
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23838751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Having trouble accessing a MySQL database from Power BI: unable to connect, we encountered an error While trying to access the MySQL database from Power BI I am getting an error:
UNABLE TO CONNECT
We encountered an error while trying to connect.
Details: "An error happened while reading data from the provider: 'Could not load file or assembly 'System.EnterpriseServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Either a required impersonation level was not provided, or the provided impersonation level is invalid. (Exception from HRESULT: 0x80070542)'
A: In the data source settings, can you remove the existing SQL Server source connection and try again?
You can set the permissions when creating the data source.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71185418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How to sum a triple nested map with Java 8 Collectors I have this map
Map<LocalDate, Map<Integer, Map<EHourQuarter, Double>>>
EHourQuarter is an enum:
public enum EHourQuarter {
FIRST(0, 14, 15),
SECOND(15, 29, 30),
THIRD(30, 44, 45),
FOURTH(45, 59, 60);
private Integer start;
private Integer end;
private Integer value;//this is for UI purposes
}
With values like:
{2020-07-07 -> {0 -> {EHourQuarter.FIRST -> 5.5, EHourQuarter.SECOND -> 10.2, ...},
1 -> {EHourQuarter.FIRST -> 33.2, EHourQuarter.SECOND -> 30.1, ...}, ...},
2020-07-08 -> {0 -> {EHourQuarter.FIRST -> 5.5, EHourQuarter.SECOND -> 10.2, ...},
1 -> {EHourQuarter.FIRST -> 33.2, EHourQuarter.SECOND -> 30.1, ...}, ... }
It's a map of LocalDate of map of Integer (hour: from 0 to 23) of map of EHourQuarter of Double.
And I need to get a Map<Integer(hour), Map<EHourQuarter, Double>> containing the values accumulated over all dates, meaning that if each of the dates 2020-07-07 to 2020-07-10 (4 days) contained, in hour 0, a value of 5 for every EHourQuarter, then the result should show each quarter of hour 0 with a value of 20.
Additionally, if in doing that you could also help me with mapping the result to a List of DTOs like this one,
public class QuarterlyOccupancyDTO {
private Integer hour;
private Integer minute;//this is the value property of EHourQuarter
private Double occupancy;
}
I'd much appreciate it.
In the end, the list of DTOs should contain the sum over all dates, grouped by hour and minute (the value property of EHourQuarter).
This is an example.
NOTE: a map can contain multiple dates and the aim is to group/sum them all.
Given this map:
{
"2020-06-26":{
"0":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"1":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"2":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"3":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"4":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"5":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"6":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"7":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"8":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"9":{
"FOURTH":5.0,
"FIRST":5.0,
"THIRD":5.0,
"SECOND":5.0
},
"10":{
"FOURTH":5.0,
"FIRST":5.0,
"THIRD":5.0,
"SECOND":5.0
},
"11":{
"FOURTH":5.0,
"FIRST":5.0,
"THIRD":5.0,
"SECOND":5.0
},
"12":{
"FOURTH":5.0,
"FIRST":5.0,
"THIRD":5.0,
"SECOND":5.0
},
"13":{
"FOURTH":5.0,
"FIRST":5.0,
"THIRD":5.0,
"SECOND":5.0
},
"14":{
"FOURTH":5.0,
"FIRST":5.0,
"THIRD":5.0,
"SECOND":5.0
},
"15":{
"FOURTH":5.0,
"FIRST":5.0,
"THIRD":5.0,
"SECOND":5.0
},
"16":{
"FOURTH":5.0,
"FIRST":5.0,
"THIRD":5.0,
"SECOND":5.0
},
"17":{
"FOURTH":5.0,
"FIRST":5.0,
"THIRD":5.0,
"SECOND":5.0
},
"18":{
"FOURTH":0.0,
"FIRST":5.0,
"THIRD":0.0,
"SECOND":0.0
},
"19":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"20":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"21":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"22":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
},
"23":{
"FOURTH":0.0,
"FIRST":0.0,
"THIRD":0.0,
"SECOND":0.0
}
}
}
A list like this should be the answer:
[
{
"hour":0,
"minute":15,
"occupancy":0.0
},
{
"hour":0,
"minute":30,
"occupancy":0.0
},
{
"hour":0,
"minute":45,
"occupancy":0.0
},
{
"hour":0,
"minute":60,
"occupancy":0.0
},
{
"hour":1,
"minute":15,
"occupancy":0.0
},
{
"hour":1,
"minute":30,
"occupancy":0.0
},
{
"hour":1,
"minute":45,
"occupancy":0.0
},
{
"hour":1,
"minute":60,
"occupancy":0.0
},
{
"hour":2,
"minute":15,
"occupancy":0.0
},
{
"hour":2,
"minute":30,
"occupancy":0.0
},
{
"hour":2,
"minute":45,
"occupancy":0.0
},
{
"hour":2,
"minute":60,
"occupancy":0.0
},
{
"hour":3,
"minute":15,
"occupancy":0.0
},
{
"hour":3,
"minute":30,
"occupancy":0.0
},
{
"hour":3,
"minute":45,
"occupancy":0.0
},
{
"hour":3,
"minute":60,
"occupancy":0.0
},
{
"hour":4,
"minute":15,
"occupancy":0.0
},
{
"hour":4,
"minute":30,
"occupancy":0.0
},
{
"hour":4,
"minute":45,
"occupancy":0.0
},
{
"hour":4,
"minute":60,
"occupancy":0.0
},
{
"hour":5,
"minute":15,
"occupancy":0.0
},
{
"hour":5,
"minute":30,
"occupancy":0.0
},
{
"hour":5,
"minute":45,
"occupancy":0.0
},
{
"hour":5,
"minute":60,
"occupancy":0.0
},
{
"hour":6,
"minute":15,
"occupancy":0.0
},
{
"hour":6,
"minute":30,
"occupancy":0.0
},
{
"hour":6,
"minute":45,
"occupancy":0.0
},
{
"hour":6,
"minute":60,
"occupancy":0.0
},
{
"hour":7,
"minute":15,
"occupancy":0.0
},
{
"hour":7,
"minute":30,
"occupancy":0.0
},
{
"hour":7,
"minute":45,
"occupancy":0.0
},
{
"hour":7,
"minute":60,
"occupancy":0.0
},
{
"hour":8,
"minute":15,
"occupancy":0.0
},
{
"hour":8,
"minute":30,
"occupancy":0.0
},
{
"hour":8,
"minute":45,
"occupancy":0.0
},
{
"hour":8,
"minute":60,
"occupancy":0.0
},
{
"hour":9,
"minute":15,
"occupancy":5.0
},
{
"hour":9,
"minute":30,
"occupancy":5.0
},
{
"hour":9,
"minute":45,
"occupancy":5.0
},
{
"hour":9,
"minute":60,
"occupancy":5.0
},
{
"hour":10,
"minute":15,
"occupancy":5.0
},
{
"hour":10,
"minute":30,
"occupancy":5.0
},
{
"hour":10,
"minute":45,
"occupancy":5.0
},
{
"hour":10,
"minute":60,
"occupancy":5.0
},
{
"hour":11,
"minute":15,
"occupancy":5.0
},
{
"hour":11,
"minute":30,
"occupancy":5.0
},
{
"hour":11,
"minute":45,
"occupancy":5.0
},
{
"hour":11,
"minute":60,
"occupancy":5.0
},
{
"hour":12,
"minute":15,
"occupancy":5.0
},
{
"hour":12,
"minute":30,
"occupancy":5.0
},
{
"hour":12,
"minute":45,
"occupancy":5.0
},
{
"hour":12,
"minute":60,
"occupancy":5.0
},
{
"hour":13,
"minute":15,
"occupancy":5.0
},
{
"hour":13,
"minute":30,
"occupancy":5.0
},
{
"hour":13,
"minute":45,
"occupancy":5.0
},
{
"hour":13,
"minute":60,
"occupancy":5.0
},
{
"hour":14,
"minute":15,
"occupancy":5.0
},
{
"hour":14,
"minute":30,
"occupancy":5.0
},
{
"hour":14,
"minute":45,
"occupancy":5.0
},
{
"hour":14,
"minute":60,
"occupancy":5.0
},
{
"hour":15,
"minute":15,
"occupancy":5.0
},
{
"hour":15,
"minute":30,
"occupancy":5.0
},
{
"hour":15,
"minute":45,
"occupancy":5.0
},
{
"hour":15,
"minute":60,
"occupancy":5.0
},
{
"hour":16,
"minute":15,
"occupancy":5.0
},
{
"hour":16,
"minute":30,
"occupancy":5.0
},
{
"hour":16,
"minute":45,
"occupancy":5.0
},
{
"hour":16,
"minute":60,
"occupancy":5.0
},
{
"hour":17,
"minute":15,
"occupancy":5.0
},
{
"hour":17,
"minute":30,
"occupancy":5.0
},
{
"hour":17,
"minute":45,
"occupancy":5.0
},
{
"hour":17,
"minute":60,
"occupancy":5.0
},
{
"hour":18,
"minute":15,
"occupancy":5.0
},
{
"hour":18,
"minute":30,
"occupancy":0.0
},
{
"hour":18,
"minute":45,
"occupancy":0.0
},
{
"hour":18,
"minute":60,
"occupancy":0.0
},
{
"hour":19,
"minute":15,
"occupancy":0.0
},
{
"hour":19,
"minute":30,
"occupancy":0.0
},
{
"hour":19,
"minute":45,
"occupancy":0.0
},
{
"hour":19,
"minute":60,
"occupancy":0.0
},
{
"hour":20,
"minute":15,
"occupancy":0.0
},
{
"hour":20,
"minute":30,
"occupancy":0.0
},
{
"hour":20,
"minute":45,
"occupancy":0.0
},
{
"hour":20,
"minute":60,
"occupancy":0.0
},
{
"hour":21,
"minute":15,
"occupancy":0.0
},
{
"hour":21,
"minute":30,
"occupancy":0.0
},
{
"hour":21,
"minute":45,
"occupancy":0.0
},
{
"hour":21,
"minute":60,
"occupancy":0.0
},
{
"hour":22,
"minute":15,
"occupancy":0.0
},
{
"hour":22,
"minute":30,
"occupancy":0.0
},
{
"hour":22,
"minute":45,
"occupancy":0.0
},
{
"hour":22,
"minute":60,
"occupancy":0.0
},
{
"hour":23,
"minute":15,
"occupancy":0.0
},
{
"hour":23,
"minute":30,
"occupancy":0.0
},
{
"hour":23,
"minute":45,
"occupancy":0.0
},
{
"hour":23,
"minute":60,
"occupancy":0.0
}
]
A: First, using flatMap, create a Stream<SimpleEntry<SimpleEntry<Integer, EHourQuarter>, Double>>, then collect it with toMap into a Map<SimpleEntry<Integer, EHourQuarter>, Double>. Then map the entries into your DTO class.
List<QuarterlyOccupancyDTO> result = map.entrySet().stream()
.flatMap(d -> d.getValue().entrySet().stream()
.flatMap(h -> h.getValue().entrySet().stream().map(
e -> new SimpleEntry<>(new SimpleEntry<>(h.getKey(), e.getKey()), e.getValue()))))
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (a, b) -> a + b))
.entrySet()
.stream()
.map(m -> new QuarterlyOccupancyDTO(m.getKey().getKey(), m.getKey().getValue().getValue(), m.getValue()))
.collect(Collectors.toList());
Note: as you don't show all of your code, some parts may not work. Full code here
A: First group and sum the occupancy by hour/quarter pair
(Avoid nested flatMap, as it makes code less readable)
Map<Entry<Integer, Integer>, Double> groups
= map.entrySet()
.stream()
// Flatten the outer map, since you don't care about the days
.flatMap(de -> de.getValue().entrySet().stream())
// Flatten the map by combining hour key and quarter key into a single one
.flatMap(he -> he.getValue()
.entrySet()
.stream()
.map(qe -> new SimpleEntry<>(new SimpleEntry<>(he.getKey(), qe.getKey().getValue()), qe.getValue())))
// Sum the occupancy per each hour/quarter pair
.collect(groupingBy(Entry::getKey, summingDouble(Entry::getValue)));
Then map the grouped entries into your DTO objects
List<QuarterlyOccupancyDTO> list =
groups.entrySet()
.stream()
.map(e -> new QuarterlyOccupancyDTO(e.getKey().getKey(), e.getKey().getValue(), e.getValue()))
.collect(toList());
Another pure functional approach:
(It's pure, but seems less readable, IMO)
Collection<QuarterlyOccupancyDTO> dtos =
map.entrySet()
.stream()
// Flatten the outer map, since you don't care about the days
.flatMap(de -> de.getValue().entrySet()
.stream())
// Flatten the map by merging hour key and quarter key into a single one
.flatMap(he -> he.getValue()
.entrySet()
.stream()
.map(qe -> new SimpleEntry<>(new SimpleEntry<>(he.getKey(), qe.getKey().getValue()),
qe.getValue())))
// Map each entry into a DTO object and then reduce the occupancy per each hour/quarter pair
.collect(
groupingBy(Entry::getKey,
mapping(e -> new QuarterlyOccupancyDTO(e.getKey().getKey(), e.getKey().getValue(), e.getValue()),
reducing(new QuarterlyOccupancyDTO(0, 0, 0.0),
(a, b) -> new QuarterlyOccupancyDTO(b.getHour(), b.getMinute(), a.getOccupancy() + b.getOccupancy())))))
.values();
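For reference, the snippets above rely only on JDK classes; these are the imports they assume:
import java.util.AbstractMap.SimpleEntry;
import java.util.Map.Entry;
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.mapping;
import static java.util.stream.Collectors.reducing;
import static java.util.stream.Collectors.summingDouble;
import static java.util.stream.Collectors.toList;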
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62790358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: React - Get the ref present in child component in the parent component I'm not trying to do anything hacky using refs. I just need the ref to the element because the element is a canvas, and to draw on a canvas you need its ref.
class Parent extends Component {
clickDraw = () => {
// when button clicked, get the canvas context and draw on it.
// how?
}
render() {
return (
<div>
<button onClick={this.clickDraw}> Draw </button>
<Child />
</div>
);
}
}
class Child extends Component {
componentDidMount() {
const ctx = this.canvas.getContext('2d');
// draw something on the canvas once it's mounted
ctx.fillStyle = "#FF0000";
ctx.fillRect(0,0,150,75);
}
render() {
return (
<canvas width={300}
height={500}
ref={canvasRef => this.canvas = canvasRef}>
</canvas>
);
}
}
=====
Something I tried (which technically works but feels strange) is to define the <canvas> in the parent, so in its ref function, this refers to the parent component. Then I pass the <canvas> and this.canvas to the child as two separate props. I return the <canvas> (named this.props.canvasJSX) in the child's render function, and I use this.canvas (named this.props.canvasRef) to get its context to draw on it. See below:
class Parent extends Component {
clickDraw = () => {
// now I have access to the canvas context and can draw
const ctx = this.canvas.getContext('2d');
ctx.fillStyle = "#00FF00";
ctx.fillRect(0,0,275,250);
}
render() {
const canvas = (
<canvas width={300}
height={500}
ref={canvasRef => this.canvas = canvasRef}>
</canvas>
);
return (
<div>
<button onClick={this.clickDraw}> Draw </button>
<Child canvasJSX={canvas}
canvasRef={this.canvas} />
</div>
);
}
}
class Child extends Component {
componentDidMount() {
const ctx = this.props.canvasRef.getContext('2d');
// draw something on the canvas once it's mounted
ctx.fillStyle = "#FF0000";
ctx.fillRect(0,0,150,75);
}
render() {
return this.props.canvasJSX;
}
}
Is there a more standard way of achieving this?
A: You should actually be using the first approach and you can access the child elements refs in the parent
class Parent extends Component {
clickDraw = () => {
// when button clicked, get the canvas context and draw on it.
const ctx = this.childCanvas.canvas.getContext('2d');
ctx.fillStyle = "#00FF00";
ctx.fillRect(0,0,275,250);
}
render() {
return (
<div>
<button onClick={this.clickDraw}> Draw </button>
<Child ref={(ip) => this.childCanvas = ip}/>
</div>
);
}
}
class Child extends Component {
constructor() {
super();
this.canvas = null;
}
componentDidMount() {
const ctx = this.canvas.getContext('2d');
// draw something on the canvas once it's mounted
ctx.fillStyle = "#FF0000";
ctx.fillRect(0,0,150,75);
}
render() {
return (
<canvas width={300}
height={500}
ref={canvasRef => this.canvas = canvasRef}>
</canvas>
);
}
}
You can only use this approach if the child component is declared as a class.
A: If it cannot be avoided the suggested pattern extracted from the React docs would be:
import React, {Component} from 'react';
const Child = ({setRef}) => <input type="text" ref={setRef} />;
class Parent extends Component {
constructor(props) {
super(props);
this.setRef = this.setRef.bind(this);
}
componentDidMount() {
// Call function on Child dom element
this.childInput.focus();
}
setRef(input) {
this.childInput = input;
}
render() {
return <Child setRef={this.setRef} />
}
}
The Parent passes a function as a prop, bound to the Parent's this. React will call the Child's ref callback setRef, which attaches the childInput property to this, and this, as we already noted, points to the Parent.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43601440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Live update Pie Chart I want to create a very useful and easy way to live-update a pie chart. For example:
import javafx.application.Application;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.event.EventHandler;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.chart.PieChart;
import javafx.scene.control.Label;
import javafx.scene.input.MouseEvent;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
public class MainApp extends Application {
@Override
public void start(Stage stage) {
Scene scene = new Scene(new Group());
stage.setTitle("Imported Fruits");
stage.setWidth(500);
stage.setHeight(500);
ObservableList<PieChart.Data> pieChartData =
FXCollections.observableArrayList(
new PieChart.Data("Grapefruit", 13),
new PieChart.Data("Oranges", 25),
new PieChart.Data("Plums", 10),
new PieChart.Data("Pears", 22),
new PieChart.Data("Apples", 30));
final PieChart chart = new PieChart(pieChartData);
chart.setTitle("Imported Fruits");
final Label caption = new Label("");
caption.setTextFill(Color.DARKORANGE);
caption.setStyle("-fx-font: 24 arial;");
for (final PieChart.Data data : chart.getData()) {
data.getNode().addEventHandler(MouseEvent.MOUSE_PRESSED,
new EventHandler<MouseEvent>() {
@Override public void handle(MouseEvent e) {
caption.setTranslateX(e.getSceneX());
caption.setTranslateY(e.getSceneY());
caption.setText(String.valueOf(data.getPieValue())
+ "%");
}
});
}
((Group) scene.getRoot()).getChildren().addAll(chart, caption);
stage.setScene(scene);
stage.show();
}
public static void main(String[] args) {
launch(args);
}
}
When I display the chart, I want to call a Java method and update the chart like this:
PieChartUpdate(valueOne, valueTwo, valueThree);
Can you show me how to edit the code in order to make live updates easier to use?
A: As far as I can see, all the classes used to build a PieChart, like PieChart.Data and of course the ObservableList, are already designed so that they update the PieChart the moment something changes, be it the list itself or the values inside the Data objects. See the binding chapters for how this is done, but you don't need to write your own bindings for the PieChart.
The code below should do what you want. Use addData(String name, double value) to create a new Data object for your pie chart, or to update an existing one whose name matches the first parameter of the method. The PieChart will automatically play an animation when changes are made to the list (a new Data object added) or when a Data object is changed.
//adds new Data to the list
public void naiveAddData(String name, double value)
{
pieChartData.add(new Data(name, value));
}
//updates existing Data-Object if name matches
public void addData(String name, double value)
{
for(Data d : pieChartData)
{
if(d.getName().equals(name))
{
d.setPieValue(value);
return;
}
}
naiveAddData(name, value);
}
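One caveat worth adding (not part of the original answer): the ObservableList behind the chart must only be modified on the JavaFX Application Thread. If the new values arrive on a background thread, hand the update over first, for example:
import javafx.application.Platform;
// somewhere inside a background task:
Platform.runLater(() -> addData("Test", 7.5));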
A: Just in case someone feels extremely lost and isn't sure how to implement denhackl's answer, here is a working version of what he tried to explain.
import javafx.application.Application;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.event.EventHandler;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.chart.PieChart;
import javafx.scene.control.Label;
import javafx.scene.input.MouseEvent;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
public class LivePie extends Application {
ObservableList<PieChart.Data> pieChartData;
@Override
public void start(Stage stage) {
Scene scene = new Scene(new Group());
stage.setTitle("Imported Fruits");
stage.setWidth(500);
stage.setHeight(500);
this.pieChartData =
FXCollections.observableArrayList();
addData("Test", 5.1);
addData("Test2", 15.1);
addData("Test3", 3.1);
addData("Test1", 4.9);
addData("Test2", 15.1);
addData("Test3", 2.1);
addData("Test5", 20.1);
final PieChart chart = new PieChart(pieChartData);
chart.setTitle("Imported Fruits");
final Label caption = new Label("");
caption.setTextFill(Color.DARKORANGE);
caption.setStyle("-fx-font: 24 arial;");
((Group) scene.getRoot()).getChildren().addAll(chart, caption);
stage.setScene(scene);
stage.show();
}
public void naiveAddData(String name, double value)
{
pieChartData.add(new javafx.scene.chart.PieChart.Data(name, value));
}
//updates existing Data-Object if name matches
public void addData(String name, double value)
{
for(javafx.scene.chart.PieChart.Data d : pieChartData)
{
if(d.getName().equals(name))
{
d.setPieValue(value);
return;
}
}
naiveAddData(name, value);
}
public static void main(String[] args) {
launch(args);
}
}
Many thanks to the creator of the topic and the answers provided!
A: Here's a good introductory article on using properties and binding.
http://docs.oracle.com/javafx/2/binding/jfxpub-binding.htm
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21268973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: The error of std::sort I created some instances of a Node class and a vector of Node, then I pushed those instances into the vector,
and I created a function object "ListCompare" to sort the vector.
But I am getting the error "No matching function for call to object of type 'ListCompare'" in the sort function.
Why am I getting this error?
I wrote the code and the error below.
#include <iostream>
#include "cocos2d.h"
using namespace cocos2d;
class Node
{
public:
Node(int x,int y,CCPoint playerTilePos): m_tilePosX(x),m_tilePosY(y),m_costFromStart(0),m_costFromNextToGoal(0),m_playerTilePos(playerTilePos){};
Node(const Node &obj);
virtual ~Node(){};
int getPosX(void) const { return m_tilePosX; }
int getPosY(void) const { return m_tilePosY; }
int getTotalCost(void) const { return m_costFromStart + m_costFromNextToGoal; }
int getConstFromStart(void) const { return m_costFromStart; }
void setCostFromStart(int var) { m_costFromStart = var; }
int getCostFromNextToGoal(void)const { return ( std::abs((int)m_playerTilePos.x - m_tilePosX) + std::abs((int)m_playerTilePos.y - m_tilePosY) );}
void setCostNextToGoal(int var) { m_costFromNextToGoal = var; }
bool operator == (Node node)
{
return (m_tilePosX == node.m_tilePosX && m_tilePosY == node.m_tilePosY);
}
void operator = (Node node)
{
m_tilePosX = node.m_tilePosX;
m_tilePosY = node.m_tilePosY;
m_costFromStart = node.m_costFromStart;
m_costFromNextToGoal = node.m_costFromNextToGoal;
}
private:
int m_tilePosX;
int m_tilePosY;
int m_costFromStart;
int m_costFromNextToGoal;
CCPoint m_playerTilePos;
};
std::vector<Node>List;
class ListCompare{
public:
bool operator()(Node& pNode1,Node& pNode2)
{
return pNode1.getTotalCost() > pNode2.getTotalCost();
}
};
//--------------------------------------------------------------
// START
//--------------------------------------------------------------
void main()
{
List openList;
//initialize
CCPoint pos = ccp(100,100);
Node startNode(10,10,pos);
//cost is 1000
startNode.setCostNextToGoal(1000);
std::cout << startNode.getTotalCost << std::endl; //totalcost = 0 + 1000 = 1000
openList.pushBack(startNode);
Node nextNode(20,20,pos);
NextNode.setCostNextToGoal(2000);
std::cout << NextNode.getTotalCost << std::endl; //totalcost = 0 + 2000 = 2000
openList.pushBack(NextNode);
std::sort(openList.begin(),openList.end(),ListCompare());
}
--------------------------------The error----------------------------------
template<typename _Tp, typename _Compare>
inline const _Tp&
__median(const _Tp& __a, const _Tp& __b, const _Tp& __c, _Compare __comp)
{
// concept requirements
__glibcxx_function_requires(_BinaryFunctionConcept<_Compare,bool,_Tp,_Tp>)
if (__comp(__a, __b)) //the error part."No matching function for call to object of type "ListCompare" "
if (__comp(__b, __c))
return __b;
else if (__comp(__a, __c))
return __c;
else
return __a;
else if (__comp(__a, __c))
return __a;
else if (__comp(__b, __c))
return __c;
else
return __b;
}
A: In ListCompare::operator() you need to take the parameters as const references (and it is good practice to mark the operator itself const). std::sort's internal helpers, such as the __median function shown in the error output, receive the elements as const references, so a comparator whose operator() only accepts non-const Node& cannot be called there.
class ListCompare
{
public:
bool operator()(const Node& pNode1, const Node& pNode2) const
{
return pNode1.getTotalCost() > pNode2.getTotalCost();
}
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24584946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Issue with serializing JSON from a rest call Newbie developer here. I am trying to make a call to a public API. The API receives the name of a drink as a string and returns information and recipe for that name. The response from the API looks like this:
{
"drinks":[
{
"id": ...
"name": ...
"recipe": ...
"category": ...
"alcoholic": ...
... many other fields ...
},
{
...
}
...
]
}
I am only interested in name, recipe and category. I have a domain class for this purpose that looks like this
@Data
@NoArgsConstructor
@AllArgsConstructor
@JsonIgnoreProperties(ignoreUnknown = true)
public class Drink {
@JsonProperty("name")
private String name;
@JsonProperty("category")
private String category;
@JsonProperty("recipe")
private String recipe;
}
I also implemented a client to call the endpoint using restTemplate. Here is the call that client makes:
ResponseEntity<List<Drink>> response = restTemplate.exchange(
url,
HttpMethod.GET,
null,
new ParameterizedTypeReference<List<Drink>>() {
});
My goal is to call the API, get only the fields that I want from the response, and store them in a list of Drink. However when I try to run the app locally and make a call I am getting this error:
Caused by: org.springframework.http.converter.HttpMessageNotReadableException: JSON parse error: Cannot deserialize value of type `java.util.ArrayList<Drink>` from Object value (token `JsonToken.START_OBJECT`); nested exception is com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize value of type `java.util.ArrayList<Drink>` from Object value (token `JsonToken.START_OBJECT`)
When I use ResponseEntity<String> instead, it works but returns the whole json as a string, which does not seem like a good approach. How can I get this approach to work?
A: The problem is a mismatch between the JSON structure and the object structure. The object you deserialize into must correctly represent the JSON: it is an object with a drinks field, which is an array of objects (drinks in your case). A correct Java class would be:
public class Wrapper {
private List<Drink> drinks;
//getters and setters
@Override
public String toString() {
return "Wrapper{" +
"drinks=" + drinks +
'}';
}
}
Another option would be to write a custom deserializer, which can extract drinks from the tree before deserializing directly into a list.
Edit: Added toString() override for debugging purposes.
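With that wrapper in place, the original exchange call can deserialize into it directly. A minimal sketch, assuming the Wrapper above exposes a getDrinks() getter and reusing the url and restTemplate from the question:
ResponseEntity<Wrapper> response = restTemplate.exchange(
        url,
        HttpMethod.GET,
        null,
        Wrapper.class);
List<Drink> drinks = response.getBody().getDrinks();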
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72551705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Migrating from Eclipse to IntelliJ - Moving a project linked with EGit I have several projects linked to bitbucket repositories using EGit. If I use IntelliJ IDEA for development instead, is there anything I need to keep in mind when migrating the projects over, and keeping them on that repository and updated with IntelliJ's git integration?
A: Considering the IntelliJ Git integration requires a Git installation, make sure you have a recent version (1.8.3+) installed.
Otherwise, you should be able to use the same working tree/repo as the one used by Eclipse Egit. No special migration should be required.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18806932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: I want to use jQuery("#location").load or .get ("file.php") but only returns document.write output not html from php file I get that the jQuery .load function does indeed return the results from a load of a php file, but I don't want to convert every line of html inside my php file to be:
echo " ... ";
Inside my php file that I want to dynamically load into a div tag, I have a php include of another php file, 100+ lines of html code, and some javascript to create custom form content.
I want the resulting html from this php file to be loaded into the div tag.
Is jQuery load or get the right function, or do I need to explicitly write out the ajax code or something else?
Edit: Currently it only displays the document.write output from the JavaScript; none of the HTML code is displayed/returned from .load or .get.
File example:
inside index.php
jQuery("#mainContent").load("another.php");
also tried
jQuery("#mainContent").get("another.php");
and
$("#mainContent").load("another.php");
Here is the other file (snippet example) for another.php
<?php include 'navigation.php'; ?>
<b>Here comes the rain again...</b>
<script language="JavaScript">document.write("this part only is loaded");
</script>
A: "I want the resulting html from this php file to be loaded into the div tag"
Well, that's exactly what .load does:
$('#target').load('http://www.example.com/page.php');
Loads the content of http://www.example.com/page.php into the element named target.
If this isn't working, you need to be clearer on what the problem is.
A: So based on the discussions, it appears that it is NOT possible to dynamically load a PHP file that contains a mix of PHP, HTML, and JavaScript document.write calls into a div tag in another file.
What IS possible, or at least the only solution I have found thus far, is to include a PHP file with other PHP code, HTML, and even a script tag, but NO document.write in it.
To add the customizations I had in the document.write JavaScript, I instead used the following example to send the JavaScript variables to the PHP file, have it process them as PHP variables, and send back the full file. (Based on: Jquery load() and PHP variables )
$('#mainContent').load( "another.php",
{
'key1': js_array['key'],
'key2': js_variable_value
}
);
Then the parameters are posted to the file another.php and are accessible as:
<?php
$key1 = $_POST['key1'];
$key2 = $_POST['key2'];
?>
// some html code
...
<img alt="photograph" src="<?php echo $key1; ?>" />
Would love to know a better solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8964428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to get "Anchor" cell of SelectedCells in WPF DataGrid? I'm coding WPF Program that shows DataTable by binding to DataGrid's ItemsSource property.
And I set following DataGrid's properties in order to select multiple cells.
SelectionMode="Extended"
SelectionUnit="CellOrRowHeader"
Furthermore, I want to add a handler in code-behind that sets a value to the DataGrid's SelectedCells property and changes the "Anchor" cell (by Anchor cell I mean the one cell that is always contained in SelectedCells while selecting other cells with the Shift key held down).
But I don't know how to access the Anchor cell's information programmatically.
Please give me advice.
Thanks for your help.
A: I realise that this question is a little old now, however hopefully this information will help someone.
As far as I can tell, there is no public API available for DataGrid that will allow the reading of the selection anchor, nor the setting of the selection anchor without clearing the existing selection.
To work around this, I had a look at WPF's source code (this for .NET Framework 4.8, this for .NET (previously named .NET Core) at the time of writing) and identified that the selection anchor appears to be solely handled by a field of type Nullable<DataGridCellInfo> called _selectionAnchor.
For anyone who has worked with adjusting DataGrid selection programmatically, you just need to set _selectionAnchor after the operation for subsequent SHIFT selection operations to behave properly.
If you wish to clear the selection anchor having just programmatically deselected all cells, just set the selection anchor to null.
You can find more information on reflection here.
TLDR: Use reflection to access the private _selectionAnchor field
eg:
// These lines of code are not intended to do anything more than demonstrate usage of reflection to get and set this private field.
// In reality you'd be passing SetValue() something other than the value the field was already set to.
var fieldInfo = typeof(DataGrid).GetField("_selectionAnchor", BindingFlags.Instance | BindingFlags.NonPublic);
var anchorCellInfo = fieldInfo.GetValue(dataGridInstance) as DataGridCellInfo?;
fieldInfo.SetValue(dataGridInstance, anchorCellInfo);
NOTE: Using reflection to access private/internal functionality always carries the risk that the maintainer of the thing you're accessing could update those private/internal parts at any point in a future release without prior warning. That said, from what I can see, this particular functionality has been pretty much unchanged for years in WPF so you should hopefully be alright.
To be on the safe side, you could potentially add unit tests to verify that the private/internal functionality you're accessing continues to exist and behave in the same way you need it to, so that your application does not subtly break at runtime after updating a package on which you're using reflection to access private/internal functionality.
I should emphasise that it is unusual for a project's unit tests to test functionality in its dependencies, however this seems to me to be a somewhat unique case.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51847423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Example of a simple web project using ES6 with webpack Right now I am building frontend assets for my php cms projects using gulp. I split javascript into multiple files and concatenate them with a thing called rigger, but could easily do the same with gulp-concat.
I am trying to move to a webpack workflow with ES6 modules now. I have looked for a good example, but all the ones I can find are way too complicated. None of them use jQuery and simple jQuery plugins like lightbox, carousel and such.
Please suggest a good example of a small web project (preferably a github repo) that has simple javascript partials baked as ES6 modules.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49364248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Refactoring Javascript code to use Transduction I have a doubt related to transduction in functional programming
If I have an operation -
arr.map(A).map(B).filter(C).filter(D).map(E).map(F)
To remove the intermediate creation of arrays, can I rewrite it as -
import _ from "lodash"
const mapReducer = (mapperFn) => (combinerFn) => (acc, val) =>
combinerFn(acc, mapperFn(val));
const filterReducer = (predicateFn) => (combinerFn) => (acc, val) =>
predicateFn(val) ? combinerFn(acc, val) : acc;
const push = (arr, x) => { arr.push(x); return arr }
const transducer = _.flowRight(
mapReducer(_.flowRight(B, A)),
filterReducer(_.flowRight(D, C)),
mapReducer(_.flowRight(F, E))
)
arr.reduce(transducer(push), [])
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70640327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Read character datetimes without timezones I am trying to import in R a text file including datetimes. Times are stored in character format, without timezone information, but we know it is French time (Europe/Paris).
An issue arises for the days of the timezone change: e.g. there is a time change from 2018-10-28 03:00:00 CEST to 2018-10-28 02:00:00 CET, thus we have duplicates in our character format, and R cannot tell whether it is CEST or CET.
Consider the following example:
data_in <- "date,val
2018-10-28 01:30:00,25
2018-10-28 02:00:00,26
2018-10-28 02:30:00,27
2018-10-28 02:00:00,28
2018-10-28 02:30:00,29
2018-10-28 03:00:00,30"
library(readr)
data <- read_delim(data_in, ",", locale = locale(tz = "Europe/Paris"))
We end up having duplicates in our dates:
data$date
[1] "2018-10-28 01:30:00 CEST" "2018-10-28 02:00:00 CEST" "2018-10-28 02:30:00 CET" "2018-10-28 02:00:00 CEST"
[5] "2018-10-28 02:30:00 CET" "2018-10-28 03:00:00 CET"
Expected output would be:
data$date
[1] "2018-10-28 01:30:00 CEST" "2018-10-28 02:00:00 CEST" "2018-10-28 02:30:00 CEST" "2018-10-28 02:00:00 CET"
[5] "2018-10-28 02:30:00 CET" "2018-10-28 03:00:00 CET"
Any idea how to solve the issue (besides telling people to use UTC or ISO formats)? I guess the only way is to assume the dates are sorted, so we can tell the first ones are CEST.
A: If you are certain that your time is always-increasing, then you can look for an apparent decrease (of time-of-day) and manually insert the TZ offset to the string, then parse as usual. I added some logic to look for this decrease only around 2-3am so that if you have multiple days of data spanning midnight, you would not get a false-alarm.
data <- read.csv(text = data_in)
fakedate <- as.POSIXct(gsub("^[-0-9]+ ", "2000-01-01 ", data$date))
decreases <- cumany(grepl(" 0[23]:", data$date) & c(FALSE, diff(fakedate) < 0))
data$date <- paste(data$date, ifelse(decreases, "+0100", "+0200"))
data
# date val
# 1 2018-10-28 01:30:00 +0200 25
# 2 2018-10-28 02:00:00 +0200 26
# 3 2018-10-28 02:30:00 +0200 27
# 4 2018-10-28 02:00:00 +0100 28
# 5 2018-10-28 02:30:00 +0100 29
# 6 2018-10-28 03:00:00 +0100 30
as.POSIXct(data$date, format="%Y-%m-%d %H:%M:%S %z", tz="Europe/Paris")
# [1] "2018-10-28 01:30:00 CEST" "2018-10-28 02:00:00 CEST" "2018-10-28 02:30:00 CEST"
# [4] "2018-10-28 02:00:00 CET" "2018-10-28 02:30:00 CET" "2018-10-28 03:00:00 CET"
My use of "2000-01-01" was just some non-DST day so that we can parse the timestamp into POSIXt and calculate a diff on it. (If we didn't insert a date, we could still use as.POSIXct with a format, but if you ever ran this on one of the two DST days, you might get different results since as.POSIXct("01:02:03", format="%H:%M:%S") always assumes "today".)
This is obviously a bit fragile with its assumptions, but perhaps it'll be good enough for what you need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57497940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Laravel get with polymorphic relationship after save() I have something like that:
$order = new Order();
$order->name = "lorem"
//some polymorphic relationship (hasOne)
$order->user()->save(new User());
return $order;
the problem is that I want to get order returned with user. I tried something like that and it worked:
$order = new Order();
$order->name = "lorem"
//some polymorphic relationship (hasOne)
$order->user()->save(new User());
$order->user;
return $order;
But I feel that I am doing something wrong and that there should be a better way to achieve the same result, so how should I do that?
A: Laravel Eloquent has two methods for this, load and with; you may choose the one that fits best (in this case load).
You may use the following code:
$order = new Order();
$order->name = "lorem"
//some polymorphic relationship (hasOne)
$order->user()->save(new User());
return $order->load('user');
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54846835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Angular2 form reset with submitted values How can we reset an Angular2 form with the submitted data? When I use this.form.reset(), the form gets updated with the initial values; I want to reset it with the submitted values, to give the user a chance to modify some inputs without retyping all the values.
I'm using Angular 2.2.0.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/42012830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Jenkins is not properly cloning git code in freestyle Jobs In Jenkins, I have multiple freestyle jobs. In these jobs I am cloning the code from Git, as shown in the picture below:
source code management
But sometimes the code is not properly cloned from the repository.
Is there any way to solve this problem? Can anyone please help regarding this issue. It will be very helpful for me.
Thanks in Advance.
A: I faced the same issue before.
It looks like Git does something like smart cloning, which only clones the changes made to the repository.
The issue was resolved once I added the additional behaviour "Wipe out repository & force clone", like below:
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74427707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: twitter typeahead with minimized dataset My end goal is to create a type-ahead (TA) that can be used in a key-value setting for follow-on operations. I am having an issue with initialization of a basic one.
I have a table of over 50M addresses. I realize this is too much for a TA, so my initial thought is to split the dataset by zipcode, which reduces the amount to about 6K-10K possibilities. To that end, I have the following code:
var addresses = new Bloodhound({
datumTokenizer: function (datum) {
return Bloodhound.tokenizers.whitespace(datum.value);
},
queryTokenizer: Bloodhound.tokenizers.whitespace,
prefetch: {
url: "http://live2.offrs.com/buyerherodev/data/addressTA.cfm?zip="+zip,
cacheKey: "change here!"
}
});
$('.typeahead').typeahead(null, {
name: 'addresses',
source: addresses,
display: 'Address',
});
Of note, zip is passed in as a URL GET variable. In my example, I'm using 20001.
The JSON return format is: [{"ParcelID":"", "Address":""},{"ParcelID":"", "Address":""}]
I need the Address to be searchable, and the ParcelID stored in a value when selected for further operations.
The HTML:
<div id="menu">
<input tabindex="-1" spellcheck="false" autocomplete="off" readonly="" style="position: absolute; top: 20px; left: 0px; border-color: transparent; box-shadow: none; opacity: 1; background: rgb(255, 255, 255) none repeat scroll 0% 0%;" class="form-control typeahead tt-hint" type="text">
<input dir="auto" style="position: relative; vertical-align: top; background-color: transparent;" spellcheck="false" autocomplete="off" id="address" class="form-control typeahead tt-input" placeholder="Locate by Address..." type="text">
<pre style="position: absolute; visibility: hidden; white-space: pre; font-family: "Helvetica Neue",Helvetica,Arial,sans-serif; font-size: 16px; font-style: normal; font-variant: normal; font-weight: 400; word-spacing: 0px; letter-spacing: 0px; text-indent: 0px; text-rendering: optimizelegibility; text-transform: none;" aria-hidden="true"></pre>
<span style="position: absolute; top: 100%; left: 0px; z-index: 100; display: none; right: auto;" class="tt-dropdown-menu">
<div class="tt-dataset-states"></div>
</span>
<div id= "fulldata"></div>
</div>
Any help would be greatly appreciated.
Thanks Much
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31777057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What version of drupal is installed? I have a Drupal installation but no functioning website. How do I find what version of Drupal is installed through the command line?
I cannot login to the site, so the answers here:
How to find version of Drupal installed do not help. I am limited to using the command line with access to the file system.
A: If you have Drush installed, you can use drush status and it will print out the Drush version and the Drupal version you have installed.
If not, go into your Drupal root directory and take a look at the CHANGELOG.txt file. It will list the most recent upgrade (the current version) at the top.
Edit
If you do not have the CHANGELOG.txt file, and don't have Drush, you can look in the includes/bootstrap.inc file, as suggested in the third answer to the question you linked to.
This is defined as a global PHP variable in /includes/bootstrap.inc within D7. Example: define('VERSION', '7.14'); So use it like this...
if (VERSION >= 7.1) {
do_something();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35849635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to open an R data file in R window I have some data in R that I intend to analyze. However, the file is not displaying the data. Instead, it is only showing the name of a variable in the data. The following is the procedure I used to load the data and the output produced.
load("C:\Users\user\AppData\Local\Temp\1_29_923-Macdonell.RData")
data=load("C:\Users\user\AppData\Local\Temp\1_29_923-Macdonell.RData")
data
[1] "HeightFinger"
How do I get to view the data?
A: If you read ?load, it says that the return value of load is:
A character vector of the names of objects created, invisibly.
This suggests (but admittedly does not state) that the true work of the load command is by side-effect, in that it inserts the objects into an environment (defaulting to the current environment, often but not always .GlobalEnv). You should immediately have access to them from where you called load(...).
For instance, if I can guess at variables you might have in your rda file:
x
# Error: object 'x' not found
# either one of these on windows, NOT BOTH
dat = load("C:\\Users\\user\\AppData\\Local\\Temp\\1_29_923-Macdonell.RData")
dat = load("C:/Users/user/AppData/Local/Temp/1_29_923-Macdonell.RData")
dat
# [1] "x" "y" "z"
x
# [1] 42
If you want them to be not stored in the current environment, you can set up an environment to place them in. (I use parent=emptyenv(), but that's not strictly required. There are some minor ramifications to not including that option, none of them earth-shattering.)
myenv <- new.env(parent = emptyenv())
dat = load("C:/Users/user/AppData/Local/Temp/1_29_923-Macdonell.RData",
envir = myenv)
dat
# [1] "x" "y" "z"
x
# Error: object 'x' not found
ls(envir = myenv)
# [1] "x" "y" "z"
From here you can get at your data in any number of ways:
ls.str(myenv) # similar in concept to str() but for environments
# x : num 42
# y : num 1
# z : num 2
myenv$x
# [1] 42
get("x", envir = myenv)
# [1] 42
Side note:
You may have noticed that I used dat as my variable name instead of data. Though you are certainly allowed to use that, it can bite you if you use variable names that match existing variables or functions. For instance, all of your code will work just fine as long as you load your data. If, however, you run some of your code without pre-loading your objects into your data variable, you'll likely get an error such as:
mean(data$x)
# Error in data$x : object of type 'closure' is not subsettable
That error message is not immediately self-evident. The problem is that if not previously defined as in your question, then data here refers to the function data. In programming terms, a closure is a special type of function, so the error really should have said:
# Error in data$x : object of type 'function' is not subsettable
meaning that though dat can be subsetted and dat$x means something, you cannot use the $ subset method on a function itself. (You can't do mean$x when referring to the mean function, for example.) Regardless, even though this here-modified error message is less confusing, it is still not clearly telling you what/where the problem is located.
Because of this, many seasoned programmers will suggest you use unique variable names (perhaps more than just x :-). If you use my suggestion and name it dat instead, then the mistake of not preloading your data will instead error with:
mean(dat$x)
# Error in mean(dat$x) : object 'dat' not found
which is a lot more meaningful and easier to troubleshoot.
A: There are two ways to save R objects, and you've got them mixed up. In the first way, you save() any collection of objects in an environment to a file. When you load() that file, those objects are re-created with their original names in your current environment. This is how R saves and restores workspaces.
The second way stores (serializes) a single R object into a file with the saveRDS() function, and recreates it in your environment with the readRDS() function. If you don't assign the results of readRDS(), it'll just print to your screen and drift away.
Examples below:
# Make a simple dataframe
testdf <- data.frame(x = 1:10,
y = rnorm(10))
# Save it out using the save() function
savedir <- tempdir()
savepath <- file.path(savedir, "saved.Rdata")
save(testdf, file = savepath)
# Delete it
rm(testdf)
# Load without assigning - and it's back in your environment
load(savepath)
testdf
# But if you assign the results of load, you just get the name of the object
wrong <- load(savepath)
wrong
# Compare with the RDS:
rds_path <- file.path(savedir, "testdf.rds")
saveRDS(testdf, file = rds_path)
rm(testdf)
testdf <- readRDS(file = rds_path)
testdf
Why the two different approaches? The save()-environment approach is good for creating a checkpoint of your entire environment that you can restore later - that's what R uses it for - but that's about it. It's too easy for such an environment to get cluttered, and if an object you load() has the same name as an object in your current environment, it will overwrite that object:
testdf$z <- "blah"
load(savepath)
testdf # testdf$z is gone
The RDS method lets you assign the name on read, as you're looking to do here. It's a little more annoying to save multiple objects, sure, but you probably shouldn't be saving objects very often anyway - recreating objects from scratch is the best way to ensure that your R code does what you think it does.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43397892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can i make this a valid WSDL? I'm trying to generate code from this WSDL using the following command:
svcutil /noConfig /language:C# /out:ICatalog.cs http://schemas.opengis.net/csw/2.0.2/profiles/ebrim/1.0/wsdl/2.0/csw-ebrim-interface.wsdl
However svcutil cannot read it, and the xMethods WSDL validator says it's invalid.
What's invalid about it? How can i get svcutil to generate my interface code?
A: I'm not sure if the WSDL is valid or not but what you are trying to do won't work. Svcutil can generate code only for WSDL files of version 1.1. Yours is a WSDL version 2.0.
The WSDL validator you pointed in your question returns the following message when given your WSDL:
faultCode=INVALID_WSDL: Expected element 'http://schemas.xmlsoap.org/wsdl/:definitions'.
It parses the file as WSDL 1.1 and expects the root to be definitions, but that changed in WSDL 2.0 to description.
If you have a valid WSDL 2.0 file, Svcutil2 might be able to generate the code for you, but this tool isn't stable yet.
To validate the WSDL, I guess you could use the validator from The Woden project, which isn't stable either, but is basically the only one that I know that has moved passed the "toy project" status.
The idea is this: version 1.1 is still the "de facto" language when talking WSDL. Not many WS frameworks have adopted WSDL 2.0, so (just my 2 cents!) I think it's better to stick with "the standard" until support for WSDL 2.0 is fully matured.
A: try to add reference
References > Add Reference > COM > System.ServiceModel
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7417898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Get Embedded pdf File I have looked through the forums and found many seemingly related questions, but nothing has helped thus far. I want to be able to get select pdfs from various websites. Here is a snippet that I'm using successfully for most of the documents I'm interested in.
if (!String.IsNullOrEmpty(filePaths[1]))
{
var myRequest = (HttpWebRequest)WebRequest.Create(filePaths[1]);
myRequest.Method = "GET";
WebResponse myResponse = myRequest.GetResponse();
var sr = new StreamReader(myResponse.GetResponseStream(), Encoding.UTF8);
var fileBytes = sr.ReadToEnd();
using (var sw = new StreamWriter("<localfilepath/name>"))
{
sw.Write(fileBytes);
}
}
The problem comes when I try to get this document: http://www.azdor.gov/LinkClick.aspx?fileticket=r_I2VeNlcCQ%3d&tabid=265&mid=921
If I use the above code, I get a DotNetNuke error. I tried utilizing a WebClient as many other posts have suggested, but get the same error.
When I use this code:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.UserAgent = @"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0";
request.ContentType = "application/x-unknown";
request.Method = "GET";
using (WebResponse response = request.GetResponse())
{
using (Stream stream = response.GetResponseStream())
{
var sr2 = new StreamReader(stream, Encoding.UTF8);//.ASCII);
var srt = sr2.ReadToEnd();
var a = srt.Length;
using (var sw = new StreamWriter("WebDataTestdocs/testpdf.pdf"))
{
sw.Write(srt);
}
}
}
I get a file back, but it says it was corrupted. Also, utilizing UTF8 makes the file size bigger than the one I get when going to the site. If I use Encoding.ASCII, the file size is correct, but I still get the corrupted file error. I can see the English text in the file by opening it with notepad, so I'm not sure what is corrupted exactly.
Any help that can be offered would be greatly appreciated, I've been at this for quite a while!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13795184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Looking for the easiest way to extract an unknown substring from within a string. (terms separated by slashes) The initial string:
initString = '/digital/collection/music/bunch/of/other/stuff'
What I want: music
* Specifically, I want any term (will never include slashes) that would come between collection/ and /bunch
How I'm going about it:
if(initString.includes('/digital/collection/')){
let slicedString = initString.slice(19); //results in 'music/bunch/of/other/stuff'
let indexOfSlash = slicedString.indexOf('/'); //results, in this case, to 5
let desiredString = slicedString.slice(0, indexOfSlash); //results in 'music'
}
Question:
How the heck do I accomplish this in javascript in a more elegant way?
*
* I looked for something like an endIndexOf() that would replace my hardcoded .slice(19)
* lastIndexOf() isn't what I'm looking for, because I want the index at the end of the first instance of my substring /digital/collection/
* I'm looking to keep the number of lines down, and I couldn't find anything like a .getStringBetween('beginCutoff, endCutoff')
A: Your title says "index", but your example shows you wanting to return a string. If, in fact, you want to return the string, try this:
if(initString.includes('/digital/collection/')) {
var components = initString.split('/');
return components[3];
}
A: If the path is always the same, and the field you want is after the third /, then you can use split. Note that the leading / produces an empty first element, so the term you want is at index 3.
var initString = '/digital/collection/music/bunch/of/other/stuff';
var collection = initString.split("/")[3]; // fourth element ("music")
In the real world, you will want to check if the index exists first before using it.
var collections = initString.split("/");
var collection = "";
if (collections.length > 3) {
collection = collections[3];
}
A: You can use const desiredString = initString.slice(19, 24); if its always music you are looking for.
A: If you need to find the next path param that comes after '/digital/collection/' regardless where '/digital/collection/' lies in the path
* first use split to get a path array
* then use find to return the element whose 2 prior elements are digital and collection respectively
const initString = '/digital/collection/music/bunch/of/other/stuff'
const pathArray = initString.split('/')
const path = pathArray.length >= 3
? pathArray.find((elm, index)=> pathArray[index-2] === 'digital' && pathArray[index-1] === 'collection')
: 'path is too short'
console.log(path)
A: Think about this logically: the "end index" is just the "start index" plus the length of the substring, right? So... do that :)
const sub = '/digital/collection/';
const startIndex = initString.indexOf(sub);
if (startIndex >= 0) {
let desiredString = initString.substring(startIndex + sub.length);
}
That'll give you from the end of the substring to the end of the full string; you can always split at / and take index 0 to get just the first directory name form what remains.
A: You can also use regular expression for the purpose.
const initString = '/digital/collection/music/bunch/of/other/stuff';
const result = initString.match(/\/digital\/collection\/([a-zA-Z]+)\//)[1];
console.log(result);
The console output is:
music
A: If you know the initial string, and you have the part before the string you seek, then the following snippet returns you the string you seek. You need not calculate indices, or anything like that.
// getting the last index of searchString
// we should get: music
const initString = '/digital/collection/music/bunch/of/other/stuff'
const firstPart = '/digital/collection/'
const lastIndexOf = (s1, s2) => {
return s1.replace(s2, '').split('/')[0]
}
console.log(lastIndexOf(initString, firstPart))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58564271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Filter strings within <...> using a Windows 10 script In the file testxml.txt, there is a series of strings enclosed within, and separated by, < and >.
I want to filter out 2 values from those strings: text="any character" and bounds="[number][number]".
<?xml version='1.0' encoding='UTF-8' standalone='yes' ?><hierarchy rotation="0"><node index="0" text="" resource-id="" class="android.widget.FrameLayout" package="com.facebook.katana" content-desc="" checkable="false" checked="false" clickable="false" enabled="true" focusable="false" focused="false" scrollable="false" long-clickable="false" password="false" selected="false" bounds="[0,0][1080,2076]"><node index="0" text="Create account" resource-id="com.facebook.katana:id/(name removed)" class="android.widget.TextView" package="com.facebook.katana" content-desc="" checkable="false" checked="false" clickable="false" enabled="true" focusable="true" focused="false" scrollable="false" long-clickable="false" password="false" selected="false" bounds="[168,108][533,264]" /></node></hierarchy>UI hierchary dumped to: /dev/tty//
The expected output is 1 file, output.txt, which contains
text=""|bounds="[0,0][1080,2076]"
text="Create account"|bounds="[168,108][533,264]"
....
text=""|bounds="[][]"
A: Ok. First, you must split your file's single huge line into shorter lines via the solution posted at this answer, which creates the fields.txt file.
After that, run this Batch file:
@echo off
setlocal EnableDelayedExpansion
rem Create a "NewLine" value
for %%n in (^"^
%Do NOT remove this line%
^") do (
rem Process fields.txt file
(for /F "delims=" %%a in (fields.txt) do (
set "field=%%a"
rem Remove closing ">"
set "field=!field:~0,-1!"
rem Split field into lines ending in quote-space
for /F "delims=" %%b in (^"!field:" ="%%~n!^") do echo %%b
)) > lines.txt
)
rem Process lines
set "data="
for /F "skip=2 delims=" %%a in (lines.txt) do (
set "tag=%%a"
if "!tag:~0,5!" equ "<node" (
rem Start of new node: show data of previous one, if any
if defined data (
echo text=!text!^|bounds=!bounds!
)
rem Define the data in the same line of "<node"
set !tag:* =!
set "data=1"
) else if "!tag:~0,6!" equ "</node" (
rem Show data of last node
echo text=!text!^|bounds=!bounds!
) else (
rem Define data
2> NUL set %%a
)
)
Output sample:
text=""|bounds="[0,0][1080,2076]"
text=""|bounds="[0,0][1080,2076]"
text=""|bounds="[0,108][1080,2076]"
text=""|bounds="[0,108][1080,2076]"
text=""|bounds="[0,108][1080,276]"
text=""|bounds="[0,108][168,276]"
A: Assuming file.xml is this:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<hierarchy rotation="0">
<node index="0" text="" resource-id="" class="android.widget.FrameLayout" package="com.facebook.katana" content-desc="" checkable="false" checked="false" clickable="false" enabled="true" focusable="false" focused="false" scrollable="false" long-clickable="false" password="false" selected="false" bounds="[0,0][1080,2076]" />
<node index="0" text="Create account" resource-id="com.facebook.katana:id/(name removed)" class="android.widget.TextView" package="com.facebook.katana" content-desc="" checkable="false" checked="false" clickable="false" enabled="true" focusable="true" focused="false" scrollable="false" long-clickable="false" password="false" selected="false" bounds="[168,108][533,264]" />
</hierarchy>
Do
[xml]$xml = get-content file.xml
$xml.hierarchy.node | select text,bounds
text bounds
---- ------
[0,0][1080,2076]
Create account [168,108][533,264]
Dumping out every "node" in that other file:
Select-Xml //node file2.xml | % node | select text,bounds
A: @Aacini . This is the content of the file lines.txt
<?xml version='1.0' encoding='UTF-8' standalone='yes'
<hierarchy rotation="0
<node index="0
text="
resource-id="
class="android.widget.FrameLayout
package="com.facebook.katana
content-desc="
checkable="false
checked="false
clickable="false
enabled="true
focusable="false
focused="false
scrollable="false
long-clickable="false
password="false
selected="false
bounds="[0,0][1080,2076]
A: You can give this batch file a try:
@echo off
If exist output.txt Del output.txt
Powershell -C "$regex = [regex]'(text=\x22[\s\S]+?\x22)|(bounds=\x22[\s\S]+?\x22)';$data = GC "test.txt";ForEach ($m in $regex.Matches($data)) {$m.Value>>output.txt};"
If exist output.txt Start "" output.txt
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73318726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How to notify user when someone follows? How can we notify a user when he has a new follower?
create_notification is suppose to trigger a notification, but I get different errors with different attempts.
relationship.rb
class Relationship < ActiveRecord::Base
after_create :create_notification
has_many :notifications
belongs_to :follower, class_name: "User"
belongs_to :followed, class_name: "User"
validates :follower_id, presence: true
validates :followed_id, presence: true
private
def create_notification # How do we need to rewrite this?
notifications.create(
followed_id: followed_id,
follower_id: follower_id,
user: follower_id.user,
read: false
)
end
end
I used this tutorial as a guide for building my notifications and the Hartl tutorial for building the users and relationships.
relationships_controller.rb
class RelationshipsController < ApplicationController
before_action :logged_in_user
def create # The notification is triggered after create.
@user = User.find(params[:followed_id])
current_user.follow(@user)
respond_to do |format|
format.html { redirect_to @user }
format.js
end
end
def destroy
@user = Relationship.find(params[:id]).followed
current_user.unfollow(@user)
respond_to do |format|
format.html { redirect_to @user }
format.js
end
end
end
Sorry my user model is a bit messy.
class User < ActiveRecord::Base
acts_as_tagger
acts_as_taggable
has_many :notifications
has_many :activities
has_many :activity_likes
has_many :liked_activities, through: :activity_likes, class_name: 'Activity', source: :liked_activity
has_many :liked_comments, through: :comment_likes, class_name: 'Comment', source: :liked_comment
has_many :valuation_likes
has_many :habit_likes
has_many :goal_likes
has_many :quantified_likes
has_many :comment_likes
has_many :authentications
has_many :habits, dependent: :destroy
has_many :levels
has_many :combine_tags
has_many :valuations, dependent: :destroy
has_many :comments
has_many :goals, dependent: :destroy
has_many :quantifieds, dependent: :destroy
has_many :results, through: :quantifieds
has_many :notes
accepts_nested_attributes_for :habits, :reject_if => :all_blank, :allow_destroy => true
accepts_nested_attributes_for :notes, :reject_if => :all_blank, :allow_destroy => true
accepts_nested_attributes_for :quantifieds, :reject_if => :all_blank, :allow_destroy => true
accepts_nested_attributes_for :results, :reject_if => :all_blank, :allow_destroy => true
has_many :active_relationships, class_name: "Relationship",
foreign_key: "follower_id",
dependent: :destroy
has_many :passive_relationships, class_name: "Relationship",
foreign_key: "followed_id",
dependent: :destroy
has_many :following, through: :active_relationships, source: :followed
has_many :followers, through: :passive_relationships, source: :follower
attr_accessor :remember_token, :activation_token, :reset_token
before_save :downcase_email
before_create :create_activation_digest
validates :name, presence: true, length: { maximum: 50 }
VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i
validates :email, presence: true, length: { maximum: 255 },
format: { with: VALID_EMAIL_REGEX },
uniqueness: { case_sensitive: false }, unless: -> { from_omniauth? }
has_secure_password
validates :password, length: { minimum: 6 }
scope :publish, ->{ where(:conceal => false) }
User.tag_counts_on(:tags)
def count_mastered
@res = habits.reduce(0) do |count, habit|
habit.current_level == 6 ? count + 1 : count
end
end
def count_challenged
@challenged_count = habits.count - @res
end
def self.from_omniauth(auth)
where(provider: auth.provider, uid: auth.uid).first_or_initialize.tap do |user|
user.provider = auth.provider
user.image = auth.info.image
user.uid = auth.uid
user.name = auth.info.name
user.oauth_token = auth.credentials.token
user.oauth_expires_at = Time.at(auth.credentials.expires_at)
user.password = (0...8).map { (65 + rand(26)).chr }.join
user.email = (0...8).map { (65 + rand(26)).chr }.join+"@mailinator.com"
user.save!
end
end
def self.koala(auth)
access_token = auth['token']
facebook = Koala::Facebook::API.new(access_token)
facebook.get_object("me?fields=name,picture")
end
# Returns the hash digest of the given string.
def User.digest(string)
cost = ActiveModel::SecurePassword.min_cost ? BCrypt::Engine::MIN_COST :
BCrypt::Engine.cost
BCrypt::Password.create(string, cost: cost)
end
# Returns a random token.
def User.new_token
SecureRandom.urlsafe_base64
end
# Remembers a user in the database for use in persistent sessions.
def remember
self.remember_token = User.new_token
update_attribute(:remember_digest, User.digest(remember_token))
end
# Forgets a user. NOT SURE IF I REMOVE
def forget
update_attribute(:remember_digest, nil)
end
# Returns true if the given token matches the digest.
def authenticated?(attribute, token)
digest = send("#{attribute}_digest")
return false if digest.nil?
BCrypt::Password.new(digest).is_password?(token)
end
# Activates an account.
def activate
update_attribute(:activated, true)
update_attribute(:activated_at, Time.zone.now)
end
# Sends activation email.
def send_activation_email
UserMailer.account_activation(self).deliver_now
end
def create_reset_digest
self.reset_token = User.new_token
update_attribute(:reset_digest, User.digest(reset_token))
update_attribute(:reset_sent_at, Time.zone.now)
end
# Sends password reset email.
def send_password_reset_email
UserMailer.password_reset(self).deliver_now
end
# Returns true if a password reset has expired.
def password_reset_expired?
reset_sent_at < 2.hours.ago
end
def good_results_count
results.good_count
end
# Follows a user.
def follow(other_user)
active_relationships.create(followed_id: other_user.id)
end
# Unfollows a user.
def unfollow(other_user)
active_relationships.find_by(followed_id: other_user.id).destroy
end
# Returns true if the current user is following the other user.
def following?(other_user)
following.include?(other_user)
end
private
def from_omniauth?
provider && uid
end
# Converts email to all lower-case.
def downcase_email
self.email = email.downcase unless from_omniauth?
end
# Creates and assigns the activation token and digest.
def create_activation_digest
self.activation_token = User.new_token
self.activation_digest = User.digest(activation_token)
end
end
users_controller.rb
class UsersController < ApplicationController
before_action :logged_in_user, only: [:index, :edit, :update, :destroy,
:following, :followers]
before_action :correct_user, only: [:edit, :update]
before_action :admin_user, only: :destroy
def tag_cloud
@tags = User.tag_counts_on(:tags)
end
def index
@users = User.paginate(page: params[:page])
end
def show
@user = User.find(params[:id])
if current_user == @user
@habits = @user.habits
@valuations = @user.valuations
@accomplished_goals = @user.goals.accomplished
@unaccomplished_goals = @user.goals.unaccomplished
@averaged_quantifieds = @user.quantifieds.averaged
@instance_quantifieds = @user.quantifieds.instance
else
@habits = @user.habits.publish
@valuations = @user.valuations.publish
@accomplished_goals = @user.goals.accomplished.publish
@unaccomplished_goals = @user.goals.unaccomplished.publish
@averaged_quantifieds = @user.quantifieds.averaged.publish
@instance_quantifieds = @user.quantifieds.instance.publish
end
end
def new
@user = User.new
end
def create
@user = User.new(user_params)
if @user.save
@user.send_activation_email
flash[:info] = "Please check your email to activate your account."
redirect_to root_url
else
render 'new'
end
end
def edit
@user = User.find(params[:id])
end
def update
@user = User.find(params[:id])
if @user.update_attributes(user_params)
flash[:success] = "Profile updated"
redirect_to @user
else
render 'edit'
end
end
def destroy
User.find(params[:id]).destroy
flash[:success] = "User deleted"
redirect_to users_url
end
def following
@title = "Following"
@user = User.find(params[:id])
@users = @user.following.paginate(page: params[:page])
render 'show_follow'
end
def followers
@title = "Followers"
@user = User.find(params[:id])
@users = @user.followers.paginate(page: params[:page])
render 'show_follow'
end
private
def user_params
if params[:conceal] = true
params.require(:user).permit(:name, :email, :tag_list, :password, :conceal, :password_confirmation, valuations_attributes: [:name, :tag_list, :conceal], activities_attributes: [:conceal, :action, :trackable_id, :trackable_type])
else
params[:user][:valuations][:conceal] = false
params.require(:user).permit(:name, :image, :tag_list, :email, :password, :password_confirmation, valuations_attributes: [:name, :tag_list], activities_attributes: [:action, :trackable_id, :trackable_type])
end
end
# Before filters
# Confirms a logged-in user.
def logged_in_user
unless logged_in?
store_location
flash[:danger] = "Please log in."
redirect_to login_url
end
end
# Confirms the correct user.
def correct_user
@user = User.find(params[:id])
redirect_to(root_url) unless current_user?(@user)
end
# Confirms an admin user.
def admin_user
redirect_to(root_url) unless current_user.admin?
end
end
schema.rb
create_table "notifications", force: true do |t|
t.integer "habit_id"
t.integer "quantified_id"
t.integer "valuation_id"
t.integer "goal_id"
t.integer "comment_id"
t.integer "user_id"
t.integer "habit_like_id"
t.integer "quantified_like_id"
t.integer "valuation_like_id"
t.integer "goal_like_id"
t.integer "comment_like_id"
t.integer "likes"
t.integer "followed_id"
t.integer "follower_id"
t.boolean "read"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
create_table "relationships", force: true do |t|
t.integer "follower_id"
t.integer "followed_id"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
Please let me know if you need further explanation or code :-] Keep the dream alive!
A: You could simply extend your User#follow method to something like this:
# Follows a user.
def follow(other_user)
active_relationships.create(followed_id: other_user.id)
UserMailer.new_follower(other_user).deliver_now
end
Then add a new_follower(user) method to your UserMailer, in a similar way to the already existing password_reset method.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30906226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to assign variable names to each for loop item - Powershell I would like to be able to iterate through an array of monitors for a computer (pulled through WMI) and then assign each one its own value, so that I can output the results to a csv, with each monitor's mfg, date, and sn in its own column.
For example:
Monitor 1 MFG   Monitor 1 SN   Monitor 2 MFG   Monitor 2 SN
Dell            12345          HP              05156
I know how to output to a csv already; I am just having problems assigning each to its own variable to use.
$Monitors = Get-WmiObject -Namespace root\wmi -Class wmiMonitorID -ComputerName "PC"
$Monitors | % {$i=0} {"$_";$i++}
This gets me each instance name of each monitor on the machine, but how would I go about grabbing the Serial etc for each ?
I tried using $_.SerialNumberID to pull it out, but am unable to figure it out
A: Well... here is how you would do it. It looks like some of the data in WMI needs to be converted to be readable.
$Monitors = Get-WmiObject -Namespace root\wmi -Class wmiMonitorID
$obj = Foreach ($Monitor in $Monitors)
{
[pscustomobject] @{
'MonitorMFG' = [char[]]$Monitor.ManufacturerName -join ''
'MonitorSerial' = [char[]]$monitor.SerialNumberID -join ''
'MonitorMFGDate' = $Monitor.YearOfManufacture
}
}
$obj
$obj | export-csv
Edit: here is an alternative that more closely matches the formatting you are wanting, though personally I think the approach above is better.
$Monitors = Get-WmiObject -Namespace root\wmi -Class wmiMonitorID
$i = 1
$obj = new-object -type psobject
Foreach ($Monitor in $Monitors)
{
$obj | add-member -Name ("Monitor$i" +"MFG") -Value ([char[]]$Monitor.ManufacturerName -join '') -MemberType NoteProperty -Force
$obj | add-member -Name ("Monitor$i" + "Serial") -Value ([char[]]$monitor.SerialNumberID -join '') -MemberType NoteProperty -Force
$obj | add-member -Name ("Monitor$i" + "MFGDate") -Value ($Monitor.YearOfManufacture) -MemberType NoteProperty -Force
$i++
}
$obj
$obj | export-csv
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26259714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: iPhone 5 [[UIScreen mainScreen] bounds].size.height
Possible Duplicate:
How to develop or migrate apps for iPhone 5 screen resolution?
How to deal with iPhone 5 screen size?
What to check in order to support the iPhone 5's longer screen?
With the new screen size on iPhone 5, anyone know or guess what [[UIScreen mainScreen] bounds].size.height will return in an app running on iPhone 5? On all other iPhones, including Retina, it returns 480 - I assume it will now return 568 ?
Or, will it run in the "centered, letterboxed" mode unless some other configuration is made in the app to "allow" the full iPhone 5 resolution?
A: Yes, the height is 568. If you want to remove "letterboxing", please see this post:
iPhone 5 letterboxing / screen resize
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12398521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Using Nginx with Wordpress to perform a non-template URLs redirections The Case
We have 2 websites, each website has a different URL structure:
*
*www.domain1.com/category/sub-category/post-id/slug
*www.domain2.net/category/sub-category/yyyy/mm/dd/slug
Each website has a separate CMS (Java-based), a separate database (Postgres), and separate storage (Amazon S3).
The Plan
We are going to have one multisite CMS (WordPress) that serves the 2 previous websites, with one database (MySQL), and one storage.
All of that has been done and we are in the final stages: migration has been done from the old database to one new WordPress database, the Amazon buckets have been merged into one new bucket, and the multisite functionality works fine.
The Problem
We want to adopt the 2nd website's URL structure and use it for both sites, so all the new URLs will have this structure:
*
*www.domain1.com/category/sub-category/yyyy/mm/dd/slug
*www.domain2.net/category/sub-category/yyyy/mm/dd/slug
Our concern is about the old URLs of the 1st website, which are already shared on social media and used by other websites. How do we redirect the old-structure URLs to the new one? There's no way to get the yyyy, mm, or dd from the old URLs; we need to query that from the database using the legacy id of the post.
An Idea
I had an idea that we can create a table (could be an Amazon DynamoDB table or Redis) with 2 columns: legacy-id and new-url, so I will pre-populate the new URLs based on the legacy IDs to prevent overloading the database with many queries to get the new URLs.
BUT, is there a way to link Nginx with this table, so Nginx will extract the post-id from the old URL and use that table to get the new-url based on the extracted post-id.
Any help on this guys?
A: I think dynamic rewrite based on the data in Redis is possible using embedded Lua.
Check these two links: Lua module, Lua Redis driver
Another solution is to write the redirect logic using the language of your choice with some web-server for that language and then use proxy_pass nginx directive to proxy your requests to that web-server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54724254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Spark re partition logic for databricks scalable cluster Databricks spark cluster can auto-scale as per load.
I am reading gzip files in Spark and repartitioning the RDD to get parallelism, because a gzip file is read on a single core and generates an RDD with one partition.
As per this post ideal number of partitions is the number of cores in the cluster which I can set during repartitioning but in case of auto-scale cluster this number will vary as per the state of cluster and how many executors are there in it.
So, What should be the partitioning logic for an auto scalable spark cluster?
EDIT 1:
The folder is ever growing and gzip files keep arriving in it periodically; each gzip file is ~10GB (~150GB uncompressed). I know that multiple files can be read in parallel. But for a single very large file, Databricks may auto-scale the cluster, yet even though the cores in the cluster have increased after scaling, my dataframe would still have fewer partitions (based on the previous state of the cluster, when it had fewer cores).
Even though my cluster will auto scale(scale out), the processing will be limited to number of partitions which I do by
num_partitions = <cluster cores before scaling>
df.repartition(num_partitions)
A: For a splittable file the partitions will mostly be created automatically, depending on the cores, whether the operation is narrow or wide, the file size, etc. Partitions can also be controlled programmatically using coalesce and repartition. But for a gzip/un-splittable file there will be just 1 task per file, and you can run as many of those in parallel as there are cores available (like you said).
For a dynamic cluster, one option you have is to point your job at a folder/bucket containing a large number of gzip files. Say you have 1000 files to process and 10 cores: then 10 will run in parallel. When your cluster dynamically increases to 20, then 20 will run in parallel. This happens automatically and you needn't code for it. The only catch is that you can't scale when there are fewer files than available cores. This is a known deficiency of un-splittable files.
The other option would be to define the cluster size for the job based on the number and size of the files available. You can find an empirical formula based on the historical run time. Say you have 5 large files and 10 small files (half the size of the large ones); then you may assign 20 cores (10 + 2*5) to use the cluster resources efficiently.
A: A standard gzip file is not splittable, so Spark will handle the gzip file with just a single core, a single task, no matter what your settings are [As of Spark 2.4.5/3.0]. Hopefully the world will move to bzip2 or other splittable compression techniques when creating large files.
If you directly write the data out to Parquet, you will end up with a single, splittable parquet file. This will be written out by a single core.
If stuck with the default gzip codec, would be better to re-partition after the read, and write out multiple parquet files.
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, IntegerType
schema = StructType([
StructField("a",IntegerType(),True),
StructField("b",DoubleType(),True),
StructField("c",DoubleType(),True)])
input_path = "s3a://mybucket/2G_large_csv_gzipped/onebillionrows.csv.gz"
spark.conf.set('spark.sql.files.maxPartitionBytes', 1000 * (1024 ** 2))
df_two = spark.read.format("csv").schema(schema).load(input_path)
df_two.repartition(32).write.format("parquet").mode("overwrite").save("dbfs:/tmp/spark_gunzip_default_remove_me")
I very recently found, and initial tests are very promising, a splittable gzip codec. This codec actually reads the file multiple times, and each task scans ahead by some number of bytes (w/o decompressing) then starts the decompression.
The benefits of this pay off when it comes time to write the dataframe out as a parquet file. You will end up with multiple files, all written in parallel, for greater throughput and shorter wall clock time (your CPU hours will be higher).
Reference: https://github.com/nielsbasjes/splittablegzip/blob/master/README-Spark.md
My test case:
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, IntegerType
schema = StructType([
StructField("a",IntegerType(),True),
StructField("b",DoubleType(),True),
StructField("c",DoubleType(),True)])
input_path = "s3a://mybucket/2G_large_csv_gzipped/onebillionrows.csv.gz"
spark.conf.set('spark.sql.files.maxPartitionBytes', 1000 * (1024 ** 2))
df_gz_codec = (spark.read
.option('io.compression.codecs', 'nl.basjes.hadoop.io.compress.SplittableGzipCodec')
.schema(schema)
.csv(input_path)
)
df_gz_codec.write.format("parquet").save("dbfs:/tmp/gunzip_to_parquet_remove_me")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/59465708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Android - Different colors in different devices I'm developing an application for Android, and the colors of drawables change when I'm testing the app on a Samsung Galaxy S2, while when I test the application on a Samsung Galaxy Europe or on the emulator, the real colors of the drawable appear. For example, a white and black gradient looks different on the Samsung S2.
Why is this happening? Can I do something in the application to show the real colors on the Samsung S2?
A: By default, when we use transparent images in our app, the transparent part will mostly show its parent.
For example, if we use transparent images for menus in Android,
then by default the theme of the device/background will be the parent view for this image.
In my opinion it is better to change the image and try that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9992818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: how to get original string value from encrypted string? I am encrypting a password using DigestUtils.sha256Hex("password"). I get the encrypted password as 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
I want the original password string back from the encrypted one. How shall I get it?
Please help me.
Thanks
A: The whole point of SHA-256 hashing is that you cannot decrypt it. When doing a login check, you should hash the user-entered password and match it with the one you've stored in your data layer.
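A minimal sketch of such a check, using the same Commons Codec DigestUtils as in the question (the class and field names here are made up for illustration):
import org.apache.commons.codec.digest.DigestUtils;
public class LoginCheck {
    // the hex hash stored when the user registered
    private final String storedPasswordHash;
    public LoginCheck(String storedPasswordHash) {
        this.storedPasswordHash = storedPasswordHash;
    }
    // hash the entered password and compare it with the stored hash
    public boolean matches(String enteredPassword) {
        return DigestUtils.sha256Hex(enteredPassword).equals(storedPasswordHash);
    }
    public static void main(String[] args) {
        LoginCheck check = new LoginCheck(DigestUtils.sha256Hex("password"));
        System.out.println(check.matches("password")); // true
        System.out.println(check.matches("wrong"));    // false
    }
}
For real password storage a salted, deliberately slow hash (bcrypt, PBKDF2 or similar) is generally preferable to plain SHA-256, but the compare-the-hashes idea stays the same.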
A: DigestUtils.sha256Hex is not encryption, it is a hash. The main property of a hash is that it is irreversible.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26252976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Place Graph on Kivy as a Widget with Button I want to put a graph that self updates. So when I press a Kivy button on my GUI script written with Python and it's supposed to take me to a Screen. The Screen has a graph on a part of the Screen and buttons on the rest of the screen. However, I add the graph and it opens in a separate window. How do I do that?
I used the matplotlib library for plotting the graph.
I used plt.show() and it opens a different window.
A: For some strange reason you need to explicitly tell matplotlib not to steal the focus of your Window (maybe because of the heavy usage of MPL in IPython notebooks?):
import matplotlib
matplotlib.use('Agg')
After those lines the window should stop stealing focus. Updating the plot, on the other hand, is a quite different matter, and that's why garden.graph and garden.matplotlib exist.
If you intend to just draw some simple plots, just use the Graph. If you really need MPL and/or you want to do something complex, use the MPL backend (garden.matplotlib).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44477698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why multiple let bindings are possible inside a method in F# A colleague reviewed my code and asked why I was using the following pattern:
type MyType() =
member this.TestMethod() =
let a = 1
// do some work
let a = 2 // another value of a from the work result
()
I told him that that was a bad pattern and my intent was to use let mutable a = .... But then he asked why it is possible at all in an F# class to have such sequential bindings, while it is not possible in a module or an .fsx file. In effect, we are mutating an immutable value!
I answered that in a module or an .fsx file a let binding becomes a static method, therefore it is straightforward that the bindings with the same name will conflict in the same way as two class properties with the same name do. But I have no idea why this is possible inside a method!
In C#, I have found useful to scope variables, especially in unit tests when I want to make up some different values for test cases by just copy-pasting pieces of code:
{
var a = 1;
Assert.IsTrue(a < 2);
}
{
var a = 42;
Assert.AreEqual(42, a);
}
In F#, we could not only repeat the same let bindings, but change an immutable one to a mutable one and then mutate it later the usual way:
type MyType() =
member this.TestMethod() =
let a = 1
// do some work
let a = 2 // another value of a from the work result
let mutable a = 3
a <- 4
()
Why are we allowed to repeat let bindings in F# methods? How should I explain this to a person who is new to F# and asks "What is an immutable variable and why am I mutating it?"
Personally, I am interested in what design choices and trade-offs were made to allow this. I am comfortable with the cases where the different scopes are easily detectable, e.g. when a variable is in the constructor and we then redefine it in the body, or when we define a new binding inside a for/while loop. But two consecutive bindings at the same level are somewhat counter-intuitive. It feels like I should mentally add an "in" to the end of each line, as in the verbose syntax, to explain the scopes, so that those virtual "in"s are similar to C#'s {}.
A: The F# code:
let a = 1
let a = 2
let a = 3
a + 1
is just a condensed (aka "light") version of this:
let a = 1 in
let a = 2 in
let a = 3 in
a + 1
The (sort of) C# equivalent would be something like this:
var a = 1;
{
var a = 2;
{
var a = 3;
return a + 1;
}
}
In the context of having nested scopes, C# doesn't allow shadowing of names,
but F# and almost all other languages do.
In fact, according to the font of all knowledge
C# is unusual in being one of the few languages that explicitly disallow shadowing in this situation.
This might be because C# is a relatively new language. OTOH F# copies much of its design from OCaml,
which in turn is based on older languages, so in some sense, the design of F# is "older" than C#.
A: Tomas Petricek already explains that this isn't mutation, but shadowing. One follow-up question is: what is it good for?
It isn't a feature I use every day, but sometimes I find it useful, particularly when doing property-based testing. Here's an example I recently did as part of doing the Tennis Kata with FsCheck:
[<Property>]
let ``Given player has thirty points, when player wins, then the new score is correct``
(points : PointsData)
(player : Player) =
let points = points |> pointTo player Thirty
let actual = points |> scorePoints player
let expected =
Forty {
Player = player
OtherPlayerPoint = (points |> pointFor (other player)) }
expected =? actual
Here, I'm shadowing points with a new value.
The reason is that I want to explicitly test the case where a player already has Thirty points and wins again, no matter how many points the other player has.
PointsData is defined like this:
type Point = Love | Fifteen | Thirty
type PointsData = { PlayerOnePoint : Point; PlayerTwoPoint : Point }
but FsCheck is going to give me all sorts of values of PointsData, not only values where one of the players have Thirty.
This means that the points arriving as the function argument don't really represent the test case I'm interested in. To prevent accidental usage, I shadow the value in the test, while still using the input as a seed upon which I can build the actual test case value.
Shadowing can often be useful in cases like that.
A: I think it is important to explain that there is a different thing going on in F# than in C#. Variable shadowing is not replacing a symbol - it is simply defining a new symbol that happens to have the same name as an existing symbol, which makes it impossible to access the old one.
When I explain this to people, I usually use an example like this - let's say we have a piece of code that does some calculation using mutation in C#:
var message = "Hello";
message = message + " world";
message = message + "!";
This is nice because we can gradually build the message. Now, how can we do this without mutation? The trick is to define new variable at each step:
let message1 = "Hello";
let message2 = message1 + " world";
let message3 = message2 + "!";
This works - but we do not really need the temporary states that we defined during the construction process. So, in F# you can use variable shadowing to hide the states you no longer care about:
let message = "Hello";
let message = message + " world";
let message = message + "!";
Now, this means exactly the same thing - and you can nicely show this to people using Visual F# Power Tools, which highlight all occurrences of a symbol - so you'll see that the symbols are different (even though they have the same name).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32051233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Selenium Python - HTTP ERROR 405 - Method Not Allowed I am developing a Selenium Python script that submits information thru an internal system forms.
After inputting all the information in the forms page and clicking on the submit button, I get an error message:
HTTP ERROR 405 - Method Not Allowed message
Sometimes it allows me to submit 1 or 2 times; after that, the above message is shown.
I know some guys who use Selenium in VBA at the same system and it works perfectly....
Any clue how can I solve that?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74118839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: URL Rewriting Error With Wordpress I'm using WordPress, but I have a problem with URL rewriting. I'm using this scheme:
http://www.domain.com/post-title/
http://www.domain.com/another-post-title/
There isn't a problem with this; it works. But I have a directory (zencart) in the root. When I browse to this URL:
http://www.domain.com/zencart/
I'm forwarded to WordPress' 404 error page (Nothing Found For Zencart).
(Because there isn't any post with the title zencart; zencart is the name of a directory.)
This is my .htaccess file :
(also you can look here for more readability)
(I'm using W3 Total Cache plugin)
# BEGIN W3TC Browser Cache
<IfModule mod_deflate.c>
<IfModule mod_setenvif.c>
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html
</IfModule>
<IfModule mod_headers.c>
Header append Vary User-Agent env=!dont-vary
</IfModule>
AddOutputFilterByType DEFLATE text/css application/x-javascript text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon
</IfModule>
<FilesMatch "\.(css|js)$">
FileETag None
<IfModule mod_headers.c>
Header set X-Powered-By "W3 Total Cache/0.9.1.3"
</IfModule>
</FilesMatch>
<FilesMatch "\.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$">
FileETag None
<IfModule mod_headers.c>
Header set X-Powered-By "W3 Total Cache/0.9.1.3"
</IfModule>
</FilesMatch>
<FilesMatch "\.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|swf|tar|tif|tiff|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$">
FileETag None
<IfModule mod_headers.c>
Header set X-Powered-By "W3 Total Cache/0.9.1.3"
</IfModule>
</FilesMatch>
# END W3TC Browser Cache
# BEGIN W3TC Page Cache
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_USER_AGENT} (2\.0\ mmp|240x320|alcatel|amoi|asus|au\-mic|audiovox|avantgo|benq|bird|blackberry|blazer|cdm|cellphone|danger|ddipocket|docomo|dopod|elaine/3\.0|ericsson|eudoraweb|fly|haier|hiptop|hp\.ipaq|htc|huawei|i\-mobile|iemobile|j\-phone|kddi|konka|kwc|kyocera/wx310k|lenovo|lg|lg/u990|lge\ vx|midp|midp\-2\.0|mmef20|mmp|mobilephone|mot\-v|motorola|netfront|newgen|newt|nintendo\ ds|nintendo\ wii|nitro|nokia|novarra|o2|openweb|opera\ mobi|opera\.mobi|palm|panasonic|pantech|pdxgw|pg|philips|phone|playstation\ portable|portalmmm|ppc|proxinet|psp|pt|qtek|sagem|samsung|sanyo|sch|sec|sendo|sgh|sharp|sharp\-tq\-gx10|small|smartphone|softbank|sonyericsson|sph|symbian|symbian\ os|symbianos|toshiba|treo|ts21i\-10|up\.browser|up\.link|uts|vertu|vodafone|wap|willcome|windows\ ce|windows\.ce|winwap|xda|zte) [NC]
RewriteRule .* - [E=W3TC_UA:_low]
RewriteCond %{HTTP_USER_AGENT} (acer\ s100|android|archos5|blackberry9500|blackberry9530|blackberry9550|cupcake|docomo\ ht\-03a|dream|htc\ hero|htc\ magic|htc_dream|htc_magic|incognito|ipad|iphone|ipod|lg\-gw620|liquid\ build|maemo|mot\-mb200|mot\-mb300|nexus\ one|opera\ mini|samsung\-s8000|series60.*webkit|series60/5\.0|sonyericssone10|sonyericssonu20|sonyericssonx10|t\-mobile\ mytouch\ 3g|t\-mobile\ opal|tattoo|webmate|webos) [NC]
RewriteRule .* - [E=W3TC_UA:_high]
RewriteCond %{HTTPS} =on
RewriteRule .* - [E=W3TC_SSL:_ssl]
RewriteCond %{SERVER_PORT} =443
RewriteRule .* - [E=W3TC_SSL:_ssl]
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteRule .* - [E=W3TC_ENC:.gzip]
RewriteCond %{REQUEST_METHOD} !=POST
RewriteCond %{QUERY_STRING} =""
RewriteCond %{REQUEST_URI} \/$
RewriteCond %{REQUEST_URI} !(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php) [NC,OR]
RewriteCond %{REQUEST_URI} (wp-comments-popup\.php|wp-links-opml\.php|wp-locations\.php) [NC]
RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|wordpress_\[a-f0-9\]\+|wordpress_logged_in) [NC]
RewriteCond "/home/btk1sk/public_html/wp-content/w3tc/pgcache/$1/_index%{ENV:W3TC_UA}%{ENV:W3TC_SSL}.html%{ENV:W3TC_ENC}" -f
RewriteRule (.*) "/wp-content/w3tc/pgcache/$1/_index%{ENV:W3TC_UA}%{ENV:W3TC_SSL}.html%{ENV:W3TC_ENC}" [L]
</IfModule>
# END W3TC Page Cache
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
A: I think if you change this line:
RewriteCond %{REQUEST_URI} !(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php) [NC,OR]
to this:
RewriteCond %{REQUEST_URI} !(\/zencart\/|\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php) [NC,OR]
you'll be able to access the contents of your 'zencart' directory.
A: We solved that by creating a WordPress page using the same "url". But as you said, it's not a dynamic solution. The fact is that we don't have many folders at the same level as the WP index :o... Greetings!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5107336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: using pointer * when I want to change the value of a variable #include <stdio.h>
int main(void) {
int x = 5;
int y = 10;
int* px = &x;
int* py = &y;
printf("the adress of %d is %p.\n",x,px);
printf("the adress of %d is %p.\n",y,py);
*px = 8;//change the value at the adress of px
*py = 18;
//what happens if instead of *px = 8, just x = 8?
x = 99;
printf("the adress of %d is %p.\n",x,px);
printf("the adress of %d is %p.\n",y,py);
//so It did change, what's the point of using *px when changing the value?
return 0;
}
Hello, I started learning C in school, and I bumped into a difficulty which could be extremely simple, but at this point I do not understand it.
I am trying to change the value of x, and I used the method *px = 8 first to change it.
but right after that I used x = 99, and it changed too, so I do not know what is the difference between them.
I appreciate any feedback, thank you!
A:
Q: I used the method *px = 8 first to change it
A: The value of both "x" and "*px" changed to "8", correct?
Q: but right after that I used x = 99, and it changed too
A: Cool. So now both "x" and "*px" are 99. This is what you expected, correct?
Q: so I do not know what is the difference between them.
In this example, they're both equivalent. They both accomplish exactly the same thing.
Why use pointers at all? Because the variable "x" is tied to a specific memory location: you can't change it.
The pointer "px", on the other hand, can be changed at runtime to point to DIFFERENT memory locations, if you needed to.
Why is that useful? There are many different reasons. For example:
*
*When you malloc() dynamic memory, the system gives you a pointer. You don't know or care where in memory it's located: you just use it.
*Pointers are frequently needed to traverse the elements in a linked list
*The ability to increment (++) and decrement (--) pointers simplifies many algorithms
*Etc. etc.
You might find these articles helpful:
*
*C - Pointers
*Pointer (computer programming)
A: Other answers explained what is happening, but in your question I also read you ask 'why' someone would want to use references instead of direct accessing of a memory address.
There are many reasons why anything is the way it is in computer science, but a few I think most people wouldn't argue that are helpful for a beginning theoretical understanding:
A lot of Computer Science as a discipline is efficient/optimal management of resources, in many cases computer memory, but I/O and CPU make up two other resources we manage to make 'good' software. You may recall that a simple computer can be thought of as a CPU, RAM, and Storage, connected by a "bus".
Up until relatively recently, most software wasn't being run on a machine with lots and lots of RAM. It was necessary that memory be managed for performance purposes, and though we have access to higher memory machines today - memory must still be managed!
We manage that in more recently created languages with abstraction - or in this context, doing things for people when we know with a high degree of certainty what they are trying to do. We call languages where much of resource management is abstracted and simplified as "high-level" languages. You are "talking" to the machine with your code at a high level of control relative to machine code (you can think of that as the ones and zeros of binary).
TLDR; Memory is finite, and pointers generally take up less space than the data they point to, allowing for an efficiency in address space utilization. It also allows for many other neat things, like virtualized address spaces.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69170346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Object couldn't be converted to string I'm getting this error message when trying to make a PDO connection:
Object of class dbConnection could not be converted to string in (line)
This is my code:
class dbConnection
{
protected $db_conn;
public $db_name = "todo";
public $db_user = "root";
public $db_pass = "";
public $db_host = "localhost";
function connect()
{
try {
$this->db_conn = new PDO("mysql:host=$this->$db_host;$this->db_name", $this->db_user, $this->db_pass);
return $this->db_conn;
}
catch (PDOException $e) {
return $e->getMessage();
}
}
}
The error is on the PDO line. Just in case, I am including the code where I access the connect() method:
class ManageUsers
{
public $link;
function __construct()
{
$db_connection = new dbConnection();
$this->link = $db_connection->connect();
return $link;
}
function registerUsers($username, $password, $ip, $time, $date)
{
$query = $this->link->prepare("INSERT INTO users (Username, Password, ip, time1, date1) VALUES (?,?,?,?,?)");
$values = array($username, $password, $ip, $time, $date);
$query->execute($values);
$counts = $query->rowCount();
return $counts;
}
}
$users = new ManageUsers();
echo $users->registerUsers('bob', 'bob', '127.0.0.1', '16:55', '01/01/2015');
A: Change your connection setting to the following:
class dbConnection
{
protected $db_conn;
public $db_name = "todo";
public $db_user = "root";
public $db_pass = "";
public $db_host = "localhost";
function connect()
{
try {
$this->db_conn = new PDO("mysql:host={$this->db_host};{$this->db_name}", $this->db_user, $this->db_pass); //note that $this->$db_host was wrong
return $this->db_conn;
}
catch (PDOException $e) {
//handle exception here or throw $e and let PHP handle it
}
}
}
In addition, returning a value from a constructor has no effect (and should be prosecuted by law).
A: Please follow the code below; it's tested on my server and running fine.
class Config
{
var $host = '';
var $user = '';
var $password = '';
var $database = '';
function Config()
{
$this->host = "localhost";
$this->user = "root";
$this->password = "";
$this->database = "test";
}
}
class Database
{
function Database()
{
$config = new Config();
$this->host = $config->host;
$this->user = $config->user;
$this->password = $config->password;
$this->database = $config->database;
}
function open()
{
//Connect to the MySQL server
$this->conn = new PDO('mysql:host='.$this->host.';dbname='.$this->database, $this->user,$this->password);
if (!$this->conn)
{
header("Location: error.html");
exit;
}
return true;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38243373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How can I render a simple constructor in react app? /src/App.js
Line 22:16: Parsing error: Unexpected token, expected ";"
20 |
21 | const App = () => {
> 22 | constructor() {
| ^
23 | super()
24 | this.state = {
25 | input:
A: You can't add a constructor to a function, especially not to an arrow function. A constructor belongs to a class; in a function component you would keep state with hooks such as useState instead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/59319939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Isolate part of url with php and then print it in html element I am building a gallery in WordPress and I'm trying to grab a specific part of my URL to echo into the id of a div.
This is my URL:
http://www.url.com/gallery/truck-gallery-1
I want to isolate the id of the gallery, which will always be a number (in this case it's 1). Then I would like to have a way to print it somewhere, maybe in the form of a function.
A: You should better use $_SERVER['REQUEST_URI']. Since it is the last string in your URL, you can use the following function:
function getIdFromUrl($url) {
return str_replace('/', '', array_pop(explode('-', $url)));
}
@Kristian 's solution will only return numbers from 0-9, but this function will return an id of any length, as long as your ID is separated by a - sign and is the last element.
So, when you call
echo getIdFromUrl($_SERVER['REQUEST_URI']);
it will echo, in your case, 1.
A: If the ID will not always be the same number of digits (if you have any ID's greater than 9) then you'll need something robust like preg_match() or using string functions to trim off everything prior to the last "-" character. I would probably do:
<?php
$parts = parse_url($_SERVER['REQUEST_URI']);
if (preg_match("/truck-gallery-(\d+)/", $parts['path'], $match)) {
$id = $match[1];
} else {
// no ID found! Error handling or recovery here.
}
?>
A: Use the $_SERVER['REQUEST_URI'] variable to get the path (Note that this is not the same as the host variable, which returns something like http://www.yoursite.com).
Then break that up into a string and return the final character.
$path = $_SERVER['REQUEST_URI'];
$ID = $path[strlen($path)-1];
Of course you can do other types of string manipulation to get the final character of a string. But this works.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6499659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is this a nasty global hash? I heard that certain practices such as global variables are often frowned upon. I would like to find out if it is generally also frowned upon to place a hash at the level shown below. If so, how should it be done so that it is smiled upon?
class Dictionary
@@dictionary_hash = {"Apple"=>"Apples are tasty"}
def new_word
puts "Please type a word and press enter"
new_word = gets.chomp.upcase
puts "Thanks. You typed: #{new_word}"
@@dictionary_hash[new_word] = "#{new_word} means something about something. More on this later."
D.finalize
return new_word.to_str
end
def finalize
puts "To enter more, press Y then press Enter. Otherwise just press Enter."
user_choice = gets.chomp.upcase
if user_choice == "Y"
D.new_word
else
puts @@dictionary_hash
end
end
D = Dictionary.new
D.new_word
end
A: You should check the difference between:
*
*global, class and instance variables
*class and instance methods
You are close to a working example with an instance variable:
class Dictionary
def initialize
@dictionary_hash = {"Apple"=>"Apples are tasty"}
end
def new_word
puts "Please type a word and press enter"
new_word = gets.chomp.upcase
puts "Thanks. You typed: #{new_word}"
@dictionary_hash[new_word] = "#{new_word} means something about something. More on this later."
finalize
new_word
end
def finalize
puts "To enter more, press Y then press Enter. Otherwise just press Enter."
user_choice = gets.chomp.upcase
if user_choice == "Y"
new_word
else
puts @dictionary_hash
end
end
end
d = Dictionary.new
d.new_word
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34089878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: SSPI Schannel API - Can credential handles be re-used? I am currently adding SSPI Schannel API support to libcurl in order to make it possible to use SSL enabled protocols on Windows without any external dependency, such as OpenSSL.
I already have a working SSL/TLS implementation, but I have a very specific question regarding the re-use of credential handles returned by the function AcquireCredentialsHandle.
Is it correct and possible to re-use SSL/TLS sessions by re-using an existing credential handle instead of creating a new one, passing it to InitializeSecurityContext multiple times?
My work on the Schannel module for libcurl can be found here, and the experimental version that tries to re-use can be found here.
I would appreciate any kind of hint or feedback on this one. So, can credential handles be re-used in such a way? And is it correct?
Thanks in advance!
A: I found the answer to my question and record it here for others:
*
*It has been asked before and a first answer can be found here.
*The following information can be found on this MSDN page:
Your application obtains credentials by calling the AcquireCredentialsHandle function, which returns a handle to the requested credentials. Because credentials handles are used to store configuration information, the same handle cannot be used for both client-side and server-side operations. This means that applications that support both client and server connections must obtain a minimum of two credentials handles.
Therefore it can be assumed safe to re-use the same credential handle for multiple connections. And I verified that it indeed makes Schannel re-use the SSL/TLS session. This has been tested on Windows 7 Professional SP1.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/10084257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: A/libc﹕ Fatal signal 11 (SIGSEGV) at 0xdeadd00d (code=1) I am trying to run telegram. In my LogCat this error appears:
A/libc﹕ Fatal signal 11 (SIGSEGV) at 0xdeadd00d (code=1), thread 1768 (m.telegram.beta
My logcat:
I/dalvikvm﹕ | schedstat=( 181904898 336868213 182 ) utm=10 stm=8 core=0
I/dalvikvm﹕ #00 pc 0008f4ad /system/lib/libdvm.so
I/dalvikvm﹕ #01 pc 00073efa /system/lib/libdvm.so
I/dalvikvm﹕ #02 pc 00074024 /system/lib/libdvm.so
I/dalvikvm﹕ #03 pc 0003879a /system/lib/libdvm.so
I/dalvikvm﹕ #04 pc 0003d6b8 /system/lib/libdvm.so
I/dalvikvm﹕ #05 pc 000c7fa9 /data/data/telegram.beta/lib/libtmessages.20.so
I/dalvikvm﹕ at java.lang.Runtime.nativeLoad(Native Method)
I/dalvikvm﹕ at java.lang.Runtime.loadLibrary(Runtime.java:368)
I/dalvikvm﹕ at java.lang.System.loadLibrary(System.java:535)
A/libc﹕ Fatal signal 11 (SIGSEGV) at 0xdeadd00d (code=1), thread 1768 (m.telegram.beta)
A: Just select org.telegram.messenger in the spinner of the Android Monitor section.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36261342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: My code ends at certain line I am practicing my Java networking skills and I have created a Java server class that listens on the port number specified by the user:
import java.net.*;
import java.util.*;
import java.io.*;
public class WebServerMain {
public WebServerMain(int port){
try{
String content = " ";
String userChoice = " ";
ServerSocket ss = new ServerSocket(port);
System.out.println("Server: Initialised a socket at port " +
port);
Socket conn = ss.accept();
System.out.println("Server: Awating for a client");
InputStreamReader ir = new
InputStreamReader(conn.getInputStream());
PrintWriter out = new PrintWriter(conn.getOutputStream(),true);
BufferedReader userIn = new BufferedReader(ir);
userChoice = userIn.readLine();
System.out.println("Server: The choice we got is: " +
userChoice);
System.out.println("Server: User have chosen GET Method");
BufferedReader in = new BufferedReader(new
FileReader("test.html"));
String str;
System.out.println("All good here");
while ((str = in.readLine()) != null) {
content += str;
}
in.close();
System.out.println("Server: The content is " + content);
out.println(content);
} catch(IOException e){
e.getMessage();
}
}
public static void main(String[] args){
String path;
int port;
Scanner in = new Scanner(System.in);
System.out.println("Please enter the port number");
port = in.nextInt();
WebServerMain server = new WebServerMain(port);
}
}
The output looks like this:
It seems that the code exits before the new BufferedReader in reads the html file and outputs it to the client.
I have chosen Safari browser and connected to the local host via the same port.
Also please NOTE: the file name for the BufferedReader in is hardcoded just to test it; I think I will need to use a regular expression to remove GET in order to read the file properly.
A: Take the requested file name from the request line instead of hardcoding it, send a minimal HTTP response (status line, headers and a blank line) before the content, and close the streams so the browser actually receives it:
public WebServerMain(int port) {
try {
String content = " ";
String userChoice = " ";
ServerSocket ss = new ServerSocket(port);
System.out.println("Server: Initialised a socket at port " + port);
Socket conn = ss.accept();
System.out.println("Server: Awating for a client");
InputStreamReader ir = new InputStreamReader(conn.getInputStream());
PrintWriter out = new PrintWriter(conn.getOutputStream(), true);
BufferedReader userIn = new BufferedReader(ir);
userChoice = userIn.readLine();
System.out.println("Server: The choice we got is: " + userChoice);
System.out.println("Server: User have chosen GET Method");
BufferedReader in = new BufferedReader(new FileReader(
//"C:/Users/jayaprakash.j/Desktop/1.html"));//
userChoice.split(" ")[1]));
String str;
System.out.println("All good here");
while ((str = in.readLine()) != null) {
content += str;
}
in.close();
System.out.println("Server: The content is " + content);
out.println("HTTP/1.1 200 OK");
out.println("Content-Type: text/html");
out.println("Content-Length: " + content.length());
out.println();
out.println(content);
out.close();
conn.close();
} catch (IOException e) {
e.getMessage();
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47014657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: For-loop with string possible? I need something like this. Is it possible in Java?
for(String i = "A"; i < "Z"; i++) {
System.out.println(i);
}
A: You can do it by:
for (char alphabet = 'A'; alphabet <= 'Z'; alphabet++) {
System.out.println(alphabet);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47293626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: DAG, Scala/Akka I was wondering about the best implementation of a DAG (Directed Acyclic Graph) using Scala/Akka, where each node can potentially be on a different VM or a different computer.
Think of each node as a Scala actor that would only act on its direct parents (in graph terms); how would you implement this at a high level? There are several concepts in Akka that could be applied:
• Normal actors with discards if the sender is not a direct parent
• Event Bus and Classifiers
• …
I am still struggling to choose a best approach, at least in theory, knowing that for a DAG, you need to be able to handle diamond scenarios (where a node depends on two or more nodes that in turn depend on one unique node).
Any pointers to papers, articles, ideas would be welcomed.
Regards
A: This article describes one way of Distributed graph processing with Akka:
http://letitcrash.com/post/30257014291/distributed-in-memory-graph-processing-with-akka
A: Simply use Akka Remote with some DIYing
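A sketch of what that DIY could look like for the first option listed in the question (plain actors that discard messages whose sender is not a direct parent); this assumes a recent Akka Java API, and the message and field names are made up:
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import java.util.Set;
// One actor per DAG node; parents may live on other machines via Akka Remote.
public class DagNodeActor extends AbstractActor {
    private final Set<ActorRef> parents; // direct parents of this node
    public DagNodeActor(Set<ActorRef> parents) {
        this.parents = parents;
    }
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(ParentResult.class, msg -> {
                    if (parents.contains(getSender())) {
                        // record the parent's value; once every parent has reported
                        // (which covers the diamond case), compute this node's value
                        // and forward it to the children
                    }
                    // messages from any other sender are silently discarded
                })
                .build();
    }
    // hypothetical message type carrying a parent's computed value
    public static final class ParentResult { }
}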
| {
"language": "en",
"url": "https://stackoverflow.com/questions/10500532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Query to json firebase object or redux state I'm building my first web app with React, Redux and Firebase; it's about registering the attendance of students (alumnos) at lessons (clases). I have the following database structure (the same as in the Redux state, this.props).
"clases" : {
"-LpQkLyGXEd-Up8hExTx" : {
"alumnos" : [ "-LpKkSh0E5jiuM0JCCpS", "-LpQi33M-0OSS4Jvup8k" ],
"fechaClase" : "20-09-2019",
"profesor" : "Nacho",
"tema" : "Misión"
},
"-LpQmExVsWtW1uPHLK52" : {
"alumnos" : [ "-LpJvbXb2FjgZvvBv3ei", "-LpKkSh0E5jiuM0JCCpS", "-LpQi33M-0OSS4Jvup8k", "-LpQiDGlRWITax2t6U2A" ],
"fechaClase" : "22-09-2019",
"profesor" : "Nacho",
"tema" : "Bautismo"
},
"-LpQqZ_uWu8HxROagVjN" : {
"alumnos" : [ "-LpKkSh0E5jiuM0JCCpS", "-LpQi33M-0OSS4Jvup8k", "-LpQiPCS2cIK7opMNqyH" ],
"fechaClase" : "21-09-2019",
"profesor" : "Manzo, Ignacio",
"tema" : "Bautismo"
}
I want to select a student (alumno) and know which lessons they have attended. Do I have to make a double map over the object? Can you give me some help?
This is the deployed app https://metanoia-ic.herokuapp.com/
This is my github repository: https://github.com/tonicanada/metanoia
Thanks!
A: I would use Object.keys to be able to use the filter and includes Array.prototype methods. Assuming that clases is the object containing all the state, I would do something like this:
const studentId = this.props.studentId; // or any other value
const clasesPerStudent = Object.keys(clases).filter(clase =>
clases[clase].alumnos.includes(studentId)
);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58066663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to create a Lock in 2 timer tasks Java reliably (So will wait when other is running) I am running 2 timer tasks. It's something like this:
@Autowired
private Insertion insertion;
@Autowired
private Updation updation;
*
*insert some data in DB
timer.schedule(insertion,1000,5000)
public run() {
if(!Updationhappening) {
//start insertion
}
else {
//wait
}
}
*update that data with something
timer.schedule(updation,1000,5000)
if(!InsertionHappening) {
//start updation
} else {
//wait
}
However, I want to pause the update while the insertion is running.
I know I can maybe do this with a volatile variable or locks, but I have not been able to find an implementation of such a locking scheme. Can anyone suggest an example implementation between the 2 different nodes?
Thank you in advance
A: You can order updates and inserts with PriorityBlockingQueue to process inserts with priority.
A: Thank you for your inputs, everyone.
I found a solution to the issue:
I used a ReentrantLock. I made a static Lock object in a global file and called lock.tryLock() in both files to solve the issue.
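A minimal sketch of that idea (the class names are made up; it assumes both tasks share the same static lock and run on different threads):
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.locks.ReentrantLock;
class SharedLock {
    // one lock shared by both timer tasks
    static final ReentrantLock LOCK = new ReentrantLock();
}
class Insertion extends TimerTask {
    @Override
    public void run() {
        SharedLock.LOCK.lock();   // waits while the update is running
        try {
            // insert some data in DB
        } finally {
            SharedLock.LOCK.unlock();
        }
    }
}
class Updation extends TimerTask {
    @Override
    public void run() {
        SharedLock.LOCK.lock();   // waits while the insertion is running
        try {
            // update that data
        } finally {
            SharedLock.LOCK.unlock();
        }
    }
}
class Scheduling {
    public static void main(String[] args) {
        // two Timer instances, so the tasks really run on different threads
        new Timer().schedule(new Insertion(), 1000, 5000);
        new Timer().schedule(new Updation(), 1000, 5000);
    }
}
lock() makes the other task wait, as the title asks for; tryLock(), as used in the answer above, instead returns false immediately, so a run can be skipped rather than delayed.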
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39412105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: GCC: "__unused__" vs just "unused" in variable attributes According to GCC's own documentation on variable attributes, the correct syntax for declaring an attribute unused is __attribute__((unused)).
However, in many examples and other code online, I frequently see __attribute__((__unused__)) instead, and they appear to both work.
Is there a reason for either specifying or omitting the __ in either case? Does it make any difference, and is there a preferred version? Are there any situations where using one and not the other might cause problems?
Presumably the same applies to other attribute parameters as well?
A: At the top of the very page you linked, it tells you:
You may also specify attributes with ‘__’ preceding and following
each keyword. This allows you to use them in header files without
being concerned about a possible macro of the same name. For example,
you may use __aligned__ instead of aligned.
Identifiers containing double underscores (__) are reserved to the implementation. Hence no user program could legally define them as macros.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27139518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: JUnit 5 tests are executed in parallel inside IntelliJ and locally with Maven, but not inside Maven Docker container on Google Cloud Build I am using JUnit 5 in my Maven project.
My junit-platform.properties looks like this:
junit.jupiter.execution.parallel.enabled=true
junit.jupiter.execution.parallel.mode.default=concurrent
When running the tests inside IntelliJ or with Maven locally, the tests are executed in parallel.
However, when running them on Cloud Build inside a Maven Docker container, they seem to run sequentially.
This is how they are called:
steps:
- name: 'maven:3-jdk-11-slim'
args: [
'mvn',
# https://stackoverflow.com/a/53513809/3067148
'-Dorg.slf4j.simpleLogger.showDateTime=true',
'-Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss,SSS',
'-B',
'test'
]
What could be the reason why they aren't executed in parallel?
A:
Properties such as the desired parallelism and the maximum pool size can be configured using a ParallelExecutionConfigurationStrategy. The JUnit Platform provides two implementations out of the box: dynamic and fixed. Alternatively, you may implement a custom strategy.
Keep in mind that the ParallelExecutionConfigurationStrategy class is still in the "EXPERIMENTAL" state and it is not yet stable. Unintended behaviour may occur.
As you don't set a specific configuration strategy, the following section applies:
If no configuration strategy is set, JUnit Jupiter uses the dynamic configuration strategy with a factor of 1. Consequently, the desired parallelism will be equal to the number of available processors/cores.
Find more details at https://junit.org/junit5/docs/current/user-guide/#writing-tests-parallel-execution-config
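One way to check whether that is what happens on Cloud Build is to print how many processors the JVM actually sees inside the build container; this is just a diagnostic sketch:
public class CpuCheck {
    public static void main(String[] args) {
        // The dynamic strategy derives its parallelism from this value, so a
        // container that exposes only one core effectively runs tests one at a time.
        System.out.println("Available processors: "
                + Runtime.getRuntime().availableProcessors());
    }
}
If that prints 1, the usual ways around it are to give the build step more CPUs or to switch to the fixed strategy via junit.jupiter.execution.parallel.config.strategy=fixed and junit.jupiter.execution.parallel.config.fixed.parallelism in junit-platform.properties.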
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58336937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Scalar subquery cannot have more than one column unless using SELECT AS STRUCT to build STRUCT values I'm trying to write a query in BigQuery that selects some rows based on row number and sums the columns.
In detail, I have two tables: table1 contains, for each id, the row numbers of table2 that I want to sum. An example of the two tables is below.
table1
table2
The desired output is:
id   points1   points2   points3
a    86        99        31
b    91        59        15
c    122       183       118
I created a UDF that takes the 'neighbors' n1, n2 and n3 and sums the rows of table2 whose row_num is in n1, n2 and n3; then I called the UDF in my query below.
create temp function sum_points(neighbors array<int>)
returns array<int>
as (sum((select * from `project.dataset.table2` where row_num in unnest(neighbors))));
with cte as (
select id, array_agg(nn) as neighbors
from `project.dataset.table1`, unnest([n1, n2, n3]) nn
group by id
)
select id, sum_points(neighbors) from cte
However, I got the following error:
Scalar subquery cannot have more than one column unless using SELECT AS STRUCT to build STRUCT values; failed to parse CREATE [TEMP] FUNCTION statement at [5:9]
and it is not very clear to me what that means. I tried to replace the select inside the with statement with select struct<array<int>>, but it did not work.
A: A better option would be to join the tables and aggregate.
You can join the tables based on row_num from table2 and n1, n2, n3 from table1, as below.
SELECT
id,
SUM(points1) AS points1,
SUM(points2) AS points2,
SUM(points3) AS points3
FROM table1 JOIN table2
ON row_num in (n1, n2, n3)
GROUP BY id
Output of the query:
id   points1   points2   points3
a    86        99        31
c    122       183       118
b    91        59        15
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72041517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Automapper Map Property as Collection I need to map ICustomerAddresses to my own custom object Address, or List < Address >. How can I use automapper to indicate that the property Customer.ICustomerAddresses maps to my custom Address?
To illustrate, I have an interface that has its properties listed like this:
public interface ICustomer
{
ICustomerAddresses Addresses;
}
In this case, ICustomerAddresses is a collection of ICustomerAddress. However, ICustomerAddresses is not a simple IEnumerable; it exposes members that give access to the collection, like this:
public interface ICustomerAddresses : IBusinessObjectCollection
{
ICustomerAddress this[int nIndex] { get; }
ICustomerAddress CreateNew();
ICustomerAddress AddNew();
}
Automapper cannot figure out on its own that ICustomerAddresses is really just a collection of ICustomerAddress, so how do I tell it that's the case?
Thanks in advance!
A: A custom type converter should work fine. Here's a quick example (thrown together -- not tested). Also, I added a "Length" property to the ICustomerAddresses so I knew how many to loop through:
public class AddressConverter : TypeConverter<ICustomerAddresses, IList<Address>>
{
protected override IList<Address> ConvertCore(ICustomerAddresses source)
{
var addresses = new List<Address>();
for (var i = 0; i < source.Length; i++)
{
var addr = source[i];
addresses.Add(new Address
{
Addr1 = addr.Addr1,
Zip = addr.Zip
});
}
return addresses;
}
}
And you could probably utilize Automapper inside the loop too to convert the ICustomerAddress to an Address instead of doing it manually like I did.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9058632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: how to exclude image files within public folder In the public folder, I have two folders, hk and sg; they contain different image files.
What I want is that if I build the hk package, only images from the hk folder are copied.
How to exclude the sg folder in ember-cli?
A: Ember uses Broccoli.js for its build pipeline. Broccoli is built around the concept of trees. Please have a look at its documentation for details.
You could exclude files from the tree using a plugin called broccoli-funnel. It expects an input node, which could be either a directory name as a string or an existing broccoli tree, as the first argument. A configuration object should be provided as the second argument. The files or folders that should be excluded can be specified by the exclude option on that object.
A broccoli tree is created as part of the build process in ember-cli-build.js. The function exported from that file should return a tree. By default it returns the tree created by app.toTree() directly. But you could customize that tree using broccoli-funnel before.
This diff shows how default ember-cli-build.js as provided by blueprint of Ember CLI 3.16.0 could be customized to exclude a specific file:
diff --git a/ember-cli-build.js b/ember-cli-build.js
index d690a25..9d072b4 100644
--- a/ember-cli-build.js
+++ b/ember-cli-build.js
@@ -1,6 +1,7 @@
'use strict';
const EmberApp = require('ember-cli/lib/broccoli/ember-app');
+const Funnel = require('broccoli-funnel');
module.exports = function(defaults) {
let app = new EmberApp(defaults, {
@@ -20,5 +21,7 @@ module.exports = function(defaults) {
// please specify an object with the list of modules as keys
// along with the exports of each module as its value.
- return app.toTree();
+ return new Funnel(app.toTree(), {
+ exclude: ['file-to-exclude'],
+ });
};
You should explicitly add broccoli-funnel to your dependencies even though it's available as an indirect dependency:
// if using npm
npm install -D broccoli-funnel
// if using yarn
yarn add -D broccoli-funnel
Broccoli-funnel supports not only exact file names but also regular expressions, glob strings, or functions to define the files to exclude. Please have a look at its documentation for details.
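Applied to the question's layout, an hk build could drop the sg images with a glob exclude. This is an untested sketch and assumes the folders sit directly under public/, so they appear as sg/... in the final tree:
return new Funnel(app.toTree(), {
  exclude: ['sg/**'],
});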
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60181923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why multiple JADE GUI appear on refreshing JSP page I want to run the JADE middleware GUI from a JSP. For that I created a "Dynamic Web Project", right-clicked the WebContent folder, selected New > Other > JSP file, and named it e.g. runJADE.jsp. I then added the following inside runJADE.jsp:
<%@ page import="jade.core.*"%>
<% try {
String [] _args = {"-gui"};
jade.Boot.main(_args);
} catch (Exception ex) {
out.println(ex);
ex.printStackTrace();
}
%>
<HTML>
<BODY>
JADE is running.
</BODY>
</HTML>
I also tried the following code in JSP page:
<%@ page import="jade.core.Profile,jade.core.ProfileImpl,jade.core.Runtime,jade.wrapper.AgentContainer %>
<% try {
Profile profile = new ProfileImpl();
profile.setParameter(Profile.GUI, "true");
AgentContainer mainContainer = Runtime.instance().createMainContainer(profile);
} catch (Exception ex) {
out.println(ex);
ex.printStackTrace();
}
%>
<HTML>
<BODY>
JADE is running.
</BODY>
</HTML>
The code works and launches JADE fine, but refreshing the page creates another GUI. Am I missing some basic step, and how can I avoid this problem?
A: Your approach is deeply flawed. Every time you load your JSP, it creates another Jade container. When using Jade, you generally only want to have a single container on a single machine.
A better approach would be to start up your container when your entire application starts up. Then, when you want to trigger the Jade GUI, you should just start the RMA agent inside your Jade container, as described in this answer: https://stackoverflow.com/a/20462656/2047962
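As a rough sketch of that idea (assuming a single main container is created once at application startup and kept somewhere accessible, e.g. the servlet context), triggering the GUI then reduces to starting the RMA agent on that existing container:
// mainContainer is the jade.wrapper.AgentContainer created once at startup
// (createNewAgent throws StaleProxyException; handle or declare as appropriate)
AgentController rma = mainContainer.createNewAgent(
        "rma", "jade.tools.rma.rma", new Object[0]);
rma.start();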
Now, let's take a step back and talk about your code. I would strongly recommend that you stop using scriptlets in your JSPs RIGHT NOW. It's a bad practice that leads to really poor designs. If you don't know the alternatives, please read "How to avoid Java code in JSP files?".
Second, why are you mixing web applications and desktop Swing UIs? Why are you opening a Swing GUI when someone accesses your web page in a browser? If you were to deploy that application, your clients would get to a page that says "JADE is running", but they wouldn't see anything. The GUI would open on the server, not the client. I can't see why you would want this functionality in a web environment.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18076898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Do indexes persist on derived tables? Say I have a large table containing genomic positions from various files as follows:
CREATE TABLE chromosomal_positions (
file_id INT,
chromosome_id INT,
position INT
)
I want to compare the contents of one file to the contents of all the other files, looking for overlaps. So I use two derived tables.
SELECT Count(*)
FROM (SELECT *
FROM chromosomal_positions
WHERE file_id = 1) AS file_1
JOIN (SELECT *
FROM chromosomal_positions
WHERE file_id != 1) AS other_files
ON ( file_1.chromosome_id = other_files.chromosome_id
AND file_1.position = other_files.position )
Now if I add a compound index on file_id, chromosome_id , position in that order, will the derived tables be able to use that index?
(Using Postgres)
A: It's not so much that PostgreSQL "preserves" indexes across subqueries, as that the rewriter can often simplify and restructure your query so that it operates on the base table directly.
In this case the query is unnecessarily complicated; the subqueries can be entirely eliminated, making this a trivial self-join.
SELECT count(*)
FROM chromosomal_positions file_1
INNER JOIN chromosomal_positions other_files
ON ( file_1.chromosome_id = other_files.chromosome_id
AND file_1.position = other_files.position )
WHERE file_1.file_id = 1
AND other_files.file_id != 1;
so an index on (chromosome_id, position) would be clearly useful here.
You can experiment with index choices and usage as you go, using explain analyze to determine what the query planner is actually doing. For example, if I wanted to see whether your proposed compound index would actually be used by the original query, then I would:
CREATE INDEX cp_f_c_p ON chromosomal_positions(file_id, chromosome_id , position);
-- Planner would prefer seqscan because there's not really any data;
-- force it to prefer other plans.
SET enable_seqscan = off;
EXPLAIN SELECT count(*)
FROM (
SELECT *
FROM chromosomal_positions
WHERE file_id = 1
) AS file_1
INNER JOIN (
SELECT *
FROM chromosomal_positions
WHERE file_id != 1
) AS other_files
ON ( file_1.chromosome_id = other_files.chromosome_id
AND file_1.position = other_files.position )
and get the result:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=78.01..78.02 rows=1 width=0)
-> Hash Join (cost=29.27..78.01 rows=1 width=0)
Hash Cond: ((chromosomal_positions_1.chromosome_id = chromosomal_positions.chromosome_id) AND (chromosomal_positions_1."position" = chromosomal_positions."position"))
-> Bitmap Heap Scan on chromosomal_positions chromosomal_positions_1 (cost=14.34..48.59 rows=1930 width=8)
Filter: (file_id <> 1)
-> Bitmap Index Scan on cp_f_c_p (cost=0.00..13.85 rows=1940 width=0)
-> Hash (cost=14.79..14.79 rows=10 width=8)
-> Bitmap Heap Scan on chromosomal_positions (cost=4.23..14.79 rows=10 width=8)
Recheck Cond: (file_id = 1)
-> Bitmap Index Scan on cp_f_c_p (cost=0.00..4.23 rows=10 width=0)
Index Cond: (file_id = 1)
(11 rows)
(view on explain.depesz.com)
showing that while it will use the index, it's actually only using it for the first column. It won't use the rest, it's just filtering on file_id. So the following index is just as good, and smaller and cheaper to maintain:
CREATE INDEX cp_file_id ON chromosomal_positions(file_id);
Sure enough, if you create this index Pg will prefer it. So no, the index you propose does not appear to be useful, unless the planner thinks it's just not worth using at this data scale, and might choose to use it in a completely different plan with more data. You really have to test on the real data to be sure.
By contrast, my proposed index:
CREATE INDEX cp_ci_p ON chromosomal_positions (chromosome_id, position);
is used to find chromosomal positions with id = 1, at least on an empty dummy data set. I suspect the planner would avoid a nested loop on a bigger data set than this, though. So again, you really just have to try it and see.
(BTW, if the planner is forced to materialize a subquery then it does not "preserve indexes on derived tables", i.e. materialized subqueries. This is particularly relevant for WITH (CTE) query terms, which are always materialized).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21932119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: I'm trying to understand subversion / git I am trying to understand and learn how to use subversion or git. Which do you recommend?
It seems so complicated and confusing.
I do a lot of PHP web development projects. I have a habit of backing up project folders like project_name_version_date.
What is the best way to learn for beginner?
How do you upload an updated project from the local machine to a live website on a different server? How would that be done with subversion / git, and how do you upload a new version again?
Can 2 or 3 people work on the same project at the same time? Do they load the code files from the server? Wouldn't their changes conflict with each other... like if they had removed class objects, functions, etc.?
A:
What is the best way to learn for
beginner?
* http://svnbook.red-bean.com/nightly/en/svn.intro.quickstart.html
* http://book.git-scm.com/
How do you upload updated project from
local machine to live website to a
different server, how that be done
from subversion / git? and reupload to
new version again?
With version control, you have a central repository and one or more working copies (git adds in a local repository at every working copy to allow distributed storage/management).
In your case, your live website and your development copy are both working copies - you checkout from the central repository to each of these locations. You then work on the development copy, and commit those changes to the central repository. Once you are ready to release, you simply perform an update on the live working copy which pulls all the changes from the central repository.
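As a concrete sketch of that cycle with Subversion (the git flow is similar, with a push/pull step to a shared repository added; messages and paths here are illustrative):
# on the development working copy
svn commit -m "Describe the change"

# on the live site's working copy, to deploy the new version
svn update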
Can 2 or 3 people work on the same
project at the same time? do they load
the code files from the server?
wouldn't it conflict each other...
like they had removed the class
objects, functions, etc.
Yes - each person has a working copy which they make changes to.
Conflicts are likely, and will happen from time to time - but SVN and Git can deal with a lot very easily. For example, changes to code in different places of the same file are simply merged in. Changes to code at the same place in the same file will typically require manual intervention.
Perhaps the worst conflicts that can occur are what SVN calls 'tree conflicts' - changes in the folder structure. This can be a real PITA to fix - but you have to really go out of your way to cause them.
That said, the potential for conflicts (and difficulty in resolving them) in non-version controlled environments is far, far greater.
There are some practices which help prevent major conflicts from occurring:
* Modular code
* Clear delineation of work - stop programmers treading on each other's toes
* Update your local copy before committing
* Commit small, commit often - how small? Hard to say, you start to get a feel for this... but think in terms of functions and functionality.
I think Git is probably better to start with if you don't use anything else already - the distributed system means that you are more able to resolve conflicts at the local level before pushing up to the central repository. All of my projects are use SVN (at the office and at home), but I'm starting to use Git through the Drupal project, and like what I've seen so far.
A: GIT is the newer paradigm for version control. SVN has been around for a while but GIT and Mercurial are gaining traction as they allow for "distributed version control." This means that there is no central server for your code. Everyone has the entire history and can send "patches" to each other to share code. However, git and mercurial do support a workflow very similar to having a central repository. Integrating git and "gerrit" is really great for working on a project with multiple people.
I suggest skipping svn because svn is an older technology that will actually hinder your understanding of git / mercurial because it is a different paradigm and uses different processes. GIT / mercurial works awesome just locally (no server and you are the only one using) and also for large teams.
GIT is more powerful but harder to use while mercurial has the basics in a more usable form factor.
A: I suggest you go for git; you can also access your repository on the web. With subversion, as far as my knowledge goes, there is no such provision, but this should not be the only reason. I have used both; git is very useful. Git's main advantage is that you don't have to be connected to the master repository all the time. You can read more in this question, which explains it nicely: Why is Git better than Subversion?
A: As you are new to the whole Version/Source Control concept, I suggest you read a bit about VC in general.
The best way to learn would be to actually use a VCS for your day-to-day projects. Yes, many people can work on the same things at once, and then 'conflicts' can happen. But modern VCSs let you do something called merging.
I suggest you start with learning git. As you are new to the whole thing, it shouldn't be very hard for you. But IMHO learning SVN (which is a centralized version control system) and then moving on to git (which is a distributed version control system) tends to complicate things. A lot of people feel distributed VCSs are the future, so I suggest you start learning either git or Hg; both are good VCSs.
Good Luck!
A: You should read a tutorial on Subversion and stay very far away from Git until you are reasonably experienced in SVN.
The best way to learn is follow a hands-on tutorial and then try the same things out yourself during your normal development workflow.
And certainly many people can work on the same project, that is one of the main reasons we use version control.
A: I would strongly recommend checking out http://hginit.com/ It uses Mercurial, which is very similar to Git, but the important thing to gain from hginit is how distributed version control (and version control in general) works both independently and as part of a team. Once you understand the concepts in hginit, which distributed source control tool you pick is not nearly as important.
A: Subversion and git are very different in the way they deal with code, but the goal is the same.
Subversion is a centralized solution to source control. Git is a distributed solution to source control. In both cases, you can use them by yourself only or an arbitrarily large team can use them.
Subversion is centralized, meaning there is only one server. All the code lives on that machine. It's a little simpler conceptually to deal with this.
Git is distributed, meaning there are a lot of servers (everyone is a server and a client). This makes it a little more complicated to understand which one is the "right" server.
Right now, you are creating a copy of the files you want to keep somewhere else on the disk and using that as a backup. When you do this, you are doing several steps at a time (at a conceptual level):
* Deciding which files you want to back up
Most source control needs your help to tell it which files to track. This is known as the add command in both git and subversion.
* Backing up those files so that future changes do not impact them
This is done by committing the changes to the files your source control is tracking for you. In subversion this is done with commit. In git, add makes git aware of the changes you make to the files and commit saves the changes in a permanent manner.
* Labeling the copy of the files in a manner that makes sense to you
This is done in different ways in the different source control technologies. Both have a history of the commits, the concept of branches/tags and different repositories (though code doesn't normally change between repositories in subversion).
In Subversion, the code is on a single server. Everyone gets it from there (originally with checkout, afterward with update). In git, you fetch remote changes from another repository.
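In command form, those conceptual steps might look roughly like this (repository URLs and file names are illustrative):
# Subversion
svn checkout http://example.com/svn/project
svn add newfile.php
svn commit -m "Add newfile"
svn update                  # pull in other people's changes

# Git
git clone git://example.com/project.git
git add newfile.php
git commit -m "Add newfile"
git pull                    # fetch and merge remote changes
git push                    # publish your commits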
Git has somewhat superior conflict resolution (two people changing the same thing at the same time) to subversion, but you can still get into plenty of merge headaches if you are not careful. Git rebase can be helpful, though I've only used it in an individual manner so I have not had much conflict resolution practice yet.
Git has some tools that some people swear by (bisect and grep) that I have found to be somewhat useful. Your mileage may vary :)
A: It's largely a matter of comfort, but since you are a beginner (and presumably willing to spend some time learning a new skill), I will recommend git. As someone who has used svn and git, I think your workflow looks like it would be helped by git.
Here's some questions you need to answer for yourself when thinking svn vs. git:
* Are you in a company where policy prevents you from using git? (If yes, use svn).
* Do you have a workflow wherein you like to create a 'topic branch' to develop a feature, and then merge it into main when you believe that feature works well? (If yes, git is better, because it actually encourages you to create branches, and merge them back very easily, whereas svn is total hell as far as creating branches and merging them into main is concerned).
* Do you prefer to commit often in small chunks so that you have a nice clean record of changes that you can selectively unroll? (I would argue that this style of development is supported better by git).
* Look at this command: http://www.kernel.org/pub/software/scm/git/docs/git-cherry-pick.html . I cannot tell you how many times I have had to do a custom release wherein I pull specific features and 'compose' an image / website / app for a quick demo. If you think you will be doing this often, think how you would do it in subversion. I can predict in advance that it's not going to be a picnic :)
There are tons of other reasons, but if your workflows look like what I have described, then you should go with git.
You can learn git from this great resource: http://progit.org/book
And remember, if you are stuck with svn and would still like to use git, that is easy too!
http://progit.org/book/ch8-1.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5826237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How can I get a URL entered into WinForms OpenFileDialog? If a user enters a URL into a Windows Forms OpenFileDialog then the dialog box (on more modern versions of Windows) will download the file and open it from a temporary directory. Is there any way to get at the entered URL? Could the new-fangled IFileDialog help?
Please note that I am not looking for the file:// equivalent of a local file. This is for when the user enters the location of something on the Internet into the file dialog. e.g. http://example.com/path.
This asks essentially the same question, but didn't get a useful answer, perhaps because he asks that the result appear in the FileName property.
A: It's possible to set a windows hook to listen for text changes. This code currently picks up value changes from all fields, so you will need to figure out how to only detect the ComboBox filename field.
public class MyForm3 : Form {
public MyForm3() {
Button btn = new Button { Text = "Button" };
Controls.Add(btn);
btn.Click += btn_Click;
}
[DllImport("user32.dll", SetLastError = true)]
private static extern IntPtr SetWinEventHook(uint eventMin, uint eventMax, IntPtr hmodWinEventProc, WinEventProc lpfnWinEventProc, uint idProcess, uint idThread, uint dwFlags);
[DllImport("user32.dll")]
private static extern bool UnhookWinEvent(IntPtr hWinEventHook);
[DllImport("user32.dll")]
private static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int count);
[DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
private static extern int GetClassName(IntPtr hWnd, StringBuilder lpClassName, int nMaxCount);
[DllImport("user32.dll", SetLastError = true)]
private static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId);
private const int WINEVENT_OUTOFCONTEXT = 0;
private const uint EVENT_OBJECT_VALUECHANGE = 0x800E;
void btn_Click(object sender, EventArgs e) {
uint pid = 0;
uint tid = 0;
using (var p = Process.GetCurrentProcess())
GetWindowThreadProcessId(p.MainWindowHandle, out pid);
var hHook = SetWinEventHook(EVENT_OBJECT_VALUECHANGE, EVENT_OBJECT_VALUECHANGE, IntPtr.Zero, CallWinEventProc, pid, tid, WINEVENT_OUTOFCONTEXT);
OpenFileDialog d = new OpenFileDialog();
d.ShowDialog();
d.Dispose();
UnhookWinEvent(hHook);
MessageBox.Show("Original filename: " + OpenFilenameText);
}
private static String OpenFilenameText = "";
private static WinEventProc CallWinEventProc = new WinEventProc(EventCallback);
private delegate void WinEventProc(IntPtr hWinEventHook, int iEvent, IntPtr hWnd, int idObject, int idChild, int dwEventThread, int dwmsEventTime);
private static void EventCallback(IntPtr hWinEventHook, int iEvent, IntPtr hWnd, int idObject, int idChild, int dwEventThread, int dwmsEventTime) {
StringBuilder sb1 = new StringBuilder(256);
GetClassName(hWnd, sb1, sb1.Capacity);
if (sb1.ToString() == "Edit") {
StringBuilder sb = new StringBuilder(512);
GetWindowText(hWnd, sb, sb.Capacity);
OpenFilenameText = sb.ToString();
}
}
}
A: If you only want to get the URL (not download the file), set the CheckFileExists flag to false.
Example code below
string urlName = null;
using (var dlg = new OpenFileDialog())
{
dlg.CheckFileExists = false;
dlg.ShowDialog();
urlName = dlg.FileName;
urlName = Path.GetFileName(urlName);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32686429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Angular.js Authentication with Node.js I'm unable to log in from the front-end; it might be a problem with Angular-ui-Router. When I click the login button I first got an error, 'Possibly unhandled rejection', so I injected '$qProvider' in the config file. After adding $qProvider I don't see any error in the console, but the page does not change its state, whereas in the network tab I can see the token is sent from the server. Can someone please help me?
App.config.js
angular.module('myApp', ['ui.router', 'myAppModel'])
.config(function($qProvider, $stateProvider, $urlRouterProvider) {
$urlRouterProvider.otherwise('/');
$qProvider.errorOnUnhandledRejections(false);
$stateProvider
.state('base', {
abstract: true,
views: {
'nav@': {
template: '<navigation1></navigation1>'
}
}
})
.state('base.login', {
parent: 'base',
url: '/',
views: {
'main@': {
template: '<login></login>'
}
}
})
.state('base.register', {
parent: 'base',
url: '/register',
views: {
'main@': {
template: '<register></register>'
}
}
})
.state('base.dashboard', {
parent: 'base',
url: '/dashboard',
views: {
'main@': {
template: '<dashboard></dashboard>'
}
}
})
});
login.js
angular.module('myAppModel')
.component('login', {
templateUrl: '/components/login/login.html',
controller: function loginCtrl(Authentication, $state) {
var ctrl = this;
ctrl.pageHeader = 'Login';
ctrl.credentials = {
email: '',
password: ''
};
ctrl.onSubmit = function () {
ctrl.formError = '';
if(!ctrl.credentials.email || !ctrl.credentials.password) {
ctrl.formError = 'All fields required, please try again';
return false;
}else {
ctrl.doLogin();
}
};
ctrl.doLogin = function () {
ctrl.formError = '';
Authentication.login(ctrl.credentials).then(function(status) {
console.log(status);
$state.go('base.dashboard');
});
};
}
});
Authentication-service.js
angular.module('myAppModel')
.service('Authentication', function ($window, $http) { //Register new service with application and inject $window service
var saveToken = function (token) { //create a saveToken method to save a value to localStorage
$window.localStorage['app-token'] = token;
};
var getToken = function () { //Create a getToken method to read a value from localStorage
return $window.localStorage['app-token'];
}
var register = function (user) {
return $http.post('/api/users/register', user).then(function(data){
saveToken(data.token);
});
};
var login = function (user) {
return $http.post('/api/users/login', user).then(function (data) {
saveToken(data.token);
});
};
var logout = function () {
$window.localStorage.removeItem('app-token');
};
var isLoggedIn = function () {
var token = getToken(); //Get token from storage
if(token) { //If token exists get payload decode it, and parse it to JSON
var payload = JSON.parse($window.atob(token.split('.')[1]));
return payload.exp > Date.now() / 1000; //validate whether expiry date has passed
}else {
return false;
}
};
//Getting User Information from the JWT
var currentUser = function () {
if (isLoggedIn()) {
var token = getToken();
var payload = JSON.parse($window.atob(token.split('.')[1]));
return {
email: payload.email,
name: payload.name
};
}
};
return {
saveToken: saveToken, //Expose methods to application
getToken: getToken,
register: register,
login: login,
logout: logout,
isLoggedIn: isLoggedIn,
currentUser: currentUser
};
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43290498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to save information in multiple text boxes with save and load buttons I'm trying to create a simple Android program that has text boxes for name, address, phone number, etc. When the user puts this information in and hits save it clears the text boxes, and when they hit the load button it retrieves the info. I know how to do it with one EditText box, but I can't figure out multiple. Can I do this inside one try/catch statement, or do I need more than one? This is what I have right now:
public class MainActivity extends ActionBarActivity {
private EditText textBoxName;
private EditText textBoxAddress;
private EditText textBoxCity;
private EditText textBoxPhone;
private EditText textBoxEmail;
private static final int READ_BLOCK_SIZE = 100;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
textBoxName = (EditText) findViewById(R.id.txtName);
textBoxAddress = (EditText) findViewById(R.id.txtAddress);
textBoxCity = (EditText) findViewById(R.id.txtCity);
textBoxPhone = (EditText) findViewById(R.id.txtPhone);
textBoxEmail = (EditText) findViewById(R.id.txtEmail);
Button saveBtn = (Button) findViewById(R.id.btnSave);
Button loadBtn = (Button) findViewById(R.id.btnLoad);
saveBtn.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
String strName = textBoxName.getText().toString();
String strAddress = textBoxAddress.getText().toString();
String strCity = textBoxCity.getText().toString();
String strPhone = textBoxPhone.getText().toString();
String strEmail = textBoxEmail.getText().toString();
try {
FileOutputStream fOut = openFileOutput("textfile.txt", MODE_WORLD_READABLE);
OutputStreamWriter osw = new OutputStreamWriter(fOut);
//write the string to the file
osw.write(strName);
osw.flush();
osw.close();
//display file saved messages
Toast.makeText(getBaseContext(), "File saved successfully!",
Toast.LENGTH_SHORT).show();
//clears the EditText
textBoxName.setText("");
textBoxAddress.setText("");
textBoxCity.setText("");
textBoxPhone.setText("");
textBoxEmail.setText("");
}
catch (IOException ioe)
{
ioe.printStackTrace();
}
}
});
loadBtn.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
try
{
FileInputStream fIn = openFileInput("textfile.txt");
InputStreamReader isr = new InputStreamReader(fIn);
char[] inputBuffer = new char[READ_BLOCK_SIZE];
String s = "";
int charRead;
while ((charRead = isr.read(inputBuffer))>0)
{
//convert the chars to a String
String readString = String.copyValueOf(inputBuffer, 0, charRead);
s += readString;
inputBuffer = new char[READ_BLOCK_SIZE];
}
//set the EditText to the text that has been read
textBoxName.setText(s);
textBoxAddress.setText(s);
textBoxCity.setText(s);
textBoxPhone.setText(s);
textBoxEmail.setText(s);
Toast.makeText(getBaseContext(), "File loaded successfully!",
Toast.LENGTH_SHORT).show();
}
catch (IOException ioe)
{
ioe.printStackTrace();
}
}
});
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
int id = item.getItemId();
if (id == R.id.action_settings) {
return true;
}
return super.onOptionsItemSelected(item);
}
}
A: You can use Shared Preferences to store and retrieve information in Android.
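For example, a minimal sketch of backing the save and load buttons with SharedPreferences instead of a file (the preference name and keys are illustrative):
SharedPreferences prefs = getSharedPreferences("contact_prefs", MODE_PRIVATE);

// Save: write each field under its own key, then clear the boxes as before.
prefs.edit()
     .putString("name", textBoxName.getText().toString())
     .putString("address", textBoxAddress.getText().toString())
     .putString("phone", textBoxPhone.getText().toString())
     .apply();

// Load: read each key back into its own EditText.
textBoxName.setText(prefs.getString("name", ""));
textBoxAddress.setText(prefs.getString("address", ""));
textBoxPhone.setText(prefs.getString("phone", ""));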
A: You can use shared preferences for this purpose. Just put the values into a shared preference and load them when the user needs the info, e.g. a username and password saved locally for a login form.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26452742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to parse the array from one activity to fragment in android? I am building an Android application where a user selects their favorite stuff.
The name of the stuff is added to an array by clicking on the image of that stuff.
Now what I need is: how can I pass the values of that array to a fragment and show them in my spinner list?
For example, the user selects Mobile and Tablet by clicking their images; these values are added to an array named stuffarray. I need to pass this array to my fragment when a submit button is clicked, so that when I click the spinner in my fragment it has the mobile and tablet values in its list.
Here is my code for stuff selection:
submite = (ImageButton) findViewById(R.id.nextscreen);
next.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View arg0) {
// TODO Auto-generated method stub
Intent innext = new Intent(getApplicationContext(), MainActivitytabnew.class);
startActivity(innext);
}
});
img1 = (ImageButton) findViewById(R.id.imageButton1);
img1.setBackgroundResource(R.drawable.mobile);
img1.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
// TODO Auto-generated method stub
isClicked1=!isClicked1;
if (isClicked1) {
img1.setImageResource(R.drawable.mobile);
start();
stuff1 = "mobile";
myList.add(stuff1);
}else {
img1.setImageResource(R.drawable.mobile);
myList.remove(sport1);
//sport1 = "";
txt1.setText("");
}
}
});
img2 = (ImageButton) findViewById(R.id.imageButton2);
img2.setBackgroundResource(R.drawable.tablet);
img2.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
// TODO Auto-generated method stub
isClicked2=!isClicked2;
if (isClicked2) {
img2.setImageResource(R.drawable.tablet);
start();
stuff2 = "tablet";
myList.add(stuff2);
}else {
img2.setImageResource(R.drawable.tablet);
// sport2 = "";
myList.remove(sport2);
}
}
});
A: It seems that you are passing a List<String> to the fragment.
You should use a Bundle to hold your data and then pass it to the fragment using Fragment.setArguments.
Here is an example:
Bundle data = new Bundle();
data.putStringArrayList("your_argument_name", dataList);
Fragment f = ...;
f.setArguments(data);
Here is how to read the value in your fragment's code, e.g. in onCreateView:
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
Bundle data = getArguments();
if (data != null) {
List<String> dataList = data.getStringArrayList("yout_argument_name");
}
...
}
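And, if it helps, a sketch of actually attaching the fragment after setting its arguments (this assumes a container view with id R.id.container in the activity's layout):
getFragmentManager().beginTransaction()
        .replace(R.id.container, f)
        .commit();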
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26858821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Number of pairs that create a real number when placed in the exponent of complex number i In my intro Python class, we're exploring the use of complex numbers. I'm asked to use for loops to find the number of pairs of variables x and y that, for a given value of N, satisfy the conditions
a) 0 <= x < y <= N and
b) i^x + i^y is a real number (not complex)
Since the value of i is sqrt(-1), this happens only when x and y are even numbers:
sqrt(-1)^0 = 1 => this is real
sqrt(-1)^1 = sqrt(-1) => this is complex
sqrt(-1)^2 = -1 => this is real
sqrt(-1)^3 = -sqrt(-1) => this is complex
and so on
So, I've written code as follows
a1 = 0
N = int(raw_input('Please select a value for N: '))
for x in range(0, N):
for y in range(x+1, N+1):
if x%2==0 and y%2==0:
a1 = a1+1
This utilizes the mod operator to determine whether the values for x and y are even or odd, and, if both values are even, it advances the count variable a1 by 1. It is given to us that for N = 100, our total count, what I have as a1, should be 1900. However, at the end of my loop, my a1 is 1275. I'm having trouble analyzing the code to determine where the error is.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39583390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Django creates Static Folder When uploading my static files with `collectstatic` to Google Cloud Storage, it uploads the files into the root of the bucket instead of into a folder called "/static/", so the production site can't read the statics. How can I create the "/static/" folder and upload the files there on GCS?
These are my settings:
ALLOWED_HOSTS = ["*"]
DEBUG = True
INSTALLED_APPS += ["django_extensions", ]
DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
STATICFILES_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
GS_BUCKET_NAME = "static-bucket"
GS_MEDIA_BUCKET_NAME = "bucket-name"
A: Requirement for loading staticfiles GCS
Go to GCP: Cloud Storage (GCS) and click on CREATE BUCKET (fill-up as needed)
Once created, you can make it public if you want it to act like a CDN of your website (storage of your static files such as css, images, videos, etc.)
Go to your newly created bucket
Go to Permissions and then Click Add members
Add a new member "allUsers" with role "Cloud Storage - Storage Object Viewer"
Reference: https://cloud.google.com/storage/docs/quickstart-console
Main references:
https://django-storages.readthedocs.io/en/latest/backends/gcloud.html
https://medium.com/@umeshsaruk/upload-to-google-cloud-storage-using-django-storages-72ddec2f0d05
Step 1 (easier and faster, but requires constant manual copying of files to GCS)
Configure your Django's static file settings in your settings.py
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'templates'),
os.path.join(BASE_DIR, "yourapp1", "templates"),
os.path.join(BASE_DIR, "yourapp2", "static"),
os.path.join(BASE_DIR, "watever"),
"/home/me/Music/TaylorSwift/",
"/home/me/Videos/notNsfw/",
]
STATIC_ROOT = "/var/www/mywebsite/"
STATIC_URL = "https://storage.googleapis.com/<your_bucket_name>/"
Step 2 If you have HTML files or CSS files that access other static files, make sure that they reference those other static files with this updated STATIC_URL setting.
In your home.html
{% load static %}
<link rel="stylesheet" type="text/css" href="{% static 'home/css/home.css' %}">
Then in your home.css file
background-image: url("../assets/img/myHandsomeImage.jpg");
You can actually read the reference from the link I have provided.
https://django-storages.readthedocs.io/en/latest/backends/gcloud.html
https://medium.com/@umeshsaruk/upload-to-google-cloud-storage-using-django-storages-72ddec2f0d05
Your home CSS link will now translate to:
https://storage.googleapis.com/[your_bucket_name]/home/css/home.css
If you wish, you could just put the absolute path (complete URL), but such configuration would always require you to update the used URLs manually, like if you switched to development mode and wanted to just access the static files locally instead of from GCS.
This would copy all files from each directory in STATICFILES_DIRS to STATIC_ROOT directory.
python3 manage.py collectstatic
# or if your STATIC_ROOT folder requires permissions to write to it then:
# sudo python3 manage.py collectstatic
Okay, after searching through Stack Overflow I see this problem has been solved already, and I don't want this to be a form of a duplicate. So here is the link to the Stack Overflow solution: Serve Static files from Google Cloud Storage Bucket (for Django App hosted on GCE).
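Regarding the "/static/" prefix specifically, one possible direction (an assumption based on django-storages' documented location setting, not verified against your exact version) is to give the static storage a location prefix via a small custom storage class:
# e.g. mysite/gcs_storages.py (hypothetical module name)
from storages.backends.gcloud import GoogleCloudStorage

class StaticGoogleCloudStorage(GoogleCloudStorage):
    location = "static"  # objects land under gs://<bucket>/static/

# settings.py
STATICFILES_STORAGE = "mysite.gcs_storages.StaticGoogleCloudStorage"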
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69622200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: C# FileInfo Name foreach I have this code:
FileInfo finfo = new FileInfo(Path.Combine(Directory.GetCurrentDirectory(), "cleo", file.Key));
DialogResult modifiedcleofiles = MessageBox.Show("error", "error title", MessageBoxButtons.OKCancel, MessageBoxIcon.Warning);
if(modifiedcleofiles == DialogResult.OK)
{
Message.Info(finfo.Name); //It prints filename ok
if (!Directory.Exists(Path.Combine(Directory.GetCurrentDirectory(), "gs", "CLEO")))
Directory.CreateDirectory(Path.Combine(Directory.GetCurrentDirectory(), "gS", "CLEO"));
foreach (FileInfo filemove in finfo) //I don't know how to make foreach
{
filemove.MoveTo(Path.Combine(Directory.GetCurrentDirectory(), "gS", "CLEO", filemove.Name));
}
}
So that's my problem. This code works fine without the foreach; it works with:
finfo.MoveTo(Path.Combine(Directory.GetCurrentDirectory(), "gS", "CLEO", finfo.Name));
It's OK, but if there are more than 2 files it shows two message boxes. I want to move all finfo files. NOTE: all finfo files, not all files which exist in the folder! Thanks in advance. P.S. Do not give -karma etc., I'm a newbie.
P.S. I tried this code:
foreach (FileInfo filemove in finfo.Directory.EnumerateFiles()) {
filemove.MoveTo(Path.Combine(Directory.GetCurrentDirectory(), "gS", "CLEO", filemove.Name));
}
It moves all files from that folder, but I need only finfo.
A: Just add
.Where(i => i.Extension.Equals(".finfo", StringComparison.InvariantCultureIgnoreCase))
to the enumerable. Or better, use the other overload of EnumerateFiles:
foreach (FileInfo filemove in finfo.Directory.EnumerateFiles("*.finfo")) {
Also, note that MoveTo only works on the same logical drive. If that's not enough for you, you will need to add your own copy+delete method for moving files across drives.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31586039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I build a dictionary of members with children from mysql in python? I am developing a unilevel Multi Level Marketing (MLM) application which stores data in MySQL with columns id, parent_id,
with each item having one parent. Each item can have up to 50 children. Given a particular id, how do I get all of its children, and the children of its children, etc., as well as its parents, its parents' parents, etc.? I want to get this data into a dict and display it. Thanks
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66650326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |