Q: How do I use a date function in Access? I'm trying to put items into a table which have been deemed over due. I have the date of when items were completed last in a list, and I know the frequency of how often the items need to be completed.
Ex. I have a cleaning that happened on 4/7/17 and it needs to be cleaned 30 days after it was last cleaned, and I have a cleaning that occurred 1/13/17 and it needs to be cleaned 90 days after it was last cleaned.
How can I get Access to show me overdue items in a separate list? If it helps, I will click a button before going to this table. The thing is, not every item needs to be cleaned at the same frequency. To my knowledge, Access doesn't have the date functions like Excel and you cannot type functions into a cell. Thanks!
A: dim datedue as date, lastdate as date
datedue = Dateadd("d", 30, lastdate)
If datedue < Date() then
'do stuff
End if
This is basic syntax for checking dates. Since you didn't try anything on your own, this is all you get.
Have fun :)
A: You don't "type functions into a cell", you set the ControlSource of a textbox. And Access has dozens of date functions.
However, you could start with a query:
Select
*,
DateAdd("d", [CleaningFrequency], [LastCleaned]) As NextCleaning,
IIf(DateDiff("d", [LastCleaned], Date()) > [CleaningFrequency], "Overdue", Null) As [Status],
IIf(DateDiff("d", [LastCleaned], Date()) = [CleaningFrequency], "Yes", Null) As [Clean Today]
From
YourTable
Of course, replace field and table names with those of yours.
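For illustration, the same DateAdd/DateDiff logic can be sketched outside Access, for example in Python; the function name and the check date below are invented around the question's example, not part of the Access answer:

```python
from datetime import date

def cleaning_status(last_cleaned, frequency_days, today):
    """Mirror the query's IIf logic: overdue, due today, or neither."""
    elapsed = (today - last_cleaned).days
    if elapsed > frequency_days:
        return "Overdue"
    if elapsed == frequency_days:
        return "Clean Today"
    return None

# The 4/7/17 item on a 30-day cycle, checked on 5/10/17: 33 days elapsed
print(cleaning_status(date(2017, 4, 7), 30, date(2017, 5, 10)))  # -> Overdue
```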
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44009683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Conversion from type 'DBNull' to type 'Date' is not valid on tryparse I am attempting to do a DateTime.TryParse from a date-formatted string, to a new column that I created as part of a DataView table. I am getting this error when the code runs Conversion from type 'DBNull' to type 'Date' is not valid.
This is the line of code:
DateTime.TryParse(dr("IX_ArticleStartDate"), dr("nStartDate"))
And these are the values in my watch when it errors out.
+ dr("IX_ArticleStartDate") "2015/3/11" {String} Object
+ dr("nStartDate") {} Object
I was under the impression that a TryParse would return a NULL value if it fails to convert datatypes. Is there something that I should be doing different to convert this string to a DateTime datatype?
dr is instantiated as a DataRow
Dim dr As DataRow = dv0.Table.Rows(i)
A: VB implicitly tries to cast the DBNull value to DateTime, since the method signature of DateTime.TryParse is
Public Shared Function TryParse(s As String, ByRef result As Date) As Boolean
which fails. You can use a variable instead:
dim startDate as DateTime
If DateTime.TryParse(dr("IX_ArticleStartDate").ToString(), startDate) Then
dr("startDate") = startDate
End If
A: A DateTime is a value type and cannot have a null value. You can do something like this:
Dim result As DateTime
Dim myDate As DateTime? = If(Not dr.IsNull("IX_ArticleStartDate") AndAlso _
DateTime.TryParse(dr("IX_ArticleStartDate").ToString(), result), _
result, New DateTime?)
In this example, the variable myDate is not actually a DateTime, it's a Nullable(Of DateTime), as indicated by the question mark ? after DateTime in the declaration (see MSDN). The TryParse method takes a string value as the first argument and an output parameter, which is a DateTime value, as the second argument. If the parse is successful, it returns True and sets the output parameter to the parsed DateTime value; if the parse is not successful, it returns False and sets the output parameter to DateTime.MinValue, which is not very useful because it's difficult to distinguish between a valid use of DateTime.MinValue and null.
The example makes use of a ternary operator which either parses and returns the date value, if it's not null, or else returns a nullable date instead (New DateTime?).
You would then use myDate.HasValue() accordingly to determine if the value is null, and if it's not null, you can use its value: myDate.Value.
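The nullable-result idea is language-agnostic; here is a minimal parse-or-None sketch in Python, assuming the question's "yyyy/m/d" string format (the helper name is invented for illustration):

```python
from datetime import datetime
from typing import Optional

def try_parse_date(text: Optional[str]) -> Optional[datetime]:
    """Return a parsed date, or None on failure, instead of a MinValue sentinel."""
    if text is None:
        return None
    try:
        return datetime.strptime(text, "%Y/%m/%d")
    except ValueError:
        return None

print(try_parse_date("2015/3/11"))   # a real datetime
print(try_parse_date("not a date"))  # None, no exception raised
```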
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29392499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Generating multiple data with SQL query I have 2 tables as below
Product_Asset:
PAId Tracks
1 2
2 3
Product_Asset_Resource:
Id PAId TrackNumber
1 1 1
2 1 2
3 2 1
4 2 2
5 2 3
I would like to know if I can generate the data in product_asset_resource table based on product_asset table using TSQL query (without complex cursor etc.)
For example, if the number of tracks in product_asset is 3 then I need to populate 3 rows in product_asset_resource with track numbers as 1,2,3
A: You can do this with the help of a Tally Table.
WITH E1(N) AS( -- 10 ^ 1 = 10 rows
SELECT 1 FROM(VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t(N)
),
E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b), -- 10 ^ 2 = 100 rows
E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b), -- 10 ^ 4 = 10,000 rows
CteTally(N) AS(
SELECT TOP(SELECT MAX(Tracks) FROM Product_Asset)
ROW_NUMBER() OVER(ORDER BY(SELECT NULL))
FROM E4
)
SELECT
Id = ROW_NUMBER() OVER(ORDER BY pa.PAId, t.N),
pa.PAId,
TrackNumber = t.N
FROM Product_Asset pa
INNER JOIN CteTally t
ON t.N <= pa.Tracks
A: Try this; I am not using any Tally Table:
declare @Product_Asset table(PAId int,Tracks int)
insert into @Product_Asset values (1 ,2),(2, 3)
;with CTE as
(
select PAId,1 TrackNumber from @Product_Asset
union all
select pa.PAId,TrackNumber+1 from @Product_Asset pa
inner join cte c on pa.PAId=c.PAId
where c.TrackNumber<pa.Tracks
)
select ROW_NUMBER()over(order by paid)id, * from cte
IMHO, whether a recursive CTE, a subquery, or a temp table performs best depends on the specific case.
I find recursive CTEs more readable and won't avoid them unless they exhibit a performance problem.
I am not convinced that a recursive CTE is hidden RBAR.
A CTE is just syntax, so in theory it is just a subquery.
We can pick examples where using a #Temp table improves performance, but that doesn't mean we should always use temp tables.
Similarly, in this example a Tally Table may not improve performance, but that doesn't mean we should avoid Tally Tables altogether.
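For illustration, the row expansion both answers produce can be sketched in Python (the data literal comes from the question's Product_Asset table):

```python
# Fan each (PAId, Tracks) pair out into Tracks rows, numbering them the way the
# tally join / recursive CTE do, then assign a running Id.
product_asset = [(1, 2), (2, 3)]  # (PAId, Tracks)

expanded = [
    (paid, track)
    for paid, tracks in product_asset
    for track in range(1, tracks + 1)
]
rows = [(i, paid, track) for i, (paid, track) in enumerate(expanded, start=1)]
print(rows)  # [(1, 1, 1), (2, 1, 2), (3, 2, 1), (4, 2, 2), (5, 2, 3)]
```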
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36928831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: sending email with Indy to a gmail address message goes to spam Hello I'm trying to send a message from delphi (indy) to a gmail address. When I go to Gmail I found my message in spam folder. If I try to send the same message with PHPMailer from web it works correctly. This is the code.
Thanks
//setup SMTP
SMTP.Port := 25;
SMTP.ConnectTimeout := 1000;
SMTP.Host := 'smtp.xxxxxx.it';
SMTP.Username := '[email protected]';
SMTP.Password := 'xxxxxx';
SMTP.Connect();
if SMTP.Authenticate then
begin
//setup mail message
MailMessage.From.Name := 'xxxxxx';
MailMessage.From.Address := '[email protected]';
MailMessage.Recipients.EMailAddresses := '[email protected]';
MailMessage.Subject := ledSubject.Text;
MailMessage.ContentType := 'multipart/mixed';
htmpart := TIdText.Create(MailMessage.MessageParts, nil);
htmpart.Body := Body.Lines;
htmpart.ContentType := 'text/html';
//send mail
try
try
SMTP.Send(MailMessage);
except on E:Exception do
StatusMemo.Lines.Insert(0, 'ERROR: ' + E.Message);
end;
finally
if SMTP.Connected then SMTP.Disconnect;
end;
end;
A: I've probably found the problem. Looking at the message in 'original mode', I found in the header that Google says 'MISSING ID', so I tried adding this code:
MailMessage.MsgId := '[email protected]';
MailMessage.ExtraHeaders.Values['Message-Id'] := MailMessage.MsgId;
Now it seems to work fine.
thanks
A: Have you tried changing HeloName and MailAgent of IdSMTP? If you use the same domain with PHPMailer, my guess is GMail considers emails coming from your application as spam because it doesn't detect/like the application which is sending them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14482160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Problem with random variable elements for Web Scraping I'd like to create a relative function to access every game report available in this table: https://fbref.com/fr/comps/13/calendrier/Scores-et-tableaux-Ligue-1
I started making a relative URL, but there is an element in the URL that I believe to be random.
Here is what I refer to:
* cbdc95fe for: https://fbref.com/fr/matchs/cbdc95fe/Lille-Auxerre-7-Aout-2022-Ligue-1
* 00173ae0 for: https://fbref.com/fr/matchs/00173ae0/Nantes-Lille-12-Aout-2022-Ligue-1
To overcome this limit, do you know efficient ways to open all links within the table?
Thanks for your help !
A: Using the requests and lxml libraries (lxml for XPath), this becomes a fairly straightforward task:
import requests
from lxml import etree

s = requests.session()
r = s.get("https://fbref.com/fr/comps/13/calendrier/Scores-et-tableaux-Ligue-1")
tree = etree.HTML(r.content)
matchreporturls = tree.xpath('//td[@data-stat="match_report"]/a[text()="Rapport de match "]/@href')

for matchreport in matchreporturls:
    r = s.get("https://fbref.com" + matchreport)
    # do something with the response data
    print('scraped {0}'.format(r.url))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74180964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Which one is better for (mysql) performance?
"1500 (300x5) Tables with 365 rows per table" OR "(300x5)1500x365 rows in 1 table" ?
(I will get the data(s) with php.)
365 -> the number of days in the year
* if second: I will use "date" and "x_id" to get 1 row from (300x5)1500x365 rows.
* if first: I will use "date" and "table_name" to get 1 row from 365 rows.
A: Second one.
http://datacharmer.blogspot.com/2009/03/normalization-and-smoking.html
[edited after updates to question]
In the second case you can create an index on the columns (x_id, date), which will improve the performance of WHERE x_id = ? AND date = ? searches. Some ~550,000 rows is not much for a well-indexed table.
A: Assuming that all the relationships in the data are one-to-one, then the second option of using a single table is the better (normalized) approach.
But there's no detail to the multi-table option, or the data that is being stored. Bad design, like a table per user, is responsible for performance - not the fact of numerous tables.
Update
Things are still quite vague, but the data you want to store is daily and over the course of years. What makes it different that you would consider separate tables with identical formats? The identical tables prompts me to suggest single main table, with some supporting tables involved for things like status and/or type codes to differentiate between records in a single table that were obvious in a separate table approach.
A: That depends on the engine you are using and also on whether you will have more reads or more writes.
For that, look at the way each engine locks the table for reads and writes.
But 1500 tables is a lot. I would expect something like 10 tables.
1 table is not a bad choice either, but if one day you want to have multiple servers it is going to be easier to spread the data across 10 tables.
Also, if you change the structure of the table, it is going to take a long time with 1 table. (I know that it shouldn't happen, but in fact it does.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3431282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Lambda Expression using Foreach Clause
Possible Duplicate:
Why is there not a ForEach extension method on the IEnumerable interface?
EDIT
For reference, here's the blog post which eric referred to in the comments
https://ericlippert.com/2009/05/18/foreach-vs-foreach/
ORIG
More of a curiosity I suppose but one for the C# Specification Savants...
Why is it that the ForEach() clause doesn't work (or isn't available) for use on IQueryable/IEnumerable result sets...
You have to first convert your results ToList() or ToArray()
Presumably there's a technical limitation to the way C# iterates IEnumerables vs. Lists...
Is it something to do with the deferred execution of IEnumerable/IQueryable collections?
e.g.
var userAgentStrings = uasdc.UserAgentStrings
.Where<UserAgentString>(p => p.DeviceID == 0 &&
!p.UserAgentString1.Contains("msie"));
//WORKS
userAgentStrings.ToList().ForEach(uas => ProcessUserAgentString(uas));
//WORKS
Array.ForEach(userAgentStrings.ToArray(), uas => ProcessUserAgentString(uas));
//Doesn't WORK
userAgentStrings.ForEach(uas => ProcessUserAgentString(uas));
A: It's perfectly possible to write a ForEach extension method for IEnumerable<T>.
I'm not really sure why it isn't included as a built-in extension method:
* Maybe because ForEach already existed on List<T> and Array prior to LINQ.
* Maybe because it's easy enough to use a foreach loop to iterate the sequence.
* Maybe because it wasn't felt to be functional/LINQy enough.
* Maybe because it isn't chainable. (It's easy enough to make a chainable version that yields each item after performing an action, but that behaviour isn't particularly intuitive.)
public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
{
if (source == null) throw new ArgumentNullException("source");
if (action == null) throw new ArgumentNullException("action");
foreach (T item in source)
{
action(item);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/858978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
} |
Q: Sort an NSMutableArray / Dictionary with several objects I am having a problem that I think I am overcomplicating.
I need to make either an NSMutableArray or NSMutableDictionary. I am going to be adding at least two objects like below:
NSMutableArray *results = [[NSMutableArray alloc] init];
[results addObject: [[NSMutableArray alloc] initWithObjects: [NSNumber numberWithInteger:myValue01], @"valueLabel01", nil]];
This gives me the array I need but after all the objects are added I need to be able to sort the array by the first column (the integers - myValues). I know how to sort when there is a key, but I am not sure how to add a key or if there is another way to sort the array.
I may be adding more objects to the array later on.
A: Quick reference to another great answer for this question:
How to sort NSMutableArray using sortedArrayUsingDescriptors?
NSSortDescriptors can be your best friend in these situations :)
A: What you have done here is create a list with two elements: [NSNumber numberWithInteger:myValue01] and @"valueLabel01". It seems to me that you wanted to keep records, each with a number and a string? You should first make a class that will contain the number and the string, and then think about sorting.
A: Doesn't the sortedArrayUsingComparator: method work for you? Something like:
- (NSArray *)sortedArray {
return [results sortedArrayUsingComparator:(NSComparator)^(id obj1, id obj2)
{
NSNumber *number1 = [obj1 objectAtIndex:0];
NSNumber *number2 = [obj2 objectAtIndex:0];
return [number1 compare:number2]; }];
}
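For comparison, the same first-column sort can be sketched in Python; the sample values here are invented:

```python
# Records are (number, label) pairs; sort by the first element, which is what
# the comparator above does with objectAtIndex:0.
results = [(42, "valueLabel01"), (7, "valueLabel02"), (19, "valueLabel03")]

sorted_results = sorted(results, key=lambda record: record[0])
print(sorted_results)  # [(7, 'valueLabel02'), (19, 'valueLabel03'), (42, 'valueLabel01')]
```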
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12165525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can't find the file I created on android tablet The /data directory in android is empty. I have installed a few applications and I also wrote code to create a file in the /data/Myapp directory.
I have these lines in the onCreate method, but I don't see any files in the /data directory after running the program.
File fileDir = getFilesDir();
String filedir=fileDir.toString();
Log.v("TEST","FILEDIR--->"+filedir);
//output i got is FILEDIR--->/data/data/com.android.Test/files
String strNewFileName = "test1.txt";
String strFileContents = " MY FIRST FILE CREATION PRGM";
File newFile = new File(fileDir, strNewFileName);
try{
boolean filestat = newFile.createNewFile();
Log.v("TEST"," CREATE FILE =>"+ filestat);
//output is CREATE FILE =>true indicating success
FileOutputStream fo =
new FileOutputStream(newFile.getAbsolutePath());
fo.write(strFileContents.getBytes());
fo.close();
}
catch(IOException e)
{Log.v("TEST","Exception on create ");}
A: On a real device, you can't see these folders if it's not rooted. See this question.
If you want to write files in your app directory, then your code is OK. You can check that it is there in the same way you created it - from your application:
File myFile = new File(getFilesDir() + "/test1.txt");
if (myFile.exists())
{
//Good!
}
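The write-then-verify pattern the answer suggests can be sketched in Python, with a temporary directory standing in for the app's private files directory (the file name and contents are taken from the question):

```python
import os
import tempfile

files_dir = tempfile.mkdtemp()  # stand-in for getFilesDir()
path = os.path.join(files_dir, "test1.txt")

# Create the file, then check for it the same way it was created.
with open(path, "w") as f:
    f.write(" MY FIRST FILE CREATION PRGM")

print(os.path.exists(path))  # True: the file is visible to the code that wrote it
```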
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5458327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: how to sort parameters with Jenkins parametrized job? Problem:
A Jenkins job is starting to have a lot of parameters (over 20).
Is there a way to group them, have title, etc...?
I looked up and found none.
A: There is no way of sorting parameters automatically that I'm aware of. You can arrange them via Drag&Drop in your project's config manually, of course.
You can group them using the Parameter Separator Plugin:
Meta Data → [✔] This build is parameterized → Add Parameter → Parameter Separator
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31683273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I copy dependencies from a library project to the main project? I have an Asp.Net project (Vb.Net) that references a managed dll (library written in C#). That library project has several unmanaged dependencies dlls in a lib folder (copied into bin/Release/lib folder during build). The library is not a part of the main solution.
My library uses [DllImport] that references an unmanaged dll. But to let the unmanaged dll be found, I call SetDllDirectory():
string path = // don't know how to generate the path
SetDllDirectory(path);
I am struggling with generating the path to the unmanaged dll and its dependencies. I can start with my main project bin folder or something. But what should I do next? E.g. is there a way to copy the unmanaged dlls from the library's bin/Release/lib folder to my main project's bin folder? Or some other solution?
A: There are two ways you can go about this:
* Add the Dlls to the VS project as a file, then set Build Action to None and Copy to output directory as Copy. This should ensure that any external dependencies of the referenced library are copied.
* Add a command like xcopy "$(ProjectDir)lib\*.*" "$(OutDir)\" /y /e or xcopy "$(ProjectDir)lib\*.*" "$(TargetDir)" /y to the Build Events section in Project properties. This should copy the \lib directory from the project root to the output.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51383856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to get iframe images to print as part of full document in IE I have two IFRAMEs. Each IFRAME is loaded dynamically as the result of submitting a FORM. The result of each FORM submit is a dynamically generated jpeg image (Content-type: image/jpeg). The following page displays correctly (you can see both images). If you try to print this page in IE however, you just get an outline of the images, but not the actual images. It prints correctly in Safari and Chrome. How can you get the IFRAME images to print along with the rest of the document in IE?
I am using IE 9.0.8112.16421.
<HTML>
<HEAD>
<TITLE>View Image</TITLE>
</HEAD>
<BODY onLoad="top.document.Form1.submit();top.document.Form2.submit();">
<FORM NAME=Form1 ACTION="URL" METHOD=POST Target=Img1>
<INPUT TYPE=HIDDEN NAME=ID VALUE=123>
<INPUT TYPE=HIDDEN NAME=Cmd VALUE=DispImage1>
</FORM>
<FORM NAME=Form2 ACTION="URL" METHOD=POST Target=Img2>
<INPUT TYPE=HIDDEN NAME=ID VALUE=345>
<INPUT TYPE=HIDDEN NAME=Cmd VALUE=DispImage2>
</FORM>
<IFRAME NAME=Img1 SRC=Javascript:'' WIDTH=500 HEIGHT=250 FRAMEBORDER=0 SCROLLING=no>
</IFRAME><BR>
<IFRAME NAME=Img2 SRC=Javascript:'' WIDTH=500 HEIGHT=250 FRAMEBORDER=0 SCROLLING=no>
</IFRAME>
</BODY>
</HTML>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16324719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Building Framework as "Generic iOS Device" Causes Use of Undeclared Type in Project Using Framework Basically, as stated in the title, when I build my Cocoa Touch Framework for "Generic iOS Device," it causes "Use of Undeclared Type" compilation errors in my Xcode project using the framework. However, when I build the Xcode project for "Generic iOS Device" too, the errors go away.
My question is: How can I build the framework in such a way that it can be used for simulators as well as a generic iOS device?
I was under the impression that building a Cocoa Touch Framework for "Generic iOS Device" would allow it to be used in any build configuration. Is this incorrect?
Is there something that has to be changed in the build settings or schemes?
Thanks!
A: Update: I was mistaken, and due to simulators and iPhones having different architectures, you have to compile the framework for each one respectively. However, I was able to create a "fat framework" by following this Medium article: https://medium.com/@hassanahmedkhan/a-noobs-guide-to-creating-a-fat-library-for-ios-bafe8452b84b
This fat framework can be used for both "Generic iOS Device" and the simulator.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54975976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error using Firebase from Chrome App I am attempting to use Firebase from within a Chrome App (not an extension) and have not been able to work around the following error:
Refused to load the script
'https://{my firebase id}.firebaseio.com/.lp?start=t&ser=81980096&cb=15&v=5'
because it violates the following Content Security Policy directive:
"default-src 'self' chrome-extension-resource:". Note that
'script-src' was not explicitly set, so 'default-src' is used as a
fallback.
my manifest.json file looks like this:
{
"manifest_version": 2,
"name": "Simple Demo",
"version": "1",
"icons": {
"128": "icon_128.png"
},
"permissions": ["https://*.firebaseio.com/"],
"app": {
"background": {
"scripts": ["background.js"]
}
},
"minimum_chrome_version": "28"
}
I have the firebase.js script local to the chrome app, the issue seems to be that it tries to load other scripts which isn't permitted.
Any ideas are appreciated.
A: Well, you can use the Firebase REST API approach; in this case, I've used it for my Chrome app and it's working fine. This doesn't require the Firebase SDK to be added!
Read the following docs,
https://firebase.googleblog.com/2014/03/announcing-streaming-for-firebase-rest.html
https://firebase.google.com/docs/reference/rest/database/
https://firebase.google.com/docs/database/rest/retrieve-data
The approach is very simple.
The protocol is known as SSE (Server-Sent Events): you listen to a specific server URL and, when something changes, you get an event callback.
In this case, Firebase officially provides an SSE-based mechanism.
A sample code snippet is,
//firebaseUrl will be like https://xxxxx.firebaseio.com/yourNode.json or https://xxxxx.firebaseio.com/yourNode/.json
//notice ".json" at the end of URL, whatever node you want to listen, add .json as suffix
var firebaseEventSS = new EventSource(firebaseUrl);
//and you will have callbacks where you get the data
firebaseEventSS.addEventListener("patch", function (e) {
var data = e.data;
}, false);
firebaseEventSS.addEventListener("put", function (e) {
var data = e.data;
}, false);
Just check a few EventSource examples, combine them with the Firebase docs for the REST API, and you are all set!
Don't forget to close() when it's done.
firebaseEventSS.close();
A: This is an old question, but the solution offered by the Firebase team is to use a different URL protocol for the database.
So instead of using https://<app>.firebaseio.com/ you'd use wss://<app>.firebaseio.com/.
I found the solution here. Here is an answer provided by the Firebase team on this subject.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29399594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Start SSH daemon in Laravel sail I'm using Laravel Sail and have published the Dockerfile (so I can edit it manually). The reason being that I need to use an OpenSSH server on the container itself for a program I'm using.
I am able to install OpenSSH by adding && apt-get install -y openssh-server to the packages section and configure it with the following:
RUN mkdir /var/run/sshd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
EXPOSE 22
The issue though is that it does not start when the container starts.
According to this answer you can add an ENTRYPOINT as follows:
ENTRYPOINT service ssh restart && bash
However you can only have one ENTRYPOINT and Laravel Sail has this by default:
ENTRYPOINT ["start-container"]
I have tried the following:
* add /usr/sbin/sshd to the start-container script - it didn't start
* add CMD ["/usr/sbin/sshd", "-D"] into the Dockerfile as per this answer - it gave a permission denied error when starting the container
A: As the php container uses supervisor you can add another application to the config. Reference for the config can be found here.
Example
ssh.ini:
[program:sshd]
command=/usr/sbin/sshd -D
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71288487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Xcode: I sent data from one view to the other with prepareForSegue, but can't send it to another view in prepareForSegue I have a problem. First I sent information from a textfield with prepareForSegue. I sent it from WelkomViewController to TableViewController. There I put the data in an NSMutableDictionary. It's a person and his values. Then I put the person in another NSMutableDictionary for multiple persons. In the tableview I create a list with only the full names of all the persons. With a click on a row I want to show a new view with the details of the person that the client clicked on. But it seems I can't send the data with prepareForSegue. (see image)
What do I do wrong? Thx for every reply!
project
A: You're trying to use a dictionary like an array, which doesn't work (obviously). You should not count on a fixed order for [self.personen allKeys], it can change at any point during runtime, which would result in the wrong person being passed along.
You need to use a mutable array as your storage vessel for people names, then address them by the row of the index path.
A: First of all, you need to use an NSMutableArray to save the people data, so make sure that self.personen is an NSMutableArray; then you just need to use the objectAtIndex function to get the selected row.
vcGevensView.persoon=[self.personen objectAtIndex:indexPath.row];
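The underlying point, that a stable row-to-record mapping comes from an array index rather than from a dictionary's key set, can be sketched in Python (the names mirror the question; the sample data is invented):

```python
# An ordered list gives each table row a fixed record; relying on the order of
# a dictionary's keys (like [self.personen allKeys]) gives no such guarantee.
personen = [
    {"name": "Anna", "age": 34},
    {"name": "Ben", "age": 28},
]

selected_row = 1                 # indexPath.row from the table view
persoon = personen[selected_row]
print(persoon["name"])  # Ben
```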
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39002063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: I can't limit a javascript animation in time I have to code an animation for an array. Every box, one by one, will take a white color for 3000 ms. But it's not working. Can you help me?
function searchNumber(){
var taille;
for(var i=0; i<taille; i++) {
tabRect[i].hide("5000");
tabRect[i].show();
}
}
A: You can do it with CSS3; javascript is not necessary for that:
https://jsfiddle.net/rzcdqh8k/
.animation {
width: 100px;
height: 100px;
background-color: red; /* original color */
-webkit-animation-name: example;
-webkit-animation-duration: 3s;
-webkit-animation-iteration-count: infinite;
animation-name: example;
animation-duration: 3s;
animation-iteration-count: infinite;
}
/* Chrome, Safari, Opera */
@-webkit-keyframes example {
from {background-color: red;} /* original color */
to {background-color: white;}
}
/* Standard syntax */
@keyframes example {
from {background-color: red;} /* original color */
to {background-color: white;}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36249449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Combining Provider with a stream provider not updating I am working on a technique to determine if someone has elevated rights, to show an edit icon. I am using Firebase for auth and firestore for the back end.
My thought was to have the page do a quick check for a record within a certain section that requires the user to be in that section. I.e., there is a section called /admins/. The rules will only let you read that data if your uid is in that list. I have that working.
So I built a FutureProvider:
final adminCheckProvider = FutureProvider<bool>((ref) async {
bool admin = false;
User? _uid = ref.watch(authStateChangesProvider).value;
if (_uid != null) {
print(_uid.uid);
// Check for Admin Document
final _fireStore = FirebaseFirestore.instance;
await _fireStore
.doc(FireStorePath.admin(_uid.uid))
.get()
.then((DocumentSnapshot documentSnapshot) {
if (documentSnapshot.exists) {
print('Document Exists: ${documentSnapshot.exists}');
return true;
}
});
}
return false;
});
and have a widget that is watching this provider.
class AdminEdit extends ConsumerWidget {
const AdminEdit({Key? key}) : super(key: key);
@override
Widget build(BuildContext context, WidgetRef ref) {
AsyncValue<bool> isAdmin = ref.watch(adminCheckProvider);
return isAdmin.when(
data: (isAdmin) {
print('Data Is In: $isAdmin');
if (isAdmin) {
return Text('Is An Admin');
}
return Text('Is Not Admin');
},
loading: () => Text('Is Not Admin'),
error: (error, stackTrace) => Text('Error: $error'),
);
}
}
I am seeing that the call originally returns false; when the data comes back it is determined that the document does exist, but it never sends out the all clear. I have tried this several different ways and have yet to have this return true. Here is some terminal output:
Data Is In: false
<<UID>>
Document Exists: true
Data Is In: false
If my approach to this is wrong I wouldn't mind hearing about that either.
Thanks in advance guys!
A: It looks like you are returning false as the default and only returning true inside the .then callback of your future, not from the actual FutureProvider.
Try something like this and it should work:
final adminCheckProvider = FutureProvider<bool>((ref) async {
User? _uid = ref.watch(authStateChangesProvider).value;
if (_uid != null) {
print(_uid.uid);
// Check for Admin Document
final _fireStore = FirebaseFirestore.instance;
final doc = await _fireStore.doc(FireStorePath.admin(_uid.uid)).get();
return doc.exists;
} else {
return false;
}
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72045478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Should I reinstall all Python libraries if I install Anaconda's Python as primary in my system? I have been using Python on Windows for some time to analyze survey data, usually available in the form of Excel files. For this reason I have installed several libraries, including pywin32, holoview, bokeh, pandas, numpy and so on.
Now I have found that there is a Python distribution called Anaconda which is a prerequisite for some artificial intelligence libraries that I would like to use.
I downloaded it, but when I install it, it recommends me to register the Python included in Anaconda as primary. This would mean that it would be seen as such by all the tools I use, such as PyCharm.
If I understand correctly, it is possible to have several Python installations on a PC, but what happens to the libraries? I mean, if I make Anaconda's Python primary, do I have to reinstall all the libraries I used before to run the programs I have already written?
I can't find an answer in the Anaconda FAQ, so before proceeding with the installation, I would need to better understand what conflicts I might possibly create on my system.
A: Here are some answers from my side.
1. Will the libraries & files conflict?
No. Both the local install and Anaconda keep separate site-packages folders to store installed libraries. No matter how many different versions of Python you install, there will be separate site-packages folders, named with their respective versions, to store installed libraries.
2. Do I need to re-install the packages I'm already using in the older Python before I run a program on Anaconda?
Yes. The local Python uses cmd (the Windows command prompt), while Anaconda uses the Anaconda prompt, which is installed along with the distribution. Anaconda and the local Python maintain separate storage locations to store and process data, which includes libraries, settings, environments, cache, and so on.
3. If we select Anaconda as primary, will it be seen as such by all the tools I use, such as PyCharm?
No. PyCharm will keep whatever configuration you are currently using, even if we install Anaconda and make it primary. However, you can still use Anaconda from PyCharm by creating a virtual environment for it.
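To see concretely which interpreter and site-packages directory a given Python is using, you can run a short check from each prompt (cmd vs. the Anaconda prompt). This is a minimal sketch using only the standard library:

```python
import site
import sys

# Path of the interpreter that is actually running this script:
print(sys.executable)

# The site-packages directories where this interpreter installs libraries;
# an Anaconda install and a plain Windows install report different paths.
print(site.getsitepackages())
```

Running this under both interpreters makes it obvious that each one resolves its own site-packages folder, which is why packages installed for one are invisible to the other.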
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74209563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Make current OAuth User Identity retrievable via IoC in WebAPI project I have an Owin-based OAuth 2.0 implementation in an ASP.NET WebAPI 2 application. Given that I set the correct claims on a validated identity, in my controller code I can get the currently authenticated user via the this.User property.
I need to set up the Ninject kernel so that on each request a request-scoped object is instantiated that holds the current user, and is available in an injectable manner, not just from controller code.
Please assist on how this can be done. Thank you.
A: Ok, found it.
kernel.Bind<IPrincipal>()
.ToMethod(context => HttpContext.Current.User)
.InRequestScope();
Will allow anyone injecting IPrincipal to get the current user name.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28431725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Merging a dataframe in a dict with another dataframe in Python I have a dict that currently looks like this:
raw_data = {'Series_Date':['2017-03-10','2017-03-10','2017-03-10','2017-03-13','2017-03-13','2017-03-13'],'Value':[1,1,1,1,1,1],'Type':['SP','1M','3M','SP','1M','3M'],'Desc':['Check SP','Check 1M','Check 3M','Check SP','Check 1M','Check 3M']}
import pandas as pd
df1= pd.DataFrame(raw_data,columns=['Series_Date','Value','Type','Desc'])
dict = {}
dict = {'Check':df1}
print dict
I am trying to merge the appended df with the df element of this dict, like so:
appended_data = {'Series_Date':['2017-03-13','2017-03-13','2017-03-13'],'Value':[1,1,1],'Type':['SP','1M','3M'],'Desc':['Check SP','Check 1M','Check 3M']}
import pandas as pd
appended = pd.DataFrame(appended_data,columns=['Series_Date','Value','Type','Desc'])
print appended
adfs = {k:df.merge(appended[appended.Desc==df.Desc],on=['Series_Date'],how='left',suffixes=['','_Appended']) for (k,df) in dict.items()}
However, on running this merge statement, I get the following error: ValueError: Can only compare identically-labeled Series objects
I tried reading up on this error but am not sure how it applies here. Any thoughts on what could be done to get past this error, or alternatively, another approach to do this merge?
A: Using 'pd.concat' can do the job here.
import pandas as pd
raw_data = {'Series_Date':['2017-03-10','2017-03-10','2017-03-10','2017-03-13','2017-03-13','2017-03-13'],'Value':[1,1,1,1,1,1],'Type':['SP','1M','3M','SP','1M','3M'],'Desc':['Check SP','Check 1M','Check 3M','Check SP','Check 1M','Check 3M']}
df1= pd.DataFrame(raw_data,columns=['Series_Date','Value','Type','Desc'])
print 'df1:\n', df1
appended_data = {'Series_Date':['2017-03-13','2017-03-13','2017-03-13'],'Value':[1,1,1],'Type':['SP','1M','3M'],'Desc':['Check SP','Check 1M','Check 3M']}
appended = pd.DataFrame(appended_data,columns=['Series_Date','Value','Type','Desc'])
print 'appended\n:',appended
df_concat =pd.concat([appended,df1],axis=0)
print 'concat\n:',df_concat
Will results with:
df1:
Series_Date Value Type Desc
0 2017-03-10 1 SP Check SP
1 2017-03-10 1 1M Check 1M
2 2017-03-10 1 3M Check 3M
3 2017-03-13 1 SP Check SP
4 2017-03-13 1 1M Check 1M
5 2017-03-13 1 3M Check 3M
appended
: Series_Date Value Type Desc
0 2017-03-13 1 SP Check SP
1 2017-03-13 1 1M Check 1M
2 2017-03-13 1 3M Check 3M
concat
: Series_Date Value Type Desc
0 2017-03-13 1 SP Check SP
1 2017-03-13 1 1M Check 1M
2 2017-03-13 1 3M Check 3M
0 2017-03-10 1 SP Check SP
1 2017-03-10 1 1M Check 1M
2 2017-03-10 1 3M Check 3M
3 2017-03-13 1 SP Check SP
4 2017-03-13 1 1M Check 1M
5 2017-03-13 1 3M Check 3M
A: How about merging on both Desc and Series_Date:
adfs = {k:df.merge(appended,on=['Desc' , 'Series_Date'], how='left',suffixes=['','_Appended']) for (k,df) in dict.items()}
A statement like appended.Desc == df.Desc is problematic, as these series are of different shape. You can try isin, such as appended.Desc.isin(df.Desc).
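A minimal sketch of that isin check, on small frames shaped like the question's data (column values abbreviated):

```python
import pandas as pd

df = pd.DataFrame({"Series_Date": ["2017-03-10", "2017-03-13", "2017-03-13"],
                   "Desc": ["Check SP", "Check 1M", "Check 3M"]})
appended = pd.DataFrame({"Series_Date": ["2017-03-13", "2017-03-13"],
                         "Desc": ["Check SP", "Check 9M"]})

# appended.Desc == df.Desc raises, because the two Series differ in
# length/index; isin instead produces a mask aligned to `appended`.
mask = appended.Desc.isin(df.Desc)
print(appended[mask])
```

The mask is the same length as `appended`, so there is no shape mismatch regardless of how many rows `df` has.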
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43144640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I re-route network traffic using Python? Small apps like Freedom and Anti-social have created quite a stir lately. They cut you off from the internet either entirely or just block social networking sites in order to discourage procrastination and help you exert self control when you really want to get some work done.
I looked at the available options and also at some Google Chrome extensions and decided that none of them is exactly doing what I want, so I'm planning to write my own little Python tool to do that. My first impulse was to simply modify /etc/hosts to re-route requests to certain servers back to localhost. However, this can only block entire domains. I'd need to block addresses based on regular expressions or simple string matching to block something like google.com/reader (yes, this one in particular), but not the entire google.com domain.
Which Python framework can I use to monitor my network traffic and block blacklisted addresses and what is the basic design to achieve what I want to do? Do I use the socket library? I'm very comfortable with Python, but I'm very new to network programming.
A: Since you're comfortable with python, I'd directly recommend twisted. It is slightly harder than some other libraries, but it is well-tested, has great performance and many features. You would just implement a small HTTP proxy and do your regexp filtering on the URLs.
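The URL-matching part of such a proxy is straightforward. Here is a hedged sketch of just the blacklist check (the pattern list and function name are illustrative; the HTTP proxy wiring itself would be built with Twisted):

```python
import re

# Patterns to block; google.com/reader is the example from the question.
BLACKLIST = [re.compile(r"google\.com/reader")]

def is_blocked(url):
    """Return True if any blacklist pattern occurs in the URL."""
    return any(pattern.search(url) for pattern in BLACKLIST)

print(is_blocked("http://www.google.com/reader/view"))  # True
print(is_blocked("http://www.google.com/search?q=x"))   # False
```

Because the check uses `re.search` on the full URL, it can block a path under a domain without blocking the whole domain, which is exactly what /etc/hosts cannot do.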
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3960294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why am I getting the record added again after deleting it I have a button, which calls the delete function when clicking on it:
deleteRecord = (id) => {
return this._entriesRef.doc(id).delete()
}
And in the DataStore, I have an event listener which listens for changes to the snapshot:
snapshot.docChanges.forEach((change) => {
if (change.type === 'added') {
this[this._category].unshift({
...change.doc.data(),
id: change.doc.id
})
} else if (change.type === 'removed') {
this[this._category] = this[this._category].filter((log) => {
return log.id !== change.doc.id
})
}
}, this)
However, using breakpoints I can see that after the deletion happens, it first goes to the removed if statement, which is as expected, but then adds the same record (with the same id) back to the data.
How could that happen?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48411373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is it possible to initialize a property in a category before any category method is called? Is it possible to initialize a property in a category?
For example, say I have a mutable array property called nameList in a class. Is it possible to have a category created for this class that adds an object to the array property before any of the category's methods is called?
A: If I understand you correctly, and others are interpreting your question differently, what you have is:
*
*A class with a property
*A category on that class
And you want to call a particular method automatically before any category method is called on a given instance, that method would "initialise" the category methods by modifying the property.
In other words you want the equivalent of a subclass with its init method, but using a category.
If my understanding is correct then the answer is no, there is no such thing as a category initializer. So redesign your model not to require it, which may be to just use a subclass - as that provides the behaviour you are after.
The long answer is you could have all the category methods perform a check, say by examining the property you intend to change to see if you have already done so. If examining the property won't determine whether an object has been "category initialized", then you might use an associated object (look in Apple's runtime documentation), or some other method, to record that fact.
HTH
Addendum: An even longer/more complex solution...
GCC & Clang both support a function (not method) attribute, constructor, which marks a function to be called at load time; the function takes no parameters and returns nothing. So for example, assume you have a class Something and a category More; then in the category implementation file, typically called Something+More.m, you can write:
__attribute__((constructor)) static void initializeSomethingMore(void)
{
// do stuff
}
(The static stops the symbol initializeSomethingMore being globally visible, you neither want to pollute the global name space or have accidental calls to this function - so you hide it.)
This function will be called automatically, much like the standard class + (void) initialize method. What you can then do using the Objective-C runtime functions is replace the designated initializer instance methods of the class Something with your own implementations. These should first call the original implementation and then initialize your category before returning the object. In outline you define a method like:
- (id) categoryVersionOfInit
{
self = [self categoryVersionOfInit]; // NOT a mistake, see text!
if (self)
{
// init category
}
return self;
}
and then in initializeSomethingMore switch the implementations of init and categoryVersionOfInit - so any call of init on an instance of Something actually calls categoryVersionOfInit. Now you see the reason for the apparently self-recursive call in categoryVersionOfInit - by the time it is called the implementations have been switched so the call invokes the original implementation of init... (If you're crosseyed at this point just draw a picture!)
Using this technique you can "inject" category initialization code into a class. Note that the exact point at which your initializeSomethingMore function is called is not defined, so for example you cannot assume it will be called before or after any methods your target class uses for initialization (+ initialize, + load or its own constructor functions).
A: Sure, it's possible through objc/runtime and objc_getAssociatedObject/objc_setAssociatedObject.
check this answer
A: No, it's not possible in Objective-C. A category can only add methods to an existing class; you cannot add properties to it.
Read this
Why can't I @synthesize accessors in a category?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21898667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Regular expression alphanumeric with dash and underscore and space, but not at the beginning or at the end of the string This regex lets me have a string with alphanumeric, dash, underscore, and space chars in my string:
^[a-zA-Z0-9-_ ]+$
However, I need it to prevent string starting and ending by space. How can I do it?
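One way to express this, sketched here with Python's `re` module for illustration: keep the same character class, but add a negative lookahead and lookbehind so the first and last characters cannot be spaces (a single-character string still matches):

```python
import re

# (?! ) forbids a space right after ^, (?<! ) forbids one right before $.
pattern = re.compile(r"^(?! )[a-zA-Z0-9_\- ]+(?<! )$")

print(bool(pattern.match("foo bar")))  # True: inner spaces allowed
print(bool(pattern.match(" foo")))     # False: leading space
print(bool(pattern.match("foo ")))     # False: trailing space
```

An equivalent lookaround-free form is `^[a-zA-Z0-9_-]([a-zA-Z0-9_ -]*[a-zA-Z0-9_-])?$`, which simply anchors a non-space character at each end.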
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60655453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can @SuppressWarnings("deprecation") apply to the use of a deprecated interface without applying to the whole class? I have some legacy code that implements a deprecated interface. This particular component will soon be deprecated and removed itself so it does not make sense to refactor to address the root cause of the compiler warning. Instead I'd like to suppress it. However, I do NOT want the scope of the suppression to be for the entire class.
The code was originally:
public class Foo
extends
Bar
implements
DeprecatedBaz,
Qux { ... }
DeprecatedBaz is an interface which has been marked as @Deprecated; it belongs to a third-party framework, meaning I am unable to remove the @Deprecated annotation. I'd like to suppress the warning, but not deprecations for the whole class. Essentially I'd like to write:
public class Foo
extends
Bar
implements
@SuppressWarnings("deprecation")
DeprecatedBaz,
Qux { ... }
However this is invalid syntax and does not parse. So next I had hoped I might be able to do it at the import but this SO post seems to imply it must be done at a class level.
Alternatively, I thought applying it to all of the methods that the interface dictates must be implemented might address the issue, but that did not work either.
So it seems I'm forced to apply the annotation at the class level:
@SuppressWarnings("deprecation")
public class Foo
extends
Bar
implements
DeprecatedBaz,
Qux { ... }
I don't like this because if someone edits this class and introduces new code which refers to deprecated code the warning will be swallowed.
Is there a way to limit the scope in this case?
A: The @SuppressWarnings annotation can only be used at the point of a declaration. Even with the Java 8 annotation enhancements that allow annotations to occur in other syntactic locations, the @SuppressWarnings annotation can't be used where you need it in this case, that is, at the point where a deprecated interface occurs in the implements clause.
You're right to want to avoid putting @SuppressWarnings on the class declaration, since that will suppress possibly unrelated warnings throughout the entire class.
One possibility for dealing with this is to create an intermediate interface that extends the deprecated one, and suppress the warnings on it. Then, change the uses of the deprecated interface to the sub-interface:
@SuppressWarnings("deprecation")
interface SubBaz extends DeprecatedBaz { }
public class Foo ... implements SubBaz ...
This works to avoid the warnings because class annotations (in this case, @Deprecated) are not inherited.
A: The purpose of the @Deprecated annotation is to trigger the warning.
If you don't want to trigger the warning, don't use the annotation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23820581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Play with numbers problem from HackerEarth - I'm getting a "time limit exceeded" error You are given an array of n numbers and q queries. For each query you have to print the floor of the expected value (mean) of the subarray from L to R.
Input:
First line contains two integers N and Q denoting number of array elements and number of queries.
Next line contains N space-separated integers denoting array elements.
Next Q lines contain two integers L and R(indices of the array).
Output:
print a single integer denoting the answer.
Constraints:
1 <= N, Q, L, R <= 10^6
1 <= Array elements <= 10^9
import java.util.Scanner;
public class PlayWithNumbers {
public static void main(String[] args) {
Scanner obj=new Scanner(System.in);
long n=obj.nextLong();
long qry=obj.nextLong();
long arr[]=new long[(int) n];
for (int i = 0; i <n ; i++) {
arr[i]=obj.nextInt();
}
for (int j = 0; j <qry ; j++) {
long sum=0;
double ans=0;
int L=obj.nextInt();
int R=obj.nextInt();
sum=(L+R)/2;
ans=Math.floor(sum);
System.out.println((int) ans);
}
}
}
A: First: your solution is wrong. The question clearly states that L and R are the indexes of the subarray (not values), and you are using them as values to compute the mean.
Second: the Scanner class is easy to use and needs less typing, but it is not recommended here because it is very slow. Instead, use BufferedReader.
Here is my solution:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.StringTokenizer;
public class PlayWithNumbers {
    public static void main(String[] args) throws IOException {
BufferedReader br = new BufferedReader(
new InputStreamReader(System.in));
StringTokenizer st = new StringTokenizer(br.readLine());
long n=Integer.parseInt(st.nextToken());
long qry=Integer.parseInt(st.nextToken());
long arr[]=new long[(int) n];
st = new StringTokenizer(br.readLine());
// read each number, add it to the running total, and store the prefix sum at this index
arr[0] = Integer.parseInt(st.nextToken());
for (int i = 1; i <n ; i++) {
arr[i]=arr[i-1]+Integer.parseInt(st.nextToken());
}
for (int j = 0; j <qry ; j++) {
long sum=0;
double ans=0;
st = new StringTokenizer(br.readLine());
int L=Integer.parseInt(st.nextToken());
int R=Integer.parseInt(st.nextToken());
// if L is 1 there is no element to the left, so nothing is subtracted;
// otherwise subtract the prefix sum just left of index L
if (L == 1) {
sum=arr[R-1]/(R-L+1);
} else {
sum=(arr[R-1] - arr[L-2])/(R-L+1);
}
ans=Math.floor(sum);
System.out.println((int) ans);
}
}
}
Let me know if you need any clarification.
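The prefix-sum trick the answer relies on can be stated compactly. This is a sketch in Python (names are illustrative) showing why each query then costs O(1):

```python
def build_prefix(arr):
    """prefix[i] holds arr[0] + ... + arr[i]."""
    prefix = []
    total = 0
    for value in arr:
        total += value
        prefix.append(total)
    return prefix

def floor_mean(prefix, L, R):
    """Floor of the mean of arr[L..R], using 1-based inclusive indices."""
    total = prefix[R - 1] - (prefix[L - 2] if L > 1 else 0)
    return total // (R - L + 1)

prefix = build_prefix([3, 6, 2, 9])
print(floor_mean(prefix, 1, 4))  # 5, since (3+6+2+9) // 4 == 5
print(floor_mean(prefix, 2, 3))  # 4, since (6+2) // 2 == 4
```

Since the array elements are positive (>= 1 per the constraints), integer floor division matches Math.floor here.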
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62346133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Why does Composer run as root even though its setuid bit has been set for another user? I am using the official Docker PHP image as the base image and installed Composer on it. Some snippets from my Dockerfile are given below.
RUN set -ex \
    # Install Composer
    && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
&& chown www-data:www-data /usr/local/bin/composer \
&& chmod u+s /usr/local/bin/composer
The Docker main process runs as root and the worker processes run as www-data. When I run composer install in my container, the vendor dir, composer.lock, etc. are owned by root, since the container's main process runs as root. So I changed the owner of /usr/local/bin/composer to www-data and set the u+s setuid bit on it. You can see it below.
/var/www/test # ls -al /usr/local/bin/composer
-rwsr-sr-x 1 www-data www-data 1875611 Oct 21 00:56 /usr/local/bin/composer
But when I run composer install, the vendor dir etc. are still created with root as owner. What am I doing wrong?
-rwxr-x--- 1 1000 www-data 2299 Oct 19 06:36 composer.json
-rw-r--r-- 1 root root 276423 Oct 21 01:02 composer.lock
drwxr-x--- 4 1000 www-data 4096 Oct 19 06:36 drush
-rwxr-x--- 1 1000 www-data 414 Oct 19 06:36 load.environment.php
-rwxr-x--- 1 1000 www-data 481 Oct 19 06:36 phpunit.xml.dist
drwxr-x--- 3 1000 www-data 4096 Oct 19 06:36 scripts
drwxr-xr-x 50 root root 4096 Oct 21 01:08 vendor
drwxr-x--- 7 1000 nginx 4096 Oct 21 01:02 web
Update-1
The dir where Composer keeps the vendor directory is a bind-mounted named volume. My docker-compose file is like below:
version: "3.3"
services:
nginx:
container_name: ${PROJECT_NAME}.nginx
build: ./docker/nginx
image: witbix/nginx
restart: always
volumes:
- drupal:/var/www/${PROJECT_NAME}:cached
working_dir: /var/www/${PROJECT_NAME}
environment:
PROJECT_NAME: ${PROJECT_NAME}
DOMAIN_NAME: ${DOMAIN_NAME}
DRUPAL_VERSION: ${DRUPAL_VERSION}
MYSQL_HOSTNAME: ${PROJECT_NAME}.mariadb
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_PORT: ${MYSQL_PORT}
HOST_USER: ${USER}
labels:
- "traefik.frontend.rule=Host:${DOMAIN_NAME}"
networks:
- drupal
php:
container_name: ${PROJECT_NAME}
# build: ./docker/php
image: witbix/php
restart: always
volumes:
- drupal:/var/www/${PROJECT_NAME}:cached
working_dir: /var/www/${PROJECT_NAME}
environment:
GITHUB_TOKEN: ${GITHUB_TOKEN}
networks:
- drupal
mariadb:
container_name: ${PROJECT_NAME}.mariadb
# build: ./docker/mariadb
image: witbix/mariadb
restart: always
volumes:
- database:/var/lib/mysql
environment:
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
networks:
- drupal
volumes:
drupal:
driver: local
driver_opts:
type: bind
device: $PWD/code/drupal
o: bind
database:
driver: local
networks:
drupal:
external:
name: ${NETWORK_NAME}
So when I execute the mount command in my nginx container, it gives me the output below.
/var/www/test # mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/CYKNPHGSOXMLOUVNEOJ6QITFA2:/var/lib/docker/overlay2/l/OOOELOMQBXBBMCRFGVOTTOXUTQ:/var/lib/docker/overlay2/l/SLGSDLE7HYX7AY4JCOWPJIKD73:/var/lib/docker/overlay2/l/RMB5364TWTFBFY6HFZWJVTROKW:/var/lib/docker/overlay2/l/JGNFDDFSDHLKE4E63LME3E7QM3:/var/lib/docker/overlay2/l/STSQQ4PZE25ZTSNMTHBBD6AELJ:/var/lib/docker/overlay2/l/XJLZ5WXZZF55YINJ7TMCDMIL6G:/var/lib/docker/overlay2/l/W3DF5PJFB4H57RBOZ44CLWKGEP:/var/lib/docker/overlay2/l/NKVID7PASLZXXMDWZW6AHFPGOE:/var/lib/docker/overlay2/l/TQQRV5LAYELBLUBS5D6FPHRI3S,upperdir=/var/lib/docker/overlay2/0865874042b7848d173e19593df0f3397f466450f5f3b8f3d33fc79a33c3f336/diff,workdir=/var/lib/docker/overlay2/0865874042b7848d173e19593df0f3397f466450f5f3b8f3d33fc79a33c3f336/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/rdma type cgroup (ro,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/vda1 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/vda1 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/vda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
/dev/vda1 on /var/www/test type ext4 (rw,relatime,data=ordered)
proc on /proc/bus type proc (ro,relatime)
proc on /proc/fs type proc (ro,relatime)
proc on /proc/irq type proc (ro,relatime)
proc on /proc/sys type proc (ro,relatime)
proc on /proc/sysrq-trigger type proc (ro,relatime)
tmpfs on /proc/acpi type tmpfs (ro,relatime)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/keys type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/timer_list type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/sched_debug type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/scsi type tmpfs (ro,relatime)
tmpfs on /sys/firmware type tmpfs (ro,relatime)
But executing the mount command in my php container doesn't produce any output. That may be because I mounted the local files into nginx and then used that nginx volume with php.
A: The composer program is an ASCII text file (a PHP script), and as such the setuid bit has no effect on it; the kernel ignores setuid on interpreted scripts. Since you are kicking off the process as root, you can do something like su www-data -c "composer ...."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52911426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Unable to start Sphinx searchd daemon due to already running searchd process, and it restarts just after killing it When I try to start searchd, it gives the following error.
bind() failed on 0.0.0.0, retrying...
FATAL: bind() failed on 0.0.0.0: Illegal seek
I can find a searchd process running
root 14863 0.1 0.0 73884 3960 ? Ssl 23:21 0:00 /usr/bin/searchd --nodetach
Now, when i kill it or try to stop it (searchd --stop), it instantly restarts.
root 15841 0.5 0.0 73884 3960 ? Ssl 23:33 0:00 /usr/bin/searchd --nodetach
I am guessing there is some setting by which it automatically starts when the process is not running. How can I stop this from happening?
A: By default, it seems like the Debian package will start Sphinx with an additional keepalive process. I was able to stop it successfully with this:
sudo service sphinxsearch stop
A: The 'init: ... main process ended, respawning' message suggests there is something in the init configuration that sets a watchdog to make sure Sphinx doesn't die.
Perhaps you need to shutdown sphinx via the init script itself
/etc/init.d/sphinxsearch stop
A: To my knowledge, Upstart is responsible for respawning searchd after you attempt to stop/kill it.
Since we know that this process is being managed by upstart, we can terminate the daemon using "stop sphinxsearch" and then start it again with "start sphinxsearch".
If you want to kill it normally like any other process, then you can remove the "--nodetach" argument in the config file /etc/sphinxsearch/sphinx.conf. However, by doing this, you can no longer stop the process using "stop sphinxsearch".
A: No, there is no Sphinx option that restarts it automatically.
Probably some monitoring tool like monit is installed and watching Sphinx.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9844332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: MongoDB aggregate query to get list of each element and its count for each record In my MongoDB I have a few collections, and I want to create a new collection by comparing collection 1 and collection 2 using pymongo.
Collection 1 :
Object id timestamp Prof_Name subjects1
abc67478898k ISODate("2018-01-03T09:26:37.541Z") ABDC "sub1, sub2, sub3"
jjjjjjjjjj ISODate("2018-01-03T09:26:37.541Z") XYZ "sub2, sub4, sub8"
Collection 2 :
Object id timestamp UUID subjects2 rating score
3333333 ISODate("2018-01-03TZ") 7897 "sub1,sub4, sub7" 7 10
444444 ISODate("2018-01-03TZ") 4532 "sub2" 4 6
777777 ISODate("2018-01-03TZ") 7876 "sub1,sub2,sub3" 8 8
1111111 ISODate("2018-01-03TZ") 654 "sub1,sub3" 7 8
I'm generating my third collection as follows: for each subject by Prof_Name, find matching subjects in collection 2, along with the UUID and UUID_count between certain timestamps. My mongo query is below:
db.data1.aggregate([
{"$lookup":{
"from":"data2",
"let":{"subject":{"$split":["$SUBJECT",", "]}},
"pipeline":[
{"$match": {"expr":{"$and":[{"$eq":[{"$year":"$timestamp"}, 2016]}, {"$eq":[{"$month":"$timestamp"}, 1]}]}}},
{"$addFields":{"SUBJECT_ID":{"$split":["$SUBJECT_ID",", "]},"SUBJECT":"$$subject"}},
{"$unwind":"$SUBJECT"},
{"$match":{"$expr":{"$in":["$SUBJECT","$SUBJECT_ID"]}}},
{"$facet":{
"UUID":[{"$group":{"_id":{"id":"$_id","UUID":"$UUID"}}},{"$count":"UUID_Count"}],
"REST":[
{"$group":{"_id":null,"subjects_list":{"$addToSet":"$SUBJECT"},"UUID_distinct_list":{"$addToSet":"$UUID"}}},
{"$addFields":{"subject_count":{"$size":"$subjects_list"},"UUID_distinct_count":{"$size":"$UUID_distinct_list"}}},
{"$project":{"_id":0}}
]
}},
{"$replaceRoot":{"newRoot":{"$mergeObjects":[{"$arrayElemAt":["$UUID",0]},{"$arrayElemAt":["$REST",0]}]}}}
],
"as":"ref_data"
}},
{"$unwind":{"path":"$ref_data","preserveNullAndEmptyArrays":true}},
{"$addFields":{"ref_data.Prof_Name":"$Prof_Name"}},
{"$replaceRoot":{"newRoot":"$ref_data"}},
{"$out":"data3"}
])
Above query gives me below collection.
Collection 3 :
objectid Prof_name subjects_list UUID_list UUID-count subject_count
12 ABDC sub1,sub2,sub3 7897,4532,7876,654 4 3
34 XYZ sub2,sub4,sub8 7897,4532,7876 2 3
Now I want to add more columns to my collection 3 that give the count for each subject and the UUIDs associated with each subject, something like this:
Collection 3 :
objectid Prof_name subjects_list UUID_list UUID-count subject_count each_sub_count UUID-assocaited_sub
12 ABDC sub1,sub2,sub3 7897,4532,7876,654 4 3 sub1:3,sub2:2,sub3:2 [sub1:7897,7876,654, sub2:4532,7876, sub3:7876]
34 XYZ sub2,sub4,sub8 7897,4532,7876 2 3 sub2:2,sub4:1,sub8:0 [sub2:4532,7876, sub4:7897,sub8:0]
The last 2 columns are what I need. How do I achieve this? Is it possible to modify the above query to get them, or what new query would produce these columns?
A: Include another pipeline in $facet.
{"$facet":{
"UUID":[{"$group":{"_id":{"id":"$_id","UUID":"$UUID"}}},{"$count":"UUID_Count"}],
"COUNT":[
{"$group":{"_id":null,"subjects_list":{"$addToSet":"$SUBJECT"},"UUID_distinct_list":{"$addToSet":"$UUID"}}},
{"$addFields":{"subject_count":{"$size":"$subjects_list"},"UUID_distinct_count":{"$size":"$UUID_distinct_list"}}},
{"$project":{"_id":0}}
],
"SUB":[
{"$group":{"_id":"$SUBJECT","count":{"$sum":1}," UUID_list":{"$push":"$UUID"}}},
{"$group":{"_id":null,"each_sub_count":{"$push":{"sub":"$_id", "count":"$count"}},"UUID-assocaited_sub":{"$push":{"sub":"$_id", uuids:"$UUID_list"}}}},
{"$project":{"_id":0}}
]
}},
{"$replaceRoot":{"newRoot":{"$mergeObjects":[{"$arrayElemAt":["$UUID",0]},{"$arrayElemAt":["$COUNT",0]}, {"$arrayElemAt":["$SUB",0]}]}}}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48495233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Facebook Graph API - PHP - Can't post to certain pages Question: Why is it giving this error and how do I resolve it?
I am using the Facebook Graph API to post a reply/comment to specific posts that I have already created in specific Facebook groups. I am a currently a member of the group and can reply/comment manually to my post.
The app is not currently "live" in the developer interface.
39 of the 80 some odd posts give this error:
Array ( [error] => Array ( [message] => (#200) Permissions error [type] => OAuthException [code] => 200 ) )
Access Token Code
//Gets Facebook Token ID for use in facebook-poster.php
session_start();
$app_id = "...";
$app_secret = "....";
$my_url = "......"; // redirect url
$code = $_REQUEST["code"];
if(empty($code)) {
// Redirect to Login Dialog
$_SESSION['state'] = md5(uniqid(rand(), TRUE)); // CSRF protection
$dialog_url = "https://www.facebook.com/dialog/oauth?client_id="
. $app_id . "&redirect_uri=" . urlencode($my_url) . "&state="
. $_SESSION['state'] . "&scope=publish_stream,publish_actions,read_friendlists,email";
echo("<script> top.location.href='" . $dialog_url . "'</script>");
}
if($_SESSION['state'] && ($_SESSION['state'] === $_REQUEST['state'])) {
$token_url = "https://graph.facebook.com/oauth/access_token?"
. "client_id=" . $app_id . "&redirect_uri=" . urlencode($my_url)
. "&client_secret=" . $app_secret . "&code=" . $code;
$response = file_get_contents($token_url);
$params = null;
parse_str($response, $params);
$longtoken=$params['access_token'];
}
echo $longtoken;
?>
Comment Posting Snippet
$accessToken = "abc";
$fbId = array ( '1893939', '919191');
echo "
<script>
window.fbAsyncInit = function() {
FB.init({
appId : '.....',
xfbml : true,
version : 'v2.2'
});
};
(function(d, s, id){
var js, fjs = d.getElementsByTagName(s)[0];
if (d.getElementById(id)) {return;}
js = d.createElement(s); js.id = id;
js.src = \"//connect.facebook.net/en_US/sdk.js\";
fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));
</script>
</HEAD>
<BODY>";
foreach ($fbId as $ID) {
$i++; //Increment counter for sleep timer.
$attachment = array(
'access_token' => $accessToken,
'message' => "Message",
);
// set the target url
$url = 'https://graph.facebook.com/v2.2/' . $ID . '/comments';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $attachment);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$go = curl_exec($ch);
echo "Facebook ID: <a href=http://www.facebook.com/". $ID .">". $ID ."</a> >> Status: ". $go ."<BR>";
curl_close ($ch);
$go = json_decode($go, TRUE);
echo "JSON Decode 1: <BR>";
print_r($go);
if( isset($go['id']) ) {
$url = "https://graph.facebook.com/v2.2/{$go['id']}/comments";
$attachment = array(
'access_token' => $accessToken,
'message' => "Bump". rand(0,5000) ."...",
);
// set the target url
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $attachment);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$comment = curl_exec($ch);
curl_close ($ch);
$comment = json_decode($comment, TRUE);
print_r($comment);
echo "<BR><HR>";
}
if ($i % 20 == 0) { echo "<HR>Sleep<HR><BR>"; sleep(20); }
}
foreach ($fbId as $ID) {
echo "<a href=http://www.facebook.com/". $ID .">". $ID ."</a> | ";
if ($i % 10 == 0) { echo "<BR>"; }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28308884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is there a way to get the dual problem of a primal problem in pyomo? I am currently working on robust optimization and here I use the dual problem to make the optimization tractable. As I want to start to create larger problems, I don't want to have two optimization programs all the time. Therefore, I was wondering if it is possible to create the dual problem from the primal problem in pyomo, which I can then adjust (add uncertainty set) and optimize.
Until now, I have just found solutions which give you the dual values AFTER optimization. But I am looking for the whole dual problem before optimization.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74396843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is the difference between the addresses of a function's parameters always 4 bytes? I've been doing some pointers testing in C, and I was just curious if the addresses of a function's parameters are always in a difference of 4 bytes from one another.
I've tried to run the following code:
#include <stdio.h>
void func(long a, long b);
int main(void)
{
func(1, 2);
getchar();
return 0;
}
void func(long a, long b)
{
printf("%d\n", (int)&b - (int)&a);
}
This code seems to always print 4, no matter what the type of func's parameters is.
I was just wondering if it's ALWAYS 4, because if so it can be useful for something I'm trying to do (but if it isn't necessarily 4 I guess I could just use va_list for my function or something).
So: Is it necessarily 4 bytes?
A: Absolutely not, in so many ways that it would be hard to count them all.
First and foremost, the memory layout of arguments is simply not specified by the C language. Full stop. It is not specified. Thus the answer is "no" immediately.
va_list exists because there was a need to be able to navigate a list of varadic arguments because it wasn't specified other than that. va_list is intentionally very limited, so that it works on platforms where the shape of the stack does not match your intuition.
Other reasons it can't always be 4:
*
*What if you pass an object of length 8?
*What if the compiler optimizes a reference to actually point at the object in another frame?
*What if the compiler adds padding, perhaps to align a 64-bit number on a 64-bit boundary?
*What if the stack is built in the opposite direction (such that the difference would be -4 instead of +4)
The list goes on and on. C does not specify the relative addresses between arguments.
A: As the other answers correctly say:
No.
Furthermore, even trying to determine whether the addresses differ by 4 bytes, depending on how you do it, probably has undefined behavior, which means the C standard says nothing about what your program does.
void func(long a, long b)
{
printf("%d\n", (int)&b - (int)&a);
}
&a and &b are expression of type long*. Converting a pointer to int is legal, but the result is implementation-defined, and "If the result cannot be represented in the integer type, the behavior is undefined. The result need not be in the range of values of any integer type."
It's very likely that pointers are 64 bits and int is 32 bits, so the conversions could lose information.
Most likely the conversions will give you values of type int, but they don't necessarily have any meaning, nor does their difference.
Now you can subtract pointer values directly, with a result of the signed integer type ptrdiff_t (which, unlike int, is probably big enough to hold the result).
printf("%td\n", &b - &a);
But "When two pointers are subtracted, both shall point to elements of the same array object, or one past the last element of the array object; the result is the difference of the subscripts of the two array elements." Pointers to distinct objects cannot be meaningfully compared or subtracted.
Having said all that, it's likely that the implementation you're using has a memory model that's reasonably straightforward, and that pointer values are in effect represented as indices into a monolithic memory space. Comparing &b vs. &a is not permitted by the C language, but examining the values can provide some insight about what's going on behind the curtain -- which can be especially useful if you're tracking down a bug.
Here's something you can do portably to examine the addresses:
printf("&a = %p\n", (void*)&a);
printf("&b = %p\n", (void*)&b);
The result you're seeing for the subtraction (4) suggests that type long is probably 4 bytes (32 bits) on your system. I'd guess you're on Windows. It also suggests something about the way function parameters are allocated -- something that you as a programmer should almost never have to care about, but is worth understanding anyway.
A:
[...] I was just curious if the addresses of a function's parameters are always in a difference of 4 bytes from one another."
The greatest error in your reasoning is to think that the parameters exist in memory at all.
I am running this program on x86-64:
#include <stdio.h>
#include <stdint.h>
void func(long a, long b)
{
printf("%d\n", (int)((intptr_t)&b - (intptr_t)&a));
}
int main(void)
{
func(1, 2);
}
and compile it with gcc -O3 it prints 8, proving that your guess is absolutely wrong. Except... when I compile it without optimization it prints out -8.
X86-64 SYSV calling convention says that the arguments are passed in registers instead of being passed in memory. a and b do not have an address until you take their address with & - then the compiler is caught with its pants down from cheating the as-if rule and it quickly pulls up its pants and stuffs them into some memory location so that they can have their address taken, but it is in no way consistent in where they're stored.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57084488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: vscode How to change `package scripts` to `launch debug`?
package.json
"scripts": {
"start": "node -r dotenv/config index.js dotenv_config_path=.env",
.vscode\launch.json
{
// https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "node",
"request": "launch",
"name": "debug",
"skipFiles": ["<node_internals>/**"],
"program": "${workspaceFolder}\\index.js",
"args": ["-r dotenv/config index.js dotenv_config_path=.env"]
}
]
}
I want to debug, but it doesn't work. How do I configure launch.json?
A: Please check Add Configuration button on lower right corner of launch.json.
Sample npm task configuration generated from same:
{
"type": "node",
"request": "launch",
"name": "Launch via NPM",
"runtimeExecutable": "npm",
"runtimeArgs": [
"run-script",
"debug"
],
"port": 9229
}
Screenshot:
Adding additional configuration:
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60164003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: solving assignment error in typescript: «Module parse failed» I'm trying to assign an object to the initialState variable; the selectedActivity type is Activity | undefined, and after the nullish coalescing operator (??) the emptyActivity is of type Activity.
but when this line executes, it throws an error that says: Module parse failed: Unexpected token You may need an appropriate loader to handle this file type.
Code:
interface Props {
selectedActivity: Activity | undefined
closeForm: () => void
}
export default function ActivityForm({ selectedActivity, closeForm }: Props) {
const emptyActivity: Activity = {
id: '',
title: '',
date: '',
description: '',
category: '',
city: '',
venue: '',
}
const initialState = selectedActivity ?? emptyActivity;
and this is Activity interface:
export interface Activity {
id: string
title: string
date: string
description: string
category: string
city: string
venue: string
}
I'm coding a react project and I want to solve the assignment error.
A: As the error says, your loader can't handle this syntax; it may be outdated and need updating. But as a temporary solution you can convert the nullish coalescing operator to ternary operator syntax and it should work.
const initialState = selectedActivity ? selectedActivity : emptyActivity;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74333898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: string.c_str() outputs gibberish - but only sometimes I'm writing a short OpenGL-program, and need to import the shader source, which I use the following code for (OpenGL wants the shader source as a c-string):
std::ifstream t("GLshader.vert");
if (!t.is_open()) {
std::cerr << "vertex shader open failed" << std::endl;
}
std::stringstream buffer;
buffer << t.rdbuf();
const char* source = buffer.str().c_str();
std::string s = buffer.str().c_str();
std::cerr << "vs string :" << std::endl << s << std::endl;
std::cerr << "vs source:" << std::endl << source << std::endl;
What has me really stumped is the fact that sometimes everything is fine, and the last two lines of code output the same text. Sometimes however, for some reason, the output is instead this:
vs string:
/* actual shader code*/
#version 330 core
layout(location=0) in vec3 position;
out vec3 pos;
out vec2 texcoord;
void main(){
pos = position;
texcoord = (position.xy +1.f) / 2.f;
gl_Position = vec4(position, 1.0f);
}
vs source:
@ð4
The printouts, as well as the string, are there because I kept getting shader compilation errors (again, sometimes), and thought it might be some weird concurrency issue with reading the file. Obviously that works fine, but for some reason the c-string creation sometimes fails.
The function is called in a .dll, and I'm using visual studio. (I've run the .exe outside of VS, and the problem occurs there as well)
I'm really out of ideas here, anyone have an answer?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41875076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ms-access 2003: form does not open! help! I have an Access db with forms, and one of the forms is not opening. I double-clicked on it; I tried to open it in design mode. Nothing happens. There's no error message, but nothing happens.
Has anyone had this issue before?
I am sorry, I actually am getting an error now:
The error said that there wasn't enough memory to open it, or something to that effect.
A: Here's the bible for Access corruption issues.
http://www.granite.ab.ca/access/corruptmdbs.htm
First things first: try to decompile and recompile (check the help files on how to do that). Next, try creating a second database and importing your form from the corrupt one. Lastly, use SaveAsText and LoadFromText to export and reimport the form.
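For the SaveAsText/LoadFromText route, here is a sketch runnable from the Immediate Window; the form name "YourForm" and the path are placeholders, and note these are long-standing but undocumented Access methods:

```vba
' Export the form's definition to a plain-text file:
Application.SaveAsText acForm, "YourForm", "C:\temp\YourForm.txt"
' Delete or rename the corrupt form, then rebuild it from the text:
Application.LoadFromText acForm, "YourForm", "C:\temp\YourForm.txt"
```

The round trip through text often discards the binary corruption while keeping the form's design.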
A: The lack of an error message makes this extra challenging. OTOH, without an error message, how do you know the form hasn't opened? Could it be open but hidden?
Try these two commands in the Immediate Window:
DoCmd.OpenForm "YourForm", acNormal,,,,acWindowNormal
? Forms("YourForm").Name
Do you you get any error messages then? If so, tell us what error messages and at which step they occur.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3258488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Error 404 after install gitlab I've installed GitLab on CentOS 7. I've followed the gitlab instructions, I've set the external url as 192.168.0.6/gitlab but I got a 404 error when I browse this url. I've never installed GitLab before, but there is no GitLab folder in the Apache document root. What can be wrong? Did I miss something?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50682977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to separate Arabic with alphabetical? If you try this link https://jsfiddle.net/u4bxz74c/10/, the result will be like this
PART 1
<p id="posttextareadisplay">
<p class="ENGLISH">This is a samplasde textssss</p>
<p class="ENGLISH"><b>فَإِذَا جَلَسْتَ فِي وَسَطِ الصلَاةِ فَاطْمَئِن، وَافْتَرِشْ فَخِذَكَ الْيُسْرَى ثُم تَشَهدْ</b></p>
</p>
PART 2
<p id="posttextareadisplay">
<p class="ENGLISH">This is a samplasde textssss <b>فَإِذَا جَلَسْتَ فِي وَسَطِ الصلَاةِ فَاطْمَئِن، وَافْتَرِشْ فَخِذَكَ الْيُسْرَى ثُم تَشَهدْ</b></p>
</p>
Question PART 1 : If you see the code below, all paragraphs or <p> are class="ENGLISH".
How can I make a paragraph or <p> with class="ENGLISH" become class="ARAB" if the writing in the paragraph is Arabic, but keep class="ENGLISH" if the writing is not Arabic?
PART 1
<p id="posttextareadisplay">
<p class="ENGLISH">This is a samplasde textssss</p>
<p class="ENGLISH"><b>فَإِذَا جَلَسْتَ فِي وَسَطِ الصلَاةِ فَاطْمَئِن، وَافْتَرِشْ فَخِذَكَ الْيُسْرَى ثُم تَشَهدْ</b></p>
</p>
***** I WANT TO BE LIKE THIS *******
<p id="posttextareadisplay">
<p class="ENGLISH">This is a samplasde textssss</p>
<p class="ARAB"><b>فَإِذَا جَلَسْتَ فِي وَسَطِ الصلَاةِ فَاطْمَئِن، وَافْتَرِشْ فَخِذَكَ الْيُسْرَى ثُم تَشَهدْ</b></p>
</p>
Question PART 2 : But if the Arabic is joined with plain writing or regular fonts, the paragraph or <p> should remain class="ENGLISH".
Like this
<p class="ENGLISH"> This is a sa <b> لْيُسْرَى ثُم تَشَهدْ</b></p>
<p class="ENGLISH"><b> لْيُسْرَى ثُم تَشَهدْ</b> This is a sa</p>
<p class="ENGLISH"> This is a sa <b> ا لْيُسْرَى ثُم تَشَهدْ</b> This is a sa</p>
<p class="ENGLISH"><b> لْيُسْرَى ثُم تَشَهدْ</b> This is a sa <b> لْيُسْرَى ثُم تَشَهدْ</b></p>
Note: I've tried this code, but it seems, this code encapsulates the entire contents of the textarea
if (pattern.test(newText)) {
str = newText.replace($format_search[i], $arab_format_replace[i]);
} else {
str = newText.replace($format_search[i], $format_replace[i]);
}
A: You can create a RegExp pattern from string "abcdefghijklmnopqrstuvwxyz " with RegExp() constructor with case insensitive flag i; iterate elements using a loop; if .textContent of element begins with one of the characters of string cast to RegExp or every character of element .textContent is one of the characters within RegExp, set element .className to "ENGLISH", else set element .className to "ARAB".
Note that id of element in document should be unique. Also, <p> is not a valid child element of <p> element.
Substituted class="posttextareadisplay" for id="posttextareadisplay" at parent <p> element and <span> for child <p> element.
const en = "abcdefghijklmnopqrstuvwxyz ";
for (let el of document.querySelectorAll(".ENGLISH")) {
if (new RegExp(`^[${en.slice(0, en.length -1)}]`, "i").test(el.textContent)
|| [...el.textContent].every(char =>
new RegExp(`${char}`, "i").test(en))) {
el.className = "ENGLISH"
} else {
el.className = "ARAB"
}
}
PART 1
<p class="posttextareadisplay">
<span class="ENGLISH">This is a samplasde textssss</span>
<span class="ENGLISH"><b>فَإِذَا جَلَسْتَ فِي وَسَطِ الصلَاةِ فَاطْمَئِن، وَافْتَرِشْ فَخِذَكَ الْيُسْرَى ثُم تَشَهدْ</b></span>
</p>
PART 2
<p class="posttextareadisplay">
<span class="ENGLISH">This is a samplasde textssss <b>فَإِذَا جَلَسْتَ فِي وَسَطِ الصلَاةِ فَاطْمَئِن، وَافْتَرِشْ فَخِذَكَ الْيُسْرَى ثُم تَشَهدْ</b></span>
</p>
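An alternative (an assumption about the goal: only paragraphs written entirely in Arabic script should be reclassified) is to test against the Arabic Unicode block directly, which also covers the harakat diacritics in the sample text:

```javascript
// A paragraph counts as Arabic when its text, ignoring whitespace,
// consists only of characters from the Arabic Unicode block U+0600-U+06FF.
function isArabic(text) {
  const stripped = text.replace(/\s+/g, "");
  return stripped.length > 0 && /^[\u0600-\u06FF]+$/.test(stripped);
}

// Browser usage sketch, using the markup from the question:
// for (const el of document.querySelectorAll(".posttextareadisplay > span")) {
//   el.className = isArabic(el.textContent) ? "ARAB" : "ENGLISH";
// }
```

Any Latin letter in a mixed paragraph fails the test, so mixed content keeps class="ENGLISH" as required in PART 2.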
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45339203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: 2D SVD and Kabsch algorithm I'm implementing the Kabsch algorithm in 2D. The result of the algo is: translation, rotation, scale. Combining them and applying them to the 1st set of points should "align" them to the 2nd set of points.
So far my implementation (work-in-progress):
// maps points 'A' onto points 'B'
public static (Vector2,Matrix2x2,float) Compute( List< Vector2 > pointsA, List< Vector2 > pointsB )
{
// scale : ratio of sums of "some" distances, e.g. to "neighbour" or to centroid...
var num = pointsA.Count; // note: 'num' was undefined in the original snippet; both lists are assumed to have equal length
var dsumA = 0f;
var dsumB = 0f;
for( var i = num; i --> 1; )
{
dsumA += ( pointsA[ i ] - pointsA[ i-1 ] ).sqrMagnitude;
dsumB += ( pointsB[ i ] - pointsB[ i-1 ] ).sqrMagnitude;
}
if( dsumA == 0f || dsumB == 0f )
return (Vector2.zero,Matrix2x2.IDENTITY,1f);
var scale = MathF32.Sqrt( dsumB / dsumA );
// rotation : based on singular-value-decomposition of cross-covariance matrix of given point-sets...
var centroidA = Vector2.zero;
var centroidB = Vector2.zero;
for( int i = num; i --> 0; ) // centroids
{
centroidA += pointsA[ i ];
centroidB += pointsB[ i ];
}
var invNum = 1f / num;
centroidA *= invNum;
centroidB *= invNum;
var cc = new Matrix2x2();
for( var i = num; i --> 0; ) // cross-covariance matrix : transposed( move-to-origin( A ) ) * move-to-origin( B )
{
var a = pointsA[ i ] - centroidA;
var b = pointsB[ i ] - centroidB;
cc.m00 += a.x * b.x; cc.m01 += a.x * b.y;
cc.m10 += a.y * b.x; cc.m11 += a.y * b.y;
}
(var u, _, var v) = cc.SingularValueDecomposition();
u.Transpose();
if( cc.GetDeterminant() < 0f ) // handling special "reflection" case
{
v.m01 *= -1f;
v.m11 *= -1f;
}
var rotation = v * u;
// transition : difference of centroids...
var translation = centroidB - rotation * ( centroidA * scale );
// result...
return (translation,rotation,scale);
}
To compute the rotation matrix you need to compute the SVD (singular value decomposition) of the covariance matrix. I'm not "fluent" in math at all :( so I tried a few SVD implementations (svd(A) = U * S * V^T) mentioned here.
I compared the results of the "borrowed" implementations with a few online SVD calculators. Matrix S is alright but almost all entries in U and V have different signs. I tried 2 or 3 different implementations so I assume
that the problem is somewhere else - probably in the construction of the covariance matrix :(, but I can't spot the bug.
*
*Any notes for the implementation of Kabsch itself?
*Or how can I fix SVD (those wrong signs in U and V) ?
I noticed that determinant of my covariance matrix is negative. Could it be this?
Many thanks for any advice.
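For reference, writing the cross-covariance matrix of the centred point-sets as $H = \sum_i a_i b_i^\top$ with SVD $H = U S V^\top$, the standard 2D Kabsch rotation, including the reflection correction the code above attempts, is:

```latex
d = \operatorname{sign}\!\left(\det\left(V U^\top\right)\right), \qquad
R = V \begin{pmatrix} 1 & 0 \\ 0 & d \end{pmatrix} U^\top
```

Since the singular values are non-negative, $\operatorname{sign}(\det H) = \operatorname{sign}(\det(V U^\top))$, so testing the determinant of the covariance matrix, as the code does, is equivalent; flipping the second column of $V$ implements the $\operatorname{diag}(1, d)$ factor when $d = -1$.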
UPDATE #1
For SVD I'm using following implementation (with others I got small errors - few degrees in final rotation).
public (Matrix2x2,Matrix2x2,Matrix2x2) SingularValueDecomposition()
{
Matrix2x2 u;
Matrix2x2 s;
Matrix2x2 v;
s.m00 = ( MathF32.Hypot( m00 - m11, m01 + m10 ) +
MathF32.Hypot( m00 + m11, m01 - m10 ) ) * 0.5f;
s.m01 = 0f;
s.m10 = 0f;
s.m11 = MathF32.Abs( s.m00 - MathF32.Hypot( m00 - m11, m10 + m01 ));
v.m10 = ( s.m00 > s.m11 ) ? MathF32.Sin(( MathF32.ATan2( 2f * ( m00 * m01 + m10 * m11 ), m00 * m00 - m01 * m01 + m10 * m10 - m11 * m11 )) / 2f ) : 0f;
v.m00 = MathF32.Sqrt( 1f - v.m10 * v.m10 );
v.m01 = -v.m10;
v.m11 = +v.m00;
u.m00 = s.m00 != 0f ? ( m00 * v.m00 + m01 * v.m10 ) / s.m00 : 1f;
u.m10 = s.m00 != 0f ? ( m10 * v.m00 + m11 * v.m10 ) / s.m00 : 0f;
u.m01 = s.m11 != 0f ? ( m00 * v.m01 + m01 * v.m11 ) / s.m11 : -u.m10;
u.m11 = s.m11 != 0f ? ( m10 * v.m01 + m11 * v.m11 ) / s.m11 : +u.m00;
return (u,s,v);
}
Yet still those signs in U and V are different from decompositions I got from few online SVD calculators.
UPDATE #2
I'm able to fixup rotation (rotation angle) to align points properly but "fixup" differs based on input point-sets.
Sometimes helps 180-angle. Sometimes 360-angle... and even 180+angle for some poit-sets :(. Mentioned / corrected angle is extracted from computed 'rotation' matrix.
This "correction" is needed for all implementations of SVD I tried so far (3 or 4) so I'm "starting" to think that problem lies somewhere else...
Translation and scale seem to work properly. With the "fixed" rotation I'm able to align 'A' to 'B' properly.
UPDATE #3
I tested if SVD works by checking if U * S * Vt gets me original covariance-matrix and it does.
I found one problem in setting up the covariance matrix (fixed, code above updated) but am still getting the wrong rotation :(.
Also I fixed translation.
Ahhhh... so simple algorithm yet I can't beat it :)
UPDATE #4... FINAL ONE
Sorry everyone, I finally found the problem :). At some point (in my test app) order of points in one of input set changed. I corrected it and it "magically" started to work. I updated / polished the code above for future generations :).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65671675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Write csv data directly to azure blob in node js I want to create a CSV file using csv-writer, and upload that csv to azure blob. I am able to create the csv, store it on the local system, and then read it from local and upload to blob using the azure-storage npm package. But I don't want to create/store the CSV on the local filesystem (because of some issues that I am running into on Prod). Is there any way to create the CSV and directly feed it to azure blob storage, without writing the csv to the local file system?
Some code for reference
const csvWriter = createCsvWriter({
path: `__dirname/${blobName}`,
header: [
{ id: "id", title: "name" },
],
});
await csvWriter
.writeRecords(csvData)
.then(() => console.log("file successfully written"));
And once this csv is created on local, read it from there using fs module, and upload to blob using "blobService.createBlockBlobFromStream" function.
Can you please suggest how I can directly give the path of azure blob storage to csvWriter? Or is there any other way to achieve this?
A: Please try the code below.
const {BlobServiceClient, StorageSharedKeyCredential} = require('@azure/storage-blob');
const createCsvStringifier = require('csv-writer').createObjectCsvStringifier;
const accountName = 'account-name';
const accountKey = 'account-key';
const container = 'container-name';
const blobName = 'text.csv';
const csvStringifier = createCsvStringifier({
header: [
{id: 'name', title: 'NAME'},
{id: 'lang', title: 'LANGUAGE'}
]
});
const records = [
{name: 'Bob', lang: 'French, English'},
{name: 'Mary', lang: 'English'}
];
const headers = csvStringifier.getHeaderString();
const data = csvStringifier.stringifyRecords(records);
const blobData = `${headers}${data}`;
const credentials = new StorageSharedKeyCredential(accountName, accountKey);
const blobServiceClient = new BlobServiceClient(`https://${accountName}.blob.core.windows.net`, credentials);
const containerClient = blobServiceClient.getContainerClient(container);
const blockBlobClient = containerClient.getBlockBlobClient(blobName);
const options = {
blobHTTPHeaders: {
blobContentType: 'text/csv'
}
};
blockBlobClient.uploadData(Buffer.from(blobData), options)
.then((result) => {
console.log('blob uploaded successfully!');
console.log(result);
})
.catch((error) => {
console.log('failed to upload blob');
console.log(error);
});
Two things essentially in this code:
*
*Use createObjectCsvStringifier if you don't want to write the data to disk.
*Use @azure/storage-blob node package instead of azure-storage package as former is the newer one and the latter is being deprecated.
Update
Here's the code using azure-storage package.
const azure = require('azure-storage');
const createCsvStringifier = require('csv-writer').createObjectCsvStringifier;
const accountName = 'account-name';
const accountKey = 'account-key';
const container = 'container-name';
const blobName = 'text.csv';
const csvStringifier = createCsvStringifier({
header: [
{id: 'name', title: 'NAME'},
{id: 'lang', title: 'LANGUAGE'}
]
});
const records = [
{name: 'Bob', lang: 'French, English'},
{name: 'Mary', lang: 'English'}
];
const headers = csvStringifier.getHeaderString();
const data = csvStringifier.stringifyRecords(records);
const blobData = `${headers}${data}`;
const blobService = azure.createBlobService(accountName, accountKey);
const options = {
contentSettings: {
contentType: 'text/csv'
}
}
blobService.createBlockBlobFromText(container, blobName, blobData, options, (error, response, result) => {
if (error) {
console.log('failed to upload blob');
console.log(error);
} else {
console.log('blob uploaded successfully!');
console.log(result);
}
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66653520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: choose stepper in odeint through if statement I want to choose the integration scheme through an if statement like this:
//stepper_type steppr; ??
if (integration_scheme == "euler") {
[auto] stepper = euler<state_type>{};
}
else
{
[auto] stepper = runge_kutta4<state_type>{};
}
but stepper is only valid inside the curly brackets.
What is the type of stepper to be defined before the if statement?
Another way is to pass the integration scheme (or even the stepper) as an argument to a function.
A: In C++17 and over, for this purpose we can apply std::variant as follows:
#include <variant>
class state_type {};
template<class T>
class euler {};
template<class T>
class runge_kutta4 {};
template<class T>
using stepper_t = std::variant<euler<T>, runge_kutta4<T>>;
Then you can do like this:
DEMO
stepper_t<state_type> stepper;
if (integration_scheme == "euler") {
stepper = euler<state_type>{};
}
else{
stepper = runge_kutta4<state_type>{};
}
std::cout << stepper.index(); // prints 0.
But although I don't know the whole code of your project, I think the subsequent code would not be a simple one in the above way.
If I were you, I would define a base class stepperBase and make euler and runge_kutta4 inheritances of stepperBase.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58353704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Python - Datetime's Time is always zero Strange little problem I'm facing with Datetime. Here's what I'm doing:
>>> from datetime import datetime, date
>>> t = date.timetuple(datetime.now())
>>> t
time.struct_time(tm_year=2011, tm_mon=6, tm_mday=14, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=1, tm_yday=165, tm_isdst=-1)
tm_hour, tm_min and tm_sec are all zero. Why is this?
A: Well t is a date, so of course it doesn't contain any time data. You have to use datetime.timetuple(datetime.now()) to have those fields populated.
A: I have tried this in my console and get the following results:
from datetime import datetime, date
date.timetuple(datetime.now())
>>> time.struct_time(tm_year=2011, tm_mon=6, tm_mday=14, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=1, tm_yday=165, tm_isdst=-1)
datetime.timetuple(datetime.now())
>>> time.struct_time(tm_year=2011, tm_mon=6, tm_mday=14, tm_hour=13, tm_min=23, tm_sec=34, tm_wday=1, tm_yday=165, tm_isdst=-1)
A: >>> from datetime import datetime
>>> datetime.timetuple(datetime.now())
time.struct_time(tm_year=2011, tm_mon=6, tm_mday=14, tm_hour=18, tm_min=25, tm_sec=20, tm_wday=1, tm_yday=165, tm_isdst=-1)
>>> from datetime import date
>>> date.timetuple(datetime.now())
time.struct_time(tm_year=2011, tm_mon=6, tm_mday=14, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=1, tm_yday=165, tm_isdst=-1)
this is my result.
A: this should work:
t = datetime.timetuple(datetime.now())
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6342102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Getting the Date in stencil bigcommerce I'm setting up a minimum and maximum number of days a customer can return a product. But my problem is that how can I get the current Date on stencil bigcommerce? I tried using the
{{moment "now" "MM/DD/YYYY"}}
It works great, but the problem is that when I change the current date of my computer, the output of the code also changes. That is why I wanted to get something like the "Server Date", just to be sure I will always be getting the correct current date.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56558072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to vectorize this code below in Python without joining two dataframes I have two dataframes with multiple columns like below
head(df)
SCHEDULING_DC_NBR COMMODITY_CODE Unload_Start_Time DOW Dlry
0 6042.0 SCGR 15:15 SUN 5
1 6042.0 SCGR 15:30 SUN 6
2 6042.0 SCGR 15:45 SUN 7
3 6042.0 SCGR 16:15 SUN 8
4 6042.0 SCGR 18:30 SUN 9
head(config_df)
Node Window APPLICABLE_DAYS COMMODITY_CODE Window_start_time config_ID
7023.0 03:15 AM to 03:16 AM MON SCPR 03:15 123
7023.0 03:15 AM to 03:16 AM THUR SCPR 03:15 123
7023.0 03:15 AM to 03:16 AM FRI SCPR 03:15 123
6042.0 06:00 PM to 06:05 PM SUN SCPR 18:00 111
6042.0 03:00 PM to 03:05 PM SUN SCGR 15:00 222
I want to apply row-wise operation on dataframe df to find the appropriate capacity_config_id from config_df using some logic like below using apply function.
def config_map(row):
row = row.copy()
return config_df.loc[(config_df['Node'] == row['SCHEDULING_DC_NBR']) & (config_df['COMMODITY_CODE'] == row['COMMODITY_CODE'])
& (config_df['APPLICABLE_DAYS'].str.contains(row['DOW'],case=False))
& (live_config['Window_start_time'] == row['Unload_Start_Time']),"capacity_config_id"].values[0]
Even though, the above code works it takes a lot of time to run.
I don't want to join or merge both the dataframes, as i will be doing multiple other checks in apply function above. I am looking for a way to vectorize this function for faster computation.
A: You may consider using Pandas apply method, https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html
Using this you can vectorize the operation
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64345579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: WPF adorned elements flickering I have built a custom timer control that is basically a user control; this control has TextBox controls that have values which count down while it is working.
This control is added as a child to a Canvas, and when you start dragging the control it is placed in an AdornerLayer.
While dragging this control, the working TextBoxes flicker, and if you stop moving (but don't Mouse Up!!) the TextBoxes disappear!!!
I have searched for a solution, and one suggestion was to set IsHitTestVisible to false on the adorned element and the AdornerLayer, but it didn't do the trick.
Any help would be appreciated
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14577298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Render props method without nesting dom element In essence I have the following inside a react functional component:
<div className="content">
<div className="filters">
<MyFilter1 data={data}>
{(filter1Data) => {
return <MyFilter2 data={filter1Data}>
{(finalData) => {
return <div className="final-data">{finalData}</div>
}}
</MyFilter2>
}}
</MyFilter1>
</div>
<div className="filtered" />
I am using multiple filters with the render children paradigm to get the final data. But for (admittedly frustrating) styling reasons, I would like to render the final data in a div outside of the filters location (inside the 'filtered' div in this case). How can I do this? Is my approach wrong? Is the 'filter with render children' the wrong approach?
I have tried:
React portal with createRef(); in essence the problem ends up being that the ref is not yet available when the 'finalData' function is called.
Edit:
The filter components themselves are very complicated, but I can give a couple of examples. The data I'm filtering is a simple object with 12-ish properties. One of the filters is implemented as a series of checkboxes determining which properties of the data I want to display (necessary as a separate component because of the number of fields). One of the properties on the data is a price, and another filter is a price range 'dialer' that specifies the price range to display (i.e. excludes data points outside the range).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56606153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Terraform parsing error in Visual Studio Code I am new to Terraform and I'm trying to deploy a resource group using the example from the documentation found here, in Visual Studio Code. I receive a JSON parsing error when trying to use terraform apply or terraform plan. The commands terraform init, terraform fmt and terraform validate all work fine. Connecting to Azure using az login also works.
Information about code, versioning and setup can be seen below.
Error
╷
│ Error: building AzureRM Client: please ensure you have installed Azure CLI version 2.0.79 or newer. Error parsing json result from the Azure CLI: unmarshaling the result of Azure CLI: invalid character 'C' looking for beginning of value.
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 10, in provider "azurerm":
│ 10: provider "azurerm" {
│
╵
Code in main.tf
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "3.28.0"
}
}
}
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "example-resourcegroup"
location = "eu-west"
}
az --version output
azure-cli 2.41.0
core 2.41.0
telemetry 1.0.8
Dependencies:
msal 1.20.0b1
azure-mgmt-resource 21.1.0b1
terraform --version output
Terraform v1.2.5
on windows_amd64
+ provider registry.terraform.io/hashicorp/azurerm v3.28.0
A: I tried to reproduce the same issue in my environment and got the results below.
For installing Terraform in Visual Studio Code, refer to this link.
We have to install the developer CLI; use this link to download and install it.
I have installed Visual Studio Code and installed Terraform.
Please find the versions which I have used.
I have created a terraform file:
vi main.tf
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "3.28.0"
}
}
}
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example23" {
name = "example-resourcegroup23"
location = "eastus"
}
I ran the following commands:
terraform init
terraform plan
terraform apply
When I open the portal, I am able to see the newly created resource group.
Note:
1). In order to use the Azure CLI, Terraform should be able to perform Azure CLI authentication; for that we have to add the token.
2). Both Terraform and the Azure CLI should be on the same path.
az account get-access-token
{
    "accessToken": "token_id",
    "expiresOn": "<Date_with_time>",
    "subscription": "subscription_id",
    "tenant": "",
    "tokenType": "token_type"
}
3). You can also refer to this link here to know about the issue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74220252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: keyboard event should fire after filling text box I am working on keyboard events using jQuery. I have two text boxes: one accepting a first name and a second accepting a last name. I want to alert the first and last name after the text box is filled in. I am using the A key (ASCII code 65) to fire the event.
The problem here is that the event fires while I am filling the text box. I don't want that interruption while entering the name; I need the event to fire after leaving the text box. I am using AJAX that connects to C# in Web Forms.
Please help me with this.
This is my html code:
<div>
<table cellpadding="3" cellspacing="0" style="width: 25%;">
<tr>
<td>
First Name:
</td>
<td>
<asp:TextBox ID="txtFirstName" runat="server"></asp:TextBox>
</td>
</tr>
<tr>
<td>
Last Name:
</td>
<td>
<asp:TextBox ID="txtLastName" runat="server" onkeyup="JqueryAjaxCall();" AutoPostBack="true"></asp:TextBox>
</td>
</tr>
<tr>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
</td>
<td>
</td>
</tr>
</table>
</div>
This is my Script:
<script src="Scripts/jquery1.11.0.min.js" type="text/javascript"></script>
<script type="text/javascript" language="javascript">
function JqueryAjaxCall() {
var pageUrl = '<%= ResolveUrl("~/Test.aspx/jqueryAjaxCall") %>';
var firstName = $("#<%= txtFirstName.ClientID %>").val();
var lastName = $("#<%= txtLastName.ClientID %>").val();
var parameter = { "firstName": firstName, "lastName": lastName }
$.ajax({
type: 'POST',
url: pageUrl,
data: JSON.stringify(parameter),
contentType: 'application/json; charset=utf-8',
dataType: 'json',
success: function (data) {
$(document).keyup(function (event) {
if (event.which == 65)
alert(data.d);
});
},
error: function (data, success, error) {
alert("Error : " + error);
}
});
return false;
}
</script>
This is my C# code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.Services;
public partial class Test : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
[WebMethod]
public static string jqueryAjaxCall(string firstName, string lastName)
{
//Do coding staff.
return firstName + " " + lastName;
}
}
A: Instead of onkeyup, use onchange event.
onchange event will fire only on blur of the text box.
<asp:TextBox ID="txtLastName" runat="server" onchange="JqueryAjaxCall();" AutoPostBack="true"></asp:TextBox>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29573636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: echo works twice just for one id but doesn't for the other one There are 2 entries in the table, but it echoes 3 entries because it echoes one of them twice. If I change the order to ASC from DESC, then it echoes the other one twice. If I use "where id <> 5" then it echoes just id 6, once. But it is a dynamic site so... And I use the same exact code on another page and it works. Here is the full code:
<?php
$cek = mysql_query('select id,isim,aciklama,tarih from galeri where dil = '.$dbDil.' order by id desc');
while($kaynak = mysql_fetch_assoc($cek)){
$cekG = mysql_query('select resim_url from galeriresim where galeriID = '.$kaynak['id'].' order by id desc');
$galeri .= '<h1 class="sayfaBaslik fl"><span>'.$kaynak['tarih'].'</span> '.$kaynak['isim'].'</h1>';
$galeri .= '<h2 class="sayfaAciklama fl">'.$kaynak['aciklama'].'</h2>';
$galeri .= '<div class="sayfaIcerik" style="width:100%">';
$galeri .= '<div class="galeriH fl swiper-container-'.$kaynak['id'].'">';
$galeri .= '<ul class="fl swiper-wrapper-'.$kaynak['id'].'">';
while($kaynakG = mysql_fetch_assoc($cekG)){
$galeri .= '<li class="swiper-slide-'.$kaynak['id'].'"><img src="'.$yol.'images/galeri/'.$kaynak['id'].'/'.$kaynakG['resim_url'].'" /></li>';
}
$galeri .= '</ul></div></div>';
$galeri .='<script>';
$galeri .= 'var mySwiper = new Swiper(\'.swiper-container-'.$kaynak['id'].'\',{';
$galeri .= 'moveStartThreshold : 75,';
$galeri .= 'wrapperClass : "swiper-wrapper-'.$kaynak['id'].'",';
$galeri .= 'slideClass : "swiper-slide-'.$kaynak['id'].'"';
$galeri .= '});';
$galeri .= '</script>';
echo $galeri;
}
?>
A: You're doing your final echo INSIDE the main while() loop:
while(..) {
while(..) { .. }
echo ..
}
It should be
while(..) {
while(..) { .. }
}
echo ..
Since you're echoing INSIDE the main loop, you'll be running that echo multiple times, spitting out $galeri as it's being built.
A: Try this:
I have added mysql_free_result, and the echo must be outside the while loop!
<?php
$cek = mysql_query('select id,isim,aciklama,tarih from galeri where dil = '.$dbDil.' order by id desc');
while($kaynak = mysql_fetch_assoc($cek)){
$cekG = mysql_query('select resim_url from galeriresim where galeriID = '.$kaynak['id'].' order by id desc');
$galeri .= '<h1 class="sayfaBaslik fl"><span>'.$kaynak['tarih'].'</span> '.$kaynak['isim'].'</h1>';
$galeri .= '<h2 class="sayfaAciklama fl">'.$kaynak['aciklama'].'</h2>';
$galeri .= '<div class="sayfaIcerik" style="width:100%">';
$galeri .= '<div class="galeriH fl swiper-container-'.$kaynak['id'].'">';
$galeri .= '<ul class="fl swiper-wrapper-'.$kaynak['id'].'">';
while($kaynakG = mysql_fetch_assoc($cekG)){
$galeri .= '<li class="swiper-slide-'.$kaynak['id'].'"><img src="'.$yol.'images/galeri/'.$kaynak['id'].'/'.$kaynakG['resim_url'].'" /></li>';
}
mysql_free_result($cekG);
$cekG ="";
$galeri .= '</ul></div></div>';
$galeri .='<script>';
$galeri .= 'var mySwiper = new Swiper(\'.swiper-container-'.$kaynak['id'].'\',{';
$galeri .= 'moveStartThreshold : 75,';
$galeri .= 'wrapperClass : "swiper-wrapper-'.$kaynak['id'].'",';
$galeri .= 'slideClass : "swiper-slide-'.$kaynak['id'].'"';
$galeri .= '});';
$galeri .= '</script>';
}
echo $galeri;
?>
A: Use a JOINed query; then you wouldn't need a double while loop. You shouldn't be using the mysql extension anyway. Use mysqli or PDO:
'select g.id,g.isim,g.aciklama,g.tarih, gr.resim_url
from galeri g
JOIN galeriresim gr on gr.galeriID = g.id
where g.dil = '.$dbDil.' order by id desc'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27787062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: SQL - create SQL to join lists I have the following table:
CREATE temp TABLE "t_table" (
usr_id bigint,
address varchar[],
msg_cnt bigint,
usr_cnt bigint,
source varchar[],
last_update timestamp
);
Add Data:
INSERT INTO "t_table"(usr_id, address, msg_cnt, usr_cnt, source, last_update) VALUES (1, '{44.154.48.125,81.134.82.111,95.155.38.120,94.134.88.136}', 10, 3, '{src1,src2}', '2019-10-16 22:16:22.163000');
INSERT INTO "t_table"(usr_id, address, msg_cnt, usr_cnt, source, last_update) VALUES (2, '{44.154.48.125}', 10, 3, '{src1,src3}', '2019-10-16 22:16:22.163000');
INSERT INTO "t_table"(usr_id, address, msg_cnt, usr_cnt, source, last_update) VALUES (3, '{94.134.88.136}', 10, 3, '{src1,src4}', '2019-10-16 22:16:22.163000');
INSERT INTO "t_table"(usr_id, address, msg_cnt, usr_cnt, source, last_update) VALUES (4, '{127.0.0.1}', 10, 3, '{src1,src5}', '2019-10-16 22:16:22.163000');
INSERT INTO "t_table"(usr_id, address, msg_cnt, usr_cnt, source, last_update) VALUES (5, '{127.0.0.1,5.5.5.5}', 10, 3, '{src1,src3}', '2019-10-16 22:16:22.163000');
INSERT INTO "t_table"(usr_id, address, msg_cnt, usr_cnt, source, last_update) VALUES (6, '{1.1.0.9}', 10, 3, '{src1,src2}', '2019-10-16 22:16:22.163000');
Find users who share addresses.
Expected Results:
| users | address | sum_msg_cnt | sum_usr_cnt | max_last_date | source |
|---------------------------------|-------------------------------------------------------------|--------------|------------------|--------------------------------|-----------------------------|
| {1,2,3} | {44.154.48.125,81.134.82.111,95.155.38.120,94.134.88.136} | 30 | 9 | "2019-10-16 22:16:22.163000" | {src4,src1,src2,src3} |
| {4,5} | {127.0.0.1,5.5.5.5} | 20 | 6 | "2019-10-16 22:16:22.163000" | {src1,src5,src3} |
| {6} | {1.1.0.9} | 10 | 3 | "2019-10-16 22:16:22.163000" | {src1,src2} |
Question:
How do I formulate a SQL query to obtain the expected result?
Much appreciated.
More info:
PostgreSQL 9.5.19
A: I don't know if this is the most efficient method, but I can't come up with something better right now.
I assume this will have a terrible performance on a larger table.
with userlist as (
select array_agg(t.usr_id) as users,
a.address
from t_table t
left join unnest(t.address) as a(address) on true
group by a.address
), shared_users as (
select u.address,
array(select distinct ul.uid
from userlist u2, unnest(u2.users) as ul(uid)
where u.users && u2.users
order by ul.uid) as users
from userlist u
)
select users, array_agg(distinct address)
from shared_users
group by users;
What does it do?
The first CTE collects all users that share at least one address. The output of the userlist CTE is:
users | address
------+--------------
{1} | 95.155.38.120
{1,3} | 94.134.88.136
{1,2} | 44.154.48.125
{6} | 1.1.0.9
{4,5} | 127.0.0.1
{1} | 81.134.82.111
{5} | 5.5.5.5
Now this can be used to aggregate those user lists that share at least one address. The output of the shared_users CTE is:
address | users
--------------+--------
95.155.38.120 | {1,2,3}
94.134.88.136 | {1,2,3}
44.154.48.125 | {1,2,3}
1.1.0.9 | {6}
127.0.0.1 | {4,5}
81.134.82.111 | {1,2,3}
5.5.5.5 | {4,5}
As you can see we now have groups with the same list of usr_ids. In the final step we can group by those and aggregate the addresses, which will then return:
users | array_agg
--------+----------------------------------------------------------
{1,2,3} | {44.154.48.125,81.134.82.111,94.134.88.136,95.155.38.120}
{4,5} | {127.0.0.1,5.5.5.5}
{6} | {1.1.0.9}
Online example
A: Group the addresses using the GROUP BY operator.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58422026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Firebase admin: Cannot read property 'cert' of undefined I'm trying to set up Firebase in my Node.js server. In deploy.js I am setting it up like this:
const admin = require('firebase-admin/app');
const serviceAccount = require('../serviceAccountKey.json')
admin.initializeApp({
credential: admin.credential.cert(serviceAccount),
databaseURL: "https://projectName.firebaseio.com"
});
...
I am getting the error:
credential: admin.credential.cert(serviceAccount),
^
TypeError: Cannot read property 'cert' of undefined
at Object.<anonymous> (C:\Users\Uporabnik\hello-world\scripts\deploy.js:9:32)
at Module._compile (internal/modules/cjs/loader.js:1068:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
at Module.load (internal/modules/cjs/loader.js:933:32)
at Function.Module._load (internal/modules/cjs/loader.js:774:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
at internal/main/run_main_module.js:17:47
My package.json:
{
"name": "hardhat-project",
"devDependencies": {
"@nomiclabs/hardhat-ethers": "^2.0.6",
"ethers": "^5.6.9",
"hardhat": "^2.9.9"
},
"version": "1.0.0",
"description": "hello world smart contract",
"main": "hardhat.config.js",
"dependencies": {
"cors": "^2.8.5",
"express": "^4.18.1",
"firebase-admin": "^11.0.0"
},
"scripts": {
"test": "mocha"
},
"author": "",
"license": "ISC"
}
What could be causing it?
A: This is the solution:
Replace
const admin = require('firebase-admin/app');
with
const admin = require('firebase-admin');
You would use /app if you're going to use the modular approach.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73120201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Smoother Keypressed event for paddle object in pong I'm trying to make my paddle movements smoother in my pong game. My player_paddle1 has smooth movements and stops whenever I let go of the key. However, my other paddle, player_paddle2, which uses the same keypress algorithm as the other paddle, does not do so: it keeps going even if I release the key.
if game_option == "Two Player":
if event.type == KEYDOWN:
if event.key == K_UP:
player_paddle1.direction = -1
elif event.key == K_DOWN:
player_paddle1.direction = 1
if event.key == K_w:
player_paddle2.direction = -1
elif event.key == K_s:
player_paddle2.direction = 1
if event.type == KEYUP:
if event.key == K_UP and player_paddle1.direction == -1:
player_paddle1.direction = 0
elif event.key == K_DOWN and player_paddle1.direction == 1:
player_paddle1.direction = 0
if event.key == K_UP and player_paddle2.direction == -1:
print("The key is now up!")
player_paddle2.direction = 0
elif event.key == K_UP and player_paddle2.direction == 1:
player_paddle2.direction = 0
Also, while the Up and Down keys are extremely responsive, the W and S keys are not, which means a keypress will not immediately result in paddle motion. How can I fix this?
A: You check K_UP for the second player in the KEYUP branch, but it has to be K_w and K_s.
Besides, you don't have to check player_paddle2.direction:
if event.type == KEYUP:
if event.key == K_UP:
player_paddle1.direction = 0
elif event.key == K_DOWN:
player_paddle1.direction = 0
elif event.key == K_w: # <-- there was K_UP
print("The key is now up!")
player_paddle2.direction = 0
elif event.key == K_s: # <-- there was K_UP
player_paddle2.direction = 0
it can be shorter
elif event.type == KEYUP:
    if event.key in (K_UP, K_DOWN):
player_paddle1.direction = 0
elif event.key in (K_w, K_s):
player_paddle2.direction = 0
BTW: you can use elif in some places.
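The same fix can be generalized: map each physical key to exactly one paddle, so a key event can never touch the wrong one. A pygame-free sketch of the idea (the integer key codes and the Paddle class are stand-ins; in pygame these would be the K_* constants and your paddle sprites):

```python
# Stand-in key codes (in pygame: pygame.K_UP, K_DOWN, K_w, K_s).
K_UP, K_DOWN, K_w, K_s = 273, 274, 119, 115

class Paddle:
    def __init__(self):
        self.direction = 0

paddle1, paddle2 = Paddle(), Paddle()

# Each key maps to (paddle, direction-on-keydown); on keyup the same
# paddle's direction is reset, so K_UP can never affect paddle2.
KEYMAP = {
    K_UP: (paddle1, -1),
    K_DOWN: (paddle1, 1),
    K_w: (paddle2, -1),
    K_s: (paddle2, 1),
}

def handle(event_type, key):
    """Update paddle direction for a KEYDOWN/KEYUP event."""
    if key not in KEYMAP:
        return
    paddle, direction = KEYMAP[key]
    paddle.direction = direction if event_type == "KEYDOWN" else 0

handle("KEYDOWN", K_w)   # paddle2 starts moving up
handle("KEYUP", K_w)     # paddle2 stops; paddle1 untouched
```

With this layout, adding another player or rebinding keys only means editing the dict, not the if/elif chains.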
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47560867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Using RestSharp, how to include a fieldname like "$fieldname" in RestRequest.AddJsonBody()? Using RestSharp, I need to POST a body containing a json string that looks like this:
{
"$a": "b",
"c": "d"
}
In the past I've created RestSharp requests using code like this:
var request = new RestRequest("someApiEndPoint", RestSharp.Method.POST);
request.AddJsonBody(new
{
a = "b",
c = "d"
});
What's the best way to add a "$" to the "a" property in this case?
A: Since you are using an anonymous type, you could just as easily switch to using a dictionary:
var root = new Dictionary<string, object>
{
{"$a", "b" },
{"c", "d" },
};
var request = new RestRequest("someApiEndPoint", RestSharp.Method.POST)
.AddJsonBody(root);
If you were using an explicit type, you could check RestSharp serialization to JSON, object is not using SerializeAs attribute as expected for options.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47721020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Pass through `IHttpHandler` response on error So, IIS is pissing me off. It keeps on screwing with my error handling. I have a number of IHttpHandlers registered, and they're doing a good job. When I connect locally, I get the response I expect. But when I connect remotely, IIS's CustomErrorModule starts interfering. I've played with system.webServer/httpErrors and system.web/customErrors to no avail. The only thing that works is system.webServer/modules/remove@name=CustomErrorModule, but that requires setting lockItem="false" for CustomErrorModule in the system's applicationHost.config file. Which is not a solution I like.
Is there some way to entirely disable CustomErrorModule without messing with any system files? I.e. without having admin privileges?
A: Aha! In <system.webServer>:
<httpErrors existingResponse="PassThrough" />
This does exactly what I want.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36227031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: onLoad() jQuery not working correctly because of Uncaught TypeError: Cannot read property 'hps' of undefined My onLoad jQuery function is not working. The HTML starts with a "loading" gif, and the jQuery should fade in the featured-photo while fading out image-loader.
The loadImage function is called in the HTML and causes an error because window.app is undefined when it runs, like this:
html
<div class="featured-photo">
<img src="<?php echo wp_get_attachment_url( get_post_thumbnail_id($post->ID) ); ?>" class="lazy-image" onload="window.app.hps.loadImage(this, '.featured-photo')"/>
</div>
application.js
loadImage: function(img, attachment_class)
{
var backgroundImage = $(img).attr('src');
console.log("Lazy load image: "+backgroundImage);
$('.image-loader').fadeOut();
$('.post-loaders').fadeIn();
$(attachment_class).css("opacity", 0).css('background-image', 'url(' + backgroundImage + ')').css("opacity", 1);
}
The error I get is Uncaught TypeError: Cannot read property 'hps' of undefined, which I understand means window.app is undefined in the html page. My question is how do I properly define window.app within the html so the function will work?
The function is defined in application.js like this:
$(document).ready(function() {
window.app = new Application({ el: $("body") });
if ( $('body').hasClass("page-id-142") ) { revenew(); }
});
But I'm not sure how to define it in the HTML properly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34169067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Cast ConstantDataArray to i8* in LLVM Simple question: "I have a ConstantDataArray of type [7 x i8], how can I cast it to have the i8* type?"
EDIT
More Context:
The ConstantDataArray is created as follows:
ConstantDataArray::getString(Fn->getContext(), "Hello", true);
And I have created a LLVM:Function that has an argument with the type coming from Type::getInt8PtrTy(getGlobalContext()) and I want to cast the array to this type, so I can pass it as an argument.
I'm developing a pass
A: You can use IRBuilder's CreateGlobalStringPtr which is a convenience wrapper for creating a global string constant and returning an i8* pointing to its first character.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43549707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Options for a file system based Sql database for a multiuser web application that can perform and scale What are my options for a file based sql database (not NoSql) which scales well, performs well, and is designed for handling many hundreds of multiple users (and plays nice with .net)?
My requirements
I'm accustomed to working with SqlServer, and for this application my needs are simpler (I still need sql though, although other parts of the application will use NoSql).
I want something which is embedded mainly because it's just simple and easy to set up, without any major overheads or services or configurations. I'd like to keep it filesystem for as long as I can.
However, when the time comes, ideally I'd like a solution which allows me to change the "context" of the database so maybe it is server based. I'd like that option to grow.
I'd also like it to be free (at least for small application, or non-commercial applications (although it will become commercial in the future...?)).
Does such a database solution exist?
Update
Sorry guys, I used the wrong terminology and I think we misunderstood each other. Forget I said embedded; I meant file based, like Lucene or Raven, but relational.
A: You ever heard of SQL Server? Like SQL Server EMBEDDED? No install ;)
A: You have contradictory requirements.
Small and embedded (no server) usually means SQL Server Compact or SQLite. But these are neither multi-user nor network-aware in practice, especially when you say "hundreds of multiple users".
So you have to decide what you want to do. A proper, scalable, web-based app with correct architecture? Or a cheap, kludgy, unworkable, unmaintainable mess?
SQL Server Compact will scale up of course in future to normal SQL Server with minimum fuss. But I'd start with SQL Server properly now
A: You can use Firebird; it can be embedded, scales well, and deployment is really easy. An ADO.NET provider is available; see http://www.firebirdsql.org/en/net-provider/ for more information.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7619525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Show popup once? I am trying to show a popup, and if the user clicks "don't show again", I want to never show it again. However, the "don't show again" button is not working. I am using shared preferences:
if (dialogPrefs.getBoolean("Show", true) == true) {
new AlertDialog.Builder(this)
.setTitle("Blah")
.setMessage("Blah blah blah ")
.setNegativeButton("Not now", null)
.setNeutralButton("Don't show again", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialogEditor = dialogPrefs.edit();
dialogEditor.putBoolean("Show", false);
dialogEditor.commit();
}
})
.setPositiveButton("Enable", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
enable();
}
}).show();
My preferences and editor are declared in the beginning as such:
SharedPreferences dialogPrefs;
SharedPreferences.Editor dialogEditor;
The shared prefs are initialized in onCreate().
Please let me know what the problem may be.
Thanks,
Ruchir
A: SharedPreferences.Editor.commit() returns a boolean, indicating the status of the write to the actual SharedPreferences object. See if commit() returned true. Also, make sure you're not editing the same SharedPreferences using two Editors. The last editor to commit will have its changes reflected.
Update Your code works fine, when I run it. I don't see anything wrong in your code. Please make sure you're writing to and reading from the same SharedPreferences.
A: Your problem is the declaration of the SharedPreferences; it is all declared but...not initialized! Where should the OS write your key-value data?
I suggest you to read this Get a Handle to a SharedPreferences
Try this code, I tested it and work:
SharedPreferences dialogPrefs = this.getPreferences(Context.MODE_PRIVATE);
final SharedPreferences.Editor dialogEditor = dialogPrefs.edit();
if (dialogPrefs.getBoolean("Show", true)) {
new AlertDialog.Builder(this)
.setTitle("Blah")
.setMessage("Blah blah blah ")
.setNegativeButton("Not now", null)
.setNeutralButton("Don't show again", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialogEditor.putBoolean("Show", false);
dialogEditor.commit();
}
})
.setPositiveButton("Enable", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
Log.i("TAG", "onClick: enable");
}
}).show();
}
}
A: It should be like this:
if (!dialogPrefs.getBoolean("Show", false)) {//don't show again will work
instead of:
if (dialogPrefs.getBoolean("Show", true) == true) {//this will always show dialog
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35687497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Joining Lists using Linq returns different result than corresponding SQL query? I have 2 tables
TableA:
TableAID int,
Col1 varchar(8)
TableB:
TableBID int
Col1 char(8),
Col2 varchar(40)
When I run a SQL query on the 2 tables it returns the following number of rows
SELECT * FROM tableA (7200 rows)
select * FROM tableB (28030 rows)
When joined on col1 and selects the data it returns the following number of rows
select DISTINCT a.Col1,b.Col2 FROM tableA a
join tableB b on a.Col1=b.Col1 (6578 rows)
The above 2 tables are on different databases, so I created 2 EF models, retrieved the data separately, and tried to join the results in code using LINQ with the following function. Surprisingly it returns 2886 records instead of 6578 records. Am I doing something wrong?
The individual lists seems to return the correct data but when I join them SQL query and linq query differs in the number of records.
Any help on this greatly appreciated.
// This function is returning 2886 records
public List<tableC_POCO_Object> Get_TableC()
{
IEnumerable<tableC_POCO_Object> result = null;
List<TableA> tableA_POCO_Object = Get_TableA(); // Returns 7200 records
List<TableB> tableB_POCO_Object = Get_TableB(); // Returns 28030 records
result = from tbla in tableA_POCO_Object
join tblb in tableB_POCO_Object on tbla.Col1 equals tblb.Col1
select new tableC_POCO_Object
{
Col1 = tblb.Col1,
Col2 = tbla.Col2
};
return result.Distinct().ToList();
}
A: The problem lies in the fact that in your POCO world, you're trying to compare two strings using a straight comparison (meaning it's case-sensitive). That might work in the SQL world (unless of course you've enabled case-sensitivity), but doesn't quite work so well when you have "stringA" == "StringA". What you should do is normalize the join columns to be all upper or lower case:
join tblb in tableB_POCO_Object on tbla.Col1.ToUpper() equals tblb.Col1.ToUpper()
Join operator creates a lookup using the specified keys (starts with second collection) and joins the original table/collection back by checking the generated lookup, so if the hashes ever differ they will not join.
Point being, joining OBJECT collections on string data/properties is bad unless you normalize to the same cAsE. For LINQ to some DB provider, if the database is case-insensitive, then this won't matter, but it always matters in the CLR/L2O world.
Edit: Ahh, didn't realize it was CHAR(8) instead of VARCHAR(8), meaning it pads to 8 characters no matter what. In that case, tblb.Col1.Trim() will fix your issue. However, still keep this in mind when dealing with LINQ to Objects queries.
A: This might happen because you compare a VARCHAR and a CHAR column. In SQL, this depends on the settings of ANSI_PADDING on the sql server, while in C# the string values are read using the DataReader and compared using standard string functions.
Try tblb.Col1.Trim() in your LINQ statement.
A: As SPFiredrake correctly pointed out this can be caused by case sensitivity, but I also have to ask you why did you write your code in such a way, why not this way:
// This function is returning 2886 records
public List<tableC_POCO_Object> Get_TableC()
{
    return (from tbla in Get_TableA()
            join tblb in Get_TableB() on tbla.Col1 equals tblb.Col1
            select new tableC_POCO_Object
            {
                Col1 = tblb.Col1,
                Col2 = tbla.Col2
            }).Distinct().ToList();
}
where Get_TableA() and Get_TableB() return IEnumerable instead of List. You have to watch out for that, because when you convert to list the query will be executed instantly. You want to send a single query to the database server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/10805278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Increase json ordered list by 1 using python I have a json file that is similar to this
{
"items":[
{
"item":0
},
{
"item":1
},
{
"item":2
},
{
"item":3
}
]
}
I'd like to increase each number behind item by 1 while keeping the same format. The output should be
{
"items":[
{
"item":1
},
{
"item":2
},
{
"item":3
},
{
"item":4
}
]
}
How can I do it using Python?
Thanks.
A: >>> import json
>>> data = ''' {
... "items":[
... {
... "item":0
... },
... {
... "item":1
... },
... {
... "item":2
... },
... {
... "item":3
... }
... ]
... }'''
>>> print(json.dumps({a:[{b:1+c[b]for b in c}for c in d]for a,d in json.loads(data).items()},indent=4))
{
"items": [
{
"item": 1
},
{
"item": 2
},
{
"item": 3
},
{
"item": 4
}
]
}
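The dictionary-comprehension one-liner above works, but it is hard to read. An equivalent, more explicit version mutates the parsed structure in place and then serializes it again:

```python
import json

data = '{"items": [{"item": 0}, {"item": 1}, {"item": 2}, {"item": 3}]}'

parsed = json.loads(data)
for entry in parsed["items"]:
    entry["item"] += 1  # increase each value by one

# indent=4 keeps the pretty-printed format from the question
result = json.dumps(parsed, indent=4)
```

Parsing `result` back yields items 1 through 4, matching the expected output.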
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35593406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-4"
} |
Q: Setting Table View Cell Image I've set the image for the cells in my table view, but the lines dividing the cells aren't showing. What have I done wrong?
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
static NSString *mbTableIdentifier = @"SimpleTableItem";
UIImageView *image = [[UIImageView alloc]init];
image.image = [UIImage imageNamed:@"BarButton.png"];
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:mbTableIdentifier];
if (cell == nil)
{
cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:mbTableIdentifier];
cell.textLabel.font=[UIFont systemFontOfSize:16.0];
}
// cell.backgroundView = [[CustomCellBackground alloc] init];
cell.selectedBackgroundView = [[CustomCellBackground alloc] init];
cell.textLabel.backgroundColor = [UIColor clearColor];
cell.textLabel.highlightedTextColor = [UIColor darkGrayColor];
cell.textLabel.textColor = [UIColor whiteColor];
cell.backgroundView = image;
cell.textLabel.text = [mbTableData objectAtIndex:indexPath.row];
return cell;
}
EDIT: I've logged my separator style and color
2013-05-20 07:28:40.392 KFBNewsroom[1274:c07] cell separator style: 2 separator color: UIDeviceRGBColorSpace 0.67 0.67 0.67 1
2013-05-20 07:28:40.393 KFBNewsroom[1274:c07] cell separator style: 2 separator color: UIDeviceRGBColorSpace 0.67 0.67 0.67 1
2013-05-20 07:28:40.393 KFBNewsroom[1274:c07] cell separator style: 2 separator color: UIDeviceRGBColorSpace 0.67 0.67 0.67 1
EDIT: Screenshot of the problem
EDIT: I ended up resolving the problem by adding a 1 pixel line to the bottom of my image.
A: -(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 50;
}
this delegate method for increase the height of your tableview cell.
you may try this.
A: Go to tableview properties in XIB, check if Separator has been set as 'None'. In that case, you need to set it as 'Single Line' from the drop down ..
A: Set property of your tableView from coding
yourTableView.separatorStyle=UITableViewCellSeparatorStyleSingleLine;
Using xib
Or increase your tableView row height (more then your image height)
-(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 100;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16640833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I make this custom bootstrap navbar responsive to viewport size? I have the following navbar, which works pretty well with my screen (1300 pixels wide by 700 high):
When the viewport is smaller, the elements in the navbar go crazy:
I've found that switching between any of the typical bootstrap classes just cause more problems with alignment and sizing.
I tried to use this media query in my CSS but it did nothing at all:
@media only screen and (max-width: 700px) {
.navbar .app-badge {
display: block !important;
}
}
I've tried a bunch of ways to adjust the bootstrap classes or use a media query. Nothing works. I need help figuring out how to make the navbar collapse on smaller viewports.
Here's my HTML:
<nav class="navbar navbar-expand-sm navbar-spur
{% if not current_user.admin %}navbar-spur-user {% else %} navbar-spur-admin {% endif %}">
<div class="border rounded border-1 border-white pt-1 pl-1 even-height ml-n1 app-badge">
<a class="navbar-nav mr-auto text-center p-1" href="https://www.spur.community/holiday-cheer-drive">
<div class="text-center ml-n1 mt-n1">
<div class="stacked">
<img src="/static/logos/logo-spur-main.png">
</div>
<div class="stacked">
<span class="text-spur-ribbon m-1"><small><b>SPUR</b></small></span>
</div>
</div>
</a>
</div>
<div class="border rounded border-1 border-white pt-1 pl-1 ml-3 even-height app-badge">
<a class="navbar-brand p-0" href="/" title="Home">
<div class="parallel">
<img src="/static/logos/logo-spur-white.png">
</div>
<div class="parallel">
<span class="text-spur-red">
<h1>
<em>
<b>SPUR</b>
</em>
</h1>
</span>
</div>
<div class="ml-1 parallel">
<div class="mt-n2 stacked">
<small><span class="text-spur-green"><em>Holiday</em></span></small>
</div>
<div class="mt-n3 stacked">
<small><span class="text-spur-green"><em>Cheer</em></span></small>
</div>
<div class="mt-n3 stacked">
<small><span class="text-spur-green"><em>Drive</em></span></small>
</div>
</div>
{% if current_user.admin %}
<div class="mt-2 parallel">
<span class="text-spur-ribbon mt-2 ml-2"><em>Admin</em></span>
</div>
{% endif %}
</a>
</div>
<ul class="navbar-nav mr-auto mt-2 mt-lg-0">
<div class="navbar-nav border rounded border-1 border-white ml-3 pt-2 pb-2 even-height app-badge">
{% for url, route, label in nav_main %}
<li class="nav-item">
<a class="nav-link {{ 'active' if active_page==route }}" href="{{ url }}">{{ label }}</a>
</li>
{% endfor %}
</div>
{% if current_user.admin %}
<div class="navbar-nav border rounded border-1 border-white ml-3 pt-2 pb-2 even-height app-badge">
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle {{ 'active' if active_page in admin_labels }}" data-toggle="dropdown"
href="" id="adminDropdown" aria-haspopup="true" aria-expanded="false">Admin</a>
<ul class="dropdown-menu" aria-labelledby="adminDropdown">
{% for url, route, label in nav_admin_dropdown_top %}
<li><a href="{{ url }}"" class=" dropdown-item">{{ label }}</a></li>
{% endfor %}
<div class="dropdown-divider"></div>
{% for url, route, label in nav_admin_dropdown_bottom %}
<li><a href="{{ url }}"" class=" dropdown-item">{{ label }}</a></li>
{% endfor %}
</ul>
</li>
</div>
{% endif %}
{% if current_user.is_authenticated and nav_logged_in %}
<div class="navbar-nav border rounded border-1 border-white ml-3 pt-2 pb-2 even-height app-badge">
{% for url, route, label in nav_logged_in %}
<li class="nav-item">
<a class="nav-link {{ 'active' if active_page==route }}" href="{{ url }}">{{ label }}</a>
</li>
{% endfor %}
</div>
{% endif %}
</ul>
<ul class="navbar-nav m1-auto">
{% if current_user.is_anonymous %}
<!-- e.g., if NOT current_user.is_authenticated -->
<div class="navbar-nav border rounded border-1 border-white ml-3 pt-2 pb-2 even-height app-badge">
{% for url, route, label in nav_anon %}
<li class="nav-item">
<a class="nav-link {{ 'active' if active_page==route }}" href="{{ url }}">{{ label }}</a>
</li>
{% endfor %}
</div>
{% elif current_user.is_authenticated %}
{% if nav_right %}
<div class="navbar-nav border rounded border-1 border-white ml-3 pt-2 pb-2 even-height app-badge">
{% for url, route, label in nav_right %}
<li class="nav-item">
<a class="nav-link {{ 'active' if active_page==route }}" href="{{ url }}">{{ label }}</a>
</li>
{% endfor %}
</div>
{% endif %}
<div class="navbar-nav border rounded border-1 border-white ml-3 pt-2 pb-2 even-height app-badge">
<li class="nav-item">
<a class="nav-link" href="{{ url_for('user.logout') }}">Log Out</a>
</li>
</div>
{% endif %}
</ul>
</nav>
And my CSS (if you noticed the navbar-spur-admin class, it's the same as navbar-spur-user but with different colors):
/* NAVBAR */
/* NAVBAR */
/* NAVBAR */
/* BASE NAVBAR */
.navbar-spur, .navbar-spur .navbar-brand .navbar-nav {
background-image: none;
background-repeat: no-repeat;
}
.navbar-spur .navbar-brand .parallel img {
max-width: 2.5em;
}
.navbar-spur div a img {
max-width: 1.5em;
}
.navbar-spur .even-height {
height: 3.7em;
}
.navbar-spur .navbar-brand small {
font-size: 0.7em;
}
.navbar-spur a {
text-decoration: none;
}
.navbar-spur .navbar-brand .parallel, .navbar-spur .navbar-nav .parallel {
display: inline-block;
text-align: left;
vertical-align: top;
}
.navbar-spur .navbar-brand .stacked, .navbar-spur .navbar-nav .stacked {
display: block;
}
/* USER NAVBAR */
.navbar-spur-user, .navbar-spur-user .navbar-brand .navbar-nav {
background-color: #003274 !important;
}
.navbar-spur-user .app-badge {
background-color: #002658 !important;
}
.navbar-spur-user .navbar-nav .nav-item .nav-link {
color: #356275 !important;
}
.navbar-spur-user .navbar-nav .nav-item .active {
color: white !important;
}
Any advice, links to resources that would help me, or feedback would be greatly appreciated. Thank you!
A: Peter, I believe your problem has to do with the width of your logo under the class "navbar-brand". I had a similar issue, but fixed it by controlling the width of my logo under the Navbar-brand. According to bootstrap, "Adding images to the .navbar-brand will likely always require custom styles or utilities to properly size." Below is bootstraps example:
<!-- Just an image -->
<nav class="navbar navbar-light bg-light">
<a class="navbar-brand" href="#">
<img src="/docs/4.4/assets/brand/bootstrap-solid.svg" width="30" height="30" alt="">
</a>
</nav>
Notice the width and height settings after the image. In my case I was able to change the width of my logo image in CSS within my media query for small screen sizes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60688363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Backbone error when I pass a parameter on URL This is my code:
<!DOCTYPE html>
<html>
<head>
<title>Bakbone pushStat</title>
<script type="text/javascript" src="vendor/jquery-1.9.0.min.js"></script>
<script type="text/javascript" src="vendor/underscore-min.js"></script>
<script type="text/javascript" src="vendor/backbone-min.js"></script>
<script type="text/javascript">
(function($){
var AppRouter = Backbone.Router.extend({
routes: {
"/": "initHome",
"home": "initHome",
"projects/(:id)" : "initProject"
}
});
var app_router = new AppRouter;
app_router.on('route:initHome' , function(){
alert('initHome');
});
app_router.on('route:initProject' , function(id){
alert('Projet');
});
$(document).on("click",".links",function(e) {
var href = $(this).attr("href");
var url = lang + "/" + href;
page = $(this).attr("data-id");
var param = $(this).attr("data-param");
if (typeof(param) == 'undefined') { param = ""; }
if(activepage != href && !main.hasClass("loadingPage")){
loader.show();
firstInit = false;
activepage = href;
res = app_router.navigate(url, true);
getContent(page,param);
}
return false;
});
Backbone.history.start({pushState: true, root: "/backbone_pushStat/"});
})(jQuery);
</script>
</head>
<body>
<a href="home" class="foo">HOME</a><br/>
<a href="projects/toto" class="bar">TEST</a><br/>
<div id="default">default</div>
<div id="entrees">entrees</div>
</body>
</html>
When I click the HOME link, it works very well.
But, I don’t know why, I get this error when I click the TEST link (firefox console).
SyntaxError: syntax error
backbone-min.js (ligne 1)
ReferenceError: jQuery is not defined
})(jQuery);
Anything wrong with my code?
Please help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23828825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to add bulk hardware access in softlayer using API and nodejs I have below code to add bulk hardware access in softlayer using API and nodejs:
slClient
.auth(slUserID, slApiKey)
.path('User_Customer', userID, 'addBulkHardwareAccess',{"hardwareIds":["XXXXX,XXXXXXX"]})
.post()
.then(res => {
resolve(res);
})
.catch(err => {
reject(err);
});
};
But it gives error:
TypeError: Cannot read property 'constructor' of undefined
A: To add bulk hardware access, use the following rest api:
Method: POST
https://[username]:[apiKey]@api.softlayer.com/rest/v3.1/SoftLayer_User_Customer/[userCustomerId]/addBulkHardwareAccess
Body: Json
{
"parameters":[
[
111111,
222222,
333333,
444444
]
]
}
Reference:
https://softlayer.github.io/reference/services/SoftLayer_User_Customer/addBulkHardwareAccess/
Or if you want to add access to all hardware, use this rest api:
Method: POST
https://[username]:[apiKey]@api.softlayer.com/rest/v3.1/SoftLayer_User_Customer/[userCustomerId]/addPortalPermission
Body: Json
{
"parameters": [
{
"keyName": "ACCESS_ALL_HARDWARE"
}
]
}
Reference:
https://softlayer.github.io/reference/services/SoftLayer_User_Customer/addPortalPermission/
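For illustration, here is a minimal sketch of composing that REST call with only the Python standard library. The user id, username, and API key are placeholders, and the credentials are sent as a Basic auth header rather than embedded in the URL (the two forms are equivalent for this API); the request is built but deliberately not sent:

```python
import base64
import json
import urllib.request

user_id = 1234567       # placeholder SoftLayer_User_Customer id
api_user = "SLxxxxxx"   # placeholder API username
api_key = "0123abcd"    # placeholder API key

url = ("https://api.softlayer.com/rest/v3.1/"
       f"SoftLayer_User_Customer/{user_id}/addBulkHardwareAccess")

body = json.dumps({"parameters": [[111111, 222222, 333333]]}).encode()

req = urllib.request.Request(url, data=body, method="POST")
req.add_header("Content-Type", "application/json")
token = base64.b64encode(f"{api_user}:{api_key}".encode()).decode()
req.add_header("Authorization", "Basic " + token)
# urllib.request.urlopen(req) would actually send it; omitted here
```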
A: There is parameters() method to provide parameters.
slClient
.auth(slUserID, slApiKey)
.path('User_Customer', args.userID, 'addBulkHardwareAccess')
.parameters([[XXXXXX,XXXXXXXXXX]])
.post()
.then(res => {
resolve(res);
})
.catch(err => {
reject(err);
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51010124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I ignore a field when marshalling a structure with P/Invoke I want to marshal a structure for use with P/Invoke, but this struct contains a field that is only relevant to my managed code, so I don't want it to be marshaled since it doesn't belong in the native structure. Is it even possible ? I was looking for an attribute similar to NonSerialized for serialization, but it doesn't seem to exist...
struct MyStructure
{
int foo;
int bar;
[NotMarshaled] // This attribute doesn't exist, but that's the kind of thing I'm looking for...
int ignored;
}
Any suggestion would be appreciated
A: There's no way to make the CLR ignore a field. I would instead use two structures, and perhaps make one a member of the other.
struct MyNativeStructure
{
public int foo;
public int bar;
}
struct MyStructure
{
public MyNativeStruct native;
public int ignored;
}
A: Two methods:
*
*Use a class instead of a struct: structures are always passed by pointer to the Windows API or other native functions. Replacing a call to doThis(ref myStruct) with a call to doThis([In, Out] myClass) should do the trick. Once you've done this, you can simply access your not-to-be-marshaled fields with methods.
*As I already stated, structs are (almost) always passed by reference, so the callee knows nothing about the structure's dimensions. What about simply leaving your additional fields as the last ones? When calling a native function that needs your structure's pointer and the structure's size, simply lie about its size, giving the one it would have without your extra fields. I don't know if it's a legal way to marshal such a structure back when obtaining it FROM a native function. Side question: does the Marshaller process class fields marked as private? (I hope not...)
A: based on my tests, auto property like:
private int marshaled { get; set; }
will consume space while marshaling (Marshal.SizeOf)
But! explicitly specified property will not:
private int skipped
{
get { return 0; }
set { }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1704282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Find cities within given mile radius from address I'm using CakePHP/mysql to build my app and I'm trying to get a list of cities within a given mile radius around an address both supplied by the user. So far I have a list of all US cities in a database with the long/lat. I also am using a CakePHP plugin to get the long/lat of all addresses inserted into the database.
Now how do I do the last part? I'm new to the google maps api but it looks like there is a limit on how many queries I make a day. The only way I can think to do it is to check the distance from the address and compare it to every city in the database and if it is within the given radius then they are selected. It seems like this would be way too database intensive and I would pass my query quotas in one go.
Any ideas?
A: It sounds like you're trying to determine if a given point (city) is within a circle centered on a specific lat/lon. Here's a similar question.
Run a loop over each city and see if it satisfies the following condition:
if ( (x - center_x)^2 + (y - center_y)^2 <= radius^2 )
If this is too slow, you could turn it into a lookup based on rounding the lat/lon. The more values you precompute, the closer to exact you can get.
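Note that the squared-distance check above treats latitude/longitude as planar coordinates, which is only a rough approximation; for real mile radii the haversine formula is the usual choice and avoids the Maps API quota entirely, since everything runs against your own city table. A sketch with made-up city coordinates:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

cities = [  # (name, lat, lon) -- illustrative values only
    ("Near City", 38.26, -85.76),
    ("Far City", 41.88, -87.63),
]
center = (38.25, -85.75)  # the geocoded user address
radius = 50               # miles

within = [name for name, lat, lon in cities
          if haversine_miles(center[0], center[1], lat, lon) <= radius]
```

Looping over every city per lookup is O(n); the rounding/precompute idea mentioned above, or a bounding-box pre-filter on lat/lon before calling haversine, cuts that down considerably.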
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7825956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to vertically align a p child of a parent div, with intention to easily text-align the child? My goal is to vertically align a child p of a parent div. Currently, I use Flexbox. But it prevents me to modify the text-align.
My expected result: The text is vertically aligned, and can be text-aligned.
My actual result: The text is vertically aligned but it can't be text-aligned.
body {
background-color: silver;
}
div {
background-color: white;
}
header, nav, section, article, aside, footer {
background-color: white;
padding: 10px;
border: solid;
}
.text-center {
text-align: center;
}
.center {
display: flex;
justify-content: center;
align-items: center;
}
<div style="display: flex;">
<div style="flex: 1;">
<section>
<p class="text-center">Section</p>
</section>
<article>
<p class="text-center">Article</p>
</article>
</div>
<aside class="center" style="flex: 1;">
<p style="text-align: right;">Aside</p>
</aside>
</div>
A: Is this what you're looking for? (sorry had to change up the classes a bit). What I did was added a display:grid; and align-items:center; on all parents of the p tags.
HTML:
<div class="flex">
<div class="flex-item">
<section>
<p class="text-center">Section</p>
</section>
<article>
<p class="text-center">Article</p>
</article>
</div>
<aside class="center flex-item">
<p>Aside</p>
</aside>
</div>
CSS
body {
background-color: silver;
}
div {
background-color: white;
}
header, nav, section, article, aside, footer {
background-color: white;
padding: 10px;
border: solid;
}
.flex {
display: flex;
}
.flex-item {
flex: 1;
}
.flex-item section,
.flex-item article,
.center {
display: grid;
align-items: center;
text-align: center;
}
.flex-item section,
.flex-item article {
height: 200px; /* this is just to see the vertical layout */
}
You should now be able to freely align, left, center and right, but there are more ways to do this. If you're worried about using grid, it has good support and it works fine on most major browsers :)
A: This is my first answer ever, I hope it helps!
This happens because you put flex: 1; on the parent of the "aside" text, changing the size of the parent but not the child. Try changing flex: initial; on the "aside" text parent and you'll see what I mean. You can fix this in a ton of ways, one way is to put width: 100%; on the "aside" text <p></p>
A: you can put either p {flex-grow: 1} or p {min-width: 100%}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/67496692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What does "Index: 68, Size: 10" mean? I am getting an error
java.lang.IndexOutOfBoundsException: Index: 68, Size: 10
I know what this error means: that I tried to read a position of an array where it doesn't exist. What I don't know, however, is what Index: 68, Size: 10 means — that I tried to read position 68 of an array that has only 10 positions?
A:
what does it means.. that I tried to read the position 68 of an array that has only 10 positions?
Exactly! Index = 9 is the maximal index you can access.
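For illustration, the same failure expressed in Python terms — a container of Size 10 rejects Index 68 (Python raises IndexError where Java raises IndexOutOfBoundsException):

```python
items = list(range(10))   # Size: 10, valid indices 0..9

try:
    items[68]             # Index: 68 -- out of range
    reached = True
except IndexError:
    reached = False       # the access never succeeds
```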
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33923445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: how to use one Frame buffer object to both render on texture2d and render on screen in opengles 2.x ios I want to record everything in my game( using opengles ) into movie to user could replay later.
The problem I have is:
I used a frame buffer to render to a texture2d and read pixels from it using CVPixelBuffer and CVOpenGLESTextureCache to record the movie. But when I did that, my game stopped running. I think this is because the frame buffer wasn't rendering to colorRenderBuffer, so I added this line after recording the movie:
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
After using this, the game ran normally but the movie is black.
So I wonder how to use one Frame buffer object to both render on texture2d and render on screen in opengles 2.x ios.
If anybody has any idea how to resolve this problem, please help me.
Thanks in advance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20117314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: JSON Web Key (JWK) for IRS E services registration While signing up for the IRS E-services they are requesting a JSON Web Key (JWK). they want the following fields in the JWK
kid, kty, use, n, e, x5t, x5c.
The "kty" field should be equal to "RSA".
In this answer it is shown how to generate the keys but I cannot find out how to generate an x5c key.
A: x5c is fairly simple. You need to remove -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- from the Self-Signed Certificate generated from mkjwk.org
OR
you can use the PHP script: https://control7.net/JWK/generate.php
paste your Public and Private Keypair and Self-Signed Certificate into the textbox and click Go
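In other words, the x5c member is just the base64 DER body of the certificate with the PEM header/footer lines and line breaks stripped. A sketch of that transformation (the PEM input here is constructed from a dummy byte string, not a real certificate):

```python
import base64

# Dummy PEM-style input; a real one comes from your self-signed certificate
der_bytes = b"not-a-real-certificate"
b64 = base64.b64encode(der_bytes).decode()
pem = ("-----BEGIN CERTIFICATE-----\n"
       + "\n".join(b64[i:i + 64] for i in range(0, len(b64), 64))
       + "\n-----END CERTIFICATE-----\n")

def pem_to_x5c(pem_text):
    """Drop the BEGIN/END lines and join the base64 body into one string."""
    lines = [ln.strip() for ln in pem_text.splitlines()
             if ln.strip() and "CERTIFICATE" not in ln]
    return "".join(lines)

x5c_entry = pem_to_x5c(pem)
```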
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70791460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to safely access every n-th element in a container in a succinct way? Consider an STL container C that is forward-iteratable. I need to access every step element, starting from idx. If C is a vector (i.e. has a random-access iterator) I can just use index arithmetic:
template <class Container>
void go(const Container& C) {
for(size_t i = idx; i<C.size(); i+=step) {
/* do something with C[i] */
}
}
However, if C does not support that, e.g. C is a list, one needs to rewrite the above solution. A quick attempt would be:
template <class Container>
void go(const Container& C) {
size_t max = C.size();
size_t i = idx;
for(auto it = std::next(C.begin(),idx); i < max; i+=step, it+=step) {
/* do something with *it */
}
}
Not much longer and it works... except that most likely it will trigger the undefined behavior. Both std::next and it+=step can potentially step way beyond the C.end() before i < max check is performed.
The solution I am currently using (not shown) is really bloated when compared to the initial for. I have a separate check for the first iteration and those that follow. A lot of boilerplate code...
So, my question is, can the above pattern be written in a safe, and succinct way? Imagine you want to nest these loops 2 or 3 times. You don't want the whole page of code for that!
*
*The code should be reasonably short
*The code should have no overhead. Doing std::next(C.begin(), i) in each iteration over i is unnecessairly long, if you can just std::advance(it, step) instead.
*The code should benefit from the case when it is indeed a random-access iterator when std::advance can be performed in constant time.
*C is constant. I do not insert, erase or modify C within the loop.
A: The comment in the question about the requirements inspired me to implement this in terms of k * step instead of some other mechanism controlling the number of iterations over the container.
template <class Container>
void go(const Container& C)
{
const size_t sz = C.size();
if(idx >= sz) return;
    size_t k_max = (sz - idx - 1) / step + 1;  // number of elements that will be visited
    size_t k = 0;
    for(auto it = std::next(C.begin(), idx); ; std::advance(it, step)) {
        /* do something with *it */
        if (++k == k_max) break;  // stop before advancing past the end
    }
}
A: You might use helper functions:
template <typename IT>
IT secure_next(IT it, std::size_t step, IT end, std::input_iterator_tag)
{
while (it != end && step--) {
++it;
}
return it;
}
template <typename IT>
IT secure_next(IT it, std::size_t step, IT end, std::random_access_iterator_tag)
{
    return static_cast<std::size_t>(end - it) < step ? end : it + step;
}
template <typename IT>
IT secure_next(IT it, std::size_t step, IT end)
{
return secure_next(it, step, end, typename std::iterator_traits<IT>::iterator_category{});
}
And then:
for (auto it = secure_next(C.begin(), idx, C.end());
it != C.end();
it = secure_next(it, step, C.end()) {
/* do something with *it */
}
Alternatively, with range-v3, you could do something like:
for (const auto& e : C | ranges::view::drop(idx) | ranges::view::stride(step)) {
/* do something with e */
}
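As a cross-language aside only: the drop-then-stride pattern above is exactly what Python's itertools.islice provides — start at idx, take every step-th element, and stop safely at the end without ever stepping past it:

```python
from itertools import islice

def strided(iterable, idx, step):
    """Yield every step-th element starting at index idx; stops safely at the end."""
    return islice(iterable, idx, None, step)

data = list(range(10))
picked = list(strided(data, 2, 3))  # visits indices 2, 5, 8
```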
A: One option is to adapt the iterator so that it is safe to advance past the end. Then you can use stock std::next(), std::advance(), pass it to functions expecting an iterator, and so on. Then the strided iteration can look almost exactly like you want:
template<class Container, class Size>
void iterate(const Container& c, Size idx, Size step)
{
if (unlikely(idx < 0 || step <= 0))
return;
bounded_iterator it{begin(c), c};
for (std::advance(it, idx); it != end(c); std::advance(it, step))
test(*it);
}
This is not dissimilar from the secure_next() suggestion. It is a little more flexible, but also more work. The range-v3 solution looks even nicer but may or may not be an option for you.
Boost.Iterator has facilities for adapting iterators like this, and it's also straightforward to do it directly. This is how an incomplete sketch might look for iterators not supporting random access:
template<class Iterator, class Sentinel, class Size>
class bounded_iterator
{
public:
using difference_type = typename std::iterator_traits<Iterator>::difference_type;
using value_type = typename std::iterator_traits<Iterator>::value_type;
using pointer = typename std::iterator_traits<Iterator>::pointer;
using reference = typename std::iterator_traits<Iterator>::reference;
using iterator_category = typename std::iterator_traits<Iterator>::iterator_category;
template<class Container>
constexpr explicit bounded_iterator(Iterator begin, const Container& c)
: begin_{begin}, end_{end(c)}
{
}
constexpr auto& operator++()
{
if (begin_ != end_)
++begin_;
return *this;
}
constexpr reference operator*() const
{
return *begin_;
}
friend constexpr bool operator!=(const bounded_iterator& i, Sentinel s)
{
return i.begin_ != s;
}
// and the rest...
private:
Iterator begin_;
Sentinel end_;
};
template<class Iterator, class Container>
bounded_iterator(Iterator, const Container&) -> bounded_iterator<Iterator, decltype(end(std::declval<const Container&>())), typename size_type<Container>::type>;
And for random access iterators:
template<RandomAccessIterator Iterator, class Sentinel, class Size>
class bounded_iterator<Iterator, Sentinel, Size>
{
public:
using difference_type = typename std::iterator_traits<Iterator>::difference_type;
using value_type = typename std::iterator_traits<Iterator>::value_type;
using pointer = typename std::iterator_traits<Iterator>::pointer;
using reference = typename std::iterator_traits<Iterator>::reference;
using iterator_category = typename std::iterator_traits<Iterator>::iterator_category;
template<class Container>
constexpr explicit bounded_iterator(Iterator begin, const Container& c)
: begin_{begin}, size_{std::size(c)}, index_{0}
{
}
constexpr auto& operator+=(difference_type n)
{
index_ += n;
return *this;
}
constexpr reference operator*() const
{
return begin_[index_];
}
friend constexpr bool operator!=(const bounded_iterator& i, Sentinel)
{
return i.index_ < i.size_;
}
// and the rest...
private:
const Iterator begin_;
const Size size_;
Size index_;
};
As an aside, it seems GCC produces slightly better code with this form than with my attempts at something like secure_next(). Can its optimizer reason better about indices than pointer arithmetic?
This example is shared also via gist and godbolt.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45823347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Use STL Containers with Struct I want to store the address of each of the items of my linked list represented by this struct:
struct Node
{
int data;
Node* next;
};
I made an unordered set for this as:
unordered_set<Node*> h;
I defined the iterator as
unordered_set <Node*>::iterator got = h.find (&headB);
This naturally threw a lot of compiler errors. Reading up on forums, I realized that this was wrong because, since Node isn't a standard data type, this iterator wouldn't work. Reading up more, I also saw somewhere that I needed to define an operator for this implementation. I searched a lot on Stack Overflow but didn't find anything that answers this question.
So basically, I just want to know how do we make struct's work with any STL containers and iterators: How do we define containers and implement algorithms ( insert, search, deletion on them)
A: You are inserting and finding incorrectly.
Your elements are of type Node*.
You have an "address of" operator (&) before Your elements. Which means You are trying to add a Node**, which is the incorrect type.
So the compiler is saying:
prog.cpp:67:24: error: no matching function for call to ‘std::unordered_set<Node*>::insert(Node**)’
The correct way is to add an element of the correct type, which is Node*, by removing the address of operator (&);
Unless You wanted the address of a pointer, then You need the container to store Node**
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48547958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Jquery loop compare not getting expected result This is my code. I will explain everything step by step :
myobj = [{ month: 1, total: 2 }, { month: 3, total: 1 }];
newArray = [];
for (i = 0; i <= 5; i++) {
    $.each(myobj, function(k, v) {
if(i==v.month){
newArray.push(v.month)
}else{
newArray.push(0)
}
})
}
After running it, what I am getting is: 1,0,0,0,0,3,0,0,0,0
expected output : 1,0,3,0,0
I don't know what I am missing here. Can anyone please help me with this? I am stuck.
A: Your inner loop is pushing onto the new array for every item in the array, not just if the desired month is found.
Don't use an inner loop. Use find() to find the matching month, and push 0 if you don't find it.
for (i = 1; i <= 5; i++) {
if (myobj.find(el => el.month == i)) {
newArray.push(i);
} else {
newArray.push(0);
}
}
if you want to push the totals instead of the months, assign the result of find() to a variable so you can get the total from it.
for (i = 1; i <= 5; i++) {
var found = myobj.find(el => el.month == i);
newArray.push(found ? found.total : 0);
}
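The same "find the matching record or fall back to a default" pattern, sketched in Python with the question's data for comparison:

```python
records = [{"month": 1, "total": 2}, {"month": 3, "total": 1}]

months = []
totals = []
for i in range(1, 6):
    # next(...) returns the first matching record, or None if absent
    found = next((r for r in records if r["month"] == i), None)
    months.append(found["month"] if found else 0)
    totals.append(found["total"] if found else 0)
```

`months` comes out as [1, 0, 3, 0, 0] — the question's expected output — and `totals` as [2, 0, 1, 0, 0], matching the two jQuery variants above.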
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61614818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-4"
} |
Q: Product scoped dimensions in Google Analytics and Google Tag Manager I need to pass my product scoped custom dimension throught Google Tag Manager into Google Analytics. I am able to read just first variable in my array of objects of products. And no custom dimension is showing at all (even this first value of all of them) in my Analytics.
I have set custom dimension in Analytics as product scoped:
Then in GTM I seted up custom dimension in my tag.
In dimension value I used variable which goes into structure and find variable productSize.
And here is my code:
<script>
dataLayer.push({
'event': 'productImpression',
'ecommerce': {
'impressions': [
{
'name': 'Android tričko',
'id': '12345',
'price': '299',
'brand': 'Google',
'category': 'Pánská trička',
'variant': 'bílá',
'list': 'Search Results',
'productSize': 'L', // product scoped custom dimension
'position': 1
},
{
'name': 'Donut Friday Scented T-Shirt',
'id': '67890',
'price': '33.75',
'brand': 'Google',
'category': 'Apparel',
'variant': 'Black',
'list': 'Search Results',
'productSize': 'XL', // product scoped custom dimension
'position': 2
}]
}
});
</script>
As I said, the problem is that I can read only the first value (obviously) of my custom dimension ("L"). What should I write instead of the zero index in the variable's dot notation to get all of the values ("L", "XL")?
I need to pass the values for every product into Analytics. Do I have to push every product in a separate dataLayer.push()? And why don't I see anything at all in Analytics? Please help.
A: You cannot use the name you've given to the dimension via the interface. You'd have to use the "dimension" keyword plus the numeric index (order of creation), so the dimension that is referred to as "productSize" in the reports would in your example be addressed as "dimension1" in the code:
...
'list': 'Search Results',
'dimension1': 'L', // product scoped custom dimension
'position': 1
...
After that, GA will pick up your dimensions automatically from the dataLayer.
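As a minimal sketch, the corrected push could look like this (the slot index 1 is an assumption for illustration; use whatever index GA assigned when the dimension was created):

```javascript
// Hypothetical corrected push: the "productSize" value travels as
// "dimension1". Fields other than the dimension are trimmed for brevity.
var dataLayer = dataLayer || [];
dataLayer.push({
  event: 'productImpression',
  ecommerce: {
    impressions: [
      { name: 'Android tričko', id: '12345', list: 'Search Results',
        dimension1: 'L', position: 1 },
      { name: 'Donut Friday Scented T-Shirt', id: '67890', list: 'Search Results',
        dimension1: 'XL', position: 2 }
    ]
  }
});
```

With the Enhanced Ecommerce option enabled on the GA tag, both impressions (and both dimension values) are then read from the dataLayer in one hit, so there is no need for a separate push per product.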
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40450926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Concatenating all combinations of matrix rows I use MATLAB and need to combine two 2-dimensional matrices, so that the resulting rows are combinations of the rows from the input matrices concatenated together.
I tried ndgrid, but this creates ALL possible combinations. I need the input rows to stay together to create the output.
Here is an example:
I got:
a= [1 2 3
4 5 6];
b= [7 8
9 10];
I need:
needed = [1 2 3 7 8
1 2 3 9 10
4 5 6 7 8
4 5 6 9 10];
I would prefer to do this without loops if possible
A: Here's an adaptation of yuk's answer using find:
[ib, ia] = find(true(size(b, 1), size(a, 1)));
needed = [a(ia(:), :), b(ib(:), :)];
This should be much faster than using kron and repmat.
Benchmark
a = [1 2 3; 4 5 6];
b = [7 8; 9 10];
tic
for k = 1:1e3
[ib, ia] = find(true(size(b, 1), size(a, 1)));
needed = [a(ia(:), :), b(ib(:), :)];
end
toc
tic
for k = 1:1e3
needed = [kron(a, ones(size(b,1),1)), repmat(b, [size(a, 1), 1])];
end
toc
The results:
Elapsed time is 0.030021 seconds.
Elapsed time is 0.17028 seconds.
A: Use a Kronecker product for a and repmat for b:
[kron(a, ones(size(b,1),1)), repmat(b, [size(a, 1), 1])]
ans =
1 2 3 7 8
1 2 3 9 10
4 5 6 7 8
4 5 6 9 10
A: It gives the desired result, but you might need something other than array_merge if you have duplicated items.
$a = array(array(1, 2, 3), array(4, 5, 6));
$b = array(array(7, 8), array(9, 10));
$acc = array_reduce($a, function ($acc, $r) use ($b) {
foreach ($b as $br) {
$acc []= array_merge($r, $br);
}
return $acc;
}, array());
var_dump($acc);
Edit: Sorry, I've just noticed the "without loops" section. You can change the foreach to array_reduce too.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17718923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: change Anchor link texts size and color I am trying to detect an anchor link that is being clicked from the previous page.
I have HTML like:
index.html
<a href='test.html#project1'>project1</a><a href='test.html#project2'>project2</a><a href='test.html#project3'>project3</a>
bunch of stuff...
test.html
<a href='#project1'>project1</a><a href='#project2'>project2</a><a href='#project3'>project3</a>
<a id = 'project1'>bunch of stuff......</a>
bunch of stuff
<a id = 'project2'>bunch of stuff......</a>
bunch of stuff
<a id = 'project3'>bunch of stuff......</a>
bunch of stuff
I want to change the clicked link's text color to red and increase its size. So when the user clicks project1 from index.html, the project1 text on test.html will be red and larger.
Is there anyway to do this through CSS or jQuery?
Thanks!
A: Use this code:
if(window.location.hash){
$('a[href="'+ window.location.hash +'"]').addClass('active');
}
and example CSS class:
a.active{
color: red;
font-size: 18px;
}
This checks whether window.location.hash exists; if it does, it searches for an a element with an href value equal to the hash. It then adds the .active class to any matched elements.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19147914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: svn post-commit hook : "error resolving case" & "skipped \\ip-address\users\myDir"" messages I've got Collabnet SVN server installed on windows. Additionally, I have implemented a post-commit hook that should update a working copy "B" of a project when I commit to the repository from a working copy "A". Working copy "B" resides on a network drive [H -> \\ip-address\users\myDirB ]:
*Now when I specify the line of code below in my post-commit hook and commit a change-set from working copy "A":
SET WORKING_COPY=H:/myDirB
I get the error:
svn post-commit hook : "error resolving case"
*Alternatively, if I specify:
SET WORKING_COPY=//ip-address/users/myDirB
I get the error:
"skipped \\ip-address\users\myDirB"
What am I doing wrong? Cheers.
Please note:
*Collabnet Subversion server is installed on my C: drive
*it's running as a service account with full privileges over the network directory I want to update automatically via the post-commit hook, i.e. \\ip-address\users\myDirB
*I also have the path \\ip-address\users\myDirB mapped onto the H: drive
A:
SET WORKING_COPY=//ip-address/users/myDirB
cannot work, since cmd.exe needs a plain path (drive letter, colon, relative path). It cannot deal with other paths like UNC paths or IP addresses. It must have a drive letter.
SET WORKING_COPY=H:/myDirB
this however doesn't work because you mapped H: as the user you're logged on as. But the hook script runs as the user your svn server runs as, i.e. as the service account. And the service account does not have the H: drive mapped.
A: I would recommend against using a post commit hook to do this. It's going to be forever brittle, and complicated - as you're finding out.
You should set up a continuous integration build that monitors the svn repo and then deploys the code if needed. Separating these concerns will save you headaches in the future, provide an easy way to notify the team (IM, email or dashboard), and will help you out when/if you wish to do any automated testing.
A: Two issues I ran into were (1) paths and (2) permissions. Our old setup included Collabnet Subversion server 1.5.6 and Apache 2.2 on Windows 2008 R2 (it's now WANdisco 1.7.2 / Apache 2.2 on Win 2008 R2).
Initially, I had the path as a mapped drive and the subversion and apache services running as the Local System account. So, we had something like:
SET WORKING_COPY=X:\the\path\to\theworkingcopy
Running it via CLI was fine, but committing and executing via the subsequent hook resulted in a log message like
Error resolving case of 'X:\the\path\to\theworkingcopy'
So, I changed WORKING_COPY to use a UNC path such as:
\\servername\DRIVELETTER$\the\path\to\theworkingcopy
Still same problem, but I figured the services (for both) needed to run with network privileges, so I changed the service "Log On As" to a domain account for the svn server and apache.
One other issue I ran into was setting the domain for the services' "Log On As" user. I used a domain user, but used a wildcard for the domain, e.g. ".\theuser"
Then it just worked.
As far as this being a brittle solution, I'd agree that CI is a better way to go. Even though our svn update over UNC is (1) documented and (2) works now and (3) it's unlikely to change in the near future - as thekbb noted - it doesn't separate concerns.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6732154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Extract field of struct array to new array I have a struct, which has 2 fields: time and pose. I have multiple instances of this struct composed in an array, so an example of this is:
poses(1)
-time = 1
-pose = (doesn't Matter)
poses(2)
-time = 2
-pose = (doesn't Matter)
poses(3)
-time = 3
-pose = (doesn't Matter)
...
Now when I print this:
poses.time
I get this:
ans =
1
ans =
2
ans =
3
How can I take that output and put it into a vector?
A: For cases that the field values are vectors (of same size) and you need the result in a matrix form:
posmat = cell2mat({poses.pose}');
That returns each pose vector in a different row of posmat.
A: Use brackets:
timevec=[poses.time];
tricky matlab, I know I know, you'll just have to remember this one if you're working with structs ;)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12082746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Serve a static JSON object file in rails How do I serve a static JSON object from a file in Rails? (I want to access it in an Ajax call.)
What is the best method?
A: Just place what you want to render in a variable, then use render :json => variable
There are sensible defaults for lists, dicts, etc...
See this:
http://guides.rubyonrails.org/layouts_and_rendering.html
Item: "2.2.8 Rendering JSON"
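As a sketch of the idea (the helper name and file path below are made up for illustration, not from the question): read and parse the static file, then hand the result to render :json, which re-serializes it for the Ajax caller.

```ruby
require "json"

# Illustrative helper. In a Rails controller action you would finish with:
#   render json: load_static_json(Rails.root.join("public", "data.json"))
def load_static_json(path)
  # File.read pulls in the raw text; JSON.parse turns it into a Ruby hash/array
  JSON.parse(File.read(path))
end
```

Alternatively, for a truly static file you could just drop it under public/ and let the web server deliver it directly, skipping the controller entirely.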
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9348712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Difference between clone remote repo directly and fork remote repo first I have been working on another project lately, which includes about 15 developers.
We use GitLab as our source control server.
What confuses me is that we fork the original remote repo first, then clone the forked repo to a local repo.
When we want to push some code, we push it to our own remote repo, then make a merge request to the original remote repo.
Why not just clone the remote repo to local repo directly?
I think a feature branch can also work. What's the difference?
Is there a standard?
========================================================================
origin remote repo -> own remote repo -> local repo
origin remote repo -> local repo
A:
Why not just clone the remote repo to local repo directly?
Usually, a fork is used for the following purpose (I don't know whether this is your case; I can only assume):
Why to fork?
Whenever you have your own repository in which you don't want others to "touch" the code directly, but you still allow others to contribute or fix bugs, you achieve it with a fork.
The original repository is READ-ONLY for you since you are not a contributor; the fork, on the other hand, is under your account, so you have full permission to read/write.
On your fork, you develop your changes, and when you are done you "ask" the owner of the original repository to add (merge) your changes back into his repository; you do this using a pull request.
This is the logic behind the fork.
In your case, I can only assume that the "main" repository needs to be monitored for all changes. Why? Maybe it is the main repository for distribution, there might be automation manipulating this repo, and so on.
To make it short - the fork is used when the original repo is read-only to you while allowing you to contribute back to it.
A:
Why not just clone the remote repo to local repo directly?
You could indeed do that. But then how would you make a pull request?
Pull requests are not a Git feature. Instead, they are an add-on, provided by various web hosting providers such as Bitbucket and GitHub. (Compare with the email messages from git request-pull; git request-pull is a Git feature. You can run git request-pull locally.)
Note that because they are an add-on, each adder-on-er (is that an actual word?) may have a few tweaks of their own, that the other doesn't, but there's something pretty common here: A GitHub pull request can only be created using the GitHub web site. Unless you can write directly to the original GitHub repository, this requires creating a GitHub fork. A Bitbucket pull request can only be created using the Bitbucket web site. Unless you can write directly to the original Bitbucket repository, this requires creating a Bitbucket fork.
Assuming this pattern holds for GitLab—I have not used GitLab and can't say for sure, but it seems awfully likely—that would explain why you have to create a GitLab fork.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58288217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: how to Configure search:search options to create cts element query having second parameter as cts-and query with empty sequence? I am using the search:search API for searching in MarkLogic.
I want to form a cts element query as given below
cts:element-query(fn:QName("element-name"), cts:and-query(()))
What would be the search options configuration (constraints) to form the cts query mentioned above?
A: If it is a top-level constraint, how about passing it as the additional query (an option for search:search)?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/42022034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: detecting three presses of power button I'm new to Android. I'm trying to make an app that will run in the background and detect three rapid presses of the power button. I've looked this up a lot, but could not clear my confusion. Can anyone please give me some suggestions? TIA.
A: Declare a static variable outside onKeyDown, increment it inside onKeyDown, act when the value reaches 3, and reset the variable to 0 at the end:
static int i=0;
public boolean onKeyDown(int keyCode, KeyEvent event) {
if (event.getKeyCode() == KeyEvent.KEYCODE_POWER) {
i++;
if(i==3){
//do something
//at the end again i=0;
}
}
return super.onKeyDown(keyCode, event);
}
A: You could listen for every press of the power button, then in your listener you could
*measure how many presses you have just made up to this point
*measure the time between your current press and your last press
*if your time interval is right (say 200 ms) then increment your n_presses and do whatever you want after you reach 3 presses (e.g. create a super event and send it to a thread to process it)
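The interval-counting idea above can be sketched platform-independently. The class and method names below are invented for illustration; they are not Android APIs:

```java
// Counts key presses that arrive within a fixed interval of the previous
// one; any slower press restarts the count. Returns true on the third
// rapid press, then resets.
class TriplePressDetector {
    private final long intervalMs;
    private long lastPressAt = -1; // -1 means "no press seen yet"
    private int count = 0;

    TriplePressDetector(long intervalMs) {
        this.intervalMs = intervalMs;
    }

    boolean onPress(long nowMs) {
        boolean rapid = lastPressAt >= 0 && nowMs - lastPressAt <= intervalMs;
        count = rapid ? count + 1 : 1;
        lastPressAt = nowMs;
        if (count == 3) {
            count = 0;
            return true;
        }
        return false;
    }
}
```

In an onKeyDown handler you might call something like detector.onPress(event.getEventTime()) for each power-button press and fire your action when it returns true.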
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21386629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Laravel: text-overflow:ellipsis does not work with displaying unescaped data I'm having a problem with the property text-overflow:ellipsis.
In particular, the three dots at the end of the truncated line are not displayed.
Since I don't want to escape the data (I use CKEditor, hence formatted text), I use the following syntax
{!! $treatment->description !!}
for the Description column only.
This is my code:
<table id="tabledata" class="table table-striped hover" style="width:100%">
<thead>
<tr>
<th></th>
<th>Title</th>
<th>Description</th>
<th>Stay</th>
<th>Price</th>
<th>Action</th>
</tr>
</thead>
<tbody>
@foreach ($treatments as $treatment)
<tr>
<td></td>
<td class="p-4">{{ $treatment->title }}</td>
<td class="p-4"><span class="descresize">{!! $treatment->description !!}</span></td>
<td class="p-4">{{ $treatment->duration }} </td>
<td class="p-4">{{ $treatment->price }} €</td>
<td>
<a class="btn btn-primary" href="{{route('treatment.edit', compact('treatment'))}}" role="button">Update</a>
</td>
</tr>
@endforeach
</tbody>
</table>
CSS
.descresize{
display: inline-block;
width: 500px;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
Can you kindly help me? I am going crazy trying to solve this problem.
A: In Laravel's Blade Templating engine, {!! !!} is used to output unescaped content, including (and not limited to) HTML tags. When combined with CKEditor, you typically get things like this:
<span class="descresize">{!! $treatment->description !!}</span>
<!-- <span class="descresize"><p>Something something long description of the associated Treatment</p></span> -->
Since the CSS properties are being assigned to <span class="descresize">, which now equates to <span class="descresize"><p>...</p></span>, the properties may or may not propagate to these nested HTML elements.
If the content of {!! $treatment->description !!} is going to be consistent (i.e. always a <p>...</p> element), you can simply modify the CSS to point at this nested element:
.descresize > p {
display: inline-block;
width: 500px;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
Since the <p> tag only contains text, and no nested elements, this should handle the properties correctly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74310029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Can't validate my APP with iTunes, which names and identifiers must match itunesconnect? I have a tvOS App on iTunes, and I am now trying to upload (validate) an improved version of that App.
I get the error message:
iTunes Store Operation Failed. Unable to process the app at this time
due to general error
When I research this error message on Stack Overflow, I get the advice to be patient and try later. But I have been patient for a week! Meanwhile, I have been able to validate several other Apps. I conclude that Apple's server is not congested, and that my certificate is valid!
I have noticed that the upload dialog does not show the proper icon of the App (see below). In my experience, the icon placeholder usually displays the icon of the previous App, i.e. the App present at the App Store. Since no icon is displayed, I guess that Apple does not identify my App correctly!
I know that I have the correct bundle identifier (a reverse URL followed by a name). But that identifier also identifies an iOS version of the tvOS App (so it does not uniquely pinpoint my tvOS App).
I do not know if all other identifiers (display name, etc) in my project are correct!
So my question is: which names, identifiers, and other things must be correct to upload a new version of an App to iTunes? And how can I find out the correct values of those things by inspecting the App's page on iTunes Connect?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40369346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: how to change css transition start? I'm trying to rotate each of these 2 cards when I click on them, but it doesn't work properly.
I want the transition to occur only when I click on the card, not at page load.
Also, if there are any ways to improve this code, please let me know; I'm still a beginner.
* {
margin: 0;
padding: 0;
}
/*body {
background-color: rgb(236, 236, 236);
}*/
.box {
width: 190px;
height: 270px;
margin: 3px;
padding: 10px;
list-style: none;
font-size: 310px;
font-family: "Century Schoolbook", serif;
border-radius: 4px;
display: inline-block;
line-height: 255px;
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
transition: transform 0.7s linear;
}
ul {
position: relative;
top: 70px;
width: 500px;
margin: 0 auto;
text-align: center;
}
.a {
background: #fff;
color: #000;
transform: rotateY(180deg);
}
.b {
background: #000;
color: #fff;
}
.flip {
transform: rotateY(180deg);
}
.rflip {
transform: rotateY(0deg);
}
<!DOCTYPE html>
<html>
<head>
<title>S S</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<ul>
<li class="box a" onclick="this.classList.toggle('rflip');">S</li>
<li class="box b" onclick="this.classList.toggle('flip');">S</li>
</ul>
</body>
</html>
A: You could maybe look into different mouse events, such as 'mousedown' or 'mouseup', and possibly add a small script to handle the callback.
<!DOCTYPE html>
<html>
<head>
<title>S S</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<ul>
<li class="box a" onmouseup="flipCard(event, true)">S</li>
<li class="box b" onmouseup="flipCard(event, false)">S</li>
</ul>
<script>
function flipCard(event, goRight) {
var classToToggle = goRight ? 'rflip' : 'flip';
event.target.classList.toggle(classToToggle);
}
</script>
</body>
</html>
If you want the transition to occur at a specific time, you can also use a timeout (1000 is milliseconds, so 1 second):
function flipCard(event, goRight) {
setTimeout(function() {
var classToToggle = goRight ? 'rflip' : 'flip';
event.target.classList.toggle(classToToggle);
}, 1000);
}
this was written without any testing, but the concept should be there
A: From your rflip class, I think you want a reverse flip; that's why you have
transform: rotateY(180deg);
in the .a class.
If this is what you want to get rid of, you can try
.a {
background: #fff;
color: #000;
transform: rotateY(360deg);
}
.rflip {
transform: rotateY(180deg);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57728221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to mark other components invalid in a custom multi-field validator I refer to one of BalusC's answers:
JSF doesn't support cross-field validation, is there a workaround?
I follow the same way, and come out with code as below:
in .xhtml
<h:form id="form1">
<div>
<p:messages globalOnly="true" display="text" />
<h:inputHidden value="true">
<f:validator validatorId="fooValidator" />
<f:attribute name="input1" value="#{input1}" />
<f:attribute name="input2" value="#{input2}" />
<f:attribute name="input3" value="#{input3}" />
</h:inputHidden>
<h:panelGrid columns="3">
<h:outputText value="name 1: " />
<p:inputText binding="#{input1}" id="input11" value="#{testPage.input1}" />
<p:message for="input11" display="text"/>
</h:panelGrid>
<h:panelGrid columns="3">
<h:outputText value="name 2: " />
<p:inputText binding="#{input2}" id="input22" value="#{testPage.input2}" />
<p:message for="input22" display="text"/>
</h:panelGrid>
<h:panelGrid columns="3">
<h:outputText value="name 3: " />
<p:inputText binding="#{input3}" id="input33" value="#{testPage.input3}" />
<p:message for="input33" display="text"/>
</h:panelGrid>
<p:commandButton value="Submit" action="#{testPage.submitValidator}" update=":updateBody" />
</div>
</h:form>
java class:
@FacesValidator(value="fooValidator")
public class CustomValidator2 implements Validator {
@Override
public void validate(FacesContext context, UIComponent component, Object value)
throws ValidatorException {
UIInput input1 = (UIInput) component.getAttributes().get("input1");
UIInput input2 = (UIInput) component.getAttributes().get("input2");
UIInput input3 = (UIInput) component.getAttributes().get("input3");
Object value1 = input1.getSubmittedValue();
Object value2 = input2.getSubmittedValue();
Object value3 = input3.getSubmittedValue();
if (value1.toString().isEmpty() && value2.toString().isEmpty() && value3.toString().isEmpty()) {
String errorMsg = "fill in at least 1";
FacesMessage msg = new FacesMessage(FacesMessage.SEVERITY_ERROR, errorMsg, errorMsg);
FacesContext.getCurrentInstance().addMessage("form1:input11", msg);
//throw new ValidatorException(msg);
}
}
}
The code is working fine, but I face a problem: how do I highlight the border of the name1 inputText (or both the name1 and name2 inputTexts) in red, as JSF usually does when validation fails?
image as reference:
http://img443.imageshack.us/img443/8106/errork.jpg
thanks in advance
A: Mark them invalid by UIInput#setValid(), passing false.
input1.setValid(false);
input2.setValid(false);
input3.setValid(false);
The borders are specific to PrimeFaces <p:inputText>, so you don't need to add any CSS boilerplate as suggested by the other answerer.
Note that this can also be achieved by OmniFaces <o:validateAll> tag without the need to homegrow a custom validator.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14376875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Print PDF using GhostScript I am in need of your support on the following issue, since it has been troubling me for a while. We have a small C# utility that prints a given PDF using Ghostscript. It prints as expected but fails to retain the page formatting. However, pages are printed as expected when I switch to Adobe Acrobat in place of Ghostscript. So I presume I am making some obvious mistake in the Ghostscript command-line arguments.
Background
Following is the core C# logic, which prints a given PDF file with varying styles across its pages. The given PDF file has pages:
*with inconsistent font styles and colours
*some of the pages have a normal font size whereas others are printed extra small
*some of the pages have the recommended margin but others have very small margins
*some of the pages are in colour and the rest in grey
*some of the pages are landscape whereas others are portrait
In short, the PDF I am trying to print is nothing but a consolidation (joining individual PDFs into one large PDF) of numerous small PDF documents with varying font styles, sizes, and margins.
Issue
The following logic uses Ghostscript (v9.02) to print a PDF file. Though it prints any given PDF, it fails to retain the page formatting, including header, footer, font size, margin, and orientation (my PDF file has pages that are both landscape and portrait).
Interestingly, if I use Acrobat Reader to print the same PDF, then it prints as expected along with all page-level formatting.
PDF specimen: First section, Second section
void PrintDocument()
{
var psInfo = new ProcessStartInfo();
psInfo.Arguments =
String.Format(
" -dPrinted -dBATCH -dNOPAUSE -dNOSAFER -q -dNumCopies=1 -sDEVICE=ljet4 -sOutputFile=\"\\\\spool\\{0}\" \"{1}\"",
GetDefaultPrinter(), @"C:\PDFOutput\test.pdf");
psInfo.FileName = @"C:\Program Files\gs\gs9.10\bin\gswin64c.exe";
psInfo.UseShellExecute = false;
using (var process= Process.Start(psInfo))
{
process.WaitForExit();
}
}
A: I think you asked this question before, and it's also quite clear from your code sample that you are using GSView, not Ghostscript.
Now, while GSView does use Ghostscript to do the heavy lifting, its a concern that you are unable to differentiate between these two applications.
You still haven't provided an example PDF file to look at, nor a command line, though you have now at least managed to quote the Ghostscript version. You need to also give a command line (no, I'm not prepared to assemble it from reading your code) and you should try this from the command line, not inside your own application, in order to show that it's not your application making the error.
You should consider upgrading Ghostscript to the current version.
Note that a quick perusal of your code indicates that you are specifying a number of command-line options (e.g. -dPDFSETTINGS) which are only appropriate for converting a file into PDF, not for any other purpose (such as printing).
So as I said before, provide a specimen file to reproduce the problem, and a command line (preferably a Ghostscript command line) which causes the problem. Knowing which printer you are using would probably be useful too, although its highly unlikely I will have a duplicate to test on.
A: Answer - UPDATE 16/12/2013
I was managed to get it fixed and wanted to enclose the working solution if it help others. Special thanks to 'KenS' since he spent lot of time to guide me.
To summarize, I finally decided to use GSView along with GhostScript to print PDF to bypass Adobe. The core logic is given below;
//PrintParamter is a custom data structure to capture file related info
private void PrintDocument(PrintParamter fs, string printerName = null)
{
if (!File.Exists(fs.FullyQualifiedName)) return;
var filename = fs.FullyQualifiedName ?? string.Empty;
printerName = printerName ?? GetDefaultPrinter(); //get your printer here
var processArgs = string.Format("-dAutoRotatePages=/All -dNOPAUSE -dBATCH -sPAPERSIZE=a4 -dFIXEDMEDIA -dPDFFitPage -dEmbedAllFonts=true -dSubsetFonts=true -dPDFSETTINGS=/prepress -dNOPLATFONTS -sFONTPATH=\"C:\\Program Files\\gs\\gs9.10\\fonts\" -noquery -dNumCopies=1 -all -colour -printer \"{0}\" \"{1}\"", printerName, filename);
try
{
var gsProcessInfo = new ProcessStartInfo
{
WindowStyle = ProcessWindowStyle.Hidden,
FileName = gsViewEXEInstallationLocation,
Arguments = processArgs
};
using (var gsProcess = Process.Start(gsProcessInfo))
{
gsProcess.WaitForExit();
}
    }
    catch (Exception ex)
    {
        // handle or log the failure to launch GSView
    }
}
A: You could use GSPRINT.
I've managed to make it work by just copying gsprint.exe/gswin64c.exe/gsdll64.dll into a directory and launching it from there.
Sample code:
// This uses gsprint (mind the paths)
private const string gsPrintExecutable = @"C:\gs\gsprint.exe";
private const string gsExecutable = @"C:\gs\gswin64c.exe";
string pdfPath = @"C:\myShinyPDF.PDF"
string printerName = "MY PRINTER";
string processArgs = string.Format("-ghostscript \"{0}\" -copies=1 -all -printer \"{1}\" \"{2}\"", gsExecutable, printerName, pdfPath );
var gsProcessInfo = new ProcessStartInfo
{
WindowStyle = ProcessWindowStyle.Hidden,
FileName = gsPrintExecutable ,
Arguments = processArgs
};
using (var gsProcess = Process.Start(gsProcessInfo))
{
gsProcess.WaitForExit();
}
A: Try the following command within Process.Start():
gswin32c.exe -sDEVICE=mswinpr2 -dBATCH -dNOPAUSE -dNOPROMPT -dNoCancel -dPDFFitPage -sOutputFile="%printer%\\[printer_servername]\[printername]" "[filepath_to_pdf]"
It should look like this in C#:
string strCmdText = "gswin32c.exe -sDEVICE=mswinpr2 -dBATCH -dNOPAUSE -dNOPROMPT -dNoCancel -dPDFFitPage -sOutputFile=\"%printer%\\\\[printer_servername]\\[printername]\" \"[filepath_to_pdf]\"";
System.Diagnostics.Process.Start("CMD.exe", strCmdText);
This will place the specified PDF file into the print queue.
Note: your gswin32c.exe must be in the same directory as your C# program. I haven't tested this code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20524323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Emulator Settings Menu doesn't work Using the level 11/3.0 emulator, the 'Settings' menu doesn't work. Clicking on any item, e.g. 'Sound' or 'Screen', does nothing beyond the momentary highlighting of the item.
In eclipse, a logcat entry appears
INFO/ActivityManager(73): Starting: Intent { act=android.intent.action.MAIN cmp=com.android.settings/.Settings (has extras) } from pid 375
and nothing else.
What's going on here??? I only want to change the language settings from the default Chinese!
update: I've got the screen set to a normal size (320x480) instead of the default (and only option) WXGA. It works fine if I use the WXGA setting - new windows open etc. Is ver 3.0 only for tablets? I thought the system was supposed to gracefully accommodate different-sized screens. I only installed it because ver 2.3 doesn't do the JavaScript bridge or location spoofing.
A: It works for me.
Make sure you have your "Device ram size" setting for this AVD set high. It will default to 256, but I recommend 1024 (MB) if you can spare it. You can adjust this via the SDK and AVD Manager.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5098345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Handling changes in an interface shared across multiple solutions? Our "main" solution is the development code: shared libraries, services, UI projects, etc. The other solution is an integration and automated tests solution. It references several of the development projects. The reason it is separate is to avoid interference with the development solution's unit test VSMDI file. And to allow us to play with different execution methods (other test runners, like Gallio or StoryTeller) without interfering with the development solution.
Recently, an interface changed in the development solution, one of our test mocks implemented that interface. But, it was not updated because there was no warning at compile time because it was in another solution. This broke our CI build.
Does anyone have a similar setup? How do you handle these issues, do you follow a strict procedure or is there some kind of technical answer?
A: One approach would be to extract any shared interfaces into a particular directory, and then use your version control system to ensure that the directory is the same between both projects -- for example, if you were using Subversion, it has a feature called "externals" that allows one project to contain a directory that is actually a link to a specified directory (or a specified version of a specified directory) in another project.
A: If your mock implements an interface from a referenced project, then that project must have been built along with the rest of the test projects. If that really isn't the case, check your build order/build configurations in Visual Studio.
It's still possible that an interface change does not trigger any compilation errors but tests do fail. But that's unrelated to solution setup.
A: I moved all my non-executing test infrastructure code into a single project. It's now in the development and test solution. This way, if development code is automatically refactored, it's changed in my project. If breaking changes are made to an interface, they'll become errors during compile.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2493145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Why don't you need to pass arguments to a qsort comparator function? Code below taken from here.
/* qsort example */
#include <stdio.h>
#include <stdlib.h>
int values[] = { 40, 10, 100, 90, 20, 25 };
int compare (const void * a, const void * b)
{
return ( *(int*)a - *(int*)b );
}
int main ()
{
int n;
qsort (values, 6, sizeof(int), compare);
for (n=0; n<6; n++)
printf ("%d ",values[n]);
return 0;
}
We have a compare function with parameters in its signature but when we call it in qsort no arguments are passed. How are the values of a and b passed to the function? Thanks
A: In the context of this expression:
qsort (values, 6, sizeof(int), compare);
the subexpression compare that identifies a function decays into a pointer to that function (and not a function call). The code is effectively equivalent to:
qsort (values, 6, sizeof(int), &compare);
This is exactly the same thing that happens to arrays when used as arguments to a function (which you might or not have seen before but is more frequently asked):
void f( int * x );
int main() {
int array[10];
f( array ); // f( &array[0] )
}
A: When calling qsort, you're passing a pointer to the function which is why you don't specify any parameters.
Inside, the qsort implementation chooses values from the 'values' array and calls the 'compare' function. That's how 'a' and 'b' get passed.
A: qsort passes the addresses of whichever items in the array it wants to compare. For example, &values[3] and &values[5].
Since it doesn't really know the actual types of the items in the array, it uses the size parameter to correctly compute the addresses. See this implementation for example: http://insanecoding.blogspot.ie/2007/03/quicksort.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11353260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Heroku timing out when using sequelize As of yesterday (04/24/20) Heroku bumped the standard node version to v14. If you are using sequelize this can cause any call to sequelize to hang without an error until heroku times out.
A: You can fix this by downgrading to v13 of node.
add:
"engines": { "node": "13.x" }
to your package.json and Heroku will respect this version.
The issue is being tracked here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61393954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: what does super really do in python I have just read Method Resolution Order by GvR, and I wonder whether the following statement from Python's Super is nifty, but you can't use it holds true (I agree with it): super() causes the next method in the MRO to be called? It is also noted in this comment.
One big problem with 'super' is that it sounds like it will cause the
superclass's copy of the method to be called. This is simply not the
case, it causes the next method in the MRO to be called (...)
class A(object):
def __init__(self):
#super(A, self).__init__()
print 'init A'
class B(object):
def __init__(self):
print 'init B'
class C(A, B):
def __init__(self):
super(C, self).__init__()
print 'init C'
c = C()
gives
init A
init C
While
class A(object):
def __init__(self):
super(A, self).__init__()
print 'init A'
class B(object):
def __init__(self):
print 'init B'
class C(A, B):
def __init__(self):
super(C, self).__init__()
print 'init C'
c = C()
gives
init B
init A
init C
A: Looks like the expected results in both cases... In the first case C calls to A (next class in MRO) which prints "init A" and returns so flow comes back to C which prints "init C" and returns. Matches your output.
In the second case C calls A (next in MRO) which calls B (next to A in MRO) which prints "init B" and returns so flow comes back to A which prints "init A" and returns back to C which prints "init C".
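You can confirm that ordering by inspecting the MRO directly — a quick check (print() used as a function so this runs on Python 2 and 3):

```python
class A(object):
    def __init__(self):
        super(A, self).__init__()
        print('init A')

class B(object):
    def __init__(self):
        print('init B')

class C(A, B):
    def __init__(self):
        super(C, self).__init__()
        print('init C')

# The linearization C -> A -> B -> object explains why A's super()
# call reaches B even though B is not A's superclass.
print([cls.__name__ for cls in C.__mro__])  # ['C', 'A', 'B', 'object']
```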
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20393543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Printing as a dictionary in python I have a problem in Python: I want to create a function that prints the contents of a user-supplied file to a new file (example.txt).
The old file is like this:
{'a':1,'b':2...)
and I want the new file like:
a 1,b 2(the next line)
The function I made runs, but it doesn't write anything to the new file. Can someone help me, please?
def printing(file):
infile=open(file,'r')
outfile=open('example.txt','w')
dict={}
file=dict.values()
for key,values in file:
print key
print values
outfile.write(str(dict))
infile.close()
outfile.close()
A: This creates a new empty dictionary:
dict={}
dict is not a good name for a variable as it shadows the built-in type dict and could be confusing.
This makes the name file point at the values in the dictionary:
file=dict.values()
file will be empty because dict was empty.
This iterates over pairs of values in file.
for key,values in file:
As file is empty nothing will happen. However if file weren't empty, the values in it would have to be pairs of values to unpack them into key, values.
This converts dict to a string and writes it to the outfile:
outfile.write(str(dict))
Note that write expects a string, so the str conversion is needed here; passing the dict directly would raise a TypeError.
You don't actually do anything with infile.
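Putting those fixes together: assuming the input file really does contain a Python dict literal like {'a': 1, 'b': 2} (as in the question), a minimal corrected sketch could look like this. The 'a 1,b 2' output format is taken from the question, and ast.literal_eval parses the literal safely instead of starting from an empty dict:

```python
import ast

def printing(path, out_path='example.txt'):
    # Read and parse the dict literal from the input file.
    # literal_eval only evaluates literals, unlike eval.
    with open(path) as infile:
        data = ast.literal_eval(infile.read())
    # Join "key value" pairs with commas: a 1,b 2
    with open(out_path, 'w') as outfile:
        outfile.write(','.join('%s %s' % (k, v) for k, v in data.items()))
```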
A: You can use the re module (regular expressions) to achieve what you need. A solution could look like the one below; of course, you can customize it to fit your needs. Hope this helps.
import re
def printing(file):
outfile=open('example.txt','a')
with open(file,'r') as f:
for line in f:
new_string = re.sub('[^a-zA-Z0-9\n\.]', ' ', line)
outfile.write(new_string)
printing('output.txt')
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32420612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: jQuery append html element but with a different class name each time I'm creating a sitemap function for my website, in which I have a function that appends a new select box with multiple options, and a remove function which removes the select box.
Well, I say it removes the select box, but what it actually does is remove all the select boxes that were created; I want it to target only the select box it is related to.
I believe one way to implement this is to assign a different class to the select box element, does anyone know how I can do this?
Or can anyone recommend a better way to handle this?
The code I have so far is below, or view my jsFiddle
$("#newsublevel").click(function() {
$(".navoptions").append('<br/><select class="newoption"><option value="Home">Home</option><option value="Home">Home</option><option value="Home">Home</option><option value="Home">Home</option><option value="Home">Home</option></select><a href="#" class="remove">Remove</a>');
});
$(".navoptions").on('click','.remove',function() {
$(".newoption, .remove").remove();
});
add.html
<div class="maincontent">
<h2 class="sitemaphead">Sitemap</h2>
<p>Add a sub level to About.</p>
<div class="navoptions">
<select>
<option value="Home">Home</option>
<option value="Home">Home</option>
<option value="Home">Home</option>
<option value="Home">Home</option>
<option value="Home">Home</option>
<option value="Home">Home</option>
<option value="Home">Home</option>
</select>
</div>
<p id=""><a href="#" id="newsublevel">Click here</a> to add another sub level</p>
</div>
A: this should work
$("#newsublevel").click(function() {
$(".navoptions").append('<div><select class="newoption"><option value="Home">Home</option><option value="Home">Home</option><option value="Home">Home</option><option value="Home">Home</option><option value="Home">Home</option></select><a href="#" class="remove">Remove</a></div>');
});
$(".navoptions").on('click','.remove',function() {
$(this).closest('div').remove();
});
I have added a container div element. On clicking remove, the code will find the container element and remove only that.
here is the updated jsfiddle http://jsfiddle.net/8ddAW/6/
A: Try this:
$(".navoptions").on('click','.remove',function() {
$(this).prev().andSelf().remove();
});
Fiddle Here
A: Try this out:- http://jsfiddle.net/adiioo7/8ddAW/4/
JS:-
$(function() {
$("#newsublevel").click(function() {
$(".navoptions").append('<div><br/><select class="newoption"><option value="Home">Home</option><option value="Home">Home</option><option value="Home">Home</option><option value="Home">Home</option><option value="Home">Home</option></select><a href="#" class="remove">Remove</a></div>');
});
$(".navoptions").on('click','.remove',function() {
$(this).parent().remove();
});
});
A: Actually, by using the fieldset element inside your form, you can perform grouping of all the <select> statements that you would like to be "related" and control them all at once. Even deeper, if you can keep track of the indexes of the groups that you're dynamically adding to, you could use those same indexes to clear them out without having to differentiate between them using unique classes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18785132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: CUDA shared memory, does gap-access pattern penalizes performance? I am dealing with a CUDA shared memory access pattern which i am not sure if it is good or has some sort of performance penalty.
Suppose I have 512 integer numbers in shared memory
__shared__ int snums[516];
and half the threads, that is 256 threads.
The kernel works as follows:
(1) The block of 256 threads first applies a function f(x) to the even locations of snums[], then (2) it applies f(x) to the odd locations of snums[]. Function f(x) acts on the local neighborhood of the given number x, then changes x to a new value. There is a __syncthreads() in between (1) and (2).
Clearly, while i am doing (1), there are shared memory gaps of 32bits because of the odd numbers not being accessed. The same occurs in (2), there will be gaps on the even locations of snums[].
From what i read on CUDA documentation, memory bank conflicts should occur when threads access the same locations. But they do not talk about gaps.
Will there there be any problem with banks that could incur in a performance penalty?
A: I guess you meant:
__shared__ int snums[512];
Will there be any bank conflict and performance penalty?
Assuming at some point your code does something like:
int a = snums[2*threadIdx.x]; // this would access every even location
the above line of code would generate an access pattern with 2-way bank conflicts. 2-way bank conflicts means the above line of code takes approximately twice as long to execute as the optimal no-bank-conflict line of code (depicted below).
If we were to focus only on the above line of code, the obvious approach to eliminating the bank conflict would be to re-order the storage pattern in shared memory so that all of the data items previously stored at snums[0], snums[2], snums[4] ... are now stored at snums[0], snums[1], snums[2] ... thus effectively moving the "even" items to the beginning of the array and the "odd" items to the end of the array. That would allow an access like so:
int a = snums[threadIdx.x]; // no bank conflicts
However you have stated that a calculation neighborhood is important:
Function f(x) acts on the local neighborhood of the given number x,...
So this sort of reorganization might require some special indexing arithmetic.
On newer architectures, shared memory bank conflicts don't occur when threads access the same location but do occur if they access locations in the same bank (that are not the same location). The bank is simply the lowest order bits of the 32-bit index address:
snums[0] : bank 0
snums[1] : bank 1
snums[2] : bank 2
...
snums[32] : bank 0
snums[33] : bank 1
...
(the above assumes 32-bit bank mode)
This answer may also be of interest
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23897760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to form str.decode line? With this line
str(hex(int(207))).decode('cp1251',errors='strict')
I get
'str' object has no attribute 'decode'
What is the correct way to format the line?
A: Problem
The decode method you want belongs to Bytes and BytesArray objects. So you need to convert your hex string to Bytes (or BytesArray I guess).
Solution
For this, you can use the fromhex method to convert the hex string. But it may require some formatting beforehand to exclude the '0x' part of the string. You may be better off therefore using Python's format, or f-strings, instead of hex.
Here is an example.
integer = 207
hexstring = f'{integer:x}'
hexbytes = bytes.fromhex(hexstring)
decoded = hexbytes.decode('cp1251',errors='strict')
Of course, you can combine the above into your original one-liner if you wish.
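For example, combined into one line — with one caveat: use :02x (zero-padded) rather than :x, because bytes.fromhex raises ValueError on odd-length strings (e.g. for integer = 7):

```python
integer = 207  # 0xCF, which decodes to the Cyrillic letter 'П' in cp1251
decoded = bytes.fromhex(f'{integer:02x}').decode('cp1251', errors='strict')
print(decoded)  # П
```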
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62968536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Android LruCache between two activities I read about using LruCache on developer.android.com. I create a blurred Bitmap in one activity and put it into the cache, but when I try to access the cached image from another activity it returns null. Any examples or links on how to properly use the cache will be greatly appreciated, thanks!
A: The LruCache is being instantiated as a member of one class, so it won't be accessible from another class. You could try the approach mentioned in this answer:
https://stackoverflow.com/a/14325075/2931650
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24783437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I use audispd to send messages forwarded from a remote system to local syslog Using RHEL6, I currently have audispd set up to send logs to a remote server. The remote server successfully receives the messages and writes them to the remote audit log. My problem is, I can't seem to get the forwarded messages (local ones work) to be processed by audispd and written to rsyslog.
This doesn't work.
box1 auditd ===> box1 audispd ===> box2 auditd XXX> box2 audispd XXX> box2 rsyslog
This does.
box2 auditd ===> box2 audispd ===> box2 rsyslog
I know generally how to configure audispd to send local logs to rsyslog, but the forwarded logs are not going to rsyslog. The X's above show where the traffic is not reaching its destination.
I'm not looking to use imfile or other workarounds unless it is not possible to send forwarded messages through audispd on box2. I know I can send to rsyslog on box1, but it is my intention not to.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19801634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Can't bind to a property I've got the following markup.
<TextBox x:Name="Address"
         Text="{x:Static local:MainWindow.Boundie.SomeProp}">
</TextBox>
In the code behind I have a static property like so.
static Something Boundie { get; set; }
public class Something { public String SomeProp { get; set; } }
The problem is that it nags that "type expected" when I hover over Boundie and "static member expected" when I hover over SomeProp. When I leave out the latter, it only complains that the expected type is String but it only sees Something.
How do I bind to a static member's non-static field?
Why I want to do that? Because I want to reuse the domain object model and those classes are not equipped with static members.
A: SomeProp is an instance property, so you cannot use x:Static to access it (note also that x:Static and Binding only see public members, and your Boundie property is not declared public). You can bind to it using a combination of a static Source and Path:
<TextBox ...
Text="{Binding
Source={x:Static local:MainWindow.Boundie},
Path=SomeProp}"/>
A: <object property="{x:Static prefix:typeName.staticMemberName}" .../>
http://msdn.microsoft.com/en-us/library/ms742135.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27837488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |