Q: Split list of students according to their mark - haskell There is a list of students where
data Student = Student {
name :: String,
mark :: Mark
} deriving Show
data Mark = F|E|D|C|B|A deriving (Show, Eq, Ord)
I need to split it like this: [(mark, [students with this mark])]
I made something like this:
splitToGroups :: [Student] -> [(Mark, [String])]
splitToGroups [] = []
splitToGroups students = foldr funct [] students where
funct student [] = [(mark student,[name student])]
funct student ((x,xs):xss) | mark student == x = ((x,(name student):xs):xss)
| otherwise = (mark student,[name student]):(x,xs):xss
but it works incorrectly. Maybe someone knows how it could be done...
A: Don't manually recurse if you can use standard tools. You can do this with sorting and grouping, but IMO preferable is to use the types to express that you're building an associative result, i.e. a map from marks to students.
import qualified Data.Map as Map
splitToGroups :: [Student] -> Map.Map Mark [String]
splitToGroups students = Map.fromListWith (<>)
[ (sMark, [sName])
| Student sName sMark <- students ]
(If you want a list in the end, just use Map.toList.)
A: If you view Student as a tuple, your function has this type: splitToGroups :: [(Mark, String)] -> [(Mark, [String])].
So you need a function with type: [(a,b)] -> [(a,[b])].
Using Hoogle: search results
I get the following functions:
groupSort :: Ord k => [(k, v)] -> [(k, [v])]
collectSndByFst :: Ord a => [(a, b)] -> [(a, [b])]
They should solve your problem; remember to import the modules listed in the link to make them work. You should make a function that maps from Student -> (Mark, String) first.
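For example, a minimal sketch of that route (assuming groupSort from the extra package's Data.List.Extra module, and the Student type from the question):
import Data.List.Extra (groupSort)

splitToGroups :: [Student] -> [(Mark, [String])]
splitToGroups students = groupSort [ (mark s, name s) | s <- students ]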
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70294286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Passing data to collectionViewCell I want to use a custom struct in my collection view cell. I get the data from my API service and am trying to pass it to my custom collection view cell.
I found a couple of answers but I still couldn't figure out how to do it.
Here is where I get the actual data:
func FetchFormData(linkUrl: String) {
let parameters: [String: AnyObject] = [:]
let postString = (parameters.flatMap({ (key, value) -> String in
return "\(key)=\(value)"
}) as Array).joined(separator: "&")
let url = URL(string: linkUrl)!
var request = URLRequest(url: url)
request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
request.httpMethod = "POST"
request.httpBody = postString.data(using: .utf8)
let task = URLSession.shared.dataTask(with: request) { data, response, error in
guard let data = data, error == nil else {
print("error=\(String(describing: error))")
return
}
if let httpStatus = response as? HTTPURLResponse, httpStatus.statusCode != 200 {
print("statusCode should be 200, but is \(httpStatus.statusCode)")
print("response = \(String(describing: response))")
}
let responseString = String(data: data, encoding: .utf8)
let contentData = responseString?.data(using: .utf8)
do {
let decoder = JSONDecoder()
self.formData = try decoder.decode(FormModel.self, from: contentData!)
} catch let err {
print("Err", err)
}
DispatchQueue.main.async {
//here is the where I reload Collection View
self.collectionView.reloadData()
}
}
task.resume()
}
Also here I'm trying to pass data to the cell:
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: cellId, for: indexPath) as! BaseFormCollectionViewCell
cell.backgroundColor = .green
//Data could be print out here
print(self.formData?.components?[indexPath.row])
cell.formComponent = (self.formData?.components?[indexPath.row])!
return cell
}
The actual problem is starting into my cell class
class BaseFormCollectionViewCell: UICollectionViewCell {
var formComponent: FormComponent!{
didSet {
//data can be print out here
print("Passed value is: \(formComponent)")
}
}
override init(frame: CGRect) {
super.init(frame: frame)
//this part is always nil
print(formComponent)
}
}
As you can see in the code, it's going well until my collection view cell.
It should be a lot simpler, but I couldn't figure out what's going on and why it's happening.
A: Modify your cell class as follows:
class BaseFormCollectionViewCell: UICollectionViewCell {
var formComponent: FormComponent!{
didSet {
//this is unnecessary. You can achieve what you want in a cleaner way using the configure function as shown below
//data can be print out here
print("Passed value is: \(formComponent)")
}
}
override init(frame: CGRect) {
super.init(frame: frame)
//this part is always nil
print(formComponent)
}
func configure() {
//configure your UI of cell using self.formComponent here
}
}
finally
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: cellId, for: indexPath) as! BaseFormCollectionViewCell
cell.backgroundColor = .green
//Data could be print out here
print(self.formData?.components?[indexPath.row])
cell.formComponent = (self.formData?.components?[indexPath.row])!
(cell as! BaseFormCollectionViewCell).configure()
return cell
}
Look for (cell as! BaseFormCollectionViewCell).configure() in cellForItemAt; that's how you trigger the UI configuration of the cell after passing data to the cell in the statement above it.
Quite frankly, you can get rid of didSet and rely on configure as shown.
Hope it helps
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49402387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: android - rendering bitmaps from native code - nativeCreate bitmaps are not cleaned up from memory I am streaming a video in android and I decode frames in native code, then copy the pixels to a bitmap and display the bitmap in Java using canvas.unlockandpost with a while loop for all the bitmaps.
Everything is fine, but the streaming of bitmaps is very slow and causes a crash. I only see a message on logcat saying that "low memory no more background processes".
I see in the allocation table from Eclipse that the bitmaps I created are not getting deleted from memory, even though I am overwriting the pixels every time. Is there any way I can clean up the memory it is keeping?
My code is as follows.
C Code :
AndroidBitmapInfo info;
void* pixels;
int ret;
if ((ret =AndroidBitmap_lockPixels(env, bitmap, &pixels)) < 0) {
}
memcpy(pixels, pictureRGB, 480*320);
AndroidBitmap_unlockPixels(env, bitmap);
Java Code
Bitmap mBitmap = Bitmap.createBitmap(480, 320, Bitmap.Config.RGB_565);
renderbitmap(mBitmap, 0);
canvas.drawBitmap(mBitmap, 0, 0, null);
A: The code shown in your question is missing some critical parts to fully understand your problem, but it sounds like you're creating a new bitmap for every frame. Since Android only allows for about 16MB of allocations for each Java VM, your app will get killed after about 52 frames. You can create a bitmap once and re-use it many times. To be more precise, you are creating a bitmap (Bitmap.CreateBitmap), but not destroying it (Bitmap.recycle). That would solve your memory leak, but still would not be the best way to handle it. Since the bitmap size doesn't change, create it once when your activity starts and re-use it throughout the life of your activity.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5082255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ASP.NET web pages code behind So does the new ASP.NET web pages (also called razor pages) framework using the Razor view engine (referring to this: http://www.asp.net/webmatrix/tutorials/1-getting-started-with-webmatrix-and-asp-net-web-pages) not actually have any code-behind file? I looked on the samples and couldn't find an example. I assume no, but maybe there is a header reference where you can link it that I might be missing?
Can anyone confirm?
Thanks.
A: According to this blog you can easily have code-behind. http://www.compiledthoughts.com/2011/01/aspnet-mvc3-creating-razor-view-engine.html
Whether you need it or not is for you to decide. I strongly resent answers which start with "you don't need it..".
Every person in the world must have a choice whether to shoot for the moon or shoot himself in the foot. That's what I think is right.
A: There are no code behind files for Razor Views because you don't need them. You are writing the presentation logic using the Razor syntax on the view itself.
Razor views simplify the mixing of raw HTML with dynamic content rendered using Razor syntax, so you don't need a separate file. Furthermore, there are no such things as Controls or Components in Razor views, so you don't need to configure them in a separate file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6797492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Populate a JComboBox with file names of a specific directory I want to populate a JComboBox with file names of a directory. Then, if selected, each field has to show a JList. How can I implement this?
Thanks
A: You can use File.listFiles()
http://download.oracle.com/javase/1.4.2/docs/api/java/io/File.html
to get the array of Files in a particular directory (the one you initialized the File-object with).
You can then use the individual File's getName() method to get the names, then use JComboBox's addItem() method to add those names:
http://download.oracle.com/javase/1.4.2/docs/api/javax/swing/JComboBox.html
Finally, to do something when the user clicks one of those names you have to install an item-listener using the JComboBox's addItemListener()-method. There are tutorials on how to do this last part and in general it just calls your ItemListener, giving it an ItemEvent, which you can then use to check which name was clicked.
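Putting those pieces together, a self-contained sketch (the directory path is just an example; wiring up the JList is left as a comment):
import java.awt.event.ItemEvent;
import java.io.File;
import javax.swing.JComboBox;
import javax.swing.JFrame;

public class FileComboDemo {
    public static void main(String[] args) {
        JComboBox<String> combo = new JComboBox<>();
        File dir = new File("/tmp");              // example directory
        File[] files = dir.listFiles();
        if (files != null) {
            for (File f : files) {
                combo.addItem(f.getName());       // fill the combo box with file names
            }
        }
        combo.addItemListener(e -> {
            if (e.getStateChange() == ItemEvent.SELECTED) {
                System.out.println("Selected: " + e.getItem());
                // here you would populate the JList for the chosen file
            }
        });
        JFrame frame = new JFrame("Files");
        frame.add(combo);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}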
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5181410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Specflow 2.0.0 What is the alternative to using ScenarioContext.Current? I have upgraded to Specflow 2.0.0. with NUnit 3 (also tried with XUnit) and I want to be able to execute test cases in parallel in an attempt to reduce the elapsed time it takes to run the tests.
On attempting to execute the tests in parallel an error is returned stating that I cannot use FeatureContext.Current and ScenarioContext.Current.
The tests use these to extract the names of the feature and scenario for the additional logging.
The tests also use the tags to be able to control the tests.
For Example:
I am aware that it is possible to put the tags in the attributes
[Binding]
public class SpecFlowHooks
{
public ScenarioContext context;
public SpecFlowHooks(ScenarioContext scenarioContext)
{
context = scenarioContext;
}
[BeforeScenario("SpecialCase")]
public void BeforeScenario_SpecialCase() {
// do some stuff
}
[BeforeScenario]
public void BeforeScenario() {
// do different stuff
}
}
The problem with this is that BeforeScenario always runs. I do not want it to run if the tag "SpecialCase" is not present as the application will not be in the correct state. I therefore extract the tags and do something different if the tag "SpecialCase" is present.
How can I find the list of tags without using
List<String> tags = ScenarioContext.Current.ScenarioInfo.Tags.ToList();
A: The alternative for parallel execution is to allow the dependency injection system which specflow uses to provide you with the ScenarioContext instance. To do that have your steps class accept an instance of the ScenarioContext and store it in a field:
[Binding]
public class StepsWithScenarioContext
{
private readonly ScenarioContext scenarioContext;
public StepsWithScenarioContext(ScenarioContext scenarioContext)
{
if (scenarioContext == null) throw new ArgumentNullException("scenarioContext");
this.scenarioContext = scenarioContext;
}
[Given(@"I put something into the context")]
public void GivenIPutSomethingIntoTheContext()
{
scenarioContext.Set("test-value", "test-key");
}
}
A fuller explanation of how to use parallel execution can be found here
A similar approach needs to be taken to get the tags. Again, add a constructor which takes a ScenarioContext to the class which holds your [BeforeScenario] method, save the ScenarioContext in a field, and use this field instead of ScenarioContext.Current.
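For example, a minimal sketch of the hooks class from the question rewritten that way (the hook bodies are placeholders, and it assumes the usual usings: System.Collections.Generic, System.Linq, TechTalk.SpecFlow):
[Binding]
public class SpecFlowHooks
{
    private readonly ScenarioContext scenarioContext;

    public SpecFlowHooks(ScenarioContext scenarioContext)
    {
        this.scenarioContext = scenarioContext;
    }

    [BeforeScenario]
    public void BeforeScenario()
    {
        List<String> tags = scenarioContext.ScenarioInfo.Tags.ToList();
        if (tags.Contains("SpecialCase"))
        {
            // do the special-case setup
        }
        else
        {
            // do the default setup
        }
    }
}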
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35128274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SalesForce REST API INVALID_SESSION_ID error using v38.0 I am using SalesForce REST API For my PHP Application
But, when I send a request to "instance_url/services/data/v38.0" along with the access token, I get this error:
[{"message":"Session expired or invalid","errorCode":"INVALID_SESSION_ID"}]
I have a developer salesforce account and have API enabled true for every profile.
This is my Headers:
$headers = array(
"Content-Type: application/x-www-form-urlencoded",
"Accept:application/json",
"Authorization: Bearer ".$access_token
);
Used in CURL.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51781035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Java: changing the order of lines with while-loop I am new to programming and I would like to ask what I should do so that the final value of k will appear before the values of a. When I try to put it inside the loop it repeats the statement, and when I put it before the loop, its value is 0.
System.out.printf("Input an integer: ");
int a = in.nextInt();
int k = 0;
System.out.print(a);
while(a > 1)
{
if(a % 2 == 0)
a = a / 2;
else
a = 3 * a + 1;
System.out.printf(", " + a);
k++;
}
System.out.println("\nk = " + k);
Output:
Input an integer: 10
10, 5, 16, 8, 4, 2, 1
k = 6
A: You could try:
System.out.printf("Input an integer: ");
int a = in.nextInt();
int k = 0;
String str_a = "";
System.out.print(a);
while(a > 1)
{
if(a % 2 == 0)
a = a / 2;
else
a = 3 * a + 1;
str_a += ", " + String.valueOf(a);
k++;
}
System.out.println("k = " + k);
System.out.println("a = " + str_a);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69657048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: javascript/jquery: focus() not working I want to focus a special id (in roundcube) by a keyboard shortcut. The html is
...
<div id="mainscreen">
<div id="messagetoolbar" class="toolbar">
<div id="mailview-left" style="width: 220px;">
<div id="mailview-right" style="left: 232px;">
...
I tried the following:
// Strg + Tab, um in Nachrichtenbereich zu kommen
...
else if (event.keyCode == 9 && event.ctrlKey) {
alert("taste erkannt");
//document.getElementById("messagetoolbar").focus();
//$("#messagetoolbar").focus();
setTimeout(function() { $('#messagetoolbar').focus(); alert("zeit"); }, 3000);
}
...
The first alert and the second alert are shown but there is no focus on the id messagetoolbar. Does anybody have an idea?
Thank you very much.
Edit: I think I should describe it better: I want to mark the first line/email in the email inbox in roundcube. The inbox is a table with tr tags... when I try your solution the first line is dotted, too, but with enter I can't open the mail and with other keys I can't MARK the first line/mail... I think I have to "simulate a left-click" to get the first line marked...?
Now I tried to use jquery's .trigger. The html of the inbox-Table is
<table id="messagelist" class="records-table messagelist sortheader fixedheader">
<thead>
<tbody>
<tr id="rcmrow27428" class="message">
<td class="threads"></td>
<td class="date">16.04.2014 13:41</td>
<td class="fromto">
...
I tried to use...
$('#messagelist tr').eq(1).addClass('message selected focused').removeClass('unfocused').trigger("click");
...but it doesn't work: it adds and removes the classes but doesn't really focus the line :-( With "buttons" it works.
EDIT AGAIN: I think the file list.js of roundcube is important for that question. There I found the following:
/**
* Set focus to the list
*/
focus: function(e)
{
var n, id;
this.focused = true;
for (n in this.selection) {
id = this.selection[n];
if (this.rows[id] && this.rows[id].obj) {
$(this.rows[id].obj).addClass('selected').removeClass('unfocused');
}
}
// Un-focus already focused elements (#1487123, #1487316, #1488600, #1488620)
// It looks that window.focus() does the job for all browsers, but not Firefox (#1489058)
$('iframe,:focus:not(body)').blur();
window.focus();
if (e || (e = window.event))
rcube_event.cancel(e);
},
Does anybody know how to modify or use this with regard to my question? Thank you!
A: Add a tabindex=0 attribute to the div you want to focus and you will be able to set focus on the div using .focus()
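A minimal sketch of that suggestion, reusing the id from the question:
<div id="messagetoolbar" class="toolbar" tabindex="0"> ... </div>
and then, inside the existing Ctrl+Tab handler:
$('#messagetoolbar').focus();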
A: You cannot focus controls like div, span etc. You can move to that div if required using bookmarks.
A: If you want to highlight the div, you can use jQuery and the following code to give a highlight effect.
$("div").click(function () {
$(this).effect("highlight", {}, 3000);
});
A: Good morning,
I've read a few times now that one can't focus table rows, but only elements which accept input from the user. Otherwise I think there must be a way to simulate a click on a table row with jquery/javascript! I tried the following:
document.onkeydown = function(event) {
...
else if (event.keyCode == 9 && event.ctrlKey) {
$('#messagelist tr').on('click', function () {
alert('I was clicked');
});
$('#messagelist tr').eq(1).click();
}
...
}
The alert is shown! But the row isn't "marked"!?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23261224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: remove code duplication in java with custom getter This question isn't about selenium, but java in general
I have the scenario below, while trying to setup an automation framework using selenium and page object model -
public class SigninPage {
WebDriver driver;
private By emailInputSelector = By.id("user_email");
private By passwordInputSelector = By.id("user_password");
private By signinBtnSelector = By.xpath("//input[@value='Log In']");
private By errorDivSelector = By.xpath("//div[@role='alert']");
private By headerLinksSelector = By.className("nav-link");
public SigninPage(WebDriver driver) {
this.driver = driver;
}
public WebElement emailInput() {
return driver.findElement(emailInputSelector);
}
public WebElement passwordInput() {
return driver.findElement(passwordInputSelector);
}
public WebElement signinBtn() {
return driver.findElement(signinBtnSelector);
}
public WebElement errorDiv() {
return driver.findElement(errorDivSelector);
}
public List<WebElement> headerElement() {
return driver.findElements(headerLinksSelector);
}
}
as you can see, all my methods like headerElement() etc are doing the same thing - driver.findElement(selector) or sometimes driver.findElements(selector)
I want to remove this code duplication, like lombok provides @Getter.
What is my best bet to achieve this?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68645012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: HTML5 Dynamic Video Background here is my code
$("body").prepend("<div class='fullscreen-bg'><video loop muted autoplay class='fullscreen-bg__video' ><source src='https://www.iresearchservices.com/wp-content/uploads/2018/03/bgvideo.mp4' type='video/mp4'></video></div>")
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
and it's working fine. But what I need is to call a function, say remove-pre-loader(), once the video is loaded.
I have a pre-loader screen, and I want to remove the pre-loader once the video is loaded.
A: You can use loadeddata or canplaythrough event to check whether the browser has loaded the current frame of the audio/video. See a full list of possible video events here
$("body").prepend(
"<div class='fullscreen-bg'><video loop muted autoplay class='fullscreen-bg__video' ><source src='https://www.iresearchservices.com/wp-content/uploads/2018/03/bgvideo.mp4' type='video/mp4'><source src='https://www.iresearchservices.com/wp-content/uploads/2018/03/bgvideo.mp4' type='video/mp4'><source src='https://www.iresearchservices.com/wp-content/uploads/2018/03/bgvideo.mp4' type='video/mp4'></video></div>"
);
$(".fullscreen-bg__video").on("loadeddata", function () {
alert("Video Loaded")
});
.fullscreen-bg {
position: fixed;
top: 0;
right: 0;
bottom: 0;
left: 0;
overflow: hidden;
z-index: -100;
}
.fullscreen-bg__video {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
@media (min-aspect-ratio: 16/9) {
.fullscreen-bg__video {
width: 100%;
height: auto;
}
}
@media (max-aspect-ratio: 16/9) {
.fullscreen-bg__video {
width: auto;
height: 100%;
}
}
@media (min-aspect-ratio: 16/9) {
.fullscreen-bg__video {
height: 300%;
top: -100%;
}
}
@media (max-aspect-ratio: 16/9) {
.fullscreen-bg__video {
width: 300%;
left: -100%;
}
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51476316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I get a regex to check that a string only contains alpha characters [a-z] or [A-Z]? I'm trying to create a regex to verify that a given string only has alpha characters a-z or A-Z. The string can be up to 25 letters long. (I'm not sure if regex can check length of strings)
Examples:
1. "abcdef" = true;
2. "a2bdef" = false;
3. "333" = false;
4. "j" = true;
5. "aaaaaaaaaaaaaaaaaaaaaaaaaa" = false; //26 letters
Here is what I have so far... can't figure out what's wrong with it though
Regex alphaPattern = new Regex("[^a-z]|[^A-Z]");
I would think that would mean that the string could contain only upper or lower case letters from a-z, but when I match it to a string with all letters it returns false...
Also, any suggestions regarding efficiency of using regex vs. other verifying methods would be greatly appreciated.
A: /// <summary>
/// Checks if string contains only letters a-z and A-Z and should not be more than 25 characters in length
/// </summary>
/// <param name="value">String to be matched</param>
/// <returns>True if matches, false otherwise</returns>
public static bool IsValidString(string value)
{
string pattern = @"^[a-zA-Z]{1,25}$";
return Regex.IsMatch(value, pattern);
}
A:
The string can be up to 25 letters long.
(I'm not sure if regex can check length of strings)
Regexes certainly can check the length of a string - as can be seen from the answers posted by others.
However, when you are validating a user input (say, a username), I would advise doing that check separately.
The problem is that a regex can only tell you if a string matched it or not. It won't tell you why it didn't match. Was the text too long, or did it contain disallowed characters - you can't tell. It's far from friendly when a program says: "The supplied username contained invalid characters or was too long". Instead you should provide separate error messages for different situations.
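A hedged sketch of that "separate checks, separate messages" idea (method and message wording are illustrative; it assumes using System.Text.RegularExpressions):
public static string ValidateUsername(string value)
{
    if (value.Length > 25)
        return "The username must be at most 25 characters long.";
    if (!Regex.IsMatch(value, "^[a-zA-Z]+$"))
        return "The username may only contain the letters a-z or A-Z.";
    return null; // valid
}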
A: The regular expression you are using is an alternation of [^a-z] and [^A-Z]. And the expressions [^…] mean to match any character other than those described in the character set.
So overall your expression means to match either any single character other than a-z or other than A-Z.
But you rather need a regular expression that matches a-zA-Z only:
[a-zA-Z]
And to specify the length of that, anchor the expression with the start (^) and end ($) of the string and describe the length with the {n,m} quantifier, meaning at least n but not more than m repetitions:
^[a-zA-Z]{0,25}$
A: Regex lettersOnly = new Regex("^[a-zA-Z]{1,25}$");
*
*^ means "begin matching at start of string"
*[a-zA-Z] means "match lower case and upper case letters a-z"
*{1,25} means "match the previous item (the character class, see above) 1 to 25 times"
*$ means "only match if cursor is at end of string"
A:
I'm trying to create a regex to verify that a given string only has alpha
characters a-z or A-Z.
Easily done, as many of the others have indicated, using what are known as "character classes". Essentially, these allow us to specify a range of values to use for matching:
(NOTE: for simplification, I am assuming implict ^ and $ anchors which are explained later in this post)
[a-z] Match any single lower-case letter.
ex: a matches, 8 doesn't match
[A-Z] Match any single upper-case letter.
ex: A matches, a doesn't match
[0-9] Match any single digit zero to nine
ex: 8 matches, a doesn't match
[aeiou] Match only on a or e or i or o or u.
ex: o matches, z doesn't match
[a-zA-Z] Match any single lower-case OR upper-case letter.
ex: A matches, a matches, 3 doesn't match
These can, naturally, be negated as well:
[^a-z] Match anything that is NOT an lower-case letter
ex: 5 matches, A matches, a doesn't match
[^A-Z] Match anything that is NOT an upper-case letter
ex: 5 matches, A doesn't match, a matches
[^0-9] Match anything that is NOT a number
ex: 5 doesn't match, A matches, a matches
[^Aa69] Match anything as long as it is not A or a or 6 or 9
ex: 5 matches, A doesn't match, a doesn't match, 3 matches
To see some common character classes, go to:
http://www.regular-expressions.info/reference.html
The string can be up to 25 letters long.
(I'm not sure if regex can check length of strings)
You can absolutely check "length" but not in the way you might imagine. We measure repetition, NOT length strictly speaking using {}:
a{2} Match two a's together.
ex: a doesn't match, aa matches, aca doesn't match
4{3} Match three 4's together.
ex: 4 doesn't match, 44 doesn't match, 444 matches, 4434 doesn't match
Repetition has values we can set to have lower and upper limits:
a{2,} Match on two or more a's together.
ex: a doesn't match, aa matches, aaa matches, aba doesn't match, aaaaaaaaa matches
a{2,5} Match on two to five a's together.
ex: a doesn't match, aa matches, aaa matches, aba doesn't match, aaaaaaaaa doesn't match
Repetition extends to character classes, so:
[a-z]{5} Match any five lower-case characters together.
ex: bubba matches, Bubba doesn't match, BUBBA doesn't match, asdjo matches
[A-Z]{2,5} Match two to five upper-case characters together.
ex: bubba doesn't match, Bubba doesn't match, BUBBA matches, BUBBETTE doesn't match
[0-9]{4,8} Match four to eight numbers together.
ex: bubba doesn't match, 15835 matches, 44 doesn't match, 3456876353456 doesn't match
[a3g]{2} Match two characters together, each of which is a OR 3 OR g.
ex: aa matches, ba doesn't match, 33 matches, 38 doesn't match, a3 also matches
Now let's look at your regex:
[^a-z]|[^A-Z]
Translation: Match anything as long as it is NOT a lowercase letter OR an upper-case letter.
To fix it so it meets your needs, we would rewrite it like this:
Step 1: Remove the negation
[a-z]|[A-Z]
Translation: Find any lowercase letter OR uppercase letter.
Step 2: While not stricly needed, let's clean up the OR logic a bit
[a-zA-Z]
Translation: Find any lowercase letter OR uppercase letter. Same as above but now using only a single set of [].
Step 3: Now let's indicate "length"
[a-zA-Z]{1,25}
Translation: Find any lowercase letter OR uppercase letter repeated one to twenty-five times.
This is where things get funky. You might think you were done here and you may well be depending on the technology you are using.
Strictly speaking the regex [a-zA-Z]{1,25} will match one to twenty-five upper or lower-case letters ANYWHERE on a line:
[a-zA-Z]{1,25}
a matches, aZgD matches, BUBBA matches, 243242hello242552 MATCHES
In fact, every example I have given so far will do the same. If that is what you want then you are in good shape but based on your question, I'm guessing you ONLY want one to twenty-five upper or lower-case letters on the entire line. For that we turn to anchors. Anchors allow us to specify those pesky details:
^ beginning of a line
(I know, we just used this for negation earlier, don't get me started)
$ end of a line
We can use them like this:
^a{3} From the beginning of the line match a three times together
ex: aaa matches, 123aaa doesn't match, aaa123 matches
a{3}$ Match a three times together at the end of a line
ex: aaa matches, 123aaa matches, aaa123 doesn't match
^a{3}$ Match a three times together for the ENTIRE line
ex: aaa matches, 123aaa doesn't match, aaa123 doesn't match
Notice that aaa matches in all cases because it has three a's at the beginning and end of the line technically speaking.
So the final, technically correct solution, for finding a "word" that is "up to twenty-five characters long" on a line would be:
^[a-zA-Z]{1,25}$
The funky part is that some technologies implicitly put anchors in the regex for you and some don't. You just have to test your regex or read the docs to see if you have implicit anchors.
A: Do I understand correctly that it can only contain either uppercase or lowercase letters?
new Regex("^([a-z]{1,25}|[A-Z]{1,25})$")
A regular expression seems to be the right thing to use for this case.
By the way, the caret ("^") at the first place inside a character class means "not", so your "[^a-z]|[^A-Z]" would mean "not any lowercase letter, or not any uppercase letter" (disregarding that a-z are not all letters).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/990364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How to set unique for specific field from specific table? I have this, where I get all features from the database:
$features = Feature::all();
A user can add a new feature that will be added to this table, but I want to validate it so that if the user enters something that is already in the database they get a message. So name needs to be unique. Any suggestions on how I can do that?
I tried this but it saves it anyway.
$this->validate($request, [
'name' => 'unique:features',
]);
A: From the docs
unique:table,column,except,idColumn
The field under validation must be unique in a given database table. If the column option is not specified, the field name will be used.
Specifying A Custom Column Name:
'email' => 'unique:users,email_address'
You may need to specify the column to be checked against.
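For example (assuming the column in the features table is also called name):
$this->validate($request, [
    'name' => 'unique:features,name',
]);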
A: $feauturescheck= Feauture::where('Columname', '=',Input::get('input'))->count();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41381582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: is there Material increment/decrement Button implementation for Android? is there by any chance a Material design implementation for an increment/decrement button? image for illustration
A: No, but it's quite simple to implement, you only need a linear layout with horizontal orientation, containing a button, a textview and another button.
Have an internal value to count and then associate a callback with your buttons, where you add/subtract your counter and update the textview, like this:
substractButton.setOnClickListener(v -> {
count--;
countTextView.setText(String.valueOf(count));
});
sumButton.setOnClickListener(v -> {
count++;
countTextView.setText(String.valueOf(count));
});
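A minimal layout sketch for that description (ids and sizes are illustrative, matching the variable names above):
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:orientation="horizontal">

    <Button
        android:id="@+id/substractButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="-" />

    <TextView
        android:id="@+id/countTextView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="0" />

    <Button
        android:id="@+id/sumButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="+" />
</LinearLayout>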
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72383559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: JSOUP parse for android force closes I am trying to use JSOUP to get the menu data from a website (http://hdh.ucsd.edu/mobile/dining/locationdetails.aspx?l=11) but whenever I try to fetch the links, my android app crashes. What is the reason? This is the code I have
public class SixthFragment extends Fragment {
String url = "http://hdh.ucsd.edu/mobile/dining/locationdetails.aspx?l=11";
ProgressDialog mProgressDialog;
String MENU;
String HOURS;
TextView textview;
@Nullable
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
View rootView = inflater.inflate(R.layout.fragment_sixth, container, false);
textview = (TextView) rootView.findViewById(R.id.textView10);
new JSOUP().execute();
return rootView;
}
public class JSOUP extends AsyncTask<Void, Void, Void>{
ProgressDialog dialog;
@Override
protected void onPreExecute(){
super.onPreExecute();
dialog = new ProgressDialog(getActivity());
dialog.setMessage("loading...");
dialog.show();
}
@Override
protected Void doInBackground(Void... params){
try{
Document document = Jsoup.connect(url).get();
Elements elements = document.select("a[href]");
//HOURS = elements.text();
System.out.println(elements.size());
for(int i = 0; i<elements.size(); i++){
MENU += "\n" + elements.get(i).text();
System.out.println(i);
}
}
catch (Exception e){
}
return null;
}
@Override
protected void onPostExecute(Void result){
dialog.dismiss();
textview.setText(MENU);
super.onPostExecute(result);
}
}
}
And my error shows up as this
11-28 13:48:59.175 23230-23230/com.lamdevs.tritonbites E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.lamdevs.tritonbites, PID: 23230
java.lang.NullPointerException: Attempt to invoke virtual method 'void android.widget.TextView.setText(java.lang.CharSequence)' on a null object reference
at com.lamdevs.tritonbites.fragments.SixthFragment$JSOUP.onPostExecute(SixthFragment.java:82)
at com.lamdevs.tritonbites.fragments.SixthFragment$JSOUP.onPostExecute(SixthFragment.java:43)
at android.os.AsyncTask.finish(AsyncTask.java:632)
at android.os.AsyncTask.access$600(AsyncTask.java:177)
at android.os.AsyncTask$InternalHandler.handleMessage(AsyncTask.java:645)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:135)
at android.app.ActivityThread.main(ActivityThread.java:5221)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:899)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:694)
Which I don't understand because elements has a size of 61 when I output the size of elements. Also, is there a way to get the breakfast, lunch, and dinner separately? Thank you
A: Please post your layout XML R.layout.fragment_sixth. Probably you're referring to a wrong id, which causes the NullPointerException.
To get each "Breakfast", "Lunch" and "Dinner" separately, you can simply iterate through the last three <div> tags inside the HTML element with id MainContent_divDailySpecials (I did find that id looking at the website DOM structure through Chrome Developer Tools inspector).
So, as a start, simply grab that element with
Document document = Jsoup.connect(url).get();
Element parentDiv = document.getElementById("MainContent_divDailySpecials");
and from there on you can iterate backwards to get the last three children of that div. (Please keep in mind that if, in any point in time, the structure of that page changes your code will break).
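Continuing that sketch, the last three children could be grabbed like this (assumed to correspond to breakfast, lunch and dinner; uses org.jsoup.nodes.Element and org.jsoup.select.Elements):
Elements children = parentDiv.children();
for (int i = Math.max(0, children.size() - 3); i < children.size(); i++) {
    Element section = children.get(i);
    System.out.println(section.text()); // one meal section per element
}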
A: Looking at the error log you attached ..
java.lang.NullPointerException: Attempt to invoke virtual method
'void android.widget.TextView.setText(java.lang.CharSequence)' on a null object reference
at com.lamdevs.tritonbites.fragments.SixthFragment$JSOUP.onPostExecute(SixthFragment.java:82)
It talks about a NullPointerException on a TextView in SixthFragment Line 82 i.e. textview.setText(MENU);
The textview is initialized (textview = (TextView) rootView.findViewById(R.id.textView10);) which looks okay.
So, make sure you have a TextView with id textView10 in your layout fragment_sixth.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33975210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to change Ruby and Rails version for the Rails base Docker image? Docker base image rails provides a full environment for Ruby on Rails. It pulls from the ruby upstream image. The rails base image specifies Ruby and Ruby on Rails versions.
What if we want to use different Ruby and Ruby on Rails versions?
Do we edit our Dockerfile in our project folder? Or, do we ssh into the machine, and install the ruby version we want and then build our own image?
Further details:
The rails base image documentation says that your Dockerfile can simply be one line of code:
FROM rails:onbuild
This line of code pulls from the rails image on Docker Hub. This image has its own Dockerfile. The first line of this Dockerfile is FROM ruby:2.2.
Just to restate the question, what is the best way to create a container based off of the rails image, with different Ruby and Ruby on Rails versions? If possible, some sample code might be helpful for understanding how to do this.
A: I assume you want a docker image that is suitable for plenty of rails apps.
I do not know docker at all, but maybe ignore what Docker offers to you, and do it yourself:
Create an image with all great ruby versions, maybe 1.9 and 2.3,
but I think you should just stick with the latest ruby.
Use https://github.com/rbenv/rbenv to provide a ruby env
Every Rails application usually ships with a Gemfile.
In production releases, the gem versions are locked in the Gemfile.lock file.
In case the gems need an update, you will need to update the app code and then the gems with
bundle install
So I think it's not possible to have a docker "one-fits-all" image for plenty of rails apps nicely.
Something I do when installing production rails apps is to install their gems in the app folder.
bundle install --path vendor/bundle
This puts the gems inside the app's vendor directory. I see no big chance here either to make updates easier.
As I have never tried docker, or even visited their website, my post might be useless (sorry).
I hope I understood your intentions, at least.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35353186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I download the oldest file of an FTP server? How can I download the oldest file of an FTP server?
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://192.168.47.1/DocXML");
request.Method = WebRequestMethods.Ftp.ListDirectory;
request.Credentials = new NetworkCredential("Igor", "");
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();
StreamReader reader = new StreamReader(responseStream);
string names = reader.ReadLine();
textBox12.Text = names;
A:
How can I download the oldest file of an FTP server?
Using WebRequestMethods.Ftp.ListDirectoryDetails
This will issue an FTP LIST command with a request to get the details on the files in a single request. This does not make things easy though because you will have to parse those lines, and there is no standard format for them.
Depending on the ftp server, it may return lines in a format like this:
08-10-11 12:02PM <DIR> Version2
06-25-09 02:41PM 144700153 image34.gif
06-25-09 02:51PM 144700153 updates.txt
11-04-10 02:45PM 144700214 digger.tif
Or
d--x--x--x 2 ftp ftp 4096 Mar 07 2002 bin
-rw-r--r-- 1 ftp ftp 659450 Jun 15 05:07 TEST.TXT
-rw-r--r-- 1 ftp ftp 101786380 Sep 08 2008 TEST03-05.TXT
drwxrwxr-x 2 ftp ftp 4096 May 06 12:24 dropoff
Or even another format.
This blog post "Sample code for parsing FtpwebRequest response for ListDirectoryDetails" provides an example of handling several formats.
If you know what the format is, just create a custom minimal line parser for it.
Using WebRequestMethods.Ftp.ListDirectory with WebRequestMethods.Ftp.GetDateTimestamp
This is easier, but the downside is that it requires you to submit several requests to find out the last modification dates for the directory entries.
This will get you a list of file and directory entries with names only, that is easier to parse.
public static IEnumerable<string> ListDirectory(string uri, NetworkCredential credentials)
{
var request = FtpWebRequest.Create(uri);
request.Method = WebRequestMethods.Ftp.ListDirectory;
request.Credentials = credentials;
using (var response = (FtpWebResponse)request.GetResponse())
using (var stream = response.GetResponseStream())
using (var reader = new StreamReader(stream, true))
{
while (!reader.EndOfStream)
yield return reader.ReadLine();
}
}
Then for each file you can get the last modification date by issuing a request per file:
public static DateTime GetLastModified(string fileUri, NetworkCredential credentials)
{
// error checking omitted
var request = FtpWebRequest.Create(fileUri);
request.Method = WebRequestMethods.Ftp.GetDateTimestamp;
request.Credentials = credentials;
using (var response = (FtpWebResponse)request.GetResponse())
return response.LastModified;
}
Now you can simply do the following to get a list of files with their last modification date.
var credentials = new NetworkCredential("Igor", "");
var filesAndDates = ListDirectory("ftp://192.168.47.1/DocXML", credentials)
.Select(fileName => new {
FileName = fileName,
LastModified = GetLastModified("ftp://192.168.47.1/DocXML/" + fileName, credentials)
})
.ToList();
// find the oldest entry.
var oldest = filesAndDates.OrderBy(x => x.LastModified).FirstOrDefault();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30280333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: is SICP still recommended? I have some experience with Python. I asked for a new language and said that I am having a hard time implementing what I have learned. They suggested I learn SICP, saying it uses a great language and teaches great programming fundamentals.
But I notice it was published in 1984. Do you guys recommend it, or have I been trolled? :p
Thanks.
A: Yes, SICP is still a great book! The second edition, from 1996, is available online. Although, if you just want to learn Scheme instead of fundamental computer science, you might be better off with Teach Yourself Scheme in Fixnum Days.
A: I strongly encourage you to check out the book How to Design Programs. It focuses on the fundamentals of programming, not on the specific language, but it also uses Scheme as its language. It's also available free online.
You can also check out the current release of the second edition, which is in preparation (or the less-stable but more up-to-date current draft).
A: Firstly, you're loooking at the first edition. The second edition is from 1996.
You should VERY MUCH tackle the book. I've gone through about half and my mind is blown. I can't begin to explain how amazing it is. Not only will you develop an appreciation for elegance in programming, but you'll see the line blurred between coding and computer science.
Don't approach this book like a programming book. Approach it as if you want to learn the fundamentals of computation and computer science using programming as a means of expression.
A: SICP is one of the best books I've read for learning how to write programs well. I never used scheme outside of the work I did in that book, but it's well worth your time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7431861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Normalize samples with ffmpeg I am writing an application for music analysis. I wrote a resampling module relying on ffmpeg. Currently, I have AV_SAMPLE_FMT_S16 but later I have to convert to float, which can be time consuming.
Because I need the samples to be in some reasonable interval, I need to do some kind of normalization for AV_SAMPLE_FMT_FLT samples.
So, how can I normalize the samples I get when I select AV_SAMPLE_FMT_FLT? The ideal interval would be -n to n, where n is greater than or equal to 1.f.
A: Given that AV_SAMPLE_FMT_FLT is already normalised to the -1 .. 1 range, we can multiply each sample by your 'n' value to have it scaled between -n .. n
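A minimal sketch of that scaling for interleaved AV_SAMPLE_FMT_FLT data (the buffer layout is an assumption; planar formats would need one pass per channel plane):
/* scale samples from [-1, 1] to [-n, n] */
static void scale_float_samples(float *samples, int nb_samples, int channels, float n)
{
    for (int i = 0; i < nb_samples * channels; i++)
        samples[i] *= n;
}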
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43213388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: error on string literal jsp So I want to create charts from JSP using FusionCharts. I chose FusionCharts over JFree, canvasjs or any other because I have experience using it in PHP. It should be no problem in JSP, because JSP and PHP work in much the same way, mapping over HTML, with the main difference being their syntax. However, when I ran the code, it returned a "String literal is not properly closed by a double-quote" error on the SQL statement. Please help me, because I am a beginner to the JSP and Java environment. Thank you.
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@page import="java.sql.*" %>
<%@page import="java.util.*" %>
<%@page import="com.google.gson.*" %>
<%@page import="readConfig.readConfig" %>
<%
String hostdb = readConfig.getProperties("conUrl"); // MySQl host
String userdb = readConfig.getProperties("dbUser"); // MySQL username
String passdb = readConfig.getProperties("dbPass"); // MySQL password
String driver = readConfig.getProperties("dbDriver"); // MySQL driver
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
Connection con = DriverManager.getConnection(hostdb , userdb , passdb);
%>
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Creating Charts with Data from a Database - fusioncharts.com</title>
<script src="vendor/fusioncharts/fusioncharts.js"></script>
</head>
<body>
<div id="chart"></div>
<%@page import="fusioncharts.FusionCharts" %>
<%
Gson gson = new Gson();
String sql= "SELECT m.month,IFNULL(x.cnt, 0) AS cnt FROM
(SELECT 1 AS month UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10 UNION SELECT 11 UNION SELECT 12) AS m
LEFT JOIN (SELECT DATE_FORMAT(date, '%c') as month, COUNT(*) AS cnt FROM ssl_sales where YEAR(date)=2017 AND status_ssl='new' GROUP BY month) AS x ON m.month = x.month
ORDER BY m.month DESC";
PreparedStatement pt=con.prepareStatement(sql);
ResultSet rs=pt.executeQuery();
Map<String, String> chartobj = new HashMap<String, String>();
chartobj.put("caption", "Top 10 most populous countries");
chartobj.put("showValues", "0");
chartobj.put("theme", "zune");
ArrayList arrData = new ArrayList();
while(rs.next()) {
Map<String, String> lv = new HashMap<String, String>();
lv.put("label", rs.getString("Monthly"));
lv.put("value", rs.getString("New Demand"));
arrData.add(lv);
}
rs.close();
Map<String, String> dataMap = new LinkedHashMap<String, String>();
dataMap.put("chart", gson.toJson(chartobj));
dataMap.put("data", gson.toJson(arrData));
FusionCharts columnChart= new FusionCharts(
"column2d",
"chart1",
"500","300",
"chart",
"json",
gson.toJson(dataMap)
);
%>
<%=columnChart.render()%>
</body>
error on eclipse terminal
Mar 27, 2018 2:07:17 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [jsp] in context with path [/registration] threw exception [Unable to compile class for JSP:
An error occurred at line: 78 in the jsp file: /dashboard.jsp
String literal is not properly closed by a double-quote
75:
76:
77: // Form the SQL query that returns the number of sales in 2017
78: String sql= "SELECT m.month, IFNULL(x.cnt, 0) AS cnt FROM
79: (SELECT 1 AS month UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10 UNION SELECT 11 UNION SELECT 12) AS m
80: LEFT JOIN (SELECT DATE_FORMAT(date, '%c') as month, COUNT(*) AS cnt FROM ssl_sales where YEAR(date)=2017 AND status_ssl='new' GROUP BY month) AS x ON m.month = x.month
81: ORDER BY m.month DESC";
Stacktrace:] with root cause
org.apache.jasper.JasperException: Unable to compile class for JSP:
An error occurred at line: 78 in the jsp file: /dashboard.jsp
String literal is not properly closed by a double-quote
75:
76:
77: // Form the SQL query that returns the number of sales in 2017
78: String sql= "SELECT m.month, IFNULL(x.cnt, 0) AS cnt FROM
79: (SELECT 1 AS month UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10 UNION SELECT 11 UNION SELECT 12) AS m
80: LEFT JOIN (SELECT DATE_FORMAT(date, '%c') as month, COUNT(*) AS cnt FROM ssl_sales where YEAR(date)=2017 AND status_ssl='new' GROUP BY month) AS x ON m.month = x.month
81: ORDER BY m.month DESC";
A: In Java, string literals cannot span more than one line, so you can concatenate using '+' like this:
String sql= "SELECT m.month,IFNULL(x.cnt, 0) AS cnt FROM "+
" (SELECT 1 AS month UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10 UNION SELECT 11 UNION SELECT 12) AS m "+
" LEFT JOIN (SELECT DATE_FORMAT(date, '%c') as month, COUNT(*) AS cnt FROM ssl_sales where YEAR(date)=2017 AND status_ssl='new' GROUP BY month) AS x ON m.month = x.month "+
" ORDER BY m.month DESC";
or write it in one line
String sql="SELECT m.month,IFNULL(x.cnt, 0) AS cnt FROM (SELECT 1 AS month UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10 UNION SELECT 11 UNION SELECT 12) AS m LEFT JOIN (SELECT DATE_FORMAT(date, '%c') as month, COUNT(*) AS cnt FROM ssl_sales where YEAR(date)=2017 AND status_ssl='new' GROUP BY month) AS x ON m.month = x.month ORDER BY m.month DESC";
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49504406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Python: Modify the internal behavior of a function by using decorator I'm learning to use Python decorators.
def my_dcrtr(fun):
def new_fun():
return fun()
return new_fun
I realize the decorated function 'fun' acts like a black box inside the decorator. I can choose to use fun() or not at all inside new_fun. However, I don't know whether I can break into 'fun' and interact with fun's local scope inside new_fun.
e.g. I'm trying to make a toy Remote Procedure Call (RPC) with python.
def return_locals_rpc_decorator(fun):
def decorated_fun(*args, **kw):
local_args = fun(*args, **kw)
# pickle the local_args and send it to server
# server unpickle and doing the RPC
# fetch back server results and unpickle to results
return rpc_results
return decorated_fun
@return_locals_rpc_decorator
def rpc_fun(a, b, c=3):
return locals() # This looks weird. how can I make this part of the decorator?
print(rpc_fun(2, 1, 6))
In this example, I try to get rpc_fun's argument list at runtime with the locals() call, then send it to the server to execute. Instead of letting rpc_fun return its locals(), is it possible to use the decorator to retrieve the decorated function's argument space?
A: You can use function annotations for Python3:
def return_locals_rpc_decorator(fun):
def decorated_fun(*args, **kw):
local_args = fun(*args, **kw)
print(local_args)
fun_parameters = fun.__annotations__
final_parameters = {a:list(args)[int(b[-1])-1] for a, b in fun_parameters.items() if a != 'return'}
return final_parameters
return decorated_fun
@return_locals_rpc_decorator
def my_funct(a:"val1", b:"val2", c:"val3") -> int:
return a + b + c
print(my_funct(10, 20, 30))
Output:
60
{'a': 10, 'b': 20, 'c': 30}
In this way, you are using the wrapper function decorated_fun to access the decorated function's parameters and further information specified by the annotation. I changed the parameter descriptions in the annotations so that each string value would end in a digit that could be used to index args. However, if you do not want to change the parameter descriptions in the annotations, you can sort via ending character.
Edit: the code in the body of my_funct is executed when called in the wrapper function (decorated_fun), since the args, declared in the scope of decorated_fun is passed to and unpacked in local_args.
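As an alternative sketch (not part of the answer above), inspect.signature can bind *args/**kw to parameter names inside the decorator without relying on annotations at all:
import inspect
from functools import wraps

def capture_args_decorator(fun):
    sig = inspect.signature(fun)

    @wraps(fun)
    def decorated_fun(*args, **kw):
        bound = sig.bind(*args, **kw)
        bound.apply_defaults()
        # bound.arguments maps parameter names to this call's values,
        # which is what the question wanted to pickle and send to a server
        print(bound.arguments)
        return fun(*args, **kw)
    return decorated_fun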
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47625767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Balance between use of Ajax on $.ready() and loading data already I have a page which has several independent sections. Data is to be populated in each section by different queries to the database.
In order to lower the page load time, I have ajaxified the loading of each section (following the Amazon.com philosophy).
In order to load each section of the page, I make an ajax call on $.ready() method in my page which in turn fetches data from the database.
In all 6 requests are made to the (same) server which completely generate the sections.
Now, I'm not sure if I am overburdening the server by making 6 requests each time the page is requested. Any suggestions?
(I use Struts/Hibernate/Jsp/jQuery)
A: You can always queue your ajax calls so they happen 1 at a time. There are jquery plugins to help with this:
http://www.protofunc.com/scripts/jquery/ajaxManager/
To me however, 6 calls does not seem too bad, as long as they don't generate a lot of server activity (hard db queries, file handling, etc...)
A: Alternatively, you can start all your queries as soon as the page is requested, and save the results for the ajax calls. This needs asynchronous execution on the server, and you need to make sure that the ajax response program can wait for the sql completion too.
If you query the same data in several of your 6 queries, this might be worth it. If tables are different for each, this might be really counterproductive (depending on your database engine, and cluster architecture).
However, if database stress and response time is a concern, caching techniques and other optimisations usually work better (memcached, for example). But again, this depends strongly on your database engine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8708115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Constructor parameter in configuration class required a bean of type 'StratusAuthenticationEntryPoint' that could not be found Parameter 0 of constructor in ResourceServerConfiguration required a bean of type 'StratusAuthenticationEntryPoint' that could not be found.
I am using spring boot 2.6.6
Here is the code:
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
@Order(100)
//@Import({ApiPermissionEvaluator.class})
public class ResourceServerConfiguration extends WebSecurityConfigurerAdapter {
private final StratusAuthenticationEntryPoint securityAuthenticationEntryPoint;
public ResourceServerConfiguration(StratusAuthenticationEntryPoint securityAuthenticationEntryPoint) {
super();
this.securityAuthenticationEntryPoint = securityAuthenticationEntryPoint;
}
}
error message:
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of constructor in com.stratus.security.config.ResourceServerConfiguration required a bean of type 'com.stratus.security.config.StratusAuthenticationEntryPoint' that c
ould not be found.
Action:
Consider defining a bean of type 'com.stratus.security.config.StratusAuthenticationEntryPoint' in your configuration.
A: Two things to check (a sketch of the first follows below):
* Add @Component to the StratusAuthenticationEntryPoint class so that a bean is created by the Spring IoC container.
* Verify that the component-scan path covers the package containing StratusAuthenticationEntryPoint.
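A minimal sketch of the first point (the commence body is illustrative, and it assumes the class implements Spring Security's AuthenticationEntryPoint, since its actual contents are not shown in the question):
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.AuthenticationEntryPoint;
import org.springframework.stereotype.Component;

@Component
public class StratusAuthenticationEntryPoint implements AuthenticationEntryPoint {

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response,
                         AuthenticationException authException) throws IOException {
        // reject unauthenticated requests with a 401
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED, authException.getMessage());
    }
}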
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72137263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Maven build war doesn't work properly on Tomcat 7 I've been working on a web app, and here's my configuration before I get started telling my story.
Configuration: IntelliJ IDEA 11 + Java 7 + (Maven + Jetty 8.0.1).
When I clean and build my app on intelliJ, everything works as expected and nothing goes wrong; pages load, rendering are perfectly fine. Then I use -mvn clean package command to build my project so I'll test my app on Tomcat 7.
However, when I deploy my application to Tomcat, some pages aren't as I saw them on my Maven build: rendering doesn't work properly, styling is a little bit degraded, and so on...
Some pages don't even load fully when I check with Firebug.
I hope someone has encountered a similar issue; this thing is driving me crazy... If you need any logs or anything, just name it.
Thank you.
A: Apparently, there are always nuances that you miss sometimes, and browsers keep extensive data about the pages you browse. I've just cleared all of my browser caches and now everything works perfectly! So if you encounter something similar, be sure to clear your browser cache.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13446101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I determine the start and end of instructions in an object file? So, I've been trying to write an emulator, or at least understand how stuff works. I have a decent grasp of assembly, particularly z80 and x86, but I've never really understood how an object file (or in my case, a .gb ROM file) indicates the start and end of an instruction.
I'm trying to parse out the opcode for each instruction, but it occurred to me that it's not like there's a line break after every instruction. So how does this happen? To me, it just looks like a bunch of bytes, with no way to tell the difference between an opcode and its operands.
A: For most CPUs - and I believe Z80 falls in this category - the length of an instruction is implicit.
That is, you must decode the instruction in order to figure out how long it is.
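A minimal sketch (a handful of real GB/Z80 opcodes, not a full table) of what "the length is implicit in the opcode" means in practice:
/* You only know how many operand bytes follow once you have decoded the opcode itself. */
#include <stdint.h>

int instruction_length(const uint8_t *mem, uint16_t pc) {
    uint8_t opcode = mem[pc];
    switch (opcode) {
        case 0x00: return 1;   /* NOP      - no operands          */
        case 0x3E: return 2;   /* LD A, n  - one immediate byte   */
        case 0xC3: return 3;   /* JP nn    - two address bytes    */
        case 0xCB: return 2;   /* CB xx    - prefixed second byte */
        default:   return 1;   /* ...and so on for the full table */
    }
}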
A: If you're writing an emulator you don't really ever need to be able to obtain a full disassembly. You know what the program counter is now, you know whether you're expecting a fresh opcode, an address, a CB page opcode or whatever and you just deal with it. What people end up writing, in effect, is usually a per-opcode recursive descent parser.
To get to a full disassembler, most people impute some mild simulation, recursively tracking flow. Instructions are found, data is then left by deduction.
Not so much on the GB where storage was plentiful (by comparison) and piracy had a physical barrier, but on other platforms it was reasonably common to save space or to effect disassembly-proof code by writing code where a branch into the middle of an opcode would create a multiplexed second stream of operations, or where the same thing might be achieved by suddenly reusing valid data as valid code. One of Orlando's 6502 efforts even re-used some of the loader text — regular ASCII — as decrypting code. That sort of stuff is very hard to crack because there's no simple assembly for it and a disassembler therefore usually won't be able to figure out what to do heuristically. Conversely, on a suitably accurate emulator such code should just work exactly as it did originally.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24945174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What does QFE_Richmond actually mean? I am developing a VBA add-in for Excel that uses the RefEdit control.
One of my testers pointed out that he couldn't use keyboard shortcuts while selecting cells. And I found the solution to this problem here: http://support.microsoft.com/kb/291110
Set the magical value QFE_Richmond to 1 in the Excel section of HKEY_CURRENT_USER.
This solution works great.
My question is why?
What is the significance of the "QFE_Richmond" variable? Where did it come from? Why do you need this obscure flag to fix a simple glitch that has persisted at least through Excel 2010 and at least as far back as 2003? Does this flag do anything else?
And is it safe to automatically make this change for the users of my add-in, even though it globally affects their Excel settings?
A: The obvious answer seems to be that they either forgot to apply it in each version or they don't consider it important enough to make the default, because it sits on the border between a bug and a usability preference and it has an easy workaround (i.e. using the GUI instead of shortcuts). I wouldn't think applying the hotfix would hurt anything - they wouldn't make it available if that were the case.
Changing the QFE_Richmond registry key to 1 is a way to enable a hotfix.
http://support.microsoft.com/?kbid=291110
"Typically, hotfixes are made to address a specific customer situation and may not be distributed outside the customer organization."
In addition, the RefEdit control seems to have alternatives:
http://peltiertech.com/WordPress/refedit-control-alternative/
Which have been recommended because the RefEdit control has compatibility issues:
http://peltiertech.com/WordPress/unspecified-painfully-frustrating-error/
So you could probably presume that MS has some gaps in their quality control for the RefEdit feature.
Good Luck.
EDIT/ADDITION:
By the way,
QFE stands for Quick Fix Engineering
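If you do decide to set it from your add-in, a hedged VBA sketch is below. The exact subkey (shown here for Excel 2010, 14.0, under the Options key) is an assumption; verify the precise path and value against KB 291110 for the Office versions you support before shipping this.
' Writes the QFE_Richmond DWORD for the current user (hypothetical path - confirm against KB 291110)
Sub EnableRefEditHotfix()
    Dim shell As Object
    Set shell = CreateObject("WScript.Shell")
    shell.RegWrite "HKCU\Software\Microsoft\Office\14.0\Excel\Options\QFE_Richmond", 1, "REG_DWORD"
End Sub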
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12428816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Assign a new column in pandas in a similar way as in pyspark I have the following dataframe:
df = pd.DataFrame([['A', 1],['B', 2],['C', 3]], columns=['index', 'result'])
index  result
A      1
B      2
C      3
I would like to create a new column, for example multiply the column 'result' by two, and I am just curious to know if there is a way to do it in pandas as pyspark does it.
In pyspark:
df = df\
.withColumn("result_multiplied", F.col("result")*2)
I don't like having to write the name of the dataframe every time I have to perform an operation, as is done in pandas, such as:
In pandas:
df['result_multiplied'] = df['result']*2
A: Use DataFrame.assign:
df = df.assign(result_multiplied = df['result']*2)
Or, if column result is processed earlier in the code, a lambda function is necessary so that the already-processed values of column result are used:
df = df.assign(result_multiplied = lambda x: x['result']*2)
Sample to see the difference: column result_multiplied is computed by multiplying the original df['result'], while result_multiplied1 uses the multiplied column after mul(2):
df = df.mul(2).assign(result_multiplied = df['result']*2,
result_multiplied1 = lambda x: x['result']*2)
print (df)
index result result_multiplied result_multiplied1
0 AA 2 2 4
1 BB 4 4 8
2 CC 6 6 12
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66967545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Python dictionary printing first word behaviour I'm learning python and am on the topic of dictionaries. I wrote the following dictionary:
Team={
('projectmanager','Asma'): 'Sara',
('ba','Richard'): 'Steve',
('tester','Asma'): 'Rob',
'developer1': 'Misbah',
'developer2': 'Mariam'
}
I then wrote the following code:
for k,v in Team.items():
profile=type(k)
print('engTeam => {1} {0}'.format(k[0][0],profile))
The output I get is:
Team => <class 'str'> e
Team => <class 'str'> d
Team => <class 'tuple'> i
Team => <class 'str'> m
Team => <class 'tuple'> b
Team => <class 'str'> s
Team => <class 'tuple'> p
Team => <class 'str'> e
Team => <class 'str'> a
I don't understand why the first character of all non-tuple entries are being printed. If I think about it k[0][0] in my mind means get me the first element of the dictionary then the first sub element. But the non-tuple words don't have a sub element so the output should be blank, shouldn't it? Also k[0][0] should be printing the whole first word in the tuple e.g. 'projectmanager' instead of the first character of the first tuple word. What am I missing in understanding what k[0][0] means and what it is doing?
A:
If I think about it k[0][0] in my mind means get me the first element of the dictionary then the first sub element.
No, k is the key of a given key-value pair. You are iterating over the items, which are those pairs:
for k,v in Team.items():
Each key-value pair is assigned to the names k and v there.
Given that you have two different types of keys in your dictionary, strings and tuples, your type() information shows you exactly that; you print a series of <class 'str'> and <class 'type'> for those keys.
So if k is a tuple, then k[0] is the first element in that tuple and k[0][0] is the first character of that first element:
>>> k = ('projectmanager', 'Asma')
>>> type(k)
<class 'tuple'>
>>> k[0]
'projectmanager'
>>> k[0][0]
'p'
For strings, k[0] would be the first character. But a single character is a string too. A string of length 1, so getting the first element of that string is still a string, again of length 1:
>>> k = 'developer1'
>>> type(k)
<class 'str'>
>>> k[0]
'd'
>>> type(k[0])
<class 'str'>
>>> len(k[0])
1
>>> k[0][0]
'd'
You wouldn't get an empty value here.
A: Consider,
First case: the key k is a tuple, for instance ('projectmanager','Asma'). Hence k[0][0] will obviously print out p.
Second case (the trickier one): when k is a string, k[0] is the first element of the string and is hence a string of length 1. Doing
k[0][0] accesses the first element of k[0], which is in turn the same value, because k[0] is a string of length 1. For example, when k is 'developer1', k[0] is 'd' and k[0][0] is hence 'd'.
Hope this helps, good luck
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43027477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to detect if a particular command has executed in terminal I need to detect if a particular command has run. For example, if I do a git push, I need to execute a build script on my Fedora machine. Is there any way to do this? We can use any facility in Linux. Any help will be greatly appreciated.
A: You can try using an "alias"; try this link: http://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html
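A plain alias can't easily inspect its arguments, so in practice this is usually done with a shell function that wraps the command; a sketch (the build script path is a placeholder):
# ~/.bashrc - run a build script after every successful "git push"
git() {
    command git "$@"          # call the real git, not this wrapper
    local rc=$?
    if [ "$1" = "push" ] && [ "$rc" -eq 0 ]; then
        /path/to/build_script.sh
    fi
    return "$rc"
}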
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37112531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: pkg-config file openmp dependency I want to write a pkg-config for a library that uses openmp internally.
My .pc file reads
prefix=/usr/local
exec_prefix=/usr/local
libdir=${exec_prefix}/lib
includedir=${prefix}/include
Name: LightFEM
Description:
Version: 1.0.0
Requires: openmp
Libs: -L${libdir} -lLightFEM
Cflags: -I${includedir}
However pkg-config returns the following error
Package openmp was not found in the pkg-config search path.
Perhaps you should add the directory containing `openmp.pc'
to the PKG_CONFIG_PATH environment variable
Package 'openmp', required by 'LightFEM', not found
How sould I write my .pc file ?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71926507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Google Spreadsheet node.js move a row or sort table Edit: I saw that I am not using the official google api. This is a package based on v3 of the google api. Here is the npm that I currently use.
I saw that google api v4 exists and has many more features than the npm package I currently use, would it be possible with the official v4 api? What would I need to change to use this api?
Is it possible to move a row to a specific row (number)?
If not, can I custom sort the entire table again?
This is how I access the table:
const GoogleSpreadsheet = require("google-spreadsheet");
const { promisify } = require("util");
const creds = require("./client_secret.json");
const doc = new GoogleSpreadsheet("XXXXXXXXXXXXXXXX");
const tt = promisify(doc.useServiceAccountAuth)(creds);
const info = await promisify(doc.getInfo)();
const sheet = info.worksheets[0];
And now I want to edit/update a row, and because the order should be different if I change the cell of the column my table is sorted by, I want to move the row somewhere else. This is how I modify a row:
var row = await promisify(sheet.getRows)({
offset: 1,
query: 'id = ' + userId
});
//This row has now a different value and should have a different row number
row[0].role= "value";
//This will update the row, but only on its current position.
promisify(row[0].save)();
Pseudo code (how it should look like, row[0] has no attribute that indicates the position):
//row[0] role was "" and now "member", the row should be where the other member rows are
row[0].index = IndexOf(AllRows.where(x => x.role == "member")) + 1
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57201334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I change the width of a ScrollBar? I would like to change a TFrame's ScrollingBar width.
I know I could change all ScrollingBars in the system by:
SystemParametersInfo(SPI_SETNONCLIENTMETRICS,....
But how do I do it for a specific WinControl?
A: A lot of the code within Delphi depends on the scrollbar width being the fixed system setting, so you can't alter the width without breaking the control. (Not without rewriting the TControlScrollBar and related controls in the VCL.)
You could, of course, hide the default scrollbars of the control and add your own TScrollbar components next to it. The standard TScrollBar class is a WinControl itself, where the scrollbar takes up the whole width and height of the control. The TControlScrollBar class is linked to another WinControl to manage the default scrollbars that are assigned to windowed controls. While the raw API could make it possible to use a more flexible width, you'd always have the problem that the VCL will just assume the default system-defined width for these controls.
This also shows the biggest difference between both scrollbar types: TScrollBar has it's own Windows handle while TControlScrollBar borrows it from the related control.
A: You can try something like this:
your_frame.HorzScrollBar.Size := 50;
your_frame.HorzScrollBar.ButtonSize := your_frame.HorzScrollBar.Size;
A: procedure TForm1.FormCreate(Sender: TObject);
var NCMet: TNonClientMetrics;
begin
FillChar(NCMet, SizeOf(NCMet), 0);
NCMet.cbSize:=SizeOf(NCMet);
// get the current metrics
SystemParametersInfo(SPI_GETNONCLIENTMETRICS, SizeOf(NCMet), @NCMet, 0);
// set the new metrics
NCMet.iScrollWidth:=50;
SystemParametersInfo(SPI_SETNONCLIENTMETRICS, SizeOf(NCMet), @NCMet, SPIF_SENDCHANGE);
end;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1432704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: NullPointerException in Silk4J Object Map Editor when pressing the Tab key After creating a new Silk4J Object Map entry manually (right click, "Insert new") and entering the item name, I press the Tab key to move to the Locator path input field.
In this case, a series of "Object not set to an instance of an object" error messages appear. Eventually, Eclipse crashes. Doing some debugging, I found out that Eclipse crashes due to a StackOverflowException.
I can move to the locator path column using the mouse, but since I'm used to doing things by keyboard, I'd really like to find a fix. How can I make the Tab key work as expected?
I am using Silk4J 16 Hotfix 2.
A: Doing some more debugging, I found out that Silk4J for Eclipse (Java) actually uses a WPF user interface (.NET).
While preinstalled by Windows, I never needed .NET on my machine, so I never installed any updates for it.
Installing the latest .NET updates, the problem was gone. In my case I updated to .NET 4.5.2.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30995413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: functional usage of IDS Camera (pyueye) I am using IDS camera functions which I need to capture some images whenever the camera receives an external trigger.
"ueye.is_SetExternalTrigger(hcam, ueye.IS_SET_TRIGGER_HI_LO)"
I am able to connect to the camera while it is triggering, but I could not save the images to my local memory. The documentation explains that images are saved to a memory buffer on every trigger change (HI_LO/LO_HI, etc.).
Can anybody explain how to use the "ueye.is_WaitForNextImage" and "ueye.is_CameraStatus" functions?
Does anybody have experience with these modules? Please help me use them.
I am defining the memory pointer as just "mem_ptr = ueye.c_mem_p()". Does anyone know how to define a specific memory pointer in a particular way, and how to access it?
Thank You.
I have tried this one and some other ways also, but I cannot write everything here.
hCam = ueye.HIDS(0) #0: first available camera; 1-254: The camera with the specified camera ID
sInfo = ueye.SENSORINFO()
cInfo = ueye.CAMINFO()
pcImageMemory = ueye.c_mem_p()
MemID = ueye.int()
rectAOI = ueye.IS_RECT()
pitch = ueye.INT()
nBitsPerPixel = ueye.INT(24) #24: bits per pixel for color mode; take 8 bits per pixel for monochrome
channels = 3 #3: channels for color mode(RGB); take 1 channel for monochrome
m_nColorMode = ueye.INT() # Y8/RGB16/RGB24/REG32
bytes_per_pixel = int(nBitsPerPixel / 8)
nRet = ueye.is_InitCamera(hCam, None)
nRet = ueye.is_SetExternalTrigger(hCam, ueye.IS_SET_TRIGGER_HI_LO)
nRet = ueye.is_SetDisplayMode(hCam, ueye.IS_SET_DM_DIB)
nRet = ueye.is_AOI(hCam, ueye.IS_AOI_IMAGE_GET_AOI, rectAOI, ueye.sizeof(rectAOI))
width = rectAOI.s32Width
height = rectAOI.s32Height
nRet = ueye.is_AllocImageMem(hCam, width, height, nBitsPerPixel, pcImageMemory, MemID)
nRet = ueye.is_CaptureVideo(hCam, ueye.IS_DONT_WAIT)
d=0
while(nRet == ueye.IS_SUCCESS):
ueye.is_WaitForNextImage(hCam, 500, pcImageMemory, MemID)
array = ueye.get_data(pcImageMemory, width, height, nBitsPerPixel, pitch, copy=False)
frame = np.reshape(array,(height.value, width.value, bytes_per_pixel))
frame = cv2.resize(frame,(0,0),fx=0.5, fy=0.5)
cv2.imshow("SimpleLive_Python_uEye_OpenCV", frame)
filename = "images/file_%d.jpg"%d
cv2.imwrite(filename, img)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cv2.destroyAllWindows()
ret = ueye.is_StopLiveVideo(hcam, ueye.IS_FORCE_VIDEO_STOP)
ret = ueye.is_ExitCamera(hcam)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75309908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Angular Service to Express API Route + Posting Data I am in the process of converting one of my sites (http://maskedarmory.com) from LAMP (using Laravel 4 MVC) over to the MEAN stack and it has been quite a journey thus far.
I have managed to get the landing page up and running and the input POSTing to the Angular controller that I have it routed to.
Now, the problem I am having is getting the service to send the POSTed data from Angular over to the Express API. I keep getting a 404 Not Found error on the /api/character URL path.
Also, how do I access the 'characterData' variable that is on the Angular side and is being passed over by the factory? I am trying to do a console.log on the 'characterData' variable on the server side, and I am sure that it is out of scope.
public/js/services/ArmoryService.js (Angular service)
// public/js/services/ArmoryService.js
angular.module('ArmoryService', []).
factory('Armory', function($http) {
return {
// Get the specified character by its profile ID.
get : function(id) {
return $http.get('/api/character/' + id);
},
// call to POST and create a new character armory.
create : function(characterData) {
return $http.post('/api/character', characterData);
}
}
});
app/routes.js (Express Routing)
module.exports = function(app) {
app.post('/api/character', function(req, res) {
console.log(characterData);
});
app.get('/', function(req, res) {
res.sendfile('./public/index.html'); // load our public/index.html file
});
};
If I do a console.log before the $http.post to the API, 'characterData' has all of the information it should.
I am sure that this is a routing issue of some type, but I will be damned if I can figure it out.
Thanks in advance for your help!
A: Try this:
app.post('/api/character', function(req, res) {
console.log(JSON.stringify(req.body));
res.status(200).send('whatever you want to send back to angular side');
});
app.get('/api/character/:id', function(req, res) {
console.log(req.params.id);
res.status(200).send('whatever you want to send back to angular side'');
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25101735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Windows push notification service using Pushsharp giving Notification Failure var push = new PushBroker();
push.OnNotificationSent += NotificationSent;
push.OnChannelException += ChannelException;
push.OnServiceException += ServiceException;
push.OnNotificationFailed += NotificationFailed;
push.OnDeviceSubscriptionExpired += DeviceSubscriptionExpired;
push.OnDeviceSubscriptionChanged += DeviceSubscriptionChanged;
push.OnChannelCreated += ChannelCreated;
push.OnChannelDestroyed += ChannelDestroyed;
push.RegisterWindowsPhoneService();
push.QueueNotification(new WindowsPhoneToastNotification()
.ForEndpointUri(new Uri(uri))
.ForOSVersion(WindowsPhoneDeviceOSVersion.Eight)
.WithBatchingInterval(BatchingInterval.Immediate)
.WithNavigatePath("/LandingView.xaml")
.WithText1("PushSharp")
.WithText2("This is a Toast"));
push.StopAllServices();
I am using the PushSharp NuGet package for push notifications, and while passing the URI to this C# backend code for Windows, I am getting a notification failure exception.
A: I am using the latest version of PushSharp (version 3.0) in a project of mine to send toast notifications to Windows Phone devices, and it is working fine for me. I notice from the code you have above that you are using an older version of the PushSharp package; there is a new 3.0 version available from NuGet.
You could use that latest package to send toast notifications to Windows Phone devices. The latest version of PushSharp uses WNS as opposed to the old MPNS.
If you go to that NuGet link I supplied above and download the solution, you can see some examples of how to implement push notifications for Windows Phone using WNS. Look under the PushSharp.Test project (look for the WNSRealTest.cs file).
Below is an example of how you can send a toast notification to windows phone device:
var config = new WnsConfiguration(
"Your-WnsPackageNameProperty",
"Your-WnsPackageSid",
"Your-WnsClientSecret"
);
var broker = new WnsServiceBroker(config);
broker.OnNotificationFailed += (notification, exception) =>
{
//you could do something here
};
broker.OnNotificationSucceeded += (notification) =>
{
//you could do something here
};
broker.Start();
broker.QueueNotification(new WnsToastNotification
{
ChannelUri = "Your device Channel URI",
Payload = XElement.Parse(string.Format(@"
<toast>
<visual>
<binding template=""ToastText02"">
<text id=""1"">{0}</text>
<text id=""2"">{1}</text>
</binding>
</visual>
</toast>
","Your Header","Your Toast Message"))
});
broker.Stop();
As you may notice above, the WnsConfiguration constructor requires a Package Name, Package SID, and a Client Secret. To get these values your app must be registered with the Store Dashboard. This will provide you with credentials for your app that your cloud service will use when authenticating with WNS. You can check steps 1-3 on the following MSDN page for details on how to get this done. (Note: the link above states that you have to edit your appManifest.xml file with the identity of your app; I did not do this step. Just make sure your Windows Phone app is set up correctly to receive toast notifications; this blog post will help with that.)
Hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31781924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Replace Wordpress text with custom text I created a Wordpress site and I want to replace every occurrence of the word Wordpress example:
Wordpress => MySite
Is there any plugin to do what I want, or any function that I should create? Or should I replace all the words manually?
A: There's a Wordpress Search and Replace Tool that will do what you need.
If you download the PHP script from the link, place it in the root of your site and then navigate to it, it will find and replace any term in the site database and/or files.
Obviously make sure you remove it after you're done, and it's better to rename it to something less obvious before you upload it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18634221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How to access Build properties on waterfall page inside buildbot stages? How can I access the build properties like event.change.id etc that are displayed on buildbot waterfall page, inside one of the buildbot stages, cbuildbot_stages.py?
-Pratibha
A: Buildbot offers a mechanism to access those properties. It is described in http://docs.buildbot.net/current/manual/cfg-properties.html#using-properties-in-steps
You will need to use the Interpolate class to get the value that you are looking for: for example, Interpolate('%(prop:event.change.id)s').
Please note the introduction section that describes possible mistakes people make when they start using this functionality.
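A hedged master.cfg sketch of using it in a step (the imports are the buildbot 0.8.x locations; factory is assumed to be a BuildFactory defined elsewhere in your config):
from buildbot.process.properties import Interpolate
from buildbot.steps.shell import ShellCommand

# Echo the gerrit change id as part of a build step
factory.addStep(ShellCommand(
    command=["echo", Interpolate("gerrit change: %(prop:event.change.id)s")],
    description="print change id"))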
A: What and where is this file cbuildbot_stages.py ?
You could override the default buildbot behavior with what you want in your buildbot config file. Standard OOP practice.
A: Well I am using buildbot to build chromium os. cbuildbot_stages.py script is in /chromite/buildbot/ directory.
I want to access the gerrit change id in buildbot stages.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22545177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: SQLite Syntax error while compiling: INSERT INTO I'm getting this RuntimeException when executing an AsyncTask:
Caused by: android.database.sqlite.SQLiteException: near ",": syntax error: , while compiling: INSERT INTO 'infrastructure' (lift,name,type,status,_id) VALUES ('2130837612','none','-','2130837600',0),('2130837612','none','-','2130837600',1),('2130837612','none','-','2130837600',2),('2130837612','none','-','2130837600',3),('2130837612','none','-','2130837600',4),('2130837612','none','-','2130837600',5)
All columns except _id are "text", _id is an integer and primary key.
This is where it crashes:
Cursor curtsr = db.rawQuery("SELECT COUNT(*) FROM 'Infrastructure'", null);
if (curtsr != null) {
curtsr.moveToFirst(); // Always one row returned.
if (curtsr.getInt(0) == 0) { // Zero count means empty table.
String INSERT_INFRA_VALUES = "INSERT INTO 'Infrastructure' (lift,name,type,status,_id) VALUES ('2130837612','none','-','2130837600',0),('2130837612','none','-','2130837600',1),('2130837612','none','-','2130837600',2),('2130837612','none','-','2130837600',3),('2130837612','none','-','2130837600',4),('2130837612','none','-','2130837600',5)";
db.execSQL(INSERT_INFRA_VALUES);
}
curtsr.close();
}
I can't find the reason why it's crashing.
Online SQLite lint tool https://sqliteonline.com/ isn't throwing any errors.
A: Comma-separated multi-row VALUES inserts were only introduced in sqlite 3.7.11, and chances are you are running on a device with an older sqlite version.
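If you need to support devices with older sqlite versions, two equivalent workarounds (shown with the first two rows only):
-- one statement per row
INSERT INTO Infrastructure (lift,name,type,status,_id) VALUES ('2130837612','none','-','2130837600',0);
INSERT INTO Infrastructure (lift,name,type,status,_id) VALUES ('2130837612','none','-','2130837600',1);
-- or a single statement built with UNION ALL, which old versions accept
INSERT INTO Infrastructure (lift,name,type,status,_id)
SELECT '2130837612','none','-','2130837600',0
UNION ALL SELECT '2130837612','none','-','2130837600',1;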
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47000831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: JSqueeze changing double quotes to single quotes and causing problem I am trying to use JSqueeze to minify some JavaScript. I can see that the GitHub owner has now marked the code as read-only (and therefore, presumably, will not support raising issues), but I am running into a problem with the result of the following test:
functionA(this,"functionB()",300)
...turning it into ...
;functionA(this,'functionB()',300)
The problem here is that it changes " to ', which causes errors when the JS is already inside quotes (i.e. in an onClick attribute).
I can see that the JSqueeze repository is archived, but it does the rest of the minification very well (better than any other JavaScript minifier I've looked at so far).
Does anyone have a fix for this?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75187071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Access to Backlog unavailable post TFS 2017 RC1 upgrade The tfs backlog page throws the following error: "TF400898: An Internal Error Occured." after upgrading to TFS 2017 RC1 from TFS 2015. All the other contents are intact (i.e. Code, Build and Test). This error occurs for all the team projects in the Team Collection. The upgrade to TFS 2017 RC1 completed without any errors.
Error logged in event viewer:
TF53010: The following error has occurred in a Team Foundation component or extension:
Date (UTC): 1/30/2017 12:47:55 PM
Machine: TFS01
Application Domain: /LM/W3SVC/2/ROOT/tfs-1-131302443521960468
Assembly: Microsoft.TeamFoundation.Framework.Server, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a; v4.0.30319
Service Host: 4fc1f4bf-8005-4244-a66c-c57edf4df7f1 (<Team Project Collection>)
Process Details:
Process Name: w3wp
Process Id: 18288
Thread Id: 36708
Account name: <Domain>\<User>
Detailed Message: TF30065: An unhandled exception occurred.
Web Request Details
Url: http://<TfsServer>:8080/tfs/<TeamProjectCollection>/<TeamProject>/Application/_backlogs [method: GET]
User Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.3; WOW64; Trident/7.0; .NET4.0E; .NET4.0C; .NET CLR 3.5.30729; .NET CLR 2.0.50727; .NET CLR 3.0.30729; InfoPath.3)
Headers: not available
Path: /tfs/<TeamProjectCollection>/<TeamProject>/Application/_backlogs
Local Request: True
Host Address: fe80::a8d5:ecb7:9ff9:640c%13
User: <Domain>\<User> [authentication type: Negotiate]
Exception Message: Object reference not set to an instance of an object. (type NullReferenceException)
Exception Stack Trace: at Microsoft.TeamFoundation.Server.WebAccess.WorkItemTracking.Common.BacklogConfigurationService.<>c__DisplayClass1_0.<GetBacklogConfiguration>b__0()
at Microsoft.TeamFoundation.Framework.Server.VssRequestContextExtensions.TraceBlock[T](IVssRequestContext requestContext, Int32 enterTracepoint, Int32 leaveTracepoint, Int32 exceptionTracepoint, String area, String layer, String methodName, Func`1 action)
at Microsoft.TeamFoundation.Server.WebAccess.WorkItemTracking.Common.BacklogConfigurationService.GetBacklogConfiguration(IVssRequestContext requestContext, Guid projectId, TeamFoundationTeam team, Boolean validateProcessConfig)
at Microsoft.TeamFoundation.Agile.Server.AgileSettings..ctor(IVssRequestContext requestContext, CommonStructureProjectInfo project, TeamFoundationTeam team)
at Microsoft.TeamFoundation.Server.WebAccess.Agile.AgileAreaController.get_Settings()
at Microsoft.TeamFoundation.Server.WebAccess.Agile.BacklogsController.get_RequestBacklogContext()
at Microsoft.TeamFoundation.Server.WebAccess.Agile.BacklogsController.Index(String level, Nullable`1 showParents)
at lambda_method(Closure , ControllerBase , Object[] )
at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass42.<BeginInvokeSynchronousActionMethod>b__41()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeActionMethod(IAsyncResult asyncResult)
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass37.<>c__DisplayClass39.<BeginInvokeActionMethodWithFilters>b__33()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeActionMethodWithFilters(IAsyncResult asyncResult)
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass25.<>c__DisplayClass2a.<BeginInvokeAction>b__20()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass25.<BeginInvokeAction>b__22(IAsyncResult asyncResult)
at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeAction(IAsyncResult asyncResult)
at System.Web.Mvc.Controller.<>c__DisplayClass1d.<BeginExecuteCore>b__18(IAsyncResult asyncResult)
at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar)
at System.Web.Mvc.Controller.EndExecuteCore(IAsyncResult asyncResult)
at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar)
at System.Web.Mvc.Controller.EndExecute(IAsyncResult asyncResult)
at System.Web.Mvc.MvcHandler.<>c__DisplayClass8.<BeginProcessRequest>b__3(IAsyncResult asyncResult)
at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar)
at System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
A: If the upgrade was successful without any errors, then this kind of issue may be related to the configuration.
You could try re-running the configuration wizard for the team project to fix the issue. For how to do this, please refer to this tutorial: Configure features after an upgrade
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41938501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Kafka SASL Auth with Kerberos: How to clear credentials cache I have a Java application that connects to Kafka through KafkaAdminClient. I'm using SASL authentication with GSSAPI mechanism (Kerberos). I am providing the krb5.conf, jaas.conf, principal, and keytab. When the application starts, if I provide the correct principal and keytab, and the first authentication attempt is successful, every subsequent attempt will remain successful, even if I change the principal/keytab to be incorrect. The reverse scenario is also true; if the principal in the first attempt is incorrect, causing a failure, every subsequent attempt also fails even after I correct the principal. I realize this is because Kerberos caches credentials; I'm wondering how to clear the cache without restarting the app. Can I force the principal to log off after a period of time?
I have tried setting various properties in the conf files with no luck. This is what I have:
jaas.conf
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
renewTicket=false
useKeyTab=true
storeKey=false
useTicketCache=false
remewTGT=false
refreshKrb5Config=true
keyTab="/tmp/keytab.keytab"
principal="***"
serviceName="kafka";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
renewTicket=false
useKeyTab=true
storeKey=false
useTicketCache=false
remewTGT=false
refreshKrb5Config=true
keyTab="/tmp/keytab.keytab"
principal="***"
serviceName="zookeeper";
};
krb5.conf
[libdefaults]
forwardable = true
default_realm = foo.bar.com
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
rdns = false
ignore_acceptor_hostname = true
udp_preference_limit = 1
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
******.*****.*****.****.com = foo.bar.com
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
foo.bar.com = {
kdc = ******.*****.*****.****.com
admin_server = ********.******.******.*****.com
admin_server = ********.******.******.*****.com
}
This application is deployed to PCF and I cannot ssh into the instance, so doing a klist purge is not an option. Is there another way to make Kerberos forget previous logins? Any suggestions are greatly appreciated.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70888898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Application Logging into Azure Blob Storage or use log4net into SQL I'm quite new to development for Azure; I have an ASP.NET MVC 4 application in an Azure Cloud Service.
I have an application that handles a considerable number of transactions provided through an API, and I need to implement some application logging to improve daily diagnostics. I'm looking for a tutorial that stores those logs in Blob Storage instead of a SQL database, but without much success so far.
Blob Storage sounds good because I don't want to substantially grow the database that also holds all the business data, and risk making a business resource (the database) slower because of log transactions.
If I decide to store the logs in the SQL database, I'm thinking of using log4net.
What do you suggest? Please send me a tutorial that I can follow.
Thank you.
A: Sorry our logging guidance is a little hard to find - something that we are currently working on resolving - but for now please take a look at the following resources:
Client logging overview - Essentially all client library operations are output using System.Diagnostics, so you intercept and write to text / xml file just using a standard TraceListener.
Analytics and Server logs - We have extensive service-side logging capabilities as well, which makes troubleshooting distributed apps much simpler.
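For the client-side option, a minimal system.diagnostics sketch for web.config; the trace source name below is an assumption based on the storage client versions of that era and should be checked against the client logging overview above:
<system.diagnostics>
  <sources>
    <source name="Microsoft.WindowsAzure.Storage" switchValue="Verbose">
      <listeners>
        <add name="storageClientLog"
             type="System.Diagnostics.TextWriterTraceListener"
             initializeData="storage-client.log" />
      </listeners>
    </source>
  </sources>
  <trace autoflush="true" />
</system.diagnostics>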
Let me know if you have any questions.
Jason
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24261487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why won't my text in a Love2D game update? I finally fixed the problem of the text not showing up in my game; now the next problem is that it won't update. What is supposed to happen is that when I press the up and down arrow keys, the selected text should change, and pressing Enter should bring me to a new state or quit the game.
The code in my StartState.lua now looks like this:
StartState = Class{__includes = BaseState}
local option = 1
function StartState:update(dt)
if love.keyboard.wasPressed('up') then
if option == 1 then
option = 2
else
option = 1
end
elseif love.keyboard.wasPressed('down') then
if option == 2 then
option = 1
else
option = 2
end
end
if love.keyboard.wasPressed('enter') then
if option == 1 then
gStateMachine:change('play')
else
gStateMachine:change('quit')
end
end
end
function StartState:render()
local backgroundWidth = gTextures['start-background']:getWidth()
local backgroundHeight = gTextures['start-background']:getHeight()
love.graphics.draw(gTextures['start-background'], 0, 0, 0, VIRTUAL_WIDTH / (backgroundWidth - 1),
VIRTUAL_HEIGHT / (backgroundHeight - 1))
love.graphics.setFont(gFonts['large'])
love.graphics.setColor(153/255, 217/255, 234/255, 255/255)
love.graphics.printf('Attack Them All', 0, VIRTUAL_HEIGHT / 2 - 50, VIRTUAL_WIDTH, 'center')
love.graphics.setFont(gFonts['medium'])
love.graphics.setColor(0/255, 0/255, 0/255, 255/255)
love.graphics.printf('Play', 0, (VIRTUAL_HEIGHT / 2) + 2, VIRTUAL_WIDTH + 2, 'center')
love.graphics.printf('Quit', 0, (VIRTUAL_HEIGHT / 2) + 22, VIRTUAL_WIDTH + 2, 'center')
if option == 1 then
love.graphics.setColor(255/255, 174/255, 201/255, 255/255)
love.graphics.printf('Play', 0, (VIRTUAL_HEIGHT / 2), VIRTUAL_WIDTH, 'center')
love.graphics.setColor(237/255, 28/255, 36/255, 255/255)
love.graphics.printf('Quit', 0, (VIRTUAL_HEIGHT / 2) + 20, VIRTUAL_WIDTH, 'center')
else
love.graphics.setColor(237/255, 28/255, 36/255, 255/255)
love.graphics.printf('Play', 0, (VIRTUAL_HEIGHT / 2), VIRTUAL_WIDTH, 'center')
love.graphics.setColor(255/255, 174/255, 201/255, 255/255)
love.graphics.printf('Quit', 0, (VIRTUAL_HEIGHT / 2) + 20, VIRTUAL_WIDTH, 'center')
end
love.graphics.setColor(255/255, 255/255, 255/255, 255/255)
end
I already added a StateMachine in my main.lua file:
function love.load()
love.graphics.setDefaultFilter('nearest', 'nearest')
math.randomseed(os.time())
love.window.setTitle('Attack Them All')
push:setupScreen(VIRTUAL_WIDTH, VIRTUAL_HEIGHT, WINDOW_WIDTH, WINDOW_HEIGHT, {
vsync = true,
fullscreen = false,
resizable = true
})
gStateMachine = StateMachine {
['start'] = function() return StartState() end,
['play'] = function() return PlayState() end,
['quit'] = function() return QuitState() end
}
gStateMachine:change('start')
end
function love.update(dt)
gStateMachine:update(dt)
end
What am I doing wrong here?
A: love.keyboard.wasPressed is not a standard Love2D API. It should be implemented somewhere else in your code; check this first.
Then try putting some print("condition xxx is verified") calls inside the conditions where your code is supposed to execute; that way you'll find which condition is never satisfied.
Example:
if love.keyboard.wasPressed('up') then
print("up is pressed")
...
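On the first point, the helper usually comes from a pattern like the one below in main.lua (a sketch of the common implementation, not necessarily what your project does); if the keysPressed table is never filled or never cleared, wasPressed will never report anything:
-- main.lua: record the keys pressed during the current frame
love.keyboard.keysPressed = {}

function love.keypressed(key)
    love.keyboard.keysPressed[key] = true
end

function love.keyboard.wasPressed(key)
    return love.keyboard.keysPressed[key]
end

-- and at the end of your existing love.update(dt), clear the table:
-- love.keyboard.keysPressed = {}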
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64074746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Git log graph, display how two branches are diverging We would like to view a graph of how two branches are diverging. Running git log --oneline --graph displays only the current branch. How do we include both branches in the graph?
A: git log takes zero or more commits as arguments, showing the history leading up to that commit. When no argument is given, HEAD is assumed. For your case, you want to supply the two branch heads you want to compare:
git log --graph --oneline currentbranch otherbranch
If it doesn't display too much, you can simplify this by using
git log --graph --oneline --all
which acts as if you had specified every reference in .git/refs as the commits to display.
A: I had the same issue and landed here, but no answer helped me to display how two branches are diverging. Eventually I did experiment myself and found this worked.
Given branch A and branch B, I want to see where they diverged.
git log --oneline --graph --decorate A B `git merge-base A B`^!
Note: Don't forget there is ^! at the end. (It excludes the parents of the commit returned by merge-base.)
UPDATE
The one-line command above doesn't work when there is more than one merge base. In that case do this:
git merge-base A B -a
# e.g. output XXXX YYYY
git log --oneline --graph --decorate A B --not XXXX^ YYYY^
A: git log --graph --abbrev-commit --decorate --date=relative --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(bold yellow)%d%C(reset)' --all
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26784396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: Appending HTML to specific DOM after ajax call In my php page, I've three lists ul#line, ul#comment, ul#vote.
I'm going to make an ajax call, something like
$.ajax({
type: "POST",
url: "ajax.php",
data: dataString,
cache: false,
success: function(html){
$("ul#line").append(html);
$("ul#lineli:last").fadeIn("slow");
}
Ajax.php is something like
if(isset($line)){
echo $line;
} elseif(isset($comment)){
echo $comment;
} elseif (isset($vote)){
echo $vote;
} else {
//do nothing;
}
What I want is, if the echoed out HTML is $line, it'll be appended to ul#line; if it is $comment, it'll be appended to ul#comment; if it is $vote, it'll be appended to ul#vote.
The current ajax call appends only at ul#line.
What do I need to change to achieve this??
A: I would pass back the information as JSON. Have something like:
{updateList : nameOfList, output: $line/$output/$vote }
Then on success you could do something like
$('#'+html.updateList).append(html.output);
You have to make sure to let jQuery know that you are sending JSON and to expect JSON as the response type, though.
A: php
class DataObject
{
public $Type;
public $Text;
}
$json=new DataObject();
if(isset($line)){
$json->Type="line";
$json->Text=$line;
echo json_encode($json);
} elseif(isset($comment)){
$json->Type="comment";
$json->Text=$comment;
echo json_encode($json);
} elseif (isset($vote)){
$json->Type="vote";
$json->Text=$vote;
echo json_encode($json);
} else {
//do nothing;
}
javascript
$.ajax({
type: "POST",
url: "ajax.php",
dataType: 'json', //add data type
data: dataString,
cache: false,
success: function(data){
$("ul#"+data.type).append(data.text);
$("ul#"+data.type+"li:last").fadeIn("slow");
}
A: You need some way to differentiate the values of $line and $comment.
I'd suggest sending back JSON from your PHP script:
if(isset($line)){
echo '{"line" : ' . json_encode($line) . '}';
} elseif(isset($comment)){
echo '{"comment" : ' . json_encode($comment) . '}';
} elseif (isset($vote)){
echo '{"vote" : ' . json_encode($vote) . '}';
} else {
//do nothing;
}
Note: PHP isn't my strongest language so there might be a better way to generate the JSON response
success: function(data){
if(data.line) {
$("ul#line").append(html);
$("ul#lineli:last").fadeIn("slow");
}
else if(data.comment) {
$("ul#comment").append(html);
$("ul#commentli:last").fadeIn("slow");
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5211528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ASP.NET MVC using more than 1 DataContext Insert Method So I recently wanted to try using more than one database.
My main problem is that the attribute names in the 1st and 2nd entities are different. For example, in the 1st entities sample_table1 contains "member_id" and "member_code", while the 2nd entities' sample_table_2 contains "student_id" and "student_code". It's just the naming; the values of the attributes are the same, so member_id = student_id and member_code = student_code.
Example:
Controller
private MyEntities db = new MyEntities ();
//not sure about this, but by adding this one my controller can connect to 2 databases; IntelliSense works when I'm using the db2 class
private MyEntities2 db2 = new MyEntities2 ();
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult CreateSomething(sample_table1 sample_table1, sample_table2 sample_table2)
{
if (ModelState.IsValid)
{
//insert data to 1st Entities
db.sample_table1.Add(sample_table1);
db.SaveChanges();
//insert data to 2nd Entities
"???" // because the attribute name on 2nd entities is not the same as the 1st entities, i cannot using either sample_table1 & sample_table2
db2.SaveChanges();
}
}
Views
@model 1st_entites_data_model.Models.sample_table1
// i don't know for sure but the 2nd entities data model won't appear when i add view
@{
ViewBag.Title = "Create Something";
}
@using (Html.BeginForm("CreateSomething", "test", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
@Html.AntiForgeryToken()
@Html.ValidationSummary(true)
//i'm using only the 1st entities, because there is no need for user input the same data value twice
@Html.TextBoxFor(model => model.sample_table1.member_id)
@Html.ValidationMessageFor(model => model.sample_table1.member_id)
@Html.TextBoxFor(model => model.sample_table1.member_code)
@Html.ValidationMessageFor(model => model.sample_table1.member_code)
}
I cannot change the 2nd entities' names because somebody else maintains the 2nd entities, and I cannot edit the database structure...
Thank You Very Much...
A: You can use AutoMapper to transfer data from the first entity to the second. After that your code will be:
...
db.sample_table1.Add(sample_table1);
db.SaveChanges();
//insert data to 2nd Entities
var sample_table2 = Mapper.Map<sample_table2>(sample_table1);
db2.sample_table_2.Add(sample_table2);
db2.SaveChanges();
...
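Since the property names differ between the two models (member_id/member_code vs student_id/student_code, per the question), AutoMapper also needs a one-time mapping configuration. A sketch using the same older static API as the Mapper.Map call above; the property names are taken from the question's description and may not match the real classes exactly:
// Run once at application startup (e.g. Application_Start)
Mapper.CreateMap<sample_table1, sample_table2>()
      .ForMember(dest => dest.student_id,   opt => opt.MapFrom(src => src.member_id))
      .ForMember(dest => dest.student_code, opt => opt.MapFrom(src => src.member_code));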
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26333249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Using NServiceBus and EventStore from High Level So I've been reading about EventStore and NServiceBus and I like the idea of having a transactional log of my data that can help me build views based on that data.
What I don't understand right now is how to distinguish between an event that will write to your read storage and the same event which might trigger an email to get sent.
ex. Creating a customer
CreateUserCommand -> CreateUserCommandHandler -> CreatedUserEvent
Should I be using the CreatedUserEvent to trigger both my write to my data storage and sending an email to a user?
A: In the last few years, Eric Evans has recognized an update to his DDD pattern: Domain Events (aka External Events concept).
Internal events in Event Sourcing patterns are what we've been focusing on, such as UserCreatedEvent in your example. Keep these explicit with an IEvent marker interface.
While IEvents are still published on the bus, IDomainEvents are more notable for larger external-to-the-domain notifications that don't affect the state of an aggregate per se.
So...
CreateUser (ICommand)
^- CreateUserCommandHandler
UserCreated (IEvent)
^- UserCreatedEventHandler
SendNewUserEmail (ICommand)
^- SendNewUserEmailCommandHandler
NewUserEmailSent (IDomainEvent)
^- UserRegistrationService or some other AC
I am still pretty new to event sourcing myself, but I would guess that you can have the UserRegistrationService register on the bus to listen for the SendNewUserEmail ICommand.
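A hedged sketch of that wiring with NServiceBus's older IHandleMessages/IBus API (the message types and the UserId property are taken from the outline above and are assumptions, not your actual contracts):
using NServiceBus;

// Subscribes to the UserCreated event and queues the follow-up email command
public class UserCreatedEventHandler : IHandleMessages<UserCreated>
{
    public IBus Bus { get; set; } // injected by NServiceBus

    public void Handle(UserCreated message)
    {
        // update the read model here, then queue the side effect as its own command
        Bus.Send(new SendNewUserEmail { UserId = message.UserId });
    }
}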
Either way you go, I would concentrate on creating additional commands/events for sending an email and for the email having been sent. Then, later on, you can view the transaction log to see when it was queued to send, how long it took to send, whether there were any retries in sending, how many were sent at the same time, and whether that caused time delays (datetime diffs) revealing any bottlenecks. You could also install a queue for sending emails and break it out into a smaller independent service, etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11852033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I get index of a specific value (in second dataframe) based on the same value in first dataframe I have 2 data frames, df_ts and df_cmexport. I am trying to get the index of placement id in df_cmexport for the placements in df_ts
Refer to the following to get an idea of the explanation: Click here to view excel file
Once I have the index of those placement id's as a list, I will iterate through them using for j in list_pe_ts_1: to get some value for 'j' index as such : df_cmexport['p_start_year'][j].
My code below returns an empty list for some reason; print(list_pe_ts_1) returns [].
I think something is wrong with list_pe_ts_1 = df_cmexport.index[df_cmexport['Placement ID'] == pid_1].tolist(), as this returns an empty list of length 0.
I even tried using list_pe_ts_1 = df_cmexport.loc[df_cmexport.isin([pid_1]).any(axis=1)].index but it still gives an empty list.
Help is always appreciated :) Cheers to you all @stackoverflow
for i in range(0, len(df_ts)):
pid_1 = df_ts['PLACEMENT ID'][i]
print('for pid ', pid_1)
list_pe_ts_1 = df_cmexport.index[df_cmexport['Placement ID'] == pid_1].tolist()
print('len of list',len(list_pe_ts_1))
ts_p_start_year_for_pid = df_ts['p_start_year'][i]
ts_p_start_month_for_pid = df_ts['p_start_month'][i]
ts_p_start_day_for_pid = df_ts['p_start_date'][i]
print('\np_start_full_date_ts for :', pid_1, 'y:', ts_p_start_year_for_pid, 'm:', ts_p_start_month_for_pid,
'd:', ts_p_start_day_for_pid)
# j=list_pe_ts
print(list_pe_ts_1)
for j in list_pe_ts_1:
# print(j)
export_p_start_year_for_pid = df_cmexport['p_start_year'][j]
export_p_start_month_for_pid = df_cmexport['p_start_month'][j]
export_p_start_day_for_pid = df_cmexport['p_start_date'][j]
print('\np_start_full_date_export for ', pid, "at row(", j, ") :", export_p_start_year_for_pid,
export_p_start_month_for_pid, export_p_start_day_for_pid)
if (ts_p_start_year_for_pid == export_p_start_year_for_pid) and (
ts_p_start_month_for_pid == export_p_start_month_for_pid) and (
ts_p_start_day_for_pid == export_p_start_day_for_pid):
pids_p_1.add(pid_1)
# print('pass',pids_p_1)
# print(export_p_end_year_for_pid)
else:
pids_f_1.add(pid_1)
# print("mismatch in placement end date for pid ", pids)
# print("pids list ",pids)
# print('fail',pids_f_1)
A: With the snippet below you can get a list of the matching index values from the second dataframe.
import pandas as pd
df_ts = pd.DataFrame(data = {'index in df':[0,1,2,3,4,5,6,7,8,9,10,11,12],
"pid":[1,1,2,2,3,3,3,4,6,8,8,9,9],
})
df_cmexport = pd.DataFrame(data = {'index in df':[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
"pid":[1,1,1,2,3,3,3,3,3,4,4,4,5,5,6,7,8,8,9,9,9],
})
Create a new dataframe by merging the two
result = pd.merge(df_ts, df_cmexport, left_on=["pid"], right_on=["pid"], how='left', indicator='True', sort=True)
Then identify the unique values in the "index in df_y" column
index_list = result["index in df_y"].unique()
The result you get;
index_list
Out[9]:
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 14, 16, 17, 18, 19,
20], dtype=int64)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70498582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Garbage collector and Spring Integration we have a huge problem with our J2EE application.
Every day at 11am, our application starts to be very slow because of the garbage collector's activity.
We don't have any batch tasks that run at that hour, but we do have a particular feature (written using the Spring Integration framework) that merges a large number of files (.pdf, .doc, ...); we've already checked that all the streams are closed at the end of the process.
Please find below some logs about the garbage collector activity:
2015-04-30T11:00:59.752+0200: 15957.260: [GC-- [PSYoungGen: 620734K->620734K(638336K)] 2017070K->2018870K(2036480K), 0.1506240 secs] [Times: user=0.19 sys=0.00, real=0.15 secs]
2015-04-30T11:00:59.902+0200: 15957.411: [Full GC [PSYoungGen: 620734K->45503K(638336K)] [ParOldGen: 1398136K->1398119K(1398144K)] 2018870K->1443622K(2036480K) [PSPermGen: 307483K->307454K(307776K)], 1.4963560 secs] [Times: user=2.65 sys=0.00, real=1.50 secs]
2015-04-30T11:01:02.271+0200: 15959.779: [Full GC [PSYoungGen: 622271K->46624K(638336K)] [ParOldGen: 1398122K->1398142K(1398144K)] 2020393K->1444767K(2036480K) [PSPermGen: 307456K->307456K(307776K)], 1.2484490 secs] [Times: user=2.18 sys=0.00, real=1.25 secs]
2015-04-30T11:01:04.181+0200: 15961.690: [Full GC [PSYoungGen: 576768K->45206K(638336K)] [ParOldGen: 1398143K->1398130K(1398144K)] 1974911K->1443337K(2036480K) [PSPermGen: 307456K->307456K(307776K)], 1.2151760 secs] [Times: user=2.13 sys=0.00, real=1.21 secs]
2015-04-30T11:01:05.958+0200: 15963.466: [Full GC [PSYoungGen: 576768K->47369K(638336K)] [ParOldGen: 1398130K->1398123K(1398144K)] 1974898K->1445493K(2036480K) [PSPermGen: 307456K->307456K(307776K)], 1.2177990 secs] [Times: user=2.13 sys=0.00, real=1.21 secs]
2015-04-30T11:01:07.782+0200: 15965.291: [Full GC [PSYoungGen: 576768K->46062K(638336K)] [ParOldGen: 1398123K->1398136K(1398144K)] 1974891K->1444199K(2036480K) [PSPermGen: 307460K->307460K(307776K)], 1.2808160 secs] [Times: user=2.25 sys=0.00, real=1.28 secs]
2015-04-30T11:01:09.682+0200: 15967.191: [Full GC [PSYoungGen: 576768K->46868K(638336K)] [ParOldGen: 1398136K->1398143K(1398144K)] 1974904K->1445012K(2036480K) [PSPermGen: 307460K->307460K(307776K)], 1.2058500 secs] [Times: user=2.11 sys=0.00, real=1.21 secs]
2015-04-30T11:01:11.588+0200: 15969.097: [Full GC [PSYoungGen: 576768K->50009K(638336K)] [ParOldGen: 1398144K->1398142K(1398144K)] 1974912K->1448152K(2036480K) [PSPermGen: 307469K->307469K(307776K)], 1.3628080 secs] [Times: user=2.39 sys=0.00, real=1.36 secs]
2015-04-30T11:01:13.616+0200: 15971.124: [Full GC [PSYoungGen: 576768K->47592K(638336K)] [ParOldGen: 1398142K->1398099K(1398144K)] 1974910K->1445691K(2036480K) [PSPermGen: 307469K->307467K(307776K)], 1.6644920 secs] [Times: user=2.95 sys=0.00, real=1.67 secs]
2015-04-30T11:01:16.046+0200: 15973.555: [Full GC [PSYoungGen: 576768K->49560K(638336K)] [ParOldGen: 1398143K->1397961K(1398144K)] 1974911K->1447521K(2036480K) [PSPermGen: 307470K->307470K(307776K)], 1.3118380 secs] [Times: user=2.30 sys=0.00, real=1.32 secs]
2015-04-30T11:01:18.204+0200: 15975.712: [Full GC [PSYoungGen: 576768K->25394K(638336K)] [ParOldGen: 1397965K->1397950K(1398144K)] 1974733K->1423344K(2036480K) [PSPermGen: 307470K->307467K(307776K)], 1.2144450 secs] [Times: user=2.11 sys=0.00, real=1.21 secs]
2015-04-30T11:01:20.284+0200: 15977.793: [Full GC [PSYoungGen: 576768K->32890K(638336K)] [ParOldGen: 1398082K->1398142K(1398144K)] 1974850K->1431033K(2036480K) [PSPermGen: 307467K->307467K(307776K)], 1.1964720 secs] [Times: user=2.10 sys=0.00, real=1.20 secs]
2015-04-30T11:01:21.485+0200: 15978.994: [Full GC [PSYoungGen: 40616K->32969K(638336K)] [ParOldGen: 1398142K->1398141K(1398144K)] 1438759K->1431111K(2036480K) [PSPermGen: 307467K->307467K(307776K)], 1.1652960 secs] [Times: user=2.03 sys=0.00, real=1.17 secs]
2015-04-30T11:01:23.397+0200: 15980.905: [Full GC [PSYoungGen: 576768K->58824K(638336K)] [ParOldGen: 1398143K->1397943K(1398144K)] 1974911K->1456768K(2036480K) [PSPermGen: 307467K->307467K(307776K)], 1.3021250 secs] [Times: user=2.31 sys=0.00, real=1.30 secs]
2015-04-30T11:01:24.717+0200: 15982.225: [Full GC [PSYoungGen: 68465K->53216K(638336K)] [ParOldGen: 1397943K->1398143K(1398144K)] 1466409K->1451359K(2036480K) [PSPermGen: 307467K->307467K(307776K)], 1.2082250 secs] [Times: user=2.10 sys=0.00, real=1.21 secs]
2015-04-30T11:01:27.386+0200: 15984.894: [Full GC [PSYoungGen: 576768K->41286K(638336K)] [ParOldGen: 1398143K->1397982K(1398144K)] 1974911K->1439268K(2036480K) [PSPermGen: 307467K->307467K(307776K)], 1.2909460 secs] [Times: user=2.29 sys=0.00, real=1.29 secs]
2015-04-30T11:01:29.271+0200: 15986.779: [Full GC [PSYoungGen: 576768K->27766K(638336K)] [ParOldGen: 1397982K->1398142K(1398144K)] 1974750K->1425908K(2036480K) [PSPermGen: 307467K->307467K(307776K)], 1.1639540 secs] [Times: user=2.03 sys=0.00, real=1.16 secs]
2015-04-30T11:01:31.260+0200: 15988.769: [Full GC [PSYoungGen: 576768K->36699K(638336K)] [ParOldGen: 1398142K->1398084K(1398144K)] 1974910K->1434783K(2036480K) [PSPermGen: 307470K->307470K(307776K)], 1.2504410 secs] [Times: user=2.20 sys=0.00, real=1.26 secs]
2015-04-30T11:01:33.183+0200: 15990.691: [Full GC [PSYoungGen: 576768K->25671K(638336K)] [ParOldGen: 1398084K->1398142K(1398144K)] 1974852K->1423814K(2036480K) [PSPermGen: 307470K->307470K(307776K)], 1.2439080 secs] [Times: user=2.19 sys=0.00, real=1.24 secs]
2015-04-30T11:01:35.113+0200: 15992.622: [Full GC [PSYoungGen: 576768K->38723K(638336K)] [ParOldGen: 1398142K->1398140K(1398144K)] 1974910K->1436864K(2036480K) [PSPermGen: 307470K->307470K(307776K)], 1.2855140 secs] [Times: user=2.25 sys=0.00, real=1.28 secs]
2015-04-30T11:01:36.975+0200: 15994.484: [Full GC [PSYoungGen: 576768K->38745K(638336K)] [ParOldGen: 1398140K->1398084K(1398144K)] 1974908K->1436829K(2036480K) [PSPermGen: 307470K->307470K(307776K)], 1.2950800 secs] [Times: user=2.28 sys=0.00, real=1.29 secs]
2015-04-30T11:01:38.964+0200: 15996.472: [Full GC [PSYoungGen: 576768K->33921K(638336K)] [ParOldGen: 1398084K->1398143K(1398144K)] 1974852K->1432065K(2036480K) [PSPermGen: 307470K->307470K(307776K)], 1.2367630 secs] [Times: user=2.17 sys=0.00, real=1.23 secs]
2015-04-30T11:01:40.830+0200: 15998.338: [Full GC [PSYoungGen: 576768K->33497K(638336K)] [ParOldGen: 1398143K->1398141K(1398144K)] 1974911K->1431638K(2036480K) [PSPermGen: 307503K->307503K(307840K)], 1.2936280 secs] [Times: user=2.25 sys=0.00, real=1.30 secs]
2015-04-30T11:01:42.832+0200: 16000.341: [Full GC [PSYoungGen: 576768K->38106K(638336K)] [ParOldGen: 1398141K->1398131K(1398144K)] 1974909K->1436237K(2036480K) [PSPermGen: 307508K->307508K(307840K)], 1.3531370 secs] [Times: user=2.34 sys=0.00, real=1.35 secs]
2015-04-30T11:01:44.846+0200: 16002.354: [Full GC [PSYoungGen: 576768K->29489K(638336K)] [ParOldGen: 1398131K->1398140K(1398144K)] 1974899K->1427630K(2036480K) [PSPermGen: 307508K->307508K(307840K)], 1.1770150 secs] [Times: user=2.06 sys=0.00, real=1.18 secs]
2015-04-30T11:01:46.800+0200: 16004.308: [Full GC [PSYoungGen: 576768K->36730K(638336K)] [ParOldGen: 1398140K->1398143K(1398144K)] 1974908K->1434874K(2036480K) [PSPermGen: 307508K->307508K(307840K)], 1.2761790 secs] [Times: user=2.26 sys=0.00, real=1.28 secs]
2015-04-30T11:01:48.919+0200: 16006.428: [Full GC [PSYoungGen: 576768K->46650K(638336K)] [ParOldGen: 1398143K->1397908K(1398144K)] 1974911K->1444558K(2036480K) [PSPermGen: 307508K->307508K(307840K)], 1.3290270 secs] [Times: user=2.35 sys=0.00, real=1.33 secs]
2015-04-30T11:01:50.901+0200: 16008.409: [Full GC [PSYoungGen: 576768K->43550K(638336K)] [ParOldGen: 1397908K->1398142K(1398144K)] 1974676K->1441693K(2036480K) [PSPermGen: 307508K->307508K(307840K)], 1.2121480 secs] [Times: user=2.11 sys=0.00, real=1.21 secs]
2015-04-30T11:01:52.792+0200: 16010.301: [Full GC [PSYoungGen: 576768K->32252K(638336K)] [ParOldGen: 1398142K->1398142K(1398144K)] 1974910K->1430394K(2036480K) [PSPermGen: 307508K->307508K(307840K)], 1.1738220 secs] [Times: user=2.06 sys=0.00, real=1.18 secs]
2015-04-30T11:01:54.698+0200: 16012.207: [Full GC [PSYoungGen: 576768K->40351K(638336K)] [ParOldGen: 1398142K->1398122K(1398144K)] 1974910K->1438473K(2036480K) [PSPermGen: 307508K->307508K(307840K)], 1.2184120 secs] [Times: user=2.13 sys=0.00, real=1.22 secs]
2015-04-30T11:01:56.581+0200: 16014.090: [Full GC [PSYoungGen: 576768K->51145K(638336K)] [ParOldGen: 1398122K->1397818K(1398144K)] 1974890K->1448963K(2036480K) [PSPermGen: 307508K->307508K(307776K)], 1.2204630 secs] [Times: user=2.15 sys=0.00, real=1.22 secs]
2015-04-30T11:01:58.522+0200: 16016.030: [Full GC [PSYoungGen: 576768K->45045K(638336K)] [ParOldGen: 1397818K->1396518K(1398144K)] 1974586K->1441563K(2036480K) [PSPermGen: 307524K->307504K(307840K)], 1.3422560 secs] [Times: user=2.37 sys=0.00, real=1.34 secs]
2015-04-30T11:02:00.559+0200: 16018.068: [GC [PSYoungGen: 576768K->59954K(636736K)] 1973802K->1456988K(2034880K), 0.2391820 secs] [Times: user=0.47 sys=0.00, real=0.24 secs]
2015-04-30T11:02:01.545+0200: 16019.054: [Full GC [PSYoungGen: 636722K->44520K(636736K)] [ParOldGen: 1397748K->1398143K(1398144K)] 2034470K->1442663K(2034880K) [PSPermGen: 307508K->307508K(307840K)], 1.3049690 secs] [Times: user=2.29 sys=0.01, real=1.30 secs]
2015-04-30T11:02:03.785+0200: 16021.293: [Full GC [PSYoungGen: 576768K->46249K(636736K)] [ParOldGen: 1398143K->1398141K(1398144K)] 1974911K->1444390K(2034880K) [PSPermGen: 307508K->307508K(307840K)], 1.2749070 secs] [Times: user=2.25 sys=0.00, real=1.28 secs]
2015-04-30T11:02:05.665+0200: 16023.174: [Full GC [PSYoungGen: 576768K->47378K(636736K)] [ParOldGen: 1398143K->1398143K(1398144K)] 1974911K->1445522K(2034880K) [PSPermGen: 307523K->307523K(307840K)], 1.2608770 secs] [Times: user=2.21 sys=0.00, real=1.27 secs]
2015-04-30T11:02:07.660+0200: 16025.169: [Full GC [PSYoungGen: 576768K->34799K(636736K)] [ParOldGen: 1398143K->1398141K(1398144K)] 1974911K->1432940K(2034880K) [PSPermGen: 307566K->307566K(307904K)], 1.3548270 secs] [Times: user=2.37 sys=0.00, real=1.35 secs]
2015-04-30T11:02:09.740+0200: 16027.249: [Full GC [PSYoungGen: 576768K->47591K(636736K)] [ParOldGen: 1398141K->1398140K(1398144K)] 1974909K->1445731K(2034880K) [PSPermGen: 307566K->307566K(307904K)], 1.3396300 secs] [Times: user=2.35 sys=0.00, real=1.34 secs]
Any hint about possible causes?
Our application is built upon:
- Java 1.7
- Spring 3.2.8
- Spring WebFlow 2.3.2
- Spring Integration 3.0.2
- JBOSS EAP6
ps : sorry for my bad english :)
A: Looking at the GC cycles pasted above, excessive GC activity is being performed, which has a direct impact on the performance of the application. Excessive GC and compaction always lead to a slow, unresponsive application, since a GC operation is a stop-the-world process. It seems you are using a generational (young/old region) policy. If the young region is small, it will lead to lots of scavenge GC operations. If you need more analysis of the GC logs, load them into the GCMV tool to see the GC behaviour of your system:
http://www.ibm.com/developerworks/java/jdk/tools/gcmv/
A: Since your old generation is full, the obvious answer here is that when those files are merged, you need more memory than you currently have assigned the JVM. Try increasing the heap size by running java with -Xmx, for example -Xmx4G to use a 4 GB heap.
If you still run into problems despite using a heap you think is large enough, the best way to investigate what fills your heap is to take a heap dump (run jmap -dump:format=b,file=dump.bin <jvm pid>) and analyze it with Eclipse MAT.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29966084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Is it possible to add extras to browsable intents from HTML Let's take per say the following manifest:
<intent-filter>
<data scheme="myscheme"
host="myhost">
</data>
<action name="android.intent.action.VIEW">
</action>
<category name="android.intent.category.DEFAULT">
</category>
<category name="android.intent.category.BROWSABLE">
</category>
</intent-filter>
I could launch the activity which under the above intent filter is declared by redirecting the browser to:
myscheme://myhost?param1=param1&param2=param2
However, I'm struggling with understanding if it is possible to do the same redirection, only with additional extras that would be received programmatically with:
String myextra = getIntent().getStringExtra("myextra");
Any help would be very much appreciated.
A: The only ways to pass data directly from a web page to your app is on the URL that you register in an intent-filter. All to be retrieved via the Uri object - whether the data is on the path or with query params, as outlined below. There is no way to set extras on an Intent from a web page.
Uri uri = getIntent().getData();
String param1Value = uri.getQueryParameter("param1");
A: This is how I got around this issue:
I developed my own browser activity and made it browsable, just like you did:
Uri data = getIntent().getData();
if(data == null) { // Opened with app icon click (without link redirect)
Toast.makeText(this, "data is null", Toast.LENGTH_LONG).show();
}
else { // Opened when a link clicked (with browsable)
Toast.makeText(this, "data is not null", Toast.LENGTH_LONG).show();
String scheme = data.getScheme(); // "http"
String host = data.getHost(); // "twitter.com"
String link = getActualUrl(host);
webView.loadUrl(link);
Toast.makeText(this,"scheme is : "+scheme+" and host is : "+host+ " ",Toast.LENGTH_LONG).show();
}
I think you are looking for this function:
data.getQueryParameter(String key);
A: One could use "intent scheme URL" in order to send more information like extra objects and actions when redirecting to a custom intent scheme URL via javascript or HTML.
intent://foobar/#Intent;action=myaction1;type=text/plain;S.xyz=123;end
Although this method doesn't actually answer the question, as the scheme part is destined to always be "intent", this is a possible way to send intents from browsers with extra objects or actions.
See more extensive information in the following report:
http://www.mbsd.jp/Whitepaper/IntentScheme.pdf
Or use the chrome documentation:
https://developer.chrome.com/multidevice/android/intents
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38832048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Uncaught ReferenceError: __VUE_HMR_RUNTIME__ is not defined I am trying to add Vue.js to a project with webpack; when I run it, I get the error 'Uncaught ReferenceError: __VUE_HMR_RUNTIME__ is not defined' in the console.
App.vue
<template>
<div>
<h3>{{text}}</h3>
</div>
</template>
<script>
export default {
name: 'App',
data(){
return {
text: 'Welcome'
}
}
}
</script>
index.js
import Vue from 'vue/dist/vue.runtime.common';
import App from "./App.vue";
new Vue({
el: '#app',
render: h => h(App)
})
vue-loader installed
A: Just update your vue-loader; it has been changing fast in recent weeks. Going from v16 back to v15 fixed this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68567276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: New ways to update my windows application over internet I am aware of "Click once approach", I find a problem with its one year certificate, and if I want to have more than one year certificate, I have to purchase the certificate from them.
I want to know, If there is any way to address this, So i can have my clients get updates for many years.
A: You can generate a ClickOnce certificate with any expiration date (and any Issued To/Issued By values):
makecert -sv ClickOnceTestApp.pvk
-n CN=Sample ClickOnceTestApp.cer
-b 01/01/2012 -e 12/31/2100 -r
It will be a self-signed certificate, same as generated by Visual Studio for click once deployments.
from http://bernhardelbl.wordpress.com/2012/03/20/create-a-non-expiring-test-certificate-pfx-for-clickonce-applications/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20798042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Matlab adding rgb to binary image side by side I'm supposed to add another image next to my threshold image with its original color like so: expected image
But I'm unsure how to do it, having only achieved the binary threshold image in MATLAB. How do I show the images side by side?
my result
clear all;
close all;
clc;
% read image
palm = imread('palmDown (2).jpg');
%split into RGB
redPalm = palm(:,:,1);
greenPalm = palm(:,:,2);
bluePalm = palm(:,:,3);
redLevel = -0.1;
greenLevel = -0.1;
blueLevel = 0.06;
redThresh = imbinarize(redPalm, redLevel);
greenThresh = imbinarize(greenPalm, greenLevel);
blueThresh = imbinarize(bluePalm, blueLevel);
colorSum = (redThresh&greenThresh&blueThresh);
colorSum2 = imcomplement(colorSum);
thumbFilled = imfill(colorSum2, 'holes');
figure;
imshow(thumbFilled); title('Sum of all');
A: There are many ways to colorize the thresholded image. One simple way is by multiplication:
palm = im2double(palm); % it’s easier to work with doubles in MATLAB
palm2 = palm .* thumbFilled; % element-wise product keeps only the pixels where the mask is true
imshow([palm, palm2])
The multiplication uses implicit Singleton expansion. If you have an older version of MATLAB it won’t work, you’ll have to use bsxfun instead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53895318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Excel statement to return a grade based on two values
I want to create an Excel formula that will take the value in column B (Proxy score) and column D (Broker score), then use the calculation shown in picture 2 to give a grade as one of five emoji symbols (✔, etc.). The values on the left are the Proxy scores (column B) and the values on top are the Broker scores (column D).
A: Just displaying the particular emoji you want is pretty simple, using the INDEX function.
Proxy Date Time Proxy Score Broker Score Staff Name Emoji
12/1/2018 9:24 2 3 Alan Ball
12/1/2018 11:03 3 2 Adam O'Tough
12/1/2018 11:44 2 1 Brian King ✔
So let's say you have the emoji definitions in a range, $C$3:$G$7:
A B C D E F G
----------------------------------------------------------
1 | Real
2 | 1 2 3 4 5
3 | P 1 ✔ ✔
4 | r 2 ✔ ✔
5 | o 3 ✔
6 | x 4 ✔ ✔
7 | y 5 ✔ ✔
Now let's assume your data is in a table. To get the appropriate emoji, you'd just use a formula like this: =INDEX($C$3:$G$7,[@[Proxy Score]],[@[Broker Score]])
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52646642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-4"
} |
Q: Updating last 4 in each beautifulsoup I have a html file containing (its created from PrettyTable python lib):
<table>
<tr>
<td> 1 </td>
<td> 1 </td>
<td> 1 </td>
</tr>
<tr>
<td> 2 </td>
<td> 2 </td>
<td> 2 </td>
</tr>
<tr>
<td> 3 </td>
<td> 3 </td>
<td> 3 </td>
</tr>
</table>
I would like to update the last 2 cells of each row to have a different background using Beautifulsoup. so, for example update it to:
<table>
<tr>
<td> 1 </td>
<td style="background-color:blue;text-align:center;"> 1 </td>
<td style="background-color:blue;text-align:center;"> 1 </td>
</tr>
<tr>
<td> 2 </td>
<td style="background-color:blue;text-align:center;"> 2 </td>
<td style="background-color:blue;text-align:center;"> 2 </td>
</tr>
<tr>
<td> 3 </td>
<td style="background-color:blue;text-align:center;"> 3 </td>
<td style="background-color:red;text-align:center;"> 3 </td>
</tr>
</table>
any assistance would be gratefully received
matt
A: Try something along these lines:
from bs4 import BeautifulSoup as bs

options = """[your html above]"""
soup = bs(options, 'html.parser')  # parse the HTML first

for i in range(2, 4):  # the 2nd and 3rd <td> of every row
    targets = soup.select(f'tr td:nth-child({i})')
    for target in targets:
        target['style'] = "background-color:blue;text-align:center;"
soup
Output should be your expected html in the question.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62842864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How do I make a chrome extension that reads the text in the website I'm visiting? I'm trying to make a chrome extension that analyzes the website I'm currently visiting, but I'm not sure how to proceed. I believe I use chrome.pageCapture.saveAsMHTML() and FileReader, but I wasn't able to find any help online.
For instance, if I'm visiting reddit.com, I want to see if the front page contains the word "peanut". I'm also a bit of a JS newbie, so elaboration would be appreciated.
Note: I've been attempting this for two days, and I've already sifted through all of the relevant chrome extensions API at developer.chrome.com, JS forums, and YouTube videos; any obvious solutions I've missed are due to my lack of experience in JS.
Below is my code so far. I believe I've correctly captured the current tab as an MHTML file and I get hello.js:7 (anonymous function) as an error when I load the extension.
var __name__ = "__main__";
document.getElementById("result").innerHTML = "Compiled Python script in Chrome";
console.log("hello from python");
alert("yeet");
chrome.pageCapture.saveAsMHTML({'tabId': 12}, function(binary mhtmlData){
var reader = new FileReader();
reader.readAsBinaryString(mhmlData);
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54166544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Microsoft.WindowsAzure.Storage.dll version mismatch error in azure functions I am using a dll (MyApp.dll) which references azure storage dll version 7.2.1 through a nuget. I have added a project.json file to my azure function with "WindowsAzure.Storage": "7.2.1" .
I have also uploaded Microsoft.WindowsAzure.Storage to bin\ directory. My run.csx file just has "new MyApp.Run(req)".
I get following error about missing dll, what else can I change in my azure function to resolve this error? I can use MyApp.dll fine locally.
The type initializer for '' threw an exception. Could not load file
or assembly 'Microsoft.WindowsAzure.Storage, Version=8.0.0.0,
Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its
dependencies. The located assembly's manifest definition does not
match the assembly reference. (Exception from HRESULT: 0x80131040).
A: Did you reference WindowsAzure.Storage yourself in project.json? You shouldn't, because that one is already referenced for you by the environment. You should use #r to reference this one:
#r "Microsoft.WindowsAzure.Storage"
using Microsoft.WindowsAzure.Storage.Blob;
This is simply set in your function itself.
learn.microsoft.com/en-us/azure/azure-functions/functions-reference-csharp#referencing-external-assemblies
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41358642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Run HTML as PHP in PHP8 on Azure App Service Linux Not Working I need to parse HTML through PHP. This works fine on PHP 7.4 with the following added to the .htaccess file:
AddType application/x-httpd-php .html .htm
However as soon as I upgrade to PHP 8 on the app service, the code is displayed rather than parsed. I have tried the below, suggested in another post, which returns NULL:
<?php echo $_SERVER['REDIRECT_HANDLER']; ?>
Any suggestions?
A: That is because Azure App Service for PHP 8 no longer uses Apache but Nginx. This is related to the question "How to Deploy an App Service in azure with Laravel 8 and PHP 8 without public endpoint?".
As I mentioned there I will mention here too: I've written a full blog article about my first experiences with PHP 8 on Azure App Services which includes the issue you mention here.
Have a look at it and let me know if it solved your struggles.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69506361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Placing a div over advertising instead of below I have this code:
<div class="c1">
//code that makes a div move downwards
</div>
<div class="banner">
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<ins class="adsbygoogle" style="display:inline-block;width:200px;height:200px;" ....>
</ins>
<script>
(adsbygoogle = window.adsbygoogle || []).push({});
</script>
</div>
In the div with class c1 there is a div that scrolls down when a button is clicked. If instead of the Google Ads code there is
<div class="banner">
<img src="...">
</div>
the div appears over the top of the image, as it should. But with Google Ads, the ad is not covered.
How can I place the div above the Google ad?
A: Just make these changes; it should work:
<div class="c1" style="position:absolute;z-index:2147483647">
//code that makes a div move downwards
</div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20747079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Oracle - update only one row without using rowid I am trying to update a view, but only the first row of the result.
I cannot use rowid; it does not work on a view.
Is there a way to update only the first row? As I said, rowid-based
solutions do not work.
select query example:
select addr
from addrView
where (tl = '7' and tr = '2')
returns 4 results, but when using update:
update addrView
set home='current'
where (tl = '7' and tr = '2')
I still want to update only the first row.
A: ROWID is a unique identifier of each row in the database.
ROWNUM is a unique identifier for each row in a result set.
You should be using ROWNUM, but you will need an ORDER BY to enforce a sort order; otherwise you have no guarantee about which row is the "first" one returned by your query, and you might update a different row.
update addrView
set home='current'
where (tl, tr) = (
select tl, tr
from (select tl, tr
from addrView
where (tl = '7' and tr = '2')
order by col_1
, col_2
, col_3 etc.
) result_set
where rownum = 1);
But, if you don't care about what data is in the first row returned by your query, then you can use only rownum = 1.
update addrView
set home = 'current'
where (tl = '7' and tr = '2')
and rownum = 1;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37318136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Mac Xcode Zlib Linker Errors I could use some help. I am just getting into Mac development a bit. This is a port from Windows and I am so close to finishing. I am trying to link to ZLib as I have an application that depends on it. I am creating multiple libraries with a fairly simple hierarchy. Anyways, the important thing is, I am getting linker errors when trying to link to ZLib. I have added libz.dylib as a framework and set it to the correct target but my calls to ZLib are getting "Symbol(s) not found" errors. I have tried using the Z_PREFIX flag but no luck. I have set the path of zlib to my library search paths even though I don't think I need it (/usr/lib..but it has an absolute path I think). It shows up in my target as a link binary and its listed at the top of the list which makes me think that link order is not the issue. The only header I link to is zlib.h and I even included that to the same target. However, no matter what I try I still get the same error. I was curious if anyone has any ideas.
I really appreciate any suggestions.
Thanks much.
Erik
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6604544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to add a "Buy now" button BEFORE to "Add to cart" button (WordPress/Woocommerce) I wanna add a "Buy Now" button BEFORE to "Add to cart" button. I already tried with an others solutions here, but they all are for buttons after to "Add Cart" button.
Example:
A: You can use the woocommerce_after_add_to_cart_quantity hook as in the following (it fires right before the Add to cart button):
add_action( 'woocommerce_after_add_to_cart_quantity', 'by_now' );
function by_now() {
echo '<div class="test"> buy now </div>';
}
in your css add :
.test {
display: inline-block;
}
.woocommerce div.product form.cart .button {
vertical-align: middle;
float: none !important ;
}
Output: the "buy now" element is rendered inline next to the Add to cart button (screenshot omitted).
You need to place the code inside your theme's functions.php.
A: You can add "Buy Now" before the Add to cart button by using the woocommerce_before_add_to_cart_button hook:
add_action( 'woocommerce_before_add_to_cart_button', 'custom_content_before_addtocart_button', 100 );
function custom_content_before_addtocart_button() {
// custom content.
echo 'Buy Now';
}
You can add "Buy Now" after the button by using the woocommerce_after_add_to_cart_button hook:
add_action( 'woocommerce_after_add_to_cart_button', 'custom_content_after_addtocart_button', 100 );
function custom_content_after_addtocart_button() {
// custom content.
echo 'Buy Now';
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52205990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Chrome bug? Cannot capitalise label following file input Using the following code, I am unable to capitalise a label that directly follows a file input field in chrome:
CSS
label {
text-transform: capitalize;
}
HTML
<label for="book-file">file</label>
<input type="file" name="a_file" id="a_file"><label for="a_file">required</label>
Error reproduced:
JSFiddle : http://jsfiddle.net/Nc27q/
A way to resolve this is to move the second label onto its own line (see: http://jsfiddle.net/Nc27q/1/ ) within the HTML or even placing a space in between the HTML tags.
The second label is added using Javascript (as an error message) which is why it is not placed on another line.
Does anybody know why chrome does this and how I can get around it in a CSS only way?
A: It does indeed look like a bug. It's like it's seeing the file input in front of the text and treating that as part of the word, so not seeing the "r" in "required" as the first character in need of capitalization.
Adding
label:before {
content: " ";
}
to force the space seems to work: http://jsfiddle.net/Nc27q/4/ Since this is a Chrome-specific issue, you don't have to worry about pseudo-elements not being supported... (Of course, you may want to target it a bit more tightly than I have above.)
A: I'm guessing that this is caused by the way Chrome detects what to capitalize and what not. You don't have any spaces in your code, so it simply says file<input>required. Chrome's logic would probably determine that this is one word (or sentence), causing it to intentionally ignore it.
You might be able to use label:first-letter { text-transform: uppercase; } instead.
A: text-transform property doesn't have capitalize as one of its keyword values; you want uppercase. it works, check it here: http://jsfiddle.net/jalbertbowdenii/Nc27q/2/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8341624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: why can't I set an ascii reference in an img title attribute using js? I'm having trouble using an ascii character reference (&reg;) in an image title. It works fine when you set it via the html body, but when trying to do the same thing via javascript it does not work.
check the sscce:
<style type="text/css">body {background-color:black;}</style>
<script src="https://ajax.googleapis.com/ajax/libs/prototype/1.7.0.0/prototype.js"></script>
<p>this image has the correct ascii character title:<br /><img src="http://www.prototypejs.org/images/logo-home.gif" id="img1" title="&reg;" /></p>
<p>but why can't I set the same value via javascript?<br /><img src="http://www.prototypejs.org/images/logo-home.gif" id="img2" /></p>
<script type="text/javascript">
$("img2").title = "®";
</script>
Thanks.
A: $("img2").title = "®" should work, if not use
$("img2").title ='\u00AE'.
The html entities are not translated for pure text.
A: There are a wealth of answers at http://paulschreiber.com/blog/2008/09/20/javascript-how-to-unescape-html-entities/
But the basic gist of it is that you're setting the text of the node, not the HTML, so HTML entities are not decoded.
You can, however, set the innerHTML of a hidden element and read the text value back from it. Or, assuming your source encoding allows it, just enter the ® symbol directly.
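As an illustration, here is a minimal sketch of that innerHTML trick (the decodeEntities helper is just an example name, not part of Prototype or the DOM API):
// Let the browser parse the entity inside a detached element,
// then read the decoded text back out.
function decodeEntities(html) {
    var el = document.createElement('div'); // never attached to the page
    el.innerHTML = html;                    // "&reg;" is parsed here
    return el.textContent;                  // decoded text, e.g. "®"
}
$("img2").title = decodeEntities("&reg;");
Note that this is only safe for trusted strings, since arbitrary markup would also be parsed.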
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5890510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Get the users tag relationship through posts I'm building a Laravel Blog application for learning purposes.
In this application I have posts that are associated with one user, and tags that are associated with posts (and later videos).
The database structure that I have:
users
- id
...
posts
- id
- user_id
...
tags
- id
...
taggables
- id
- tag_id
- taggable_id // post id or video id
- taggable_type // App\Models\Post or App\Models\Video
The models that I have:
User.php
public function posts()
{
return $this->hasMany(Post::class);
}
Post.php
public function user()
{
return $this->belongsTo(User::class);
}
public function tags()
{
return $this->morphToMany(Tag::class, 'taggable')->withTimestamps();
}
Tag.php
public function posts()
{
return $this->morphedByMany(Post::class, 'taggable')->withTimestamps();
}
The problem is that I want to get all the tags that are associated with the given user's posts.
I thought this could be done with relationships, but somehow I can't do it. Is there a way to do this with relationships, or can it not be done?
The solution that I found:
User.php
public function tags()
{
$posts = $this->posts()
->pluck('id');
return Tag::whereHas('posts', function(Builder $query) use ($posts) {
$query->whereIn('taggable_id', $posts);
})->get();
}
This solution returns a collection of data, not a relationship. Is there a way to return a relationship in this case? Or is it not possible to make relationships in such a situation?
A: I haven't tested it, but you could do something like:
public function postTags()
{
return $this->hasManyThrough(Tag::class, Post::class, 'taggable_id')->where('taggable_type', array_search(static::class, Relation::morphMap()) ?: static::class);
}
This is a normal hasManyThrough, and you have to build the polymorphic logic yourself.
To explain your query:
public function tags()
{
$posts = $this->posts()
->pluck('id');
// The use of Tag::whereHas returns a Illuminate/Database/Eloquent/Query not an relation like HasMany, HasOne or like in your case a HasManyThrough
return Tag::whereHas('posts', function(Builder $query) use ($posts) {
$query->whereIn('taggable_id', $posts);
})
->get(); // The get at the end sends the query to your database, so you receive a collection
}
A: I understand your problem; I have the same type of database relation.
In my category_model table I am storing:
*category_id
*model_type
*model_id
Inside the Category model I have following code :
public function article()
{
return $this->morphedByMany(Article::class, 'model', 'category_model', 'category_id');
}
and inside controller :
// Show posts by Category
public function ShowArticleByCategory($category)
{
$cat = Category::where('name', $category)->firstOrFail();
$posts = $cat->article()->get()->sortByDesc('id');
return view('front.pages.show-by-category', compact('posts'));
}
This will give me all the posts that belong to the selected/supplied category.
Hope this helps; it should give you the idea of getting data from polymorphic relations.
All the parameters that can be passed to the relations are defined inside
vendor\laravel\framework\src\Illuminate\Database\Eloquent\Concerns\HasRelationships.php
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66551697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Determine if a websocket send() is finished Is there a way to get notified when a certain send() has finished? As I noticed, the send() function is not blocking and the code continues. Is there a simple way to either make it blocking or somehow get notified when the send is finished?
A: You could rely on socket.bufferedAmount (I never tried it):
http://www.whatwg.org/specs/web-apps/current-work/multipage/network.html#dom-websocket-bufferedamount
var socket = new WebSocket('ws://game.example.com:12010/updates');
socket.onopen = function () {
setInterval(function() {
if (socket.bufferedAmount == 0){
// I'm not busy anymore - set a flag or something like that
}
}, 50);
};
Or implement an acknowledgement answer from the server for every client message (tried it, works fine).
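For reference, a minimal sketch of that acknowledgement pattern (the message shape and the onAcknowledged callback are assumptions, not part of the WebSocket API; the server is expected to echo back the id it received):
var pending = {};   // callbacks waiting for a server ack, keyed by message id
var nextId = 1;

function sendWithAck(socket, payload, onAcknowledged) {
    var id = nextId++;
    pending[id] = onAcknowledged;
    socket.send(JSON.stringify({ id: id, payload: payload }));
}

socket.onmessage = function (event) {
    var msg = JSON.parse(event.data);
    if (msg.ack && pending[msg.ack]) {
        pending[msg.ack]();        // this send has been confirmed by the server
        delete pending[msg.ack];
    }
};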
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18189144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: IDEA's "Missing require() statement" inspection highlights "JSON" When writing Node.js apps in Intellij IDEA, the "Missing require() statement" inspection complains about any use of JSON.
For example, the following line is flagged: var some_object = JSON.parse(some_json_text);
It then recommends adding var JSON = require("path") as the fix, which is obviously incorrect. The code still runs just fine, obviously, but the faulty inspection is frustrating.
There's a similar problem when using Mocha or Chai, which is solved by adding some DefinitelyTyped libraries. Is there perhaps a library that I'm missing for this?
Thanks,
James
Edit: Ctrl+clicking on it provides these (screenshots omitted):
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45316279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: jquery fire event when element is shown I need to fire an event on the following conditions for an image:
1) The image has been loaded (the following function takes care of that)
2) The image is shown (if the image has not been shown yet, height() and width() return zero, meaning I cannot resize the image).
The main problem here is that my image has been loaded into the child of a hidden div. Since it is a child of the hidden div, change() will not work.
Here is the function I am currently using:
//Takes the number of pixesl to resize the x-axis of the image to
$.fn.resizeImage = function(newX) {
return this.each(function() {
$this = $(this);
//The load event is defined via plugin and takes care of the problem of the load event
//not firing for elements that have been cached by the browser.
$this.bind('load', function(){
var y = $this.height();
var x = $this.width();
//return if the image has no dimensions for some reason
if(x == 0 || y == 0){return;}
//alert('y: ' + y + ' x: ' + x);
var ratio = y / x;
var newY = Math.floor(newX * ratio);
$this.height(newY);
$this.width(newX);
});
});
};
A: <img> elements have height and width properties that are automatically populated by the browser when it determines the dimensions of an image. .height() and .width() map to the more generic offsetHeight and offsetWidth properties that return 0 when an element is hidden. Accessing the height and width properties directly should allow your code to work correctly:
//Takes the number of pixels to resize the x-axis of the image to
$.fn.resizeImage = function(newX) {
return this.each(function() {
var $this = $(this);
//The load event is defined via plugin and takes care of the problem of the load event
//not firing for elements that have been cached by the browser.
$this.bind('load', function() {
var y = this.height;
var x = this.width;
//return if the image has no dimensions for some reason
if (x == 0 || y == 0) {
return;
}
//alert('y: ' + y + ' x: ' + x);
var ratio = y / x;
var newY = Math.floor(newX * ratio);
$this.height(newY);
$this.width(newX);
});
});
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4293773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Linphone-web-plugin build stuck on OSX I'm trying to build linphone-web-plugin on OS X 10.9.3 with Xcode 5.1.1 installed. I have followed the instructions in their README file. I have also tried to build it using Xcode 4.6.3 (with the xcodebuild command, not directly from Xcode). Linphone-web-plugin is using firebreath-1.7.
The problem is that the build always gets stuck on this line:
-- Check size of unsigned long long - done
It doesn't throw any errors, it just stays there forever.
Did anyone had this problem while building linphone-web-plugin?
Linphone-web-plugin can be found here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24575486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Net Core: Swashbuckle Set operationId Automatically on to Controller Action Method We are trying to override the Swashbuckle/Swagger Codegen naming conventions when it creates Angular API service proxies, for 500+ existing controllers and their corresponding methods.
We are currently linking .NET Core 3 APIs with Angular/TypeScript.
https://stackoverflow.com/a/58567622/13889515
The following answer works:
[HttpGet("{id:int}", Name = nameof(GetProductById))]
public IActionResult GetProductById(int id) // operationId = "GetProductById"'
[HttpGet("{id:int}", Name = "GetProductById")]
public IActionResult GetProductById(int id) // operationId = "GetProductById"'
Is there a way to loop through all controllers and methods in Startup? The Name should equal the name of the action method within the controller.
This may be a possible solution; however, I need the action value:
return services.AddSwaggerGen(c =>
{
c.CustomOperationIds(e => $"{e.ActionDescriptor.RouteValues["controller"]}_{e.HttpMethod}");
https://stackoverflow.com/a/54294810/13889515
A: Utilize this piece of code:
return services.AddSwaggerGen(c =>
{
    c.CustomOperationIds(e => $"{e.ActionDescriptor.RouteValues["action"]}");
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63144598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Decision Tree Classifier I keep getting NaN error I have a small decision tree script, and I believe I convert everything to int; I have also checked my train/test data with isnan, max, etc.
I genuinely have no idea why it is giving this error.
I'm trying to pass the MNIST dataset through a Decision Tree and then attack it using a class.
Here is the code:
from AttackUtils import Attack
from AttackUtils import calc_output_weighted_weights, targeted_gradient, non_targeted_gradient, non_targeted_sign_gradient
(X_train_woae, y_train_woae), (X_test_woae, y_test_woae) = mnist.load_data()
X_train_woae = X_train_woae.reshape((len(X_train_woae), np.prod(X_train_woae.shape[1:])))
X_test_woae = X_test_woae.reshape((len(X_test_woae), np.prod(X_test_woae.shape[1:])))
from sklearn import tree
#model_woae = LogisticRegression(multi_class='multinomial', solver='lbfgs', fit_intercept=False)
model_woae = tree.DecisionTreeClassifier(class_weight='balanced')
model_woae.fit(X_train_woae, y_train_woae)
#model_woae.coef_ = model_woae.feature_importances_
coef_int = np.round(model_woae.tree_.compute_feature_importances(normalize=False) * X_train_woae.size).astype(int)
attack_woae = Attack(model_woae)
attack_woae.prepare(X_train_woae, y_train_woae, X_test_woae, y_test_woae)
weights_woae = attack_woae.weights
num_classes_woae = len(np.unique(y_train_woae))
attack_woae.create_one_hot_targets(y_test_woae)
attack_woae.attack_to_max_epsilon(non_targeted_gradient, 50)
non_targeted_scores_woae = attack_woae.scores
So the attack class does perturbation and non-targeted gradient attack. And here is the attack class:
import numpy as np
from sklearn.metrics import accuracy_score
def calc_output_weighted_weights(output, w):
for c in range(len(output)):
if c == 0:
weighted_weights = output[c] * w[c]
else:
weighted_weights += output[c] * w[c]
return weighted_weights
def targeted_gradient(foolingtarget, output, w):
ww = calc_output_weighted_weights(output, w)
for k in range(len(output)):
if k == 0:
gradient = foolingtarget[k] * (w[k]-ww)
else:
gradient += foolingtarget[k] * (w[k]-ww)
return gradient
def non_targeted_gradient(target, output, w):
ww = calc_output_weighted_weights(output, w)
for k in range(len(target)):
if k == 0:
gradient = (1-target[k]) * (w[k]-ww)
else:
gradient += (1-target[k]) * (w[k]-ww)
return gradient
def non_targeted_sign_gradient(target, output, w):
gradient = non_targeted_gradient(target, output, w)
return np.sign(gradient)
class Attack:
def __init__(self, model):
self.fooling_targets = None
self.model = model
def prepare(self, X_train, y_train, X_test, y_test):
self.images = X_test
self.true_targets = y_test
self.num_samples = X_test.shape[0]
self.train(X_train, y_train)
print("Model training finished.")
self.test(X_test, y_test)
print("Model testing finished. Initial accuracy score: " + str(self.initial_score))
def set_fooling_targets(self, fooling_targets):
self.fooling_targets = fooling_targets
def train(self, X_train, y_train):
self.model.fit(X_train, y_train)
self.weights = self.model.coef_
self.num_classes = self.weights.shape[0]
def test(self, X_test, y_test):
self.preds = self.model.predict(X_test)
self.preds_proba = self.model.predict_proba(X_test)
self.initial_score = accuracy_score(y_test, self.preds)
def create_one_hot_targets(self, targets):
self.one_hot_targets = np.zeros(self.preds_proba.shape)
for n in range(targets.shape[0]):
self.one_hot_targets[n, targets[n]] = 1
def attack(self, attackmethod, epsilon):
perturbed_images, highest_epsilon = self.perturb_images(epsilon, attackmethod)
perturbed_preds = self.model.predict(perturbed_images)
score = accuracy_score(self.true_targets, perturbed_preds)
return perturbed_images, perturbed_preds, score, highest_epsilon
def perturb_images(self, epsilon, gradient_method):
perturbed = np.zeros(self.images.shape)
max_perturbations = []
for n in range(self.images.shape[0]):
perturbation = self.get_perturbation(epsilon, gradient_method, self.one_hot_targets[n], self.preds_proba[n])
perturbed[n] = self.images[n] + perturbation
max_perturbations.append(np.max(perturbation))
highest_epsilon = np.max(np.array(max_perturbations))
return perturbed, highest_epsilon
def get_perturbation(self, epsilon, gradient_method, target, pred_proba):
gradient = gradient_method(target, pred_proba, self.weights)
inf_norm = np.max(gradient)
perturbation = epsilon / inf_norm * gradient
return perturbation
def attack_to_max_epsilon(self, attackmethod, max_epsilon):
self.max_epsilon = max_epsilon
self.scores = []
self.epsilons = []
self.perturbed_images_per_epsilon = []
self.perturbed_outputs_per_epsilon = []
for epsilon in range(0, self.max_epsilon):
perturbed_images, perturbed_preds, score, highest_epsilon = self.attack(attackmethod, epsilon)
self.epsilons.append(highest_epsilon)
self.scores.append(score)
self.perturbed_images_per_epsilon.append(perturbed_images)
self.perturbed_outputs_per_epsilon.append(perturbed_preds)
And this is the traceback it gives:
ValueError
Traceback (most recent call last) in
4 num_classes_woae = len(np.unique(y_train_woae))
5 attack_woae.create_one_hot_targets(y_test_woae)
----> 6 attack_woae.attack_to_max_epsilon(non_targeted_gradient, 50)
7 non_targeted_scores_woae = attack_woae.scores
~\MULTIATTACK\AttackUtils.py in
attack_to_max_epsilon(self, attackmethod, max_epsilon)
106 self.perturbed_outputs_per_epsilon = []
107 for epsilon in range(0, self.max_epsilon):
--> 108 perturbed_images, perturbed_preds, score, highest_epsilon = self.attack(attackmethod, epsilon)
109 self.epsilons.append(highest_epsilon)
110 self.scores.append(score)
~\MULTIATTACK\AttackUtils.py in attack(self,
attackmethod, epsilon)
79 def attack(self, attackmethod, epsilon):
80 perturbed_images, highest_epsilon = self.perturb_images(epsilon, attackmethod)
---> 81 perturbed_preds = self.model.predict(perturbed_images)
82 score = accuracy_score(self.true_targets, perturbed_preds)
83 return perturbed_images, perturbed_preds, score, highest_epsilon
...\appdata\local\programs\python\python35\lib\site-packages\sklearn\tree\tree.py
in predict(self, X, check_input)
413 """
414 check_is_fitted(self, 'tree_')
--> 415 X = self._validate_X_predict(X, check_input)
416 proba = self.tree_.predict(X)
417 n_samples = X.shape[0]
...\appdata\local\programs\python\python35\lib\site-packages\sklearn\tree\tree.py
in _validate_X_predict(self, X, check_input)
374 """Validate X whenever one tries to predict, apply, predict_proba"""
375 if check_input:
--> 376 X = check_array(X, dtype=DTYPE, accept_sparse="csr")
377 if issparse(X) and (X.indices.dtype != np.intc or
378 X.indptr.dtype != np.intc):
...\appdata\local\programs\python\python35\lib\site-packages\sklearn\utils\validation.py
in check_array(array, accept_sparse, accept_large_sparse, dtype,
order, copy, force_all_finite, ensure_2d, allow_nd,
ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
566 if force_all_finite:
567 _assert_all_finite(array,
--> 568 allow_nan=force_all_finite == 'allow-nan')
569
570 shape_repr = _shape_repr(array.shape)
...\appdata\local\programs\python\python35\lib\site-packages\sklearn\utils\validation.py
in _assert_all_finite(X, allow_nan)
54 not allow_nan and not np.isfinite(X).all()):
55 type_err = 'infinity' if allow_nan else 'NaN, infinity'
---> 56 raise ValueError(msg_err.format(type_err, X.dtype))
57
58
ValueError: Input contains NaN, infinity or a value too large for
dtype('float32').
EDIT:
I've added coefficient numbers as 0 and it now gives the same error just below the line, at attack.attack_to_max_epsilon(non_targeted_gradient, epsilon_number)
A: Try to apply label encoding to your labels before you train:
from sklearn.preprocessing import LabelEncoder
mylabels= ["label1", "label2", "label2"..."n.label"]
le = LabelEncoder()
labels = le.fit_transform(mylabels)
and then try to split your data:
from sklearn.model_selection import train_test_split
(x_train, x_test, y_train, y_test) = train_test_split(data,
labels,
test_size=0.25)
Now your labels will be encoded as numbers, which is what a machine learning algorithm needs for training.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56318601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: (AChartEngine) How to set default zoom rate? How to set custom font style? I use AChartEngine in my Android project.
I have 2 questions about implementing a bar chart.
First question
Here is my bar chart when I just run it on an emulator:
first picture
It seems to work perfectly, but its view looks better when I press the decrease-zoom button in the zoom pane at the bottom right:
second picture
I want the chart to be shown like this, but I don't want to click the zoom button every time I display the chart. Can I set a default zoom rate, so that when I run it on the emulator the chart is shown like the second picture right away?
Here is my code
XYSeries series1 = new XYSeries("Product Name1");
series1.add(1,15);
XYSeries series2 = new XYSeries("Product Name2");
series2.add(2,35);
XYMultipleSeriesDataset dataset = new XYMultipleSeriesDataset();
dataset.addSeries(series1);
dataset.addSeries(series2);
XYSeriesRenderer renderer1 = new XYSeriesRenderer();
renderer1.setColor(Color.GREEN);
renderer1.setDisplayChartValues(true);
renderer1.setChartValuesTextSize(20);
XYSeriesRenderer renderer2 = new XYSeriesRenderer();
renderer2.setColor(Color.BLUE);
renderer2.setDisplayChartValues(true);
renderer2.setChartValuesTextSize(20);
XYMultipleSeriesRenderer mRenderer = new XYMultipleSeriesRenderer();
mRenderer.addSeriesRenderer(renderer1);
mRenderer.addSeriesRenderer(renderer2);
mRenderer.setAxisTitleTextSize(16);
mRenderer.setChartTitle(chartTitle);
mRenderer.setChartTitleTextSize(20);
mRenderer.setLabelsTextSize(15);
mRenderer.setLegendTextSize(15);
mRenderer.setAxesColor(Color.WHITE);
mRenderer.setApplyBackgroundColor(true);
mRenderer.setBackgroundColor(Color.BLACK);
mRenderer.setBarSpacing(-0.7);
mRenderer.setZoomButtonsVisible(true);
mRenderer.setXTitle("Product");
mRenderer.setXLabels(0);
mRenderer.setXAxisMin(0);
mRenderer.setXAxisMax(3);
mRenderer.setYTitle("Calorie (kCal)");
Second question
How do I apply my custom font style (the font file is kept in the assets folder) to the chart? I found only this method, but it doesn't work:
mRenderer.setTextTypeface(typefaceName, style);
Thanks in advance :)
A: For the first question, AChartEngine tries to fit your data the best way possible. However, you can tweak this behavior:
mRenderer.setXAxisMin(min);
mRenderer.setXAxisMax(max);
mRenderer.setYAxisMin(0);
mRenderer.setYAxisMax(40);
For the second question, I think you should first investigate how to add a custom font into a regular Android application and then it may work in AChartEngine too.
A: The answer for your second question:
first download the .ttf file of your required font,
then create a folder named "assets" and, under it, another folder named "fonts";
put the .ttf file inside the fonts folder.
Then in your activity write this code:
Typeface type = Typeface.createFromAsset(getAssets(), "fonts/yourttffile.ttf");
and set it where you want, say on a TextView:
textview.setTypeface(type);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11810209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to increase number of clicks per second with pyautogui? I am developing a bot for timed mouse clicking game. I am using pyautogui. The aim is to click most times on a button in a minute. My code is:
import pyautogui, time
time.sleep(5)
while True:
pyautogui.click()
The infinite loop is not a problem, since the fail-safe will prevent any negative consequences (pyautogui.FAILSAFE is set to True by default). The downside is that pyautogui only reaches up to 10 clicks per second. Does someone know if I can increase the number of clicks per second, and if so, how? Advice will be greatly appreciated!
A: You can set pyautogui.PAUSE to control the delay between actions. By default it is set to 0.1 seconds, which is why you are getting at most 10 clicks per second.
pyautogui.PAUSE = 0.01
for example will reduce the delay to allow 100 clicks per second if your hardware supports it.
From the doc, you can read the following:
You can add delays after all of PyAutoGUI’s functions by setting the pyautogui.PAUSE variable to a float or integer value of the number of seconds to pause. By default, the pause is set to 0.1 seconds.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35805649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What's the cheapest way to filter a Core Data populated TableView with Sections? I've got a tableview populated by Core Data with multiple dynamic sections and I want to use a UISearchController to search for specific records. Every tutorial I've found says to use an array of fetched results, combined with a filtered array to display the results, but this is a pain for maintaining my tableview sections.
The code I have totally works, in that I just perform a fetch every time the user types in the search field, but I realize this is more expensive as far as resources go.
So how would I implement a filter on arrays while MAINTAINING my tableview sections? Arrays of arrays?
My current code:
//MARK: Search Deletegate Methods
func updateSearchResultsForSearchController(searchController: UISearchController) {
let searchText = self.searchController.searchBar.text
let selectedScopeButtonIndex = self.searchController.searchBar.selectedScopeButtonIndex
self.filterContentForSearch(searchText, scope: selectedScopeButtonIndex)
self.tableView.reloadData()
}
func filterContentForSearch(searchText: String, scope: Int)
{
println("Fired")
//Way too expensive for resources
var fetchRequest = NSFetchRequest()
fetchRequest = NSFetchRequest(entityName: "FoodItem")
let sortDescriptor = NSSortDescriptor(key: "name", ascending: true)
let mealDescriptor = NSSortDescriptor(key: "meal", ascending: true)
fetchRequest.sortDescriptors = [mealDescriptor,sortDescriptor]
//Predicate
if searchText != ""
{
let namePredicate = NSPredicate(format: "name CONTAINS[cd] %@", searchText)
fetchRequest.predicate = namePredicate
}
fetchedResultsController = NSFetchedResultsController(fetchRequest: fetchRequest, managedObjectContext: managedObjectContext, sectionNameKeyPath: "meal", cacheName: nil)
fetchedResultsController.delegate = self
fetchedResultsController.performFetch(nil)
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30764537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Add damping to JS Pendulum There is a good JS canvas example of a pendulum. It works fine.
http://rosettacode.org/wiki/Animate_a_pendulum#JavaScript_.2B_.3Ccanvas.3E
But it works as a "clock" pendulum - it never stops.
How can I make it stop over time, the way a simple pendulum usually does?
Thanks a lot!
A: This realizes the differential equation
angle''(t)+k*sin(angle(t))=0
Since they use the forward Euler method for integration, the system will actually increase its energy, measured as
E = 0.5*angle'(t)^2+k*(1-cos(angle)).
To add damping to the equation, you can simulate some air friction by setting
acceleration = -k*sin(angle)-c*velocity
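A minimal sketch of that damped forward Euler step (assuming the example keeps its state in angle and velocity variables; k and the damping coefficient c are values you tune, not taken from the Rosetta Code snippet):
var k = 1.0;                  // restoring constant (gravity / length in the example)
var c = 0.2;                  // damping coefficient (assumed; tune to taste)
var angle = Math.PI / 4;      // initial displacement
var velocity = 0;

function step(dt) {
    var acceleration = -k * Math.sin(angle) - c * velocity; // damped equation of motion
    velocity += acceleration * dt;
    angle += velocity * dt;
}
With c = 0 you get the original undamped motion back; small positive values (for example 0.05 to 0.5) make the swing die out gradually.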
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23857158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: In grafana dashboard how to set alert mail configuration?
*
*Grafana version 4.0
*Datasource influxDB
Please consider me a beginner.
How do I set alerts in a Grafana dashboard and have the alerts sent to email?
/etc/grafana/grafana.ini
I wrote the SMTP config like this:
[smtp]
enabled = True
host = localhost:25
user =
If the password contains # or ; you have to wrap it with trippel
quotes. Ex """#password;"""
[emails]
welcome_email_on_sign_up = True
When I set alerts in the Grafana dashboard it shows the error:
template variables are not supported.
A: Configure the /usr/share/grafana/conf/defaults.ini file as follows:
[smtp]
enabled = true
host = smtp.gmail.com:587
user = [email protected]
password = """Your_Password"""
cert_file =
key_file =
skip_verify = true
from_address = [email protected]
from_name = Your_Name
ehlo_identity =
In this example, I used my own Gmail account with its SMTP server,
smtp.gmail.com, on port 587 (TLS).
You should use your own SMTP server address and port.
[NOTE]
Don't forget to put your password in the password field.
A: Grafana mail alert configuration for Windows (\grafana-6.4.4.windows-amd64\grafana-6.4.4\conf\defaults.ini):
[smtp]
enabled = true
host = smtp.gmail.com:587
;user =
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =
;cert_file =
;key_file =
skip_verify = true
from_address = your_mail_id
from_name = Grafana
;ehlo_identity = dashboard.example.com
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45582955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: switch case on class type java I want to create a function that returns a value of the type passed as an argument.
For that I wanted to use a switch on the type of the argument, but I couldn't get it to work.
Which of these is the correct way?
public <T> T threadLocalRandom(Class<T> type){
switch (type){
case Integer i:
return (T) ThreadLocalRandom.current().nextInt();
case Double:
return ThreadLocalRandom.current().nextDouble();
case Boolean:
return ThreadLocalRandom.current().nextBoolean();
}
}
public <T> T threadLocalRandom(Class<T> type){
switch (type.getClass().getSimpleName()){
case "Integer":
return (T) ThreadLocalRandom.current().nextInt();
case "Double":
return ThreadLocalRandom.current().nextDouble();
case "Boolean":
return ThreadLocalRandom.current().nextBoolean();
}
}
In either case I get an error, because I return an Integer, a Double, or a Boolean while the method is declared to return type T.
Thanks
How can I make the type T match the type returned by the switch?
A: There is a dynamic cast() operation you can apply to the result:
return type.cast(ThreadLocalRandom.current().nextInt());
I'd be curious to know how you use your method. It seems likely there would be a cleaner way to embody and access this functionality.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73549393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to pass props into createTheme() in Material-ui theme object I have used the docs to create a theme object in /theme.js and I'm importing it in a wrapper component in _app.tsx. I can't figure out how to pass props (i.e. a 'darkMode' prop) that could be accessed in the theme file. Here's how it's set up:
_app.tsx:
function MyApp({ Component, pageProps }: AppProps) {
const store = useStore(pageProps.initialReduxState)
useEffect(() => {
if ((global.window as any).Cypress) {
global.window.store = store
console.log("Cypress store is running and linked to global document")
}
},[])
return(
<Provider store={store}>
<Head>
<meta content='width=device-width,initial-scale=1,maximum-scale=1,user-scalable=no' name='viewport' />
</Head>
<ContextWrapper>
<LayoutWrapper>
<Component {...pageProps} />
</LayoutWrapper>
</ContextWrapper>
</Provider>
)
}
export default MyApp
my ContextWrapper component:
import { ThemeProvider } from '@material-ui/core'
import theme from '../styles/theme'
import { useDispatch, useSelector } from 'react-redux'
import { useEffect } from 'react'
import userActions from '../store/actions/userActions'
import { useRouter } from 'next/router'
const ContextWrapper:React.FC = ({children: children}) => {
const { loggedIn } = useSelector((state: any) => state.userState)
const dispatch = useDispatch()
const router = useRouter()
useEffect(() => {
const user = window.sessionStorage.getItem('user_object')
if (user) {
dispatch({ type: userActions.LOG_IN, payload: JSON.parse(user) })
} else if (router.pathname != '/signup-test') {
router.push('/signup-test')
}
}, [loggedIn])
return(
<ThemeProvider theme={theme}>
{children}
</ThemeProvider>
)
}
export default ContextWrapper
my theme.js file:
import { createTheme, responsiveFontSizes } from '@material-ui/core'
// Info on themeing here -> https://next.material-ui.com/customization/theming/
const theme = createTheme({
palette: {
mode: 'light',
primary: {
main: '#777777'
},
secondary: {
main: '#333333'
},
medGray: {
main: "#999999"
},
navBar: {
main: '#ffffff'
},
},
components: {
MuiButton: {
defaultProps: {
variant: "contained",
},
styleOverrides: {
root: {
borderRadius: '2rem'
}
}
},
MuiOutlinedInput: {
defaultProps: {
inputProps: {
}
},
styleOverrides: {
root: {
marginBottom: '1rem',
borderRadius: '2rem',
border: 'none',
":before": {
border: 'none'
}
},
}
}
}
})
export default responsiveFontSizes(theme)
I'd like to pass a prop or value to my theme where I can control certain values.
For instance: palette: { mode: darkMode ? 'dark' : 'light' }
Any guidance or assistance would be greatly appreciated!
A: This is how I do it. I had to go a level deeper than _app.tsx because I'm using Next.js ISR and needed data that is accessible at build time.
My useSettings hook is simply reading from localstorage, but you could use redux or whatever to get your settings.
import React, { ReactNode } from 'react';
import { ThemeProvider } from '@material-ui/core/styles';
import CssBaseline from '@material-ui/core/CssBaseline';
import useMediaQuery from '@material-ui/core/useMediaQuery';
import buildTheme from '../../theme';
import useSettings from '../../hooks/useSettings';
interface DisplayProps {
children: ReactNode;
}
export const Display: React.FC<DisplayProps> = ({ children }) => {
let prefersDarkMode = useMediaQuery('(prefers-color-scheme: dark)');
const { settings } = useSettings();
const theme = React.useMemo(() => {
if (settings.theme && settings.theme!=='system') {
prefersDarkMode = settings.theme === 'dark';
}
return buildTheme(prefersDarkMode, {
direction: settings.direction,
responsiveFontSizes: settings.responsiveFontSizes,
theme: settings.theme,
});
}, [prefersDarkMode, settings]);
return (
<ThemeProvider theme={theme}>
<CssBaseline />
{children}
</ThemeProvider>
);
};
export default Display;
theme.ts
import { createTheme, responsiveFontSizes } from '@material-ui/core';
export const commonThemeSettings = {
breakpoints: {
keys: ['xs', 'sm', 'md', 'lg', 'xl'],
values: { xs: 0, sm: 600, md: 960, lg: 1280, xl: 1920 },
},
direction: 'ltr',
mixins: {
toolbar: {
minHeight: 56,
'@media (min-width:0px) and (orientation: landscape)': {
minHeight: 48,
},
'@media (min-width:600px)': { minHeight: 64 },
},
},
typography: {
htmlFontSize: 16,
fontFamily: '"SharpBook19", "Helvetica", "Arial", sans-serif',
fontSize: 14,
fontWeightLight: 400,
fontWeightRegular: 400,
fontWeightMedium: 400,
fontWeightBold: 400,
h1: {
fontSize: '3rem',
},
h2: {
fontSize: '2.5rem',
},
h3: {
fontSize: '2.25rem',
},
h4: {
fontSize: '2rem',
},
h5: {
fontSize: '1.5rem',
},
shape: { borderRadius: 0 },
},
palette: {
primary: {
light: '#7986cb',
main: '#3f51b5',
dark: '#303f9f',
contrastText: '#fff',
},
secondary: {
light: '#ff4081',
main: '#f50057',
dark: '#c51162',
contrastText: '#fff',
},
error: {
light: '#e57373',
main: '#f44336',
dark: '#d32f2f',
contrastText: '#fff',
},
warning: {
light: '#ffb74d',
main: '#ff9800',
dark: '#f57c00',
contrastText: 'rgba(0, 0, 0, 0.87)',
},
info: {
light: '#64b5f6',
main: '#2196f3',
dark: '#1976d2',
contrastText: '#fff',
},
success: {
light: '#81c784',
main: '#4caf50',
dark: '#388e3c',
contrastText: 'rgba(0, 0, 0, 0.87)',
},
grey: {
50: '#fafafa',
100: '#f5f5f5',
200: '#eeeeee',
300: '#e0e0e0',
400: '#bdbdbd',
500: '#9e9e9e',
600: '#757575',
700: '#616161',
800: '#424242',
900: '#212121',
A100: '#d5d5d5',
A200: '#aaaaaa',
A400: '#303030',
A700: '#616161',
},
contrastThreshold: 3,
tonalOffset: 0.2,
action: {
disabledOpacity: 0.38,
focusOpacity: 0.12,
},
},
};
export const lightPalette = {
type: 'light',
text: {
primary: 'rgba(0, 0, 0, 0.87)',
secondary: 'rgba(0, 0, 0, 0.54)',
disabled: 'rgba(0, 0, 0, 0.38)',
hint: 'rgba(0, 0, 0, 0.38)',
},
divider: 'rgba(0, 0, 0, 0.12)',
background: { paper: '#fff', default: '#fafafa' },
action: {
active: 'rgba(0, 0, 0, 0.54)',
hover: 'rgba(0, 0, 0, 0.04)',
hoverOpacity: 0.04,
selected: 'rgba(0, 0, 0, 0.08)',
selectedOpacity: 0.08,
disabled: 'rgba(0, 0, 0, 0.26)',
disabledBackground: 'rgba(0, 0, 0, 0.12)',
focus: 'rgba(0, 0, 0, 0.12)',
activatedOpacity: 0.12,
},
};
export const darkPalette = {
type: 'dark',
text: {
primary: '#fff',
secondary: 'rgba(255, 255, 255, 0.7)',
disabled: 'rgba(255, 255, 255, 0.5)',
hint: 'rgba(255, 255, 255, 0.5)',
icon: 'rgba(255, 255, 255, 0.5)',
},
divider: 'rgba(255, 255, 255, 0.12)',
background: { paper: '#424242', default: '#303030' },
action: {
active: '#fff',
hover: 'rgba(255, 255, 255, 0.08)',
hoverOpacity: 0.08,
selected: 'rgba(255, 255, 255, 0.16)',
selectedOpacity: 0.16,
disabled: 'rgba(255, 255, 255, 0.3)',
disabledBackground: 'rgba(255, 255, 255, 0.12)',
focus: 'rgba(255, 255, 255, 0.12)',
activatedOpacity: 0.24,
},
};
const theme = (preferDark, additionalOptions) => {
let theme = createTheme({
...commonThemeSettings,
palette: preferDark
? { ...darkPalette, ...commonThemeSettings.palette }
: { ...lightPalette, ...commonThemeSettings.palette },
...additionalOptions,
});
if (additionalOptions && additionalOptions.responsiveFontSizes) {
theme = responsiveFontSizes(theme);
}
return theme;
};
export default theme;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69059768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to Import JSON data to typescript Map type? I am trying to import json data that might include present or absent mappings within one of the properties, and figured the correct data type to represent this was Map<string, number>, but I'm getting an error when I try to do this.
My JSON file, data.json, looks like this:
{
"datas": [
{
"name":"test1",
"config":"origin",
"entries": {
"red":1,
"green":2
}
}
,
{
"name":"test2",
"config":"remote",
"entries": {
"red":1,
"blue":3
}
}
,
{
"name":"test3",
"entries": {
"red":1,
"blue":3,
"purple":3
}
}
]
}
My typescript code, Data.ts, which attempts to read it looks like this:
import data from './data.json';
export class Data {
public name:string;
public config:string;
public entries:Map<string, number>;
constructor(
name:string,
entries:Map<string, number>,
config?:string
) {
this.name = name;
this.entries = entries;
this.config = config ?? "origin";
}
}
export class DataManager {
public datas:Data[] = data.datas;
}
But that last line, public datas:Data[] = data.datas;, is throwing an error.
Is there a proper way to import data like this?
The goal, ultimately, is to achieve three things:
*
*Any time entries is present, it should receive some validation that it only contains properties of type number; what those properties are is unknown to the programmer, but will be relevant to the end-user.
*If config is absent in the JSON file, the construction of Data objects should supply a default value (here, it's "origin")
*This assignment of the data should occur with as little boilerplate code as possible. If, down the line Data is updated to have a new property (and Data.json might or might not receive updates to its data to correspond), I don't want to have to change how DataManager.data receives the values
Is this possible, and if so, what's the correct way to write code that will do this?
A: The lightest weight approach to this would not be to create or use classes for your data. You can instead use plain JavaScript objects, and just describe their types strongly enough for your use cases. So instead of a Data class, you can have an interface, and instead of using instances of the Map class with string-valued keys, you can just use a plain object with a string index signature to represent the type of data you already have:
interface Data {
name: string;
config: string;
entries: { [k: string]: number }
}
To make a valid Data, you don't need to use new anywhere; just make an object literal with name, config, and entries properties of the right types. The entries property is { [k: string]: number }, which means that you don't know or care what the keys are (other than the fact that they are strings as opposed to symbols), but the property values at those keys should be numbers.
Armed with that definition, let's convert data.datas to Data[] in a way that meets your three criteria:
const datas: Data[] = data.datas.map(d => ({
config: "origin", // any default values you want
...d, // the object
entries: onlyNumberValues(d.entries ?? {}) // filter out non-numeric entries
}));
function onlyNumberValues(x: { [k: string]: unknown }): { [k: string]: number } {
return Object.fromEntries(
Object.entries(x).filter(
(kv): kv is [string, number] => typeof kv[1] === "number"
)
);
}
*
*The above sets the entries property to be a filtered version of the entries property in the incoming data, if it exists. (If entries does not exist, we use an empty object {}). The filter function is onlyNumberValues(), which breaks the object into its entries via the Object.entries() method, filters these entries with a user-defined type guard function, and packages them back into an object via the Object.fromEntries() method. The details of this function's implementation can be changed, but the idea is that you perform whatever validation/transformation you need here.
*Any required property that may be absent in the JSON file should be given a default value. We do this by creating an object literal that starts with these default properties, after which we spread in the properties from the JSON object. We do this with the config property above. If the JSON object has a config property, it will overwrite the default when spread in. (At the very end we add in the entries property explicitly, to overwrite the value in the object with the filtered version).
*Because we've spread the JSON object in, any properties added to the JSON object will automatically be added. Just remember to specify any defaults for these new properties, if they are required.
Let's make sure this works as desired:
console.log(datas)
/* [{
"config": "origin",
"name": "test1",
"entries": {
"red": 1,
"green": 2
}
}, {
"config": "remote",
"name": "test2",
"entries": {
"red": 1,
"blue": 3
}
}, {
"config": "origin",
"name": "test3",
"entries": {
"red": 1,
"blue": 3,
"purple": 3
}
}] */
Looks good.
Playground link to code
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71682828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to use a date in another field if the other date field is blank in Excel I have a data set of close to 2000 rows in an Excel file. I have two date fields. I need to get a count on date field one based on different date ranges; however, if date field one is blank, I need to use date field two to add to the count. I'm not sure how to do that. I'm sure it can be done with an IF statement of some sort, but I'm currently at a loss.
Here's an example of only counting the one column. How can I say "if A3 is blank, use B3"?
=COUNTIF('TABNAME'!A1:A2000,"<="&TODAY()-365)
A: You can try SUMPRODUCT:
=SUMPRODUCT(--(((A:A<=TODAY()-365)*(A:A<>"")+(A:A="")*(B:B<>"")*(B:B<=TODAY()-365))>=1))
Explanation:
Part (A:A<=TODAY()-365)*(A:A<>"") counts non-empty cells in col A where the date is at least a year ago.
Part (A:A="")*(B:B<>"")*(B:B<=TODAY()-365) counts non-empty cells in col B where the cell in col A is empty and the date in col B is at least a year ago.
By summing both parts we get the total count of dates according to your conditions (I hope).
The -- converts the boolean to an int value so SUMPRODUCT can sum it, but you can drop it together with the >=1 check, since the formula also works without them:
=SUMPRODUCT(((A:A<=TODAY()-365)*(A:A<>"")+(A:A="")*(B:B<>"")*(B:B<=TODAY()-365)))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69351802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Laravel pluck fails to get value from load relationship I need to return just an array of the name values for each of my roles. My roles is a hasMany relationship. This is currently how I'm trying to do it, but the returned result is unchanged, what am I missing?
/**
* Display the specified resource.
*
* @param int $id
* @return \Illuminate\Http\Response
*/
public function show(Request $request, Company $company)
{
$this->authorize('view', $company);
if ($request->boolean('with_roles')) {
$company = $company->load(['roles' => function ($query) {
$query->pluck('name');
}]);
}
return new ApiSuccessResponse($company);
}
gives me:
{
"model": {
"id": 1,
"user_id": 1,
"contact_first_name": "John",
"created_at": "2023-02-15T09:45:48.000000Z",
"updated_at": "2023-02-15T09:45:48.000000Z",
"deleted_at": null,
"is_deleting": false,
"roles": [
{
"id": 6,
"company_id": 1,
"name": "accountant",
"guard_name": "api",
"created_at": "2023-02-15T09:45:59.000000Z",
"updated_at": "2023-02-15T09:45:59.000000Z"
},
{
"id": 2,
"company_id": 1,
"name": "admin",
"guard_name": "api",
"created_at": "2023-02-15T09:45:48.000000Z",
"updated_at": "2023-02-15T09:45:48.000000Z"
},
{
"id": 4,
"company_id": 1,
"name": "affiliate",
"guard_name": "api",
"created_at": "2023-02-15T09:45:56.000000Z",
"updated_at": "2023-02-15T09:45:56.000000Z"
},
{
"id": 3,
"company_id": 1,
"name": "affiliate_manager",
"guard_name": "api",
"created_at": "2023-02-15T09:45:55.000000Z",
"updated_at": "2023-02-15T09:45:55.000000Z"
},
{
"id": 5,
"company_id": 1,
"name": "buyer",
"guard_name": "api",
"created_at": "2023-02-15T09:45:58.000000Z",
"updated_at": "2023-02-15T09:45:58.000000Z"
},
{
"id": 8,
"company_id": 1,
"name": "guest",
"guard_name": "api",
"created_at": "2023-02-15T09:46:02.000000Z",
"updated_at": "2023-02-15T09:46:02.000000Z"
},
{
"id": 7,
"company_id": 1,
"name": "user",
"guard_name": "api",
"created_at": "2023-02-15T09:46:00.000000Z",
"updated_at": "2023-02-15T09:46:00.000000Z"
}
]
}
}
I expect to see:
{
"model": {
"id": 1,
"user_id": 1,
"contact_first_name": "John",
"created_at": "2023-02-15T09:45:48.000000Z",
"updated_at": "2023-02-15T09:45:48.000000Z",
"deleted_at": null,
"is_deleting": false,
"roles": [
"accountant",
"admin",
"affiliate"
...
]
}
}
A: pluck doesn't modify the original relationship; it only returns the plucked values. Try this:
public function show(Request $request, Company $company)
{
$this->authorize('view', $company);
$result = $company->toArray();
if ($request->boolean('with_roles')) {
$result['roles'] = $company->roles()->pluck('name')->toArray();
}
return new ApiSuccessResponse($result);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75458593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to join datasets with values between other values? I have a use case where I need to join 2 data-frames.
ID view
ID BookTime
1 2
1 5
2 8
2 3
3 4
FareRule view
Start End Fare
1 3 10
3 6 20
6 10 25
The output is the result of the join, checking the BookTime from the ID table. The Fare is computed based on the window between Start and End from FareRule.
ID FareDue
1 10
1 20
2 25
2 20
3 20
I am creating a view out of these data-frames and using CROSS JOIN to join them. But as we know, a CROSS JOIN is expensive, so is there a better way to join them?
SELECT
ID,
Fare AS FareDue
FROM
ID
CROSS JOIN
FareRule
WHERE
BookTime >=Start
AND
BookTime< End
A: Given the following datasets:
val id = Seq((1, 2), (1, 5), (2, 8), (2, 3), (3, 4)).toDF("ID", "BookTime")
scala> id.show
+---+--------+
| ID|BookTime|
+---+--------+
| 1| 2|
| 1| 5|
| 2| 8|
| 2| 3|
| 3| 4|
+---+--------+
val fareRule = Seq((1,3,10), (3,6,20), (6,10,25)).toDF("start", "end", "fare")
scala> fareRule.show
+-----+---+----+
|start|end|fare|
+-----+---+----+
| 1| 3| 10|
| 3| 6| 20|
| 6| 10| 25|
+-----+---+----+
You simply join them together using a between expression.
val q = id.join(fareRule).where('BookTime between('start, 'end)).select('id, 'fare)
scala> q.show
+---+----+
| id|fare|
+---+----+
| 1| 10|
| 1| 20|
| 2| 25|
| 2| 10|
| 2| 20|
| 3| 20|
+---+----+
You may want to adjust the condition so the upper boundary is exclusive, as in the original SQL; by default, between is inclusive of both the lower and upper bounds.
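A sketch of the same join with the bounds spelled out explicitly (same symbols and implicits as above), keeping the upper bound exclusive:
val q2 = id.join(fareRule)
  .where('BookTime >= 'start && 'BookTime < 'end)
  .select('id, 'fare)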
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53400521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I install GNUHealth? I am following the installation steps mentioned below but have encountered a python problem.
https://en.wikibooks.org/wiki/GNU_Health/Installation#Installing_GNU_Health_on_GNU/Linux_and_FreeBSD
At the step where the initialisation of the database instance is to be performed, I have encountered the following error after executing the following command.
python3 ./trytond-admin --all --database=health
Error encountered:
Traceback (most recent call last):
File "./trytond-admin", line 21, in <module>
admin.run(options)
File "/home/gnuhealth/gnuhealth/tryton/server/trytond-4.6.18/trytond/admin.py", line 24, in run
with Transaction().start(db_name, 0, _nocache=True):
File "/home/gnuhealth/gnuhealth/tryton/server/trytond-4.6.18/trytond/transaction.py", line 88, in start
database = Database(database_name).connect()
File "/home/gnuhealth/gnuhealth/tryton/server/trytond-4.6.18/trytond/backend/postgresql/database.py", line 97, in __new__
**cls._connection_params(name))
File "/home/gnuhealth/.local/lib/python3.6/site-packages/psycopg2/pool.py", line 161, in __init__
self, minconn, maxconn, *args, **kwargs)
File "/home/gnuhealth/.local/lib/python3.6/site-packages/psycopg2/pool.py", line 58, in __init__
self._connect()
File "/home/gnuhealth/.local/lib/python3.6/site-packages/psycopg2/pool.py", line 62, in _connect
conn = psycopg2.connect(*self._args, **self._kwargs)
File "/home/gnuhealth/.local/lib/python3.6/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: fe_sendauth: no password supplied
Can anyone help me out with this error or tell me what I am missing?
Based on the error, I suspect that there's a difficulty in connecting to the DB as there is no password specified.
A: It seems that you did not configure the URI with the credentials to connect to the database. You can find the description of the configuration file at http://docs.tryton.org/projects/server/en/latest/topics/configuration.html#uri
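As a rough illustration (the user, password and port below are placeholders, not taken from the GNU Health docs), the relevant section of trytond.conf could look like:
[database]
uri = postgresql://tryton:your_password@localhost:5432/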
Once you have a configuration file, you must run the command like this:
python3 ./trytond-admin --all --database=health -c /path/to/trytond.conf
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57470881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I daily reorganize my one-year-data? I am a beginner both at programming and obviously also at using Python language and I am really struggling when trying to perform the following task. I have a dataframe (in the picture, see the link "Data") made of glycemic and insulin values recorded every 5 minutes for 1 year (so, in total, there are 105120 data points for each of the 2 categories). What I need to do (and I am struggling to do) is to reorganize them daily. The output that I want to obtain is a sort of big matrix where each column represents a day (so in total there should be 365 columns) and each row contains the 5-minute information of each specific day. The final dimensions should be 288 x 365 (288 = data points in one day, 365 = days in a year).
My teacher suggested that I should use Pipe library but I did not manage to do it. I hope someone can help. Thank you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73295957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: cloning submodule fails I am trying to clone a submodule for my drupal installation. I run the following command:
git submodule add http://git.drupal.org/project/token.git /sites/all/modules/token
This throws this error:
The following path is ignored by one of your .gitignore files:
/sites/all/modules/token
Use -f if you really want to add it.
But my .gitignore file is empty.
So I tried to run it as suggested:
submodule add -f http://git.drupal.org/project/token.git /sites/all/modules/token
But this throws this error:
fatal: could not create leading directories of '/sites/all/modules/token': Permission denied
Clone of 'http://git.drupal.org/project/token.git' into submodule path '/sites/all/modules/token' failed
Permissions are 777.
Ideas?
Regards
Lukas
A: Just found the answer:
git submodule add http://git.drupal.org/project/token.git sites/all/modules/token
The leading "/" was the problem.
A: I had the same problem, but apparently for different reasons. I tried to use git submodule add like I used git clone - without specifying the directory like this:
git submodule add ../repos/subA instead of git submodule add ../repos/subA subA
All I have to say is that is the effing worst error message possible to tell me I left off a required command-line argument.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7765361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Realm Clone RealmQuery in different thread How to clone RealmQuery in different thread?
Problem:
*
*Create RealmQuery in X Thread.
*Query for RealmResults is same thread.
*If Empty Results, get data from Server in Y Thread.
*Insert data to Realm in background thread (Y). <-- New Instance of realm
*Re-query with same filters as in 1 in Z Thread.
*Return results in Main Thread.
As of now, I am getting java.lang.IllegalStateException: Realm accessed from incorrect thread.
I tried cloning using RealmQuery.createQueryFromResult(RealmResults<E> queryResults). Internally the clone uses the same Realm instance as the results.
How would the clone behave if queryResults was empty?
It would be better if the clone could be done with RxJava2.
A:
Re-query with same filters as in 1 in Z Thread.
Return results in Main Thread.
Okay so this is completely unnecessary because you can create a RealmQuery and store a field reference to the RealmResults, add a RealmChangeListener to it, and when you insert into Realm on the background thread, it will automatically update the RealmResults and call the RealmChangeListener.
So you don't need to "re-query with same filters in Z thread" (because Realm's findAllSortedAsync() already queries on a background thread), and you don't need to manually return results in main thread because findAllSortedAsync() already does that.
Solution: use Realm's notification system (and async query API). Read the docs: https://realm.io/docs/java/latest/#notifications
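A rough sketch of that pattern (assuming a RealmObject subclass named Item and the Realm Java API of that era, running on a looper thread such as the main thread):
RealmResults<Item> results = realm.where(Item.class)
        .findAllSortedAsync("name"); // query runs on a background thread

results.addChangeListener(new RealmChangeListener<RealmResults<Item>>() {
    @Override
    public void onChange(RealmResults<Item> items) {
        // Delivered on this thread whenever a write (e.g. the background
        // insert from step 4) changes the matching data; no manual re-query.
    }
});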
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44035272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is cleaning your code not required anymore in C++? I was reading an article that stated that, due to something called RAII, you no longer needed to clean up your code.
What prompted this research was I am currently coding something that requires cleanup before exiting the function.
For example, I have created a file, and mapped a view of a file.
Normally, I'd just use goto or do {break;} while(false); to exit. However, is it true this is no longer necessary with C++11?
I.e. no more
if( fail ) {
UnmapViewOfFile(lpMapView);
CloseHandle(hFileMap);
CloseHandle(hFile);
}
every few lines of code?
Does the compiler automatically wrap this up once the function exits? It just seems hard to believe that it actually cleans up function calls like the article said it did. (I may have misinterpreted it somehow.) What seems more likely is that it just cleans up created class libraries by calling their deconstructor from the C++ library.
EDIT: The article - from Wikipedia:
It doesn't necessarily state that it cleans up these function calls, but it does imply it does so for C++ library objects (such as FILE*, fopen, etc.).
Does it work for WinAPI too?
A: The C++ standard says nothing about the usage of Windows API functions like UnmapViewOfFile or CloseHandle. RAII is a programming idiom; you can use it or not, and it is a lot older than C++11.
One of the reasons why RAII is recommended is that it makes life easier when working with exceptions. Destructors will always safely release any resources - mostly memory, but also handles. For memory you have classes in the standard library, like unique_ptr and shared_ptr, but also vector and lots of others. For handles like those from WinAPI, you must write your own, like:
class handle_ptr {
public:
handle_ptr() {
// acquire handle
}
~handle_ptr() {
// release
}
};
A: Cleanup is still necessary, but due to the possibility of exceptions the code should not do cleanup simply by executing cleanup operations at the end of a function. That end may never be reached! Instead,
Do cleanup in destructors.
In C++11 it is particularly easy to any kind of cleanup in a destructor without defining a custom class, since it's now much easier to define a scope guard class. Scope guards were invented by Petru Marginean, who with Andrei Alexandrescu published an article about it in DDJ. But that original C++03 implementation was pretty complex.
In C++11, a bare bones scope guard class:
#include <functional>  // std::function
#include <utility>     // std::move

using std::function;
using std::move;

class Scope_guard
: public Non_copyable
{
private:
function<void()> f_;
public:
void cancel() { f_ = []{}; }
~Scope_guard()
{ f_(); }
Scope_guard( function<void()> f )
: f_( move( f ) )
{}
};
where Non_copyable provides move assignment and move construction, as well as default construction, but makes copy assignment and copy construction private.
Now right after successfully acquiring some resource you can declare a Scope_guard object that will guaranteed clean up at the end of the scope, even in the face of exceptions or other early returns, like
Scope_guard unmapping( [&](){ UnmapViewOfFile(lpMapView); } );
Addendum:
I should also mention the standard library smart pointers shared_ptr and unique_ptr, which take care of pointer ownership, calling a deleter when the number of owners goes to 0. As the names imply, they implement shared and unique ownership respectively. Both of them can take a custom deleter as an argument, but only shared_ptr supports calling the custom deleter with the original pointer value when the smart pointer is copied/moved to a base class pointer.
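For the WinAPI handles from the question, a custom deleter already gives RAII without a hand-written wrapper class; a minimal sketch of my own (note that CreateFile reports failure with INVALID_HANDLE_VALUE rather than a null pointer, which a real wrapper should normalize):
#include <memory>
#include <windows.h>

// HANDLE is void*, so unique_ptr<void, ...> can hold it directly and
// CloseHandle runs automatically when the handle goes out of scope.
using unique_handle = std::unique_ptr<void, decltype(&CloseHandle)>;

unique_handle make_unique_handle(HANDLE h)
{
    return unique_handle(h, &CloseHandle);
}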
I should also mention the standard library container classes, in particular vector, which provides a dynamically sized copyable array with automatic memory management, and string, which provides much the same for the particular case of an array of char used to represent a string. These classes free you from having to deal directly with new and delete, and they get those details right.
So in summary,
*
*use standard library and/or 3rd party containers when you can,
*otherwise use standard library and/or 3rd party smart pointers,
*and if even that doesn't cut it for your cleanup needs, define custom classes that do cleanup in their destructors.
A: As @zero928 said in the comment, RAII is a way of thinking. There is no magic that cleans up instances for you.
With RAII, you can use the object lifecycle of a wrapper to regulate the lifecycle of legacy types such as you describe. The shared_ptr<> template coupled with an explicit "free" function can be used as such a wrapper.
A: As far as I know, C++11 won't take care of cleanup unless you use elements which do so. For example, you could put this cleaning code into the destructor of a class and create an instance of it as a smart pointer. Smart pointers delete themselves when they are no longer used or shared. If you make a unique pointer and it gets deleted because it runs out of scope, then it automatically calls delete, hence your destructor is called and you don't need to delete/destroy/clean up yourself.
See http://www.cplusplus.com/reference/memory/unique_ptr/
This is just what C++11 newly offers for automatic cleanup. Of course, a usual class instance running out of scope calls its destructor, too.
A: No!
RAII is not about leaving clean-up aside, but doing it automatically. The clean-up can be done in a destructor call.
A pattern could be:
void f() {
ResourceHandler handler(make_resource());
...
}
Where the ResourceHandler is destructed (and does the clean-up) at the end of the scope or if an exception is thrown.
A: The WIN32 API is a C API - you still have to do your own cleanup. However, nothing stops you from writing C++ RAII wrappers for the WIN32 API.
Example without RAII:
void foo()
{
HANDLE h = CreateFile(_T("C:\\File.txt"), FILE_READ_DATA, FILE_SHARE_READ,
NULL, OPEN_ALWAYS, 0, NULL);
if ( h != INVALID_HANDLE_VALUE )
{
CloseHandle(h);
}
}
And with RAII:
class smart_handle
{
public:
explicit smart_handle(HANDLE h) : m_H(h) {}
~smart_handle() { if (m_H != INVALID_HANDLE_VALUE) CloseHandle(m_H); }
private:
HANDLE m_H;
// this is a basic example, could be implemented much more elegantly! (Maybe a template param for "valid" handle values since sometimes 0 or -1 / INVALID_HANDLE_VALUE is used, implement proper copying/moving etc or use std::unique_ptr/std::shared_ptr with a custom deleter as mentioned in the comments below).
};
void foo()
{
smart_handle h(CreateFile(_T("C:\\File.txt"), FILE_READ_DATA, FILE_SHARE_READ,
NULL, OPEN_ALWAYS, 0, NULL));
// Destructor of smart_handle class would call CloseHandle if h was not NULL
}
RAII can be used in C++98 or C++11.
A: I really liked the explanation of RAII in The C++ Programming Language, Fourth Edition
Specifically, sections 3.2.1.2, 5.2 and 13.3 explain how it works for managing leaks in the general context, but also the role of RAII in properly structuring your code with exceptions.
The two main reasons for using RAII are:
*
*Reducing the use of naked pointers that are prone to causing leaks.
*Reducing leaks in the cases of exception handling.
RAII works on the concept that each constructor should secure one and only one resource. Destructors are guaranteed to be called if a constructor completes successfully (ie. in the case of stack unwinding due to an exception being thrown). Therefore, if you have 3 types of resources to acquire, you should have one class per type of resource (class A, B, C) and a fourth aggregate type (class D) that acquires the other 3 resources (via A, B & C's constructors) in D's constructor initialization list.
So, if resource 1 (class A) succeeded in being acquired, but 2 (class B) failed and threw, resource 3 (class C) would not be called. Because resource 1 (class A)'s constructor had completed, it's destructor is guaranteed to be called. However, none of the other destructors (B, C or D) will be called.
A: It does NOT clean up FILE*.
If you open a file, you must close it. I think you may have misread the article slightly.
For example:
class RAII
{
private:
char* SomeResource;
public:
RAII() : SomeResource(new char[1024]) {} //allocated 1024 bytes.
~RAII() {delete[] SomeResource;} //cleaned up allocation.
RAII(const RAII& other) = delete;
RAII(RAII&& other) = delete;
RAII& operator = (RAII &other) = delete;
};
The reason it is an RAII class is because all resources are allocated in the constructor or allocator functions. The same resource is automatically cleaned up when the class is destroyed because the destructor does that.
So creating an instance:
void NewInstance()
{
RAII instance; //creates an instance of RAII which allocates 1024 bytes on the heap.
} //instance is destroyed as soon as this function exists and thus the allocation is cleaned up
//automatically by the instance destructor.
See the following also:
void Break_RAII_And_Leak()
{
RAII* instance = new RAII(); //breaks RAII because instance is leaked when this function exits.
}
void Not_RAII_And_Safe()
{
RAII* instance = new RAII(); //fine..
delete instance; //fine..
//however, you've done the deleting and cleaning up yourself / manually.
//that defeats the purpose of RAII.
}
Now take for example the following class:
class RAII_WITH_EXCEPTIONS
{
private:
char* SomeResource;
public:
RAII_WITH_EXCEPTIONS() : SomeResource(new char[1024]) {} //allocated 1024 bytes.
void ThrowException() {throw std::runtime_error("Error.");}
~RAII_WITH_EXCEPTIONS() {delete[] SomeResource;} //cleaned up allocation.
RAII_WITH_EXCEPTIONS(const RAII_WITH_EXCEPTIONS& other) = delete;
RAII_WITH_EXCEPTIONS(RAII_WITH_EXCEPTIONS&& other) = delete;
RAII_WITH_EXCEPTIONS& operator = (RAII_WITH_EXCEPTIONS &other) = delete;
};
and the following functions:
void RAII_Handle_Exception()
{
RAII_WITH_EXCEPTIONS RAII; //create an instance.
RAII.ThrowException(); //throw an exception.
//Event though an exception was thrown above,
//RAII's destructor is still called
//and the allocation is automatically cleaned up.
}
void RAII_Leak()
{
RAII_WITH_EXCEPTIONS* RAII = new RAII_WITH_EXCEPTIONS();
RAII->ThrowException();
//Bad because not only is the destructor not called, it also leaks the RAII instance.
}
void RAII_Leak_Manually()
{
RAII_WITH_EXCEPTIONS* RAII = new RAII_WITH_EXCEPTIONS();
RAII->ThrowException();
delete RAII;
//Bad because you manually created a new instance, it throws and delete is never called.
//If delete was called, it'd have been safe but you've still manually allocated
//and defeated the purpose of RAII.
}
fstream always did this. When you create an fstream instance on the stack, it opens a file. When the calling function exits, the fstream is automatically closed.
The same is NOT true for FILE* because FILE* is NOT a class and does NOT have a destructor. Thus you must close the FILE* yourself!
EDIT: As pointed out in the comments below, there was a fundamental problem with the code above. It is missing a copy constructor, a move constructor and assignment operator.
Without these, trying to copy the class would create a shallow copy of its inner resource (the pointer). When the class is destructed, it would have called delete on the pointer twice! The code was edited to disallow copying and moving.
For a class to conform to the RAII concept, it must follow the rule of three: What is the copy-and-swap idiom?
If you do not want to add copying or moving, you can simply use delete as shown above or make the respective functions private.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20891100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Access website from browser in vagrant On Windows 10 I have run vagrant up and then SSHed into my VM successfully, and installed apache2 php5-cli php5 libapache2-mod-php.
Now when I access localhost:8080 it is showing me the Apache default welcome page. How can I access my site in the browser?
Here are the contents of my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
config.vm.box = "ubuntu/trusty64"
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
config.vm.network "forwarded_port", guest: 80, host: 8080
end
This is my current directory structure
A: You'll need to get your data into the VM and configure Apache to serve that data. For starters, add this to your Vagrantfile (after the config.vm.network line):
config.vm.synced_folder ".", "/var/www/html"
It will make your app folder available under /var/www/html on the VM. Apache on Ubuntu serves from that folder by default, so you should be able to see something after doing vagrant reload.
A: When you edit the configuration file using vim or any other editor, you have to reload Vagrant afterwards and then try to access localhost:8080 again.
Use the command
vagrant reload
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33916238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: how android device identifies which layout to use In my android app, I have set the layout resources for both large and x-large screens, viz:
layout-large and layout-xlarge. When I run it in a device emulator with a "large" screen, it gets the layout from the "layout-large" folder, which seems to be correct. But when I use a device with an x-large screen size, it still uses the "layout-large" resources.
The x-large device I used is a 10", 1280x800, 240dp emulator. Any idea?
I've included the following in the manifest:
<supports-screens
android:anyDensity="true"
android:smallScreens="true"
android:normalScreens="true"
android:largeScreens="true"
android:resizeable="true" />
A: The link below will help you understand how Android picks up layout files on various devices:
http://developer.android.com/guide/practices/screens_support.html
A: Are you sure the folder name is layout-xlarge and not layout-x-large? DOC
A: As per the Android documentation on how layouts are resolved at runtime:
At runtime, the system ensures the best possible display on the current screen with the following procedure for any given resource:
The system uses the appropriate alternative resource
Based on the size and density of the current screen, the system uses any size- and density-specific resource provided in your application. For example, if the device has a high-density screen and the application requests a drawable resource, the system looks for a drawable resource directory that best matches the device configuration. Depending on the other alternative resources available, a resource directory with the hdpi qualifier (such as drawable-hdpi/) might be the best match, so the system uses the drawable resource from this directory.
If no matching resource is available, the system uses the default resource and scales it up or down as needed to match the current screen size and density
The "default" resources are those that are not tagged with a configuration qualifier. For example, the resources in drawable/ are the default drawable resources. The system assumes that default resources are designed for the baseline screen size and density, which is a normal screen size and a medium density. As such, the system scales default density resources up for high-density screens and down for low-density screens, as appropriate.
However, when the system is looking for a density-specific resource and does not find it in the density-specific directory, it won't always use the default resources. The system may instead use one of the other density-specific resources in order to provide better results when scaling. For example, when looking for a low-density resource and it is not available, the system prefers to scale-down the high-density version of the resource, because the system can easily scale a high-density resource down to low-density by a factor of 0.5, with fewer artifacts, compared to scaling a medium-density resource by a factor of 0.75.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11541627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Export variable label for SPSS with haven I would like to export a data set I work on in R for my colleagues to use in SPSS. When I export the data set I would like to include variable labels (i.e. the column below), I am not asking about value labels which describe the levels of the variable:
Is there an option in haven that allows me to set this variable label?
I have searched the documentation and found only functions to set value labels. I notice haven is a wrapper for ReadStat which seems to support variable labels. In the ReadStat documentation a variable label (Citizenship of respondent) can be seen in the chunk below:
{
"type": "SPSS",
"variables": [
{
"type": "NUMERIC",
"name": "citizenship",
"label": "Citizenship of respondent",
"categories": [
{
"code": 1,
"label": "Afghanistan"
},
...
My understanding of C++ is unfortunately not sophisticated enough to understand how haven works under the hood, so any suggestions are very welcome.
I have found a workaround, which involves manually setting the variable label by use of attributes. Consider the example below, using a teaching data set from the UK Data Service:
# install.packages("tidyverse")
library("tidyverse")
tmp = tempfile(fileext = ".zip")
tmpdir = tempdir()
download.file(
"http://ws.ukdataservice.ac.uk/REST/Download/Download/DSO/7912spss_e5b795672124e5b409e4a53c1a06fb9e.zip",
destfile = tmp
)
unzip(tmp, exdir = tmpdir)
tmpdir = paste0(tmpdir, "/UKDA-7912-spss/spss/spss19/")
file = paste0(tmpdir, list.files(tmpdir))
dat = haven::read_sav(file)
str(dat)
# Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 22428 obs. of 14 variables:
# $ CASENEW : atomic 1 2 3 5 5 6 6 7 8 9 ...
# ..- attr(*, "label")= chr "New random ID number"
# ..- attr(*, "format.spss")= chr "F8.2"
# ..- attr(*, "display_width")= int 10
# ...
I can therefore change the variable label with:
attr(dat$CASENEW, "label") = "Foo"
attr(dat$CASENEW, "label")
# "Foo"
Which, when I write to a new .sav file, does indeed open as intended in SPSS. My question is, is there a native way to do this in haven?
A: Hadley's answer:
Just set the attributes— Hadley Wickham (@hadleywickham) October 27, 2017
So there you have it: the canonical haven answer is just to set the attributes.
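Putting the workaround from the question together, a minimal end-to-end sketch (the output filename is a placeholder):
# Set the variable label via the attribute, then write a new .sav file.
attr(dat$CASENEW, "label") <- "Foo"
haven::write_sav(dat, "labelled_output.sav")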
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46954098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: adding @Transient over MappedSuperClass attributes Currently I have a MappedSuperClass called BaseEntity, which I am extending over all my entity classes
@MappedSuperclass
public abstract class PersistentObject extends BaseEntity {
/**
*
*/
@Transient
private static final long serialVersionUID = -1701208353317749260L;
protected Tenant tenant;
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "TENANT_ID")
public Tenant getTenant() {
return this.tenant;
}
public void setTenant(Tenant tenant) {
this.tenant = tenant;
}
@Version
@Column(name = "VERSION")
public int getVersion() {
return this.version;
}
public void setVersion(int version) {
this.version = version;
}
}
In my entities which contain static data, I don't need tenant_id to be added, whereas I need all the other attributes in BaseEntity.
Currently I can only change my column name and so on using @AttributeOverrides and @AssociationOverrides, so how can I add @Transient over unneeded fields of the MappedSuperclass?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12088078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is there a way to calculate the mixed color (in RGB) from n (n>1) RGBA colors in a specified order in JavaScript? Say I have a svg like this:
<svg>
<rect id="background" x="0" y="0" width="100" height="100" fill="rgba(10,10,10,1)" />
<rect x="5" y="5" width="100" height="100" fill="rgba(255,125,0,.25)" />
<rect x="25" y="25" width="100" height="100" fill="rgba(0,125,0,.55)" />
<rect x="45" y="45" width="100" height="100" fill="rgba(255,225,25,.66)" />
</svg>
It will rendered as:
How can I get the RGB color of this by JavaScript calculation?
(I got the result (178,178,18) by using a color picker tool.
I am looking for some function like this:
function getMixedRGBByColors(bg_color_in_rgb, [colors_in_rgba_arr]) {
// bg_color_in_rgb defines the background color, it's not transaparent
// colors_in_rgba_arr is the array of the
// colors above the background, in a specific order
// (it needs to be an array because changing the order
// will change the outcome)
....
};
// Usage:
getMixedRGBByColors(
"10,10,10"
[
"255,125,0,.25",
"0,125,0,.55",
"255,255,25,.66"
]);
A: The best explanation is that the RGB color model (http://en.wikipedia.org/wiki/RGB_color_model) is somewhat unintuitive for us humans.
And the following works pretty well.
NewColor.R = Color1.R - (Color1.R - Color2.R)/2
NewColor.G = Color1.G - (Color1.G - Color2.G)/2
NewColor.B = Color1.B - (Color1.B - Color2.B)/2
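A minimal sketch of that formula in JavaScript, mixing exactly two colors given as [r, g, b] arrays (note it ignores the alpha channel, so it only approximates the layered result asked about in the question):
function mixTwoColors(color1, color2) {
  // Halfway blend per channel, as in the formula above.
  return [
    color1[0] - (color1[0] - color2[0]) / 2,
    color1[1] - (color1[1] - color2[1]) / 2,
    color1[2] - (color1[2] - color2[2]) / 2,
  ];
}

// Example: mixTwoColors([10, 10, 10], [255, 125, 0]) -> [132.5, 67.5, 5]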
https://github.com/benjholla/ColorMixer also will help you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68124812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Not implemented alter command for SQL ALTER TABLE "annotator_annotationmodel" ADD COLUMN "hospital_id" int NOT NULL I'm getting this error after doing migrate in Django rest framework backend server.
models.py file
class Hostipal(models.Model):
hospitalId = models.CharField(max_length=15, unique=True)
hospitalName = models.CharField(max_length=255, default='')
hospitalAddress = models.CharField(max_length=255, default='')
def __str__(self):
return self.hospitalName
class AnnotationModel(models.Model):
name = models.CharField(max_length=255, default='')
model_path = models.CharField(max_length=255, default='')
hospital = models.ForeignKey(Hostipal, on_delete=models.CASCADE)
def __str__(self):
return self.name
class ModelPool(models.Model):
name = models.CharField(max_length=255, default='')
modelsList = models.ManyToManyField(AnnotationModel, related_name="modelpool")
hospital = models.ForeignKey(Hostipal, on_delete=models.CASCADE)
def __str__(self):
return self.name
What I am trying to build:
*
*Every hospital has many AnnotationModels and ModelPools
*Each AnnotationModel may belong to many model pools
*Hence AnnotationModel has a hospital foreign key and ModelPool has a hospital foreign key and AnnotationModel ManyToMany column
Can we implement this using any other techniques?
Detailed error:
python3 manage.py migrate
Operations to perform:
Apply all migrations: admin, annotator, auth, contenttypes, sessions
Running migrations:
This version of djongo does not support "NULL, NOT NULL column validation check" fully. Visit https://nesdis.github.io/djongo/support/
Applying contenttypes.0001_initial...This version of djongo does not support "schema validation using CONSTRAINT" fully. Visit https://nesdis.github.io/djongo/support/
OK
Applying contenttypes.0002_remove_content_type_name...This version of djongo does not support "COLUMN DROP NOT NULL " fully. Visit https://nesdis.github.io/djongo/support/
This version of djongo does not support "DROP CASCADE" fully. Visit https://nesdis.github.io/djongo/support/
OK
Applying auth.0001_initial...This version of djongo does not support "schema validation using KEY" fully. Visit https://nesdis.github.io/djongo/support/
This version of djongo does not support "schema validation using REFERENCES" fully. Visit https://nesdis.github.io/djongo/support/
OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying auth.0010_alter_group_name_max_length... OK
Applying auth.0011_update_proxy_permissions... OK
Not implemented alter command for SQL ALTER TABLE "annotator_annotationmodel" ADD COLUMN "hospital_id" int NOT NULL
Applying annotator.0001_initial...Traceback (most recent call last):
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/cursor.py", line 51, in execute
self.result = Query(
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 783, in __init__
self._query = self.parse()
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 875, in parse
raise e
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 856, in parse
return handler(self, statement)
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 888, in _alter
query = AlterQuery(self.db, self.connection_properties, sm, self._params)
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 425, in __init__
super().__init__(*args)
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 84, in __init__
super().__init__(*args)
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 62, in __init__
self.parse()
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 435, in parse
self._add(statement)
File "/Users/gamemaster/GitHub/DMP_Backend/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 598, in _add
raise SQLDecodeError(err_key=tok.value,
djongo.exceptions.SQLDecodeError:
Keyword: int
Sub SQL: ALTER TABLE "annotator_annotationmodel" ADD COLUMN "hospital_id" int NOT NULL
FAILED SQL: ('ALTER TABLE "annotator_annotationmodel" ADD COLUMN "hospital_id" int NOT NULL',)
Params: ([],)
Version: 1.3.4
The above exception was the direct cause of the following exception:
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69903529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Euler.js Layout - Cytoscape I have deployed this layout to my application and it is working fine, but I have two questions:
*
*"Ramdomize" parameter must be always "true" so that the browser won't run into "Memory Out of Space" error. But, my question here is how can generate the same graph orientation very time when I launch my application?
*Where can I find the graph state when generated and store it as the default layout so that users can see the same graph layout with the same position every time they launch the application.
Thank you very much.
Regards,
Arthur
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66595314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: how to hide text in html page source? I want to show some text (and images) in the browser, but this text shouldn't be selectable in the page preview or readable in the page source:
*
*At first I tried to use canvas, but managing text and also images in canvas is not easy, so for this case I can't use canvas.
*I tried to use an image, but in this case the image is too slow to load.
*I used ROT13 encoding in Aptana Studio, but ROT13 just encodes the page source and decodes it with JS, and when you click on 'Inspect Element' in Chrome or Opera you can still see the decoded text and HTML.
Question: Is there any way to do this in jQuery or anything else?
A: No, whatever you display as text in a webpage can be found by digging into the source of the webpage (including the JS). What would this be useful for, btw?
Edit: This looks useful but ends up using canvas or flash I believe. Still might be tuned to be fairly fast and therefor useful:
http://eric-blue.com/2010/01/03/how-to-create-your-own-personal-document-viewer-like-scribd-or-google-books/
A: You most likely won't find a way to do this easily, as when the browser downloads the page, in order to show the text to the user it has to be decoded or decrypted. So no matter what, if the user can see it, they can copy it. If all you want is to block selection in the browser, this answer should help
A: No, if you want to place something on the page a browser need to know what you want to place on the page. And everything what was sent to the browser is readable for a user. So you cannot do this.
The answer is very simple: If you don't want to publish something don't place it on the internet.
A: Yes, this is my logic, check it out:
convert your string to ASCII codes and write it to the document.
Check the link below to find an example that may help you:
Link W3School
A: I guess no one could do that.
Just use some image instead, old-style, but useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9109434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Optional try for function with no return value Could it be that when using try? (optional try) for the call of a throwing function with no return value the errors are just ignored?
func throwingVoidFunction() throws { . . . }
try? throwingVoidFunction()
I expected that the compiler does not allow a try? in front of a throwing function with return type void, but the compiler doesn't complain.
So is using try? in front of a void function a way to absorb errors? (like when using an empty default catch: catch {})
A: There is no reason for the compiler to complain. The return type
of
func throwingVoidFunction() throws { ... }
is Void and therefore the type of the expression
try? throwingVoidFunction()
is Optional<Void>, and its value is nil (== Optional<Void>.none) if an error was thrown while evaluating the expression,
and Optional<Void>.some() otherwise.
You can ignore the return value or test it against nil. An
example is given in An elegant way to ignore any errors thrown by a method:
let fileURL = URL(fileURLWithPath: "/path/to/file")
let fm = FileManager.default
try? fm.removeItem(at: fileURL)
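And a short sketch of testing the optional result against nil, building on the example above:
if (try? fm.removeItem(at: fileURL)) == nil {
    print("removeItem threw an error")
}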
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41751089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Java Collection-Within-Collection Concurrency I'm trying to make a class which utilizes a set-within-a-map thread-safe. I'm unsure of what particularly needs to be synchronized.
The map is defined as something similar to Map<Class<K>, Set<V>> map;. The following is a reduction of the way the map is being used internally in the implementation:
public void addObject(K key, V object) {
getSet(key).add(object);
}
public void removeObject(K key, V object) {
getSet(key).remove(object);
}
public void iterateObjectsInternally(K key, Object... params)
{
for (V o : getSet(key)) {
o.doSomething(params);
}
}
private Set<V> getSet(K key) {
if (!map.containsKey(key)) {
map.put(key, new Set<V>());
}
return map.get(key);
}
Problems with Map
As far as using the map itself goes, the only concurrency problems I see would be in getSet(K), where thread context may switch between containsKey and put. In this case, the following may happen:
[Thread A] map.containsKey(key) => returns false
[Thread B] map.containsKey(key) => returns false
[Thread B] map.put(key, new Set<V>())
[Thread B] map.get(key).add(object)
[Thread A] map.put(key, new Set<V>()) => Thread A overwrites Thread B's object [!]
[Thread B] map.get(key).add(object)
Now, I'm currently using a regular HashMap for this implementation. And, if I understand correctly, using Collections.synchronizedMap() or ConcurrentHashMap only solves concurrency issues at the method level. That is, methods will be performed atomically. These say nothing about the way methods interact with each other, so the following could still happen even when using a concurrent solution.
ConcurrentHashMap does, however, have the method putIfAbsent. The downside to this is that the statement map.putIfAbsent(key, new Set<V>()) will create a new set every time the set is requested. This seems like a lot of overhead.
Is it enough, on the other hand, to simply wrap these two statements in a synchronized block?
synchronized(map) {
if (!map.containsKey(key)) {
map.put(key, new Set<V>());
}
}
Is there a better way than locking the entire map? Is there a way to lock only the key, so that reads on other values of the map aren't locked out?
synchronized(key) {
if (!map.containsKey(key)) {
map.put(key, new Set<V>());
}
}
Keep in mind that the keys are not necessarily the same object (they are specifically Class<?> types), but are equal by hashcode. Synchronizing by key may not work if synchronization requires object-address equality.
Problems with Set
The bigger issue, I think, is knowing if the set is being used properly. There are a few issues: adding objects, removing objects, and iterating objects.
Would wrapping the collection in Collections.synchronizedSet (or Collections.synchronizedList) be enough to avoid concurrency issues in addObject and removeObject? I'm assuming that would be fine, as the synchronized wrapper would make those calls atomic operations.
However, iterating may be a different story. For iterateObjectsInternally, even if the set is synchronized, it still must be synchronized externally:
Set<V> set = getSet(key);
synchronized(set) {
for (V value : set) {
// thread-safe iteration
}
}
However, this seems like an awful waste. What if, instead, we simply use CopyOnWriteArrayList or CopyOnWriteArraySet as the backing collection? Since iteration uses a snapshot of the array contents, there's no way for another thread to modify what is being iterated. Also, CopyOnWriteArrayList uses a re-entrant lock in its add and remove methods, which means add/remove are inherently safe as well. CopyOnWriteArrayList seems attractive because the number of iterations over the internal structure vastly outweighs the number of modifications to the list. And with a copied iterator there is no need to worry about addObject or removeObject in another thread messing up the iteration in iterateObjectsInternally (no ConcurrentModificationException).
Are these concurrency checks on the right track and/or rigorous enough? I'm a newbie with concurrent programming problems, and I may be missing something obvious, or over-thinking. I know there are a few similar questions, but my implementation seemed different enough to warrant asking the questions as specifically as I did.
A: You are definitely overthinking this. Use a simple ConcurrentHashMap, and ConcurrentSkipListSet/CopyOnWriteArraySet depending on your concurrency characteristics (mainly if iteration needs to take into account on-the-fly modifications of the data). Use something like the following snippet as the getSet method:
private Set<V> getSet(K key) {
Set<V> rv = map.get(key);
if (rv != null) {
return rv;
}
map.putIfAbsent(key, new CopyOnWriteArraySet<V>()); // Set is an interface; any concrete thread-safe Set (e.g. ConcurrentSkipListSet) works here
return map.get(key);
}
This will ensure proper lockless concurrency when adding/removing objects. For iteration you will need to determine whether missing updates is an issue in your problem domain: if it is not a problem to miss a new object that is added during the iteration, use the CopyOnWriteArraySet.
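As a side note, on Java 8 and later the same lazy initialization can be done with computeIfAbsent, which runs the factory only when the key is actually missing, so no throwaway set is allocated on ordinary lookups. A minimal sketch, assuming Java 8+ (the SetRegistry wrapper class and the CopyOnWriteArraySet choice are illustrative assumptions, not part of the original code):
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

class SetRegistry<K, V> {
    private final Map<K, Set<V>> map = new ConcurrentHashMap<>();

    // computeIfAbsent invokes the factory at most once per missing key,
    // so lookups that hit an existing entry allocate nothing extra.
    Set<V> getSet(K key) {
        return map.computeIfAbsent(key, k -> new CopyOnWriteArraySet<>());
    }

    void addObject(K key, V object) {
        getSet(key).add(object);
    }
}
Usage is the same as before: call addObject(key, value) from any thread; iteration over getSet(key) sees a consistent snapshot because the set is copy-on-write.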
On the third hand, you want to take a deep look into what kind of granularity you can use w.r.t. concurrency, what your requirements are, what the proper behavior is in edge cases, and most of all, what performance and concurrency characteristics your code must cover - if it's something that happens twice on startup, I'd just make all methods synchronized and be done with it.
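For completeness, the coarse-grained fallback mentioned above is just intrinsic locking around every public method. A minimal sketch (class and method names are illustrative, not from the original question):
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Consumer;

class CoarseRegistry<K, V> {
    private final Map<K, Set<V>> map = new HashMap<>();

    // A single lock (the instance monitor) guards the map and every set in it,
    // so no interleaving between containsKey/put and no concurrent modification
    // during iteration is possible.
    synchronized void addObject(K key, V object) {
        map.computeIfAbsent(key, k -> new HashSet<>()).add(object);
    }

    synchronized void removeObject(K key, V object) {
        Set<V> set = map.get(key);
        if (set != null) {
            set.remove(object);
        }
    }

    synchronized void forEachObject(K key, Consumer<V> action) {
        Set<V> set = map.get(key);
        if (set != null) {
            for (V v : set) {
                action.accept(v);
            }
        }
    }
}
This throws away all read parallelism, so it is only reasonable when the structure is touched rarely, as the answer says.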
A: If you are going to be adding to the 'set' often, CopyOnWriteArrayList and CopyOnWriteArraySet are not going to be viable - they use far too many resources for add operations. However, if you are adding rarely and iterating over the 'set' often, then they are your best bet.
The Java ConcurrentHashMap puts every key into a bucket; your putIfAbsent operation will lock only that bucket while it searches for the key, and then release the lock after putting in the entry. Definitely use ConcurrentHashMap instead of the plain map.
Your getSet method could be inherently slow, especially when synchronized - perhaps you could preload all of the keys and their sets sooner rather than later.
I suggest you follow in line with what Louis Wasserman says, and see if your performance is decent with the Guava implementation.
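If you do try the Guava route, the set-per-key bookkeeping can be delegated to a Multimap. A rough sketch, assuming Guava is on the classpath (the Registry class name and the Consumer-based iteration callback are illustrative assumptions, not from the answers above):
import java.util.function.Consumer;
import com.google.common.collect.HashMultimap;
import com.google.common.collect.Multimaps;
import com.google.common.collect.SetMultimap;

class Registry<K, V> {
    // The synchronized wrapper makes individual calls atomic; iterating over a
    // returned view still requires an explicit lock on the multimap itself.
    private final SetMultimap<K, V> map =
            Multimaps.synchronizedSetMultimap(HashMultimap.<K, V>create());

    void addObject(K key, V object) {
        map.put(key, object); // the per-key set is created lazily
    }

    void removeObject(K key, V object) {
        map.remove(key, object);
    }

    void iterateObjects(K key, Consumer<V> action) {
        synchronized (map) { // required for safe iteration over the get(key) view
            for (V v : map.get(key)) {
                action.accept(v);
            }
        }
    }
}
Whether this beats a hand-rolled ConcurrentHashMap-based version depends on the read/write mix, which is exactly what the answer suggests measuring.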
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12101619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Does SQL's EXISTS function select duplicates? (Oracle SQL) t1 has all the credits of each person in a given ORACLE database. Each person is identified by ID. t2 contains the people I need to select from t1.
To do this, I tried the following:
select
ID,
capital,
balance
from
t1
where
exists (
select ID
from t2
where t1.ID = t2.ID);
My problem is that t2 has duplicate IDs. So for example, if ID = 2 has 3 credits and she is repeated 4 times in t2, will the above query select each of her credits 4 times? That is, will I have 3 x 4 credits for ID = 2?
Typically, I would count the duplicate credits in the result, but I currently do not have a unique identifier for the selected credits. This is the reason I am asking here rather than simply running a GROUP BY count on the query's result.
A: No. The query that you have written only returns rows from t1. You cannot multiply rows using a where clause (well, almost never and not in Oracle).
One reason for using exists or in is so you don't have to worry about duplicates, the way you would need to worry with a join.
A: No - think of EXISTS as a true/false test: it is true if the subquery returns at least one row and false if it returns no rows. So it doesn't matter whether there are duplicates.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61036765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Rails - undefined method "" for - Nested forms with select multiples I cannot figure out how to fix the "undefined method `banks' for" error in my DealsController.
My models :
class Deal < ActiveRecord::Base
has_many :pools
accepts_nested_attributes_for :pools,
reject_if: proc { |attributes| attributes['name'].blank?},
allow_destroy: true
validates :name, presence: true
end
class Pool < ActiveRecord::Base
belongs_to :deal
has_many :participating_banks, dependent: :destroy
has_many :banks, through: :participating_banks
validates :name, presence: true
end
class Bank < ActiveRecord::Base
has_many :participating_banks, dependent: :destroy
has_many :pools, through: :participating_banks
end
class ParticipatingBank < ActiveRecord::Base
belongs_to :pool
belongs_to :bank
end
My nested form for deal creation :
<%= form_for @deal do |f| %>
<p><%= f.text_field :name, placeholder: "Deal name" %></p>
<p><%= f.date_field :closing_date, placeholder: "Closing date" %></p>
<h2> Pools </h2>
<%= f.fields_for :pools do |builder| %>
<%= render 'pool_pools', f: builder %>
<% end %>
<%= link_to_add_pools "Add pool", f, :pools %>
<p><%= f.submit %></p>
<% end %>
My partial pool_pools :
<fieldset>
<%= f.label :name %> <br/>
<%= f.text_field :name %>
<%= f.hidden_field :_destroy %> <br/>
<%= f.select :bank_ids, Bank.all.collect {|x| [x.name, x.id]}, {}, :multiple => true %> <br/>
<%= link_to "remove", "#", class: "remove_pools" %>
</fieldset>
And my Deal/Show view where the issue is triggered (@banks.each) :
<h1> <%= @deal.name %> </h1>
<p> <%= @deal.closing_date %> </p>
<% if @deal.pools.any? %>
<ul>
<% @pools.each do |pool| %>
<li> <%= pool.name %> </li>
<ul>
<% @banks.each do |bank| %>
<%= bank.name %>
<% end %>
</ul>
<% end %>
</ul>
<% else %>
<p> No pools created yet - <%= link_to "New Pool", new_pool_path %> </p>
<% end %>
<%= link_to "Home", root_path %>
<%= link_to "Edit deal", edit_deal_path(@deal) %>
<%= link_to "Delete deal", deal_path(@deal), method: :delete, data: {confirm: "Are you sure?"} %>
How can I define the show action in my DealsController to show only the banks that belong to each pool (which in turn belongs to the deal)?
def show
@pools = @deal.pools
@banks = @pool.banks
end
EDIT:
Below are the logs of the update action :
Started PATCH "/deals/9" for ::1 at 2016-01-30 15:11:22 +0100
Processing by DealsController#update as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"2G45qGM1BbfFKI38O6h+UkEdlKQNNIG5HkTy3llnO4Nq/xSIOzBr+TQhntEh90QXWHh6R2n0HG5uCxtCiU8yZg==", "deal"=>{"name"=>"Deal test 1é", "closing_date"=>"", "pools_attributes"=>{"0"=>{"name"=>"pool 10001", "_destroy"=>"false", "bank_ids"=>["", "1", "2"], "id"=>"12"}, "1"=>{"name"=>"", "_destroy"=>"false", "bank_ids"=>[""]}}}, "commit"=>"Update Deal", "id"=>"9"}
[1m[36mDeal Load (0.2ms)[0m [1mSELECT "deals".* FROM "deals" WHERE "deals"."id" = ? LIMIT 1[0m [["id", 9]]
[1m[35m (0.1ms)[0m begin transaction
[1m[36mPool Load (0.4ms)[0m [1mSELECT "pools".* FROM "pools" WHERE "pools"."deal_id" = ? AND "pools"."id" = 12[0m [["deal_id", 9]]
[1m[35mBank Load (0.2ms)[0m SELECT "banks".* FROM "banks" WHERE "banks"."id" IN (1, 2)
[1m[36mBank Load (0.2ms)[0m [1mSELECT "banks".* FROM "banks" INNER JOIN "participating_banks" ON "banks"."id" = "participating_banks"."bank_id" WHERE "participating_banks"."pool_id" = ?[0m [["pool_id", 12]]
[1m[35m (0.1ms)[0m commit transaction
Redirected to http://localhost:3000/deals/9
Completed 302 Found in 28ms (ActiveRecord: 1.6ms)
Started GET "/deals/9" for ::1 at 2016-01-30 15:11:22 +0100
Processing by DealsController#show as HTML
Parameters: {"id"=>"9"}
[1m[36mDeal Load (0.1ms)[0m [1mSELECT "deals".* FROM "deals" WHERE "deals"."id" = ? LIMIT 1[0m [["id", 9]]
Completed 500 Internal Server Error in 2ms (ActiveRecord: 0.1ms)
NoMethodError (undefined method `banks' for nil:NilClass):
app/controllers/deals_controller.rb:10:in `show'
Here is the full error message:
undefined method `banks' for nil:NilClass
Extracted source (around line #10):
def show
@pools = @deal.pools
10 @banks = @pool.banks
end
def new
Many thanks :)
A: Just use pool.banks in the iteration in your view. Your controller is wrong because it tries to read banks from a single @pool (which is never assigned) when the deal has multiple pools.
<% @pools.each do |pool| %>
<li><%= pool.name %> </li>
<ul>
<% pool.banks.each do |bank| %>
<%= bank.name %>
<% end %>
</ul>
<% end %>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35102570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |