Q: How to read JSON in Google Data Studio / BigQuery I am using BigQuery as a cloud data warehouse and Data Studio for visualization.
In Big Query I have a table with a column named data written in JSON. I only want to extract what is inside the field "city".
This formula below that someone gave me worked to extract what is inside the field "title". I used it to create a field in DataStudio.
REPLACE(REGEXP_EXTRACT(data, '"title":(.+)","ur'), "\"", "")
So, I tried in multiple ways to reuse this formula for the "city" field, but it hasn't worked. I don't understand this code.
What's inside my column data:
{
"address":{
"city":"This is what i want",
"country":"blablabla",
"lineAdresse":"blablabla",
"region":"blablabla",
"zipCode":"blablabla"
},
"contract":"blablabla",
"dataType":"blablabla",
"description":"blablabla",
"endDate":{
"_seconds":1625841747,
"_nanoseconds":690000000
},
"entreprise":{
"denomination":"blabla",
"description":"1",
"logo":"blablabla",
"blabla":"blablabla",
"verified":"false"
},
"id":"16256R8TOUHJG",
"idEntreprise":"blablabla",
"jobType":"blablabla",
"listInfosRh":null,
"listeCandidats":[
],
"field":0,
"field":0,
"field":14,
"field":"1625834547690",
"field":true,
"field":"",
"field":"ref1625834547690",
"skills":[
"field",
"field",
"field"
],
"startDate":{
"_seconds":1625841747,
"_nanoseconds":690000000
},
"status":true,
"title":"this I can extract",
"urlRedirection":"blablabla",
"validated":true
}
If anyone knows the formula to put in Data Studio to extract what's inside city and can explain it to me, this would help a lot.
Here's the formula I tried but where I got "null" result:
REPLACE(REGEXP_EXTRACT(data,'"city":/{([^}]*)}/'),"\"","") >>null
I tried this one but it wouldn't stop at the city. I got the address, the region, zipcode and all the rest after:
REPLACE(REGEXP_EXTRACT(data, '"city":(.+)","ur'), "\"", "")
A: It is possible to parse JSON in a text field by ignoring the hierarchy and only looking for a specific field. In your case the field names were title and city. Please be aware that this approach is not safe for user-entered data: by storing a value such as "city":"\" hide \"", a user can prevent the script from extracting the city.
select *,
REGEXP_EXTRACT(data, r'"title":\s*"([^"]+)') as title,
REGEXP_EXTRACT(data, r'"city":\s*"([^"]+)') as city
from(
Select ' { "address":{ "city":"This is what i want", "country":"blablabla", "lineAdresse":"blablabla", "region":"blablabla", "zipCode":"blablabla" }, "contract":"blablabla", "dataType":"blablabla", "description":"blablabla", "endDate":{ "_seconds":1625841747, "_nanoseconds":690000000 }, "entreprise":{ "denomination":"blabla", "description":"1", "logo":"blablabla", "blabla":"blablabla", "verified":"false" }, "id":"16256R8TOUHJG", "idEntreprise":"blablabla", "jobType":"blablabla", "listInfosRh":null, "listeCandidats":[ ], "field":0, "field":0, "field":14, "field":"1625834547690", "field":true, "field":"", "field":"ref1625834547690", "skills":[ "field", "field", "field" ], "startDate":{ "_seconds":1625841747, "_nanoseconds":690000000 }, "status":true, "title":"this I can extract", "urlRedirection":"blablabla", "validated":true }' as data
)
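Outside of BigQuery, the same regular-expression logic can be checked quickly in any language. Here is a small Python sketch; the sample string is a shortened, hypothetical version of the data column:

```python
import re

# Shortened, hypothetical sample of the JSON-ish text in the `data` column.
data = '{ "address":{ "city":"This is what i want", "country":"blablabla" }, "title":"this I can extract" }'

# Same patterns as the BigQuery answer: match the field name, optional
# whitespace, an opening quote, then capture everything up to the next quote.
city = re.search(r'"city":\s*"([^"]+)', data).group(1)
title = re.search(r'"title":\s*"([^"]+)', data).group(1)

print(city)   # This is what i want
print(title)  # this I can extract
```

The capture group `([^"]+)` is what stops the match at the closing quote, which is why the original `(.+)` pattern ran past the city into the rest of the object.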
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70605501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: apply "nopin" to "og:image" I'm not a coder but I was able to build my site from nonstop searching for "how to do xxxx" on this site. Thank you all so much for the posts and info!
I want to know how I can apply the "nopin" tag for pinterest to my "og:image"
Here is a link to a sample page http://jamesngart.com/harvester.html
I made a horizontally cropped image of the illustration to be used as the og:image for Facebook and Twitter links, but I don't want Pinterest to pick it up. I used the nopin tag for some images that I don't want pinned and it worked, but I can't seem to apply it to the og:image.
Pinterest is also not picking up any of the data-pin info I enter. I was thinking to add "this is a cropped image, please pin the others", but nothing works. Here is my code:
<meta property="og:image" content="http://jamesngart.com/img/OG-Harvester.jpg" nopin="nopin" />
Thank you!
James
A: Reading various articles about this on the web, there seem to be very few guides for specific images, but this link:
http://allyssabarnes.com/2013/07/22/how-to-block-your-images-from-being-pinned/
shows:
<meta name="pinterest" content="nopin" description="Enter your new description here" />
and
<img src="your-image.png" nopin="nopin">
This leads me to conclude that, since Open Graph is a meta feature, you would need to do something like:
<meta property="og:image" content="jamesngart.com/img/OG-Harvester.jpg" nopin="nopin" />
I would also hope you'd be using https://developers.pinterest.com/docs/getting-started/introduction/ for reference as well.
See also https://stackoverflow.com/a/10421287/3536236
Which actually states that (as of 2012) Pinterest does not directly reference OG:images in its processing.
Overall, it's a little questionable why you would want to share an image via Open Graph (i.e. for Facebook and Google searches) that would then not be available for Pinterest specifically.
A: Don't add the nopin attribute to the Facebook Open Graph (og) meta tag.
Instead, create a new meta tag and add it below or above the Open Graph tag.
In the following example - pinning is disabled for the whole page:
<meta property="og:image" content="jamesngart.com/img/OG-Harvester.jpg" />
<meta name="pinterest" content="nopin" />
If you want to disable pinning per image you have to add the nopin attribute to the image (IMG) tag:
<img src="jamesngart.com/img/OG-Harvester.jpg" nopin="nopin" />
Read more about pinterest data-attributes and metatags in this article at csstricks
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33451638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: assign array value to element using loop How can I give each li one of the values a, b, c? Currently, with my code below, they all end up as c.
arr = ['a','b','c'];
arr.forEach(function(obj){
var x = obj;
$('li').each(function () {
$(this).text(x);
});
});
Is something wrong with my code?
A: That is because you are iterating over all li elements for each item in the array and setting the value every time. jQuery's .text() accepts a function as an argument, which receives the index as a parameter. This eliminates the need to iterate over the li elements and the array yourself:
var arr = ['a','b','c'];
$('li').text(function(i){
return arr[i];
});
Working Demo
A: First, you set the text of all li to 'a', then to 'b' and finally to 'c'.
Instead, you may try iterating the li elements, and set the text content of the current one to the corresponding item in arr:
var arr = ['a','b','c'];
$('li').each(function(i) {
$(this).text(arr[i]);
});
var arr = ['a','b','c'];
$('li').each(function(i) {
$(this).text(arr[i]);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<ul>
<li></li>
<li></li>
<li></li>
</ul>
Alternatively (but similarly), you can iterate the array, and set the text content of the corresponding li to the current value:
var arr = ['a','b','c'],
$li = $('li');
arr.forEach(function(txt, i) {
$li.eq(i).text(txt);
});
var arr = ['a','b','c'],
$li = $('li');
arr.forEach(function(txt, i) {
$li.eq(i).text(txt);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<ul>
<li></li>
<li></li>
<li></li>
</ul>
A: You should iterate by arr, not by all li elements, you can do it like this:
arr = ['a','b','c'];
$.each(arr, function(i,v){
$('li').eq(i).text(v);
});
This way you assign text to exactly as many li elements as there are items in the array.
A: In your code, every pass of the outer loop sets the text of all <li> elements to the current iteration's value, so as a result you get all 'c'.
To do it right, you should write it like this:
arr = ['a','b','c'];
$('li').each(function(index) {
$(this).text(arr[index]);
});
A: The jQuery each() function will pass in an index, which you can use
arr = ['a','b','c'];
$('li').each(function(i) {
$(this).text(arr[i]);
});
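The bug in the question is language-independent: a nested loop that writes every slot on each pass leaves only the last value behind. A small Python sketch of both versions, using a plain list as a stand-in for the li elements:

```python
arr = ['a', 'b', 'c']

# Buggy version: for each array value, overwrite EVERY slot,
# so only the final value 'c' survives.
buggy = [None, None, None]
for value in arr:
    for i in range(len(buggy)):
        buggy[i] = value
print(buggy)  # ['c', 'c', 'c']

# Fixed version: pair each slot with the value at the same index,
# as the jQuery answers do with the element index.
fixed = [None, None, None]
for i in range(len(fixed)):
    fixed[i] = arr[i]
print(fixed)  # ['a', 'b', 'c']
```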
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29715709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I use Binder to perform dynamic bindings in my C# Function? I need to bind to an output blob, but the blob path needs to be computed dynamically in my function. How do I do it?
A: I have consolidated all of the information from this and other posts, along with the comments, and created a blog post that demonstrates how to use Binder in a real-world scenario. Thanks to @mathewc for making this possible.
A: Binder is an advanced binding technique that allows you to perform bindings imperatively in your code, as opposed to declaratively via the function.json metadata file. You might need to do this in cases where the computation of the binding path or other inputs needs to happen at runtime in your function. Note that when using a Binder parameter, you should not include a corresponding entry in function.json for that parameter.
In the below example, we're dynamically binding to a blob output. As you can see, because you're declaring the binding in code, your path info can be computed in any way you wish. Note that you can bind to any of the other raw binding attributes as well (e.g. QueueAttribute/EventHubAttribute/ServiceBusAttribute/etc.) You can also do so iteratively to bind multiple times.
Note that the type parameter passed to BindAsync (in this case TextWriter) must be a type that the target binding supports.
using System;
using System.Net;
using Microsoft.Azure.WebJobs;
public static async Task<HttpResponseMessage> Run(
HttpRequestMessage req, Binder binder, TraceWriter log)
{
log.Verbose($"C# HTTP function processed RequestUri={req.RequestUri}");
// determine the path at runtime in any way you choose
string path = "samples-output/path";
using (var writer = await binder.BindAsync<TextWriter>(new BlobAttribute(path)))
{
writer.Write("Hello World!!");
}
return new HttpResponseMessage(HttpStatusCode.OK);
}
And here is the corresponding metadata:
{
"bindings": [
{
"name": "req",
"type": "httpTrigger",
"direction": "in"
},
{
"name": "res",
"type": "http",
"direction": "out"
}
]
}
There are bind overloads that take an array of attributes. In cases where you need to control the target storage account, you pass in a collection of attributes, starting with the binding type attribute (e.g. BlobAttribute) and including a StorageAccountAttribute instance pointing to the account to use. For example:
var attributes = new Attribute[]
{
new BlobAttribute(path),
new StorageAccountAttribute("MyStorageAccount")
};
using (var writer = await binder.BindAsync<TextWriter>(attributes))
{
writer.Write("Hello World!");
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39855409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Android - Open new activity on item clicked in RecyclerView I'm creating a simple app for newbies, but I'm stuck on the main part. I created a RecyclerView list. I want to know how to open a new Activity by clicking on the first item in the list (see the screenshot).
Here is my MainActivity file:
public class MainActivity extends AppCompatActivity {
Toolbar toolbar;
DrawerLayout drawerLayout;
NavigationView navigationView;
private ArrayList<String> countries;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initViews();
}
private void initViews() {
RecyclerView recyclerView = (RecyclerView) findViewById(R.id.card_recycler_view);
recyclerView.setHasFixedSize(true);
RecyclerView.LayoutManager layoutManager = new LinearLayoutManager(getApplicationContext());
recyclerView.setLayoutManager(layoutManager);
countries = new ArrayList<>();
countries.add("Computer");
countries.add("RAM");
countries.add("ROM");
countries.add("MotherBoard");
countries.add("Printer");
countries.add("CPU");
countries.add("Pendrive");
countries.add("Keyboard & Mouse");
RecyclerView.Adapter adapter = new DataAdapter(countries);
recyclerView.setAdapter(adapter);
recyclerView.addOnItemTouchListener(new RecyclerView.OnItemTouchListener() {
GestureDetector gestureDetector = new GestureDetector(getApplicationContext(),
new GestureDetector.SimpleOnGestureListener() {
@Override
public boolean onSingleTapUp(MotionEvent e) {
return true;
}
});
@Override
public boolean onInterceptTouchEvent(RecyclerView rv, MotionEvent e) {
View child = rv.findChildViewUnder(e.getX(), e.getY());
if (child != null && gestureDetector.onTouchEvent(e)) {
int position = rv.getChildAdapterPosition(child);
Toast.makeText(getApplicationContext(), countries.get(position),
Toast.LENGTH_SHORT).show();
}
return false;
}
@Override
public void onTouchEvent(RecyclerView rv, MotionEvent e) {
}
@Override
public void onRequestDisallowInterceptTouchEvent(boolean disallowIntercept) {
}
});
}
}
DataAdapter:
public class DataAdapter extends RecyclerView.Adapter<DataAdapter.ViewHolder> {
private ArrayList<String> countries;
public DataAdapter(ArrayList<String> countries) {
this.countries = countries;
}
@Override
public DataAdapter.ViewHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
View view = LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.card_row, viewGroup, false);
return new ViewHolder(view);
}
@Override
public void onBindViewHolder(DataAdapter.ViewHolder viewHolder, int i) {
viewHolder.tv_country.setText(countries.get(i));
}
@Override
public int getItemCount() {
return countries.size();
}
public class ViewHolder extends RecyclerView.ViewHolder{
private TextView tv_country;
public ViewHolder(View view) {
super(view);
tv_country = (TextView)view.findViewById(R.id.tv_country);
}
}
}
A: You should use onItemClickListener or OnClickListener
recyclerView.setOnClickListener(new View.OnClickListener() {
...
}
A: In the onBindViewHolder function, add an OnClickListener to the items.
@Override
public void onBindViewHolder(DataAdapter.ViewHolder viewHolder, int i) {
viewHolder.tv_country.setText(countries.get(i));
viewHolder.tv_country.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
}
});
}
In this example I added an OnClickListener to tv_country. If you need further help, ask me in the comments.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45905492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Different behaviour of self-referential model. Is it a bug? It seems that I have found a bug. I have two similar models that behave differently.
I have a Post model that belongs_to an Author.
I have a Task model that is self-referencing.
Model code:
app/models/author.rb:
class Author < ActiveRecord::Base
has_many :posts
end
app/models/post.rb:
class Post < ActiveRecord::Base
belongs_to :author
scope :published, -> { where(status: 1) }
after_create do
puts '========================================='
puts "author_id is present: '#{author_id}'"
puts "what about task association? '#{author}'"
puts '========================================='
end
end
app/models/task.rb:
class Task < ActiveRecord::Base
belongs_to :task
has_many :tasks
scope :published, -> { where(status: 1) }
after_create do
puts '========================================='
puts "task_id is present: '#{task_id}'"
puts "what about task association? '#{task}'"
puts '========================================='
end
end
Both Post and Task are similarly scoped, but behave differently:
Author.create.posts.published.create # works
Task.create.tasks.published.create # doesn't work
Task.create.tasks.create # works
There is an after_create callback in the Task model that should print the parent Task, but it is nil, despite task_id having the correct ID of the parent.
Why is it behaving differently?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26544986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Return String Generated by PHP Webpage I have a url like the following:
http://www.test.co.uk/Requests.php?accessToken=01XJSK
Depending on whether or not the access token is valid, either a 1 or 'invalidAT' is returned. However, when I try to return that value, I end up returning HTML and not the string.
This is what I am currently trying:
- (NSString *) getDataFrom:(NSString *)url{
NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
[request setHTTPMethod:@"GET"];
[request setURL:[NSURL URLWithString:url]];
NSError *error = [[NSError alloc] init];
NSHTTPURLResponse *responseCode = nil;
NSData *oResponseData = [NSURLConnection sendSynchronousRequest:request returningResponse:&responseCode error:&error];
if([responseCode statusCode] != 200){
NSLog(@"Error getting %@, HTTP status code %i", url, [responseCode statusCode]);
return nil;
}
return [[NSString alloc] initWithData:oResponseData encoding:NSUTF8StringEncoding];
}
Can anyone explain how I go about returning either the '1' or the 'invalidAT'?
A: The URL you give as an example, "http://www.sonect.co.uk/Requests.php?accessToken=01XJSK", returns HTML with a frame that has "http://lvps92-60-123-84.vps.webfusion.co.uk/Requests.php?accessToken=01XJSK" as its source:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>www.sonect.co.uk</title>
</head>
<frameset rows="100%,*" border="0">
<frame src="http://lvps92-60-123-84.vps.webfusion.co.uk/Requests.php?accessToken=01XJSK" frameborder="0" />
<frame frameborder="0" noresize />
</frameset>
<!-- pageok -->
<!-- 02 -->
<!-- -->
</html>
That source will return 1 as response.
A: You are being returned an iframe whose source is
http://lvps92-60-123-84.vps.webfusion.co.uk/Requests.php?accessToken=01XJSK
You have to call this URL directly, and you will get the response.
In a browser this runs perfectly because the iframe is loaded and executed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16786924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to install NetBeans IDE 8.0.2 (PHP version) without losing data preferences I have NetBeans IDE 8.0.2 installed with all supported technologies (C, C++, Java, etc.), which is approximately 205 MB, and that hampers my PC's performance as it loads very slowly. So I just want to install the PHP module, which is only 64 MB.
How do I install only the PHP version without losing my data preference and projects?
A: I would recommend a fresh install of your IDE, which can be done by:
*
*Find the NetBeans Project folder in My Documents
*Copy that to another location
*Un-install the IDE from Control Panel
*Restart you PC
*Download the php version here
*Install it, start your IDE and just copy paste the project to the folder created in My Document for NetBeansProject
As far as the preferences are concerned, if they are not very hard to recreate, a fresh install is the way to go: removing the other modules (C, C++, or Java) from the IDE directly would still leave behind some files or cache (depending on the NetBeans version), which could still slow down your PC.
A: To install NetBeans, you must first have the latest version of the Java SDK installed on your PC. After that, if you want to keep your data, you must have a backup of the www folder that gets created during installation. Then, in NetBeans, you can start a project and copy-paste your created pages (PHP, HTML, CSS, JS) into that project, and you are all set to start from where you left off.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30779055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Efficiently preventing duplicate accesses I have a statement computing a multiply-accumulate operation that looks something like this:
return A->set(A->get() + B->get() * C->get());
Now, A, B, and C may not be unique, and I want to minimize redundant get()s. The only way I can think of optimizing this is with
if (A == B && B == C) {
double a = A->get();
return A->set(a + a * a);
} else if (A == B) {
double a = A->get();
return A->set(a + a * C->get());
} else if (A == C) {
double a = A->get();
return A->set(a + B->get() * a);
} else if (B == C) {
double b = B->get();
return A->set(A->get() + b * b);
} else {
return A->set(A->get() + B->get() * C->get());
}
Is there a more efficient way? And what about generalizing this to more than three arguments?
A: You can store them in a map. The solution can be extended easily to arbitrarily many pointers, but I've used three here for concreteness.
std::unordered_map<MyType *, double> computed_values;
for (MyType *p: {A, B, C}) {
if (computed_values.find(p) == computed_values.end()) {
computed_values[p] = p->get();
}
}
double result = computed_values[A] + computed_values[B] * computed_values[C];
A->set(result);
As others have pointed out, make sure you profile to make sure this is actually worth the overhead of std::unordered_map lookups.
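The map idea above can be demonstrated in a runnable form: cache each distinct object's get() result once, keyed by identity, so aliased arguments trigger only one call. A Python sketch of the same technique (the Source class is hypothetical, standing in for MyType, and counts its get() calls to make the saving visible):

```python
class Source:
    """Hypothetical stand-in for MyType: get() is 'expensive', so we count calls."""
    def __init__(self, value):
        self.value = value
        self.calls = 0

    def get(self):
        self.calls += 1
        return self.value

def mac(A, B, C):
    # One lookup per distinct object, keyed by identity (like the pointer map).
    cache = {}
    for obj in (A, B, C):
        if id(obj) not in cache:
            cache[id(obj)] = obj.get()
    return cache[id(A)] + cache[id(B)] * cache[id(C)]

x = Source(2.0)
y = Source(3.0)

# A and B are the same object here, so x.get() runs once, not twice.
result = mac(x, x, y)
print(result)            # 2.0 + 2.0 * 3.0 = 8.0
print(x.calls, y.calls)  # 1 1
```

As with the C++ version, whether the cache pays for itself depends on how expensive get() actually is; profile before committing to it.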
A: Assuming the get() methods are really costly, to the extent of producing a measurable performance difference:
double a, b, c;
a = A->get();
b = (B == A ? a : B->get());
c = (C == B ? b : (C == A ? a : C->get()));
return A->set(a + b * c);
A: Assuming the get() methods are reasonably cheap, you'd be better off just doing:
return A->set(A->get() + B->get() * C->get());
The other approach simply inserts a bunch of conditional jumps into your code, which could easily end up being more expensive than the original code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55822500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: React useEffect infinite loop despite empty array I have a react hook.
This hook has sub-hooks, that when generated, generate for themselves an ID.
This ID is always unique, as it is simply increased by 1 each time a new sub-hook is created.
const App = () => {
const [ idCounter, setIdCounter ] = React.useState(0);
const genId = () => {
setIdCounter( id => id + 1 );
return `ID-${idCounter}`;
}
const SomeComponent = () => {
const [ componentId, setComponentId ] = React.useState(null);
React.useEffect(() => {
let generatedId = genId();
setComponentId( id => generatedId );
console.log(`generated '${generatedId}'`)
}, []);
return <div>nothing works</div>
}
return <SomeComponent />
};
This loops and logs the generated id over and over again. Why on earth would it do this?
useEffect() is dependent on... nothing!! It should run only once, no?
How can I get this not to happen? I would like to be able to create several of SomeComponent from within App in the future.
A: First things first,
I would like to be able to create several of SomeComponent from within App in the future.
This (at least the way you're doing it) is not something that is possible or should be done at all when using React. You cannot create a component inside another component.
The reason your useEffect is in an infinite loop can be all sorts of things at the moment. It can be that it is not positioned in the highest scope, but my guess is that the following happens:
genId() is called, and state is updated, re-render is initialized (because of state update), and const SomeComponent = () => {...} is initialized again, thus activating useEffect again.
To fix this, remove <SomeComponent /> from being created inside <App />; they should be completely separated. Secondly, pass the genId function in as a prop and add it to the useEffect dependency list, since you use it there.
This is a good start, since the code is now semantically correct and follows the rules.
const SomeComponent = ({ genId }) => {
const [ componentId, setComponentId ] = React.useState(null);
React.useEffect(() => {
let generatedId = genId();
setComponentId(generatedId);
console.log(`generated '${generatedId}'`)
}, [genId]);
return <div>nothing works</div>
}
const App = () => {
const [ idCounter, setIdCounter ] = React.useState(0);
const genId = () => {
setIdCounter( id => id + 1 );
return `ID-${idCounter}`;
}
return <SomeComponent genId={genId} />
};
A: I won't answer this directly, but I'll just make the point that sub-components have some disadvantages: your sub-component is almost impossible to unit test. Maybe you can move it to a top-level component and accept generatedId as a prop:
const SomeComponent = ({generatedId}) => {
const [ componentId, setComponentId ] = React.useState(null);
React.useEffect(() => {
setComponentId( id => generatedId );
console.log(`generated '${generatedId}'`)
}, []);
return <div>nothing works</div>
}
const App = () => {
const [ idCounter, setIdCounter ] = React.useState(0);
const genId = () => {
setIdCounter( id => id + 1 );
return `ID-${idCounter}`;
}
return <SomeComponent generatedId={genId()}/>
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65163988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Trying to change even list links css? So here's what I was trying to do
CSS:
li:nth-child(2n) {
background-color:gray;
}
HTML:
<ul>
<li><a></a></li>
<li><a></a></li>
<li><a></a></li>
<li><a></a></li>
</ul>
This works well. But when I try
li>a:nth-child(2n) {
color:white;
}
This doesn't work, and I don't know why.
A: You can use the even rule to target all the even children, like this.
li:nth-child(even) a {
color: white;
}
Here's a reference: https://www.w3.org/Style/Examples/007/evenodd.en.html
A: In your first example, you are saying "the nth li", of which there are several. In the second example, you are saying "the nth a in a li", of which there is only one.
A: Alternatively, select the even li elements and then target the a inside them:
li:nth-child(2n) > a {
color: white;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61250647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Navigation not working in react-bootstrap suddenly My React app used to work with react-bootstrap, and now suddenly the navigation is not working. Here is my package.json. At the moment it only shows the "Roommates" brand but not the other items like Sign In, Contact, or About, and there is a small button next to it; when I click on the button, it expands for a second, the menus show and then disappear again. Also, when the screen is small, the hamburger menu doesn't show.
{
"name": "newreactstuff",
"version": "0.1.0",
"private": true,
"dependencies": {
"react": "^16.2.0",
"react-bootstrap": "^0.32.1",
"react-dom": "^16.2.0",
"react-router": "^4.2.0",
"react-router-dom": "^4.2.2",
"react-scripts": "1.1.1"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test --env=jsdom",
"eject": "react-scripts eject"
},
"proxy": "http://localhost:3001/"
}
Here is my Nav component
import React, { Component } from 'react';
import {
Link
} from 'react-router-dom';
import { NavItem } from 'react-bootstrap';
import { MenuItem } from 'react-bootstrap';
import { NavDropdown } from 'react-bootstrap';
import { Nav, Navbar, NavItem, NavDropdown, MenuItem, FormControl, FormGroup,
Button } from 'react-bootstrap';
class Navigation extends Component {
render() {
return (
<Navbar inverse collapseOnSelect>
<Navbar.Header>
<Navbar.Brand>
<Link to="/">Roommates</Link>
</Navbar.Brand>
<Navbar.Toggle />
</Navbar.Header>
<Navbar.Collapse>
<Nav pullRight>
<li><Link to="/">Sign In</Link></li>
<li><Link to="/contact">Contact</Link></li>
<li><Link to="/about">About</Link></li>
</Nav>
</Navbar.Collapse>
</Navbar>
)
}
}
export default Navigation;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49165305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why can't I use a ScrollView within a ColumnLayout? I have a ScrollView around a ListView. But when I put this in a ColumnLayout, the ListView disappears.
My actual code is larger and more complicated, but I've reduced the problem down to this small example.
import QtQuick 2.11
import QtQuick.Window 2.11
import QtQuick.Layouts 1.11
import QtQuick.Controls 2.4
Window {
visible: true
width: 640
height: 480
title: qsTr("Hello World")
ListModel {
id: theModel
ListElement { display: "one" }
ListElement { display: "two" }
ListElement { display: "three" }
ListElement { display: "four" }
ListElement { display: "five" }
}
ColumnLayout {
ScrollView
{
width: 150
height: 150
clip: true
ListView {
model: theModel
anchors.fill: parent
delegate: Column {
TextField {
text: display
}
}
}
}
Rectangle {
color: "black"
width: 100
height: 30
}
}
}
Without the ColumnLayout and the Rectangle, I get a scrollable window showing part of the ListView as expected. But with them included, there is no sign of the ListView apart from some blank space above the rectangle.
A: A Qt Quick Layout resizes all its child items (e.g. ColumnLayout resizes its children's heights, RowLayout their widths), so you should use the Layout attached properties to indicate how to lay them out, rather than setting their sizes directly, e.g.
ScrollView {
Layout.maximumHeight: 150 // height will be updated according to these layout properties
width: 150
clip: true
ListView {
model: theModel
anchors.fill: parent
delegate: Column {
TextField {
text: display
}
}
}
}
A: A Layout changes both the sizes and positions of its children, but since I was specifying the sizes of the children myself, I only wanted to change the positions. A Positioner is used for this (specifically, a Column instead of a ColumnLayout). Additionally, I had not set the size of the parent Layout/Positioner, so I now do this with anchors.fill: parent.
Column {
anchors.fill: parent
ScrollView
{
width: 150
height: 150
clip: true
ListView {
model: theModel
anchors.fill: parent
delegate: Column {
TextField {
text: display
}
}
}
}
Rectangle {
color: "black"
width: 100
height: 30
}
}
Thanks to the other comment and answer for helping me realize this!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57714139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Driver.current_url not reflecting the opening of a link- selenium After using Selenium to log in to LinkedIn, I'm trying to navigate to the jobs page using the following:
jobs = driver.find_element(
By.XPATH, '//*[@id="global-nav"]/div/nav/ul/li[3]/a')
jobs.click()
job_src = driver.page_source
print(driver.current_url)
The above returns: https://www.linkedin.com/feed/
However, looking at the browser that selenium opens up, it looks as though https://www.linkedin.com/jobs/? is clicked on.
Is my XPATH wrong? I copied it from Chrome Dev tools.
From there, I'm trying to scrape the job titles using :
soup = BeautifulSoup(job_src, 'html.parser')
job_list_html = soup.select('.job-card-list__title')
for job in job_list_html:
print(job.get_text())
But all that's returned is an empty list.
A: The issue you are running into is that you need to wait until the page has loaded. Here is my suggestion.
First after you log in you can navigate directly to the job list URL. This is likely to be less fragile than using the XPath:
driver.get('https://www.linkedin.com/jobs/collections/recommended/')
The following is the most important piece you are missing (wait here is a WebDriverWait instance, with the usual imports):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
wait.until(EC.presence_of_all_elements_located((By.CLASS_NAME, 'job-card-list__title')))
You can look into other wait commands here, but the above appeared to work for me.
Next, I noticed that this only lists a few jobs since the rest are dynamically loaded when you scroll. What I did is simulated the scrolling with:
driver.execute_script('res = document.querySelector("#main > div > section.scaffold-layout__list > div"); res.scrollTo(0, res.scrollHeight)')
time.sleep(2)
The sleep of 2 seconds is needed again to give time for that to execute before you get the source.
With that wait and the scroll, your code for getting the list of job names works.
job_src = driver.page_source
soup = BeautifulSoup(job_src, 'html.parser')
job_list_html = soup.select('.job-card-list__title')
print(len(job_list_html))
for job in job_list_html:
    print(job.get_text())
You may notice that the job list is paginated, so this code will only get the first page of jobs, but hopefully this gets you on the right track.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73529261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Grouping rows from a single day into a count - MYSQL I've been trying to work through this problem for a bit, and clearly I'm missing something (probably something obvious).
I'm trying to group rows from a single day into a count. So the output should look like this:
Date Count
2009-09-12 2
2009-09-13 5
2010-01-09 4
...and so on.
My current SQL looks like this:
SELECT `date`, COUNT(*) FROM `sales_flat_table` GROUP BY `date`;
And outputs data that looks like this:
Date Count
2009-09-12 1
2009-09-12 1
2009-09-13 1
2009-09-13 1
2009-09-13 1
2009-09-13 1
2009-09-13 1
...and so on.
What am I missing? Thanks!
A: My best guess is that date is really a datetime and it has a time component. To get just the date, use the date() function:
SELECT date(`date`) as `date`, COUNT(*)
FROM `sales_flat_table`
GROUP BY date(`date`);
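You can verify this behaviour quickly with SQLite, whose date() function truncates a datetime string the same way. This is only an illustrative sketch (in-memory database, made-up rows), not MySQL itself:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE sales_flat_table (ts TEXT)')  # ts stands in for the `date` column
conn.executemany('INSERT INTO sales_flat_table VALUES (?)', [
    ('2009-09-12 10:00:00',),
    ('2009-09-12 11:30:00',),
    ('2009-09-13 09:15:00',),
])

# Grouping on date(ts) collapses all rows from the same calendar day.
rows = conn.execute(
    'SELECT date(ts) AS d, COUNT(*) FROM sales_flat_table GROUP BY date(ts) ORDER BY d'
).fetchall()
print(rows)  # [('2009-09-12', 2), ('2009-09-13', 1)]
```

Grouping on the raw column instead would keep every distinct timestamp as its own group, which is exactly the symptom in the question.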
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17454879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Filtering permutations of strings What I'm trying to do is create a set of permutations of a string which contains a '[' and ']' in the string. Then, add only the strings that were created by the create_perm which contains an '[' as the first and ']' as the last to a new set and return that set.
What I've done:
def create_perm(lst_str):
    if len(lst_str) <= 1:
        return {lst_str}
    old_set = set()
    for idx, c in enumerate(lst_str):
        for perm in create_perm(lst_str[:idx] + lst_str[idx+1:]):
            old_set.add(c + perm)
    return old_set

def perm(lst_str):
    new_set = set()
    for idx in create_perm(lst_str):
        if idx.startswith('[') and idx.endswith(']'):
            new_set.add(idx)
    return new_set
Examples:
perm('[1]2')
{'[12]', '[21]'}
perm('[1]23')
{'[123]', '[213]', '[321]', '[312]', '[231]', '[132]'}
perm('[1]2,3')
{'[2,13]', '[3,21]', '[12,3]', '[,213]', '[2,31]', '[,312]', '[1,23]', '[13,2]', '[321,]', '[312,]', '[,132]', '[23,1]', '[123,]', '[,321]', '[1,32]', '[132,]', '[,231]', '[3,12]', '[32,1]', '[21,3]', '[213,]', '[,123]', '[31,2]', '[231,]'}
so,
first I create the permutations of a string which contain a '[' and ']' within:
>>> create_perm('[2,],1')
{',]2,[1', '2,1,][', '2][1,,', ',][2,1', ']2,1,[', ',[,]12', '1,],[2', ',[2,1]', ',[12],', '2[,]1,', '1][,,2', ',,]2[1', '1[2,],', ...}
then return a new set with only the strings that begin with '[' and end with ']':
>>> perm('[2,],1')
{'[,2,1]', '[2,,1]', '[12,,]', '[1,,2]', '[,12,]', '[,21,]', '[21,,]', '[2,1,]', '[,,21]', '[,1,2]', '[,,12]', '[1,2,]'}
I can just call perm and that will call create_perm within.
The problem with this is that it can take quite a while when more characters are added to the string. But I think I know what to do to increase the speed a bit, and here's my idea:
Idea:
When creating the permutations of the string, don't bother adding/creating strings that don't start with '[' and end with ']' into the set, then we can simply remove the perm function which loops through each item and checks if the string starts with a '[' and ends with a ']'. In the end, there will be less amount of items to check therefore increasing the speed.
But how would I go about changing the create_perm function so that it doesn't create strings that don't start with '[' and end with ']'?
Any help would be appreciated. If there is an even better approach than my idea, please let me know.
A: Just strip them out, you don't need them to generate the perms.
import itertools

def create_perm(lst_str):
    li = list(lst_str.replace('[', '').replace(']', ''))
    for perm in itertools.permutations(li):
        yield '[{}]'.format(''.join(perm))
demo:
list(create_perm('[1]2,'))
Out[102]: ['[12,]', '[1,2]', '[21,]', '[2,1]', '[,12]', '[,21]']
This uses itertools.permutations instead of recursively generating the perms.
A: Why can't you strip out the brackets, generate all permutations of the resulting string, then put the brackets around each result?
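Both answers boil down to the same strip-and-wrap idea. A self-contained sketch of it (the function name perm_fast is mine, for illustration):

```python
import itertools

def perm_fast(s):
    """Return the set of bracketed permutations of the characters of s,
    with the original '[' and ']' stripped out first."""
    inner = s.replace('[', '').replace(']', '')
    return {'[' + ''.join(p) + ']' for p in itertools.permutations(inner)}

print(sorted(perm_fast('[1]2')))  # ['[12]', '[21]']
print(len(perm_fast('[1]23')))    # 6
```

Because every result starts with '[' and ends with ']' by construction, the post-filtering step in the original perm function disappears entirely, which is exactly the speed-up the question asks for.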
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22517945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ConnectionPoolTimeoutException: Timeout waiting for connection from pool putObject() s3Client Java I'm uploading image files to S3 using the AWS S3 client in my Java application, but sometimes I've been getting the error
ERROR 9 --- [io-8080-exec-27] b.c.i.h.e.handler.HttpExceptionHandler : com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
but I haven't identified why this error occurs or what fix I need to implement. In the documentation I saw that you can create a ClientConfiguration, call setMaxConnections on it, and pass it to the AmazonS3ClientBuilder, but I believe that would just mask the problem rather than actually correct it; am I right?
I could not find details on why this connection-pool problem occurs when using putObject(); if someone knows the reason, or can explain from my implementation why it happens, please share. In our application there is also an SQS configuration for queues.
S3Config
public class S3Config {

    @Bean
    public AmazonS3 s3client() {
        return AmazonS3ClientBuilder.standard()
                .build();
    }
}
Service Upload
public List<String> uploadImage(Long id, List<MultipartFile> files) throws Exception {
    Random rand = new Random();
    Product product = this.productService.findById(id);
    List<String> imgPath = new ArrayList<>();
    for (MultipartFile file : files) {
        String name = (product.getName() + this.brandService.findBrandById(product.getBrand()).getName() + rand.nextInt(999999)).replaceAll(" ", "-");
        String fullPath = this.s3Service.uploadImageFile(
                file, '.' + Objects.requireNonNull(file.getOriginalFilename()).split("\\.")[1],
                name,
                awsBucketProperties.getName(),
                awsBucketProperties.getEndpoint());
        imgPath.add(this.utils.removeImageDomain(fullPath));
    }
    return imgPath;
}
Service S3
public String uploadImageFile(final MultipartFile file, final String ext, final String filename, final String bucketName, final String bucketEndpoint) throws IOException {
    byte[] imageData = file.getBytes();
    InputStream stream = new ByteArrayInputStream(imageData);
    String s3FileName = filename + ext;
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(stream.available());
    try {
        s3client.putObject(new PutObjectRequest(bucketName, s3FileName, stream, metadata)
                .withCannedAcl(CannedAccessControlList.PublicRead));
    } catch (AmazonClientException ex) {
        ex.printStackTrace();
    }
    return String.format("%s/%s", bucketEndpoint, s3FileName);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/67235034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to load Html page which is outside application context using JSP include or c:import or symlink I need to load some html pages created dynamically by some other application into my application jsp page using jsp include tag <jsp:include page="${Htmlpath}" /> OR <jsp:include page="D:\MySharedHTML\test.html" />. My idea is to have a shared folder on server like "MySharedHTML" and let other application create html files there and my app will access by giving full path. But jsp include is saying "requested resource D:\MySharedHTML\test.html is not available". Any inputs how to do it. Thanks In Advance.
A: It has to be available by a URL. The path D:\MySharedHTML\test.html is very definitely not a valid URL. A valid URL looks like this: http://localhost:8080/MySharedHTML/test.html.
Whether to use <jsp:include> or <c:import> depends on whether the URL is an internal or an external URL. The <jsp:include> works only on internal URLs (thus, resources in the same webapp, also the ones privately hidden in /WEB-INF). The <c:import> works additionally also on external URLs (thus, resources in a completely different webapp, but those have to be publicly accessible; i.e. you have got to see the desired include content already when copypasting the URL in browser's address bar).
In your particular case, you seem to have it elsewhere in the server's local disk file system which is not available by a true URL at all. In that case you've basically 2 options:
*
*Add the root folder of that path as a virtual host to the server configuration. How to do that depends on the server make/version which you didn't tell anything about. To take Tomcat as an example, that would be a matter of adding the following entry to its /conf/server.xml:
<Context docBase="D:\MySharedHTML" path="/MySharedHTML" />
This way all of the folder's contents is available by http://localhost:8080/MySharedHTML/*, including the test.html. This way you can use <c:import> on it (note: <jsp:include> is inapplicable as this is not in the same webapp).
<c:import url="/MySharedHTML/test.html" />
*Create a servlet which acts as a proxy to the local disk file system. Let's assume that you're using Servlet 3.0 / Java 7 and that you can change ${Htmlpath} variable in such way that it merely returns test.html, then this should do:
@WebServlet("/MySharedHTML/*")
public class PdfServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String filename = request.getPathInfo().substring(1);
        File file = new File("D:\\MySharedHTML", filename);
        response.setHeader("Content-Type", getServletContext().getMimeType(filename));
        response.setHeader("Content-Length", String.valueOf(file.length()));
        response.setHeader("Content-Disposition", "inline; filename=\"" + URLEncoder.encode(filename, "UTF-8") + "\"");
        Files.copy(file.toPath(), response.getOutputStream());
    }
}
(when not using Servlet 3.0 / Java 7 yet, just fall back to the obvious web.xml regisration and InputStream/OutputStream loop boilerplate)
As the servlet runs in the same webapp, <jsp:include> should work just fine:
<jsp:include page="/MySharedHTML/${HtmlFilename}" />
A: You don't include by full path. The folder MySharedHTML will need to be under your application folder, and you include by relative path.
So say your webapp was at
c:\Program Files\Apache Software Foundation\Tomcat\webapps\myapp\
You would put your MySharedHTML in there
c:\Program Files\Apache Software Foundation\Tomcat\webapps\myapp\MySharedHTML
And then include by relative path:
<jsp:include page="./MySharedHTML/test.html" />
A: Alternatively, we can achieve this with the help of a symlink (also called a soft link), so that not much coding is needed. What I did in my case is create a soft link for MySharedHTML, which is under my application's web content, pointing to a path on the D drive.
As symlinks are disabled by default, to enable them in your Tomcat server you need to add the configuration below to context.xml, which is under the conf folder of the Tomcat server.
<Context allowLinking="true">
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19162111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: ValueError: need more than 1 value to unpack, split a line I have a file with questions and answers on the same line. I want to separate them and append them to their own empty lists, but I keep getting this error:
builtins.ValueError: need more than 1 value to unpack
questions_list = []
answers_list = []

questions_file = open('qanda.txt', 'r')
for line in questions_file:
    line = line.strip()
    questions, answers = line.split(':')
    questions_list.append(questions)
    answers_list.append(answers)
A: This is probably because when you're doing the splitting, there is no :, so the split just returns a list with one element, not two. This is probably caused by the last line, meaning that your last line has nothing but empty spaces. Like so:
>>> a = ' '
>>> a = a.strip()
>>> a
''
>>> a.split(':')
['']
As you can see, the list returned from .split contains just a single empty string. So, to show you a demo, this is a sample file:
a: b
c: d
e: f
g: h
We try to use the following script (val.txt is the name of the above file):
with open('val.txt', 'r') as v:
    for line in v:
        a, b = line.split(':')
        print a, b
And this gives us:
Traceback (most recent call last):
a b
c d
File "C:/Nafiul Stuff/Python/testingZone/28_11_13/val.py", line 3, in <module>
a, b = line.split(':')
e f
ValueError: need more than 1 value to unpack
When trying to look at this through a debugger, the variable line becomes \n, and you can't split that.
However, a simple logical amendment would correct this problem:
with open('val.txt', 'r') as v:
    for line in v:
        if ':' in line:
            a, b = line.strip().split(':')
            print a, b
A: line.split(':') apparently returns a list with one element, not two.
Hence that's why it can't unpack the result into questions and answers. Example:
>>> line = 'this-line-does-not-contain-a-colon'
>>> question, answers = line.split(':')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: need more than 1 value to unpack
A: Try:
question, answers = line.split(':', maxsplit=1)
or:
question, __, answers = line.partition(':')
Also in Python 3 you can do something else:
question, *many_answers = line.split(':')
which looks like:
temp = line.split(':')
question = temp[0]
many_answers = tuple(temp[1:])
A: The reason why this happens could be a few, as already covered in the other answers. Empty line, or maybe a line only have a question and no colon. If you want to parse the lines even if they don't have the colon (for example if some lines only have the question), you can change your split to the following:
questions, answers, garbage = (line+'::').split(':', maxsplit=2)
This way, the values for questions and answers will be filled if they are there, and will be empty if the original file doesn't have them. For all intents and purposes, ignore the variable garbage.
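Putting the pieces together, a defensive version of the original loop might look like this (the helper name and sample lines are illustrative):

```python
def parse_qa_lines(lines):
    """Split 'question:answer' lines, skipping blank lines (the usual
    cause of the ValueError) and tolerating lines without a colon."""
    questions_list, answers_list = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # a blank or whitespace-only line has nothing to unpack
        question, _, answer = line.partition(':')
        questions_list.append(question)
        answers_list.append(answer.strip())
    return questions_list, answers_list

qs, ans = parse_qa_lines(['What is 2+2?: 4', '', 'No colon here'])
print(qs)   # ['What is 2+2?', 'No colon here']
print(ans)  # ['4', '']
```

partition always returns a 3-tuple, so the unpack can never fail, no matter how malformed the line is.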
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20270871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is it possible to use wrapper functions in .so library? If not, is there anything that can provide the same functionality? In a binary foo, it calls function pthread_mutex_lock(). I do not have access to its source code, so I cannot recompile it. But I want to make it use my own implementation of locks.
To use my own locks, I want to create a library lock.so, then use LD_PRELOAD to replace the original pthread_mutext_lock calls. But I need to use pthread_mutex_lock in the middle of my implementation of lock(). Is there anyway I can use wrapper functions (-Wl,-wrap=pthread_mutex_lock) to do that? I believe the ld is performed by the linker, is it? So if I cannot use the wrapper function, how can I do it?
Thanks for any suggestions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/67700629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Disnake: delete ephemeral interaction response I have a bot that allows users to play games with slash commands. The bot responds with ephemeral messages. To keep things tidy, I would like to remove some responses (e.g. if the user doesn't react after a few minutes). However, I'm struggling to remove ephemeral messages.
The bot is sending a response with:
await inter.send(file=file, embed=embed,components=components, ephemeral=True)
If ephemeral=False, the following code deletes the bot response:
await inter.delete_original_message()
However, if ephemeral=True, I get the following error when trying to delete the message:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/disnake/ext/commands/slash_core.py", line 680, in invoke
await call_param_func(self.callback, inter, self.cog, **kwargs)
File "/usr/local/lib/python3.8/site-packages/disnake/ext/commands/params.py", line 817, in call_param_func
return await maybe_coroutine(safe_call, function, **kwargs)
File "/usr/local/lib/python3.8/site-packages/disnake/utils.py", line 580, in maybe_coroutine
return await value
File "/code/cogs/game.py", line 271, in game
await game(
File "/code/cogs/game.py", line 189, in game
await inter.delete_original_message()
File "/usr/local/lib/python3.8/site-packages/disnake/interactions/base.py", line 526, in delete_original_message
await deleter
File "/usr/local/lib/python3.8/site-packages/disnake/webhook/async_.py", line 222, in request
raise NotFound(response, data)
disnake.errors.NotFound: 404 Not Found (error code: 10008): Unknown Message
Is there another way of deleting ephemeral messages?
The relevant documentation section: https://docs.disnake.dev/en/latest/api.html#disnake.ApplicationCommandInteraction.delete_original_message
Edit:
when trying to set the delete_after property of the ephemeral message I get:
disnake.ext.commands.errors.CommandInvokeError: Command raised an exception: ValueError: ephemeral messages can not be deleted via endpoints
I guess this is a hint that Discord does not allow ephemeral messages to be deleted via the API by any means.
A: You figured it out, but overall the Discord API does not allow you to delete an ephemeral message. The best you can get is changing the message content.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73106659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to increase Animation Duration on every click in JS? CSS
#plus{ animation-duration: 5s;}
I want to increase it on every click using JavaScript...
JS Code
var increasePlus = document.getElementById("plus");

increasePlus.addEventListener('click', () => {
    var sec = 5 + "s";
    if (sec == "5s") {
        sec = 6 + "s";
        increasePlus.style.animationDuration = sec;
    }
    if (sec == "6s") {
        sec = 7 + "s";
        increasePlus.style.animationDuration = sec;
    }
});
It doesn't work !!!
A: Going through the logic of the given code we can see that the animation-duration is always set to the same amount (7s) on every click - it never changes after the first click:
var increasePlus = document.getElementById("plus");

increasePlus.addEventListener('click', () => {
    var sec = 5 + "s";
    if (sec == "5s") { // this is always true as sec has just been set to 5s
        sec = 6 + "s"; // so sec is set to 6s
        increasePlus.style.animationDuration = sec;
    }
    if (sec == "6s") { // this is always true as sec has (just) been set to 6s
        sec = 7 + "s"; // so sec is now set to 7s
        increasePlus.style.animationDuration = sec; // and so the animation-duration is ALWAYS set to 7s on a click
    }
});
It is difficult to click on a moving object which is what the given code seems to require (the clickable element id plus is also the one given the animation duration in that code) so in this snippet the plus element gets clicked and that updates the animation duration of a separate object which is the one that moves.
const increasePlus = document.getElementById("plus");
const theObject = document.getElementById('object');

increasePlus.addEventListener('click', () => {
    // get the current animation-duration
    // remember this has an s at the end, so we need to get rid of that so we can add to it
    let sec = window.getComputedStyle(theObject, null)["animationDuration"].replace('s', '');
    // add 1 to it
    sec++;
    // and set the animation-duration
    theObject.style.animationDuration = sec + 's';
});
#plus {
    font-size: 60px;
}

#object {
    position: relative;
    animation-name: move;
    animation-duration: 5s;
    animation-iteration-count: infinite;
    animation-timing-function: linear;
    width: 100px;
    height: 100px;
    background-color: magenta;
}

@keyframes move {
    0% {
        top: 0;
    }
    100% {
        top: 30vh;
    }
}
<button id="plus">+</button>
<div id="object"></div>
A: I think this can fail because of this line
var sec= 5 + "s";
this line executes every time you click the button, so sec is always reset to '5s'; both if blocks then run, and the duration ends up stuck at '7s'.
If you want that code to work, you could move the var sec declaration above the listener declaration so that its value persists between clicks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66877038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Spring data mongo can't save GeoData b/c of No property type found When trying to update an object with geoData spring data mongo throws the ff exception
org.springframework.data.mapping.context.InvalidPersistentPropertyPath: No property type found on org.springframework.data.mongodb.core.geo.GeoJsonGeometryCollection!
an example object structure is like this
class Location {
    ....
    @GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonGeometryCollection geometry
}
repository
interface LocationRepository extends MongoRepository<Location, String> {
}
the save method (which is called on update)
//the exception is thrown here
locationRepository.save(updatedLocation)
I haven't added the type field; it is added by the GeoJsonGeometryCollection converter.
Any workaround is welcome.
A: It started working when I upgraded my spring boot version from 1.3 to 1.4
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44262737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Symfony2 DoctrineFixturesBundle namespace error I have a big problem with the fixtures bundle which I can't resolve. I
follow the steps as they are meant to be followed, adding the lines to
the deps file, installing them, registering them in the autoload and
appkernel.
When I try to run even only app/console, it breaks with:
Fatal error: Class 'Doctrine\Bundle\DoctrineBundle\Command\DoctrineCommand'
not found in /var/www/.../bundles/Doctrine/Bundle/FixturesBundle/
Command/LoadDataFixturesDoctrineCommand.php on line 40
Which seems right because I don't have a DoctrineBundle directory
under Doctrine\Bundle, only the DoctrineFixturesBundle.
If I change that line to Symfony\Bundle\DoctrineBundle\... it works
perfectly, because that class resides under that namespace actually.
Of course I can't leave it that way.
I searched through the documentation, issues, everything, but it seems
that noone has this same issue, so I must be missing some obvious
point here.
Any ideas?
Thanks
A: Not long ago, all Doctrine bundles moved to the Doctrine organization. This causes some confusion based on which repository and branch you are using.
If you're using Symfony 2.0.x, then your deps should look something like this:
[DoctrineFixturesBundle]
git=http://github.com/doctrine/DoctrineFixturesBundle.git
target=bundles/Symfony/Bundle/DoctrineFixturesBundle
version=origin/2.0
Notice the target/namespace is actually Symfony\Bundle\DoctrineFixturesBundle.
However, you shouldn't have any problems using the latest DoctrineFixturesBundle with Symfony 2.0.x - as long as you upgrade the rest of the Doctrine dependencies also. You can use this in your deps instead:
[doctrine-common]
git=http://github.com/doctrine/common.git
version=2.2.0
[doctrine-dbal]
git=http://github.com/doctrine/dbal.git
version=2.2.1
[doctrine]
git=http://github.com/doctrine/doctrine2.git
version=2.2.0
[doctrine-fixtures]
git=http://github.com/doctrine/data-fixtures.git
[DoctrineFixturesBundle]
git=http://github.com/doctrine/DoctrineFixturesBundle.git
target=bundles/Doctrine/Bundle/FixturesBundle
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9657036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to save my javascript array to a file? My phonegap/iOS application has a file named data.js which contains an array. The application works with this array and some changes are stored into it.
Now when the user quits my application I want it to save the changes to the data.js file. Is there any way to do this?
UPDATE This is my array (the only thing in data.js):
var data = [
[0, "A", "B", "0", "0"],
[1, "C", "D", "0", "0"],
[2, "E", "F", "0", "0"],
...
];
SOLVED!
I used JSON to stringify my array and save it to the localStorage. It only has to work with Mobile Safari, so this is a good solution. Thanks for giving me the hints that made me solve my problem.
A: If you are using PhoneGap then for sure you can work with the file system.
The solution is to encode your array into JSON, for example with JSON.stringify() (jQuery's serializeArray() only applies to form elements, not plain arrays).
Once you encode your array you will get a JSON string, which you have to store in a file using PhoneGap's FileWriter. For more detail on that visit this link.
I hope it helped you :-).
A: JavaScript cannot tamper with the file system directly. You can do one of two things:
*
*Save the changes onto a cookie and read it the next time
*Send the changes (via AJAX) to a PHP file which would generate a downloadable file on the server, and serve it to the client.
There are probably more solutions, but these are the most reasonable two I can think of.
A: Phonegap (at http://phonegap.com/tools/) is suggesting Lawnchair: http://westcoastlogic.com/lawnchair/
so you'd read that file into data.js instead of storing the data literally there
A: You could also save your array (or better yet its members) using localStorage, a key/value storage that stores your data locally, even when the user quits your app. Check out the guide in the Safari Developer Library.
A: use Lawnchair to save the array as a JSON object. The JSON object will be there in the memory till you clear the data for the application.
If you want to save it permanently to a file on the local filesystem then I guess you can write a PhoneGap plugin to send the data across to the plugin's native code, which will create/open a file and save it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8717439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: adding an EventListener to an Enter_Frame event function I was just curious: if I add an EventListener such as addEventListener(MouseEvent.CLICK, onClick);
inside an ENTER_FRAME event function, will it add an instance of that MouseEvent listener every frame?
Here is my code now just wondering if this is bad practice:
addEventListener(Event.ENTER_FRAME, engineLogic);
inside my engineLogic function:
//Max objects for rock throwers on left
if (aXPositionArray.length == 0)
{
    //Remove listener to add more and make button turn grey etc
    rockThrowerSpawnScreen.left.removeEventListener(MouseEvent.CLICK, chooseSpawnSideRockThrowers);
    rockThrowerSpawnScreen.left.visible = false;
}
else if (aXPositionArray.length != 0)
{
    rockThrowerSpawnScreen.left.addEventListener(MouseEvent.CLICK, chooseSpawnSideRockThrowers);
    rockThrowerSpawnScreen.left.visible = true;
    trace("LISTENER");
}
Or does it only add it once and check the function every frame?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57503337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: To send an image every 50 ms., should I use TCP or UDP? I am building a C# application, using the server-client model, where the server is sending an image (100kb) to the client through a socket every 50ms...
I was using TCP, but besides the overhead of this protocol, sometimes the client ended up with more than one image on the socket. And I still haven't thought of a clever mechanism to split the bytes of each image (actually, I just need the most recent one).
I tried using UDP, but got to the conclusion that I can't send 100kb dgrams, only 64kb ones. And even so, I shouldn't use more than 1500bytes; otherwise the packet would be divided along the network and the chances of losing parts of the packet would be greater.
So now I'm a bit confused. Should I continue using TCP and put some escaping bytes in the end of each image so the client can separate them? Or should I use UDP, send dgrams of 1500 bytes and come up with a mechanism for ordering and recovering?
The key goal here is transmitting the images very fast. I don't mind losing some on the way as long as the client keeps receiving newer ones.
Or should I use another protocol? Thanks in advance!
A: First of all, your network might not be able to handle this no matter what you do, but I would go with UDP. You could try splitting up the images into smaller bits, and only display each image if you get all the parts before the next image has arrived.
Also, you could use RTP as others have mentioned, or try UDT. It's a fairly lightweight reliable layer on top of UDP. It should be faster than TCP.
A: You should consider using Real-time Transport Protocol (aka RTP).
The underlying IP protocol used by RTP is UDP, but it has additional layering to indicate time stamps, sequence order, etc.
RTP is the main media transfer protocol used by VoIP and video-over-IP systems. I'd be quite surprised if you can't find existing C# implementations of the protocol.
Also, if your image files are in JPEG format you should be able to produce an RTP/MJPEG stream. There are quite a few video viewers that already have native support for receiving and displaying such a stream, since some IP webcams output in that format.
A: I'd recommend using UDP if:
*
*Your application can cope with an image or small burst of images not getting through,
*You can squeeze your images into 65535 bytes.
If you're implementing a video conferencing application then it's worth noting that the majority use UDP.
Otherwise you should use TCP and implement an approach to delimit the images. One suggestion in that regard is to take a look at the RTP protocol. It's specifically designed for carrying real-time data such as VoIP and video.
Edit: I've looked around quite a few times in the past for a .Net RTP library and, apart from wrappers for non-.Net libraries or half-completed ones, I did not have much success. I just had another quick look and this one, ConferenceXP, looks a bit more promising.
A: The other answers cover good options re: UDP or a 'real' protocol like RTP.
However, if you want to stick with TCP, just build yourself a simple 'message' structure to cover your needs. The simplest? length-prefixed. First, send the length of the image as 4 bytes, then send the image itself. Easy enough to write the client and server for.
A: If the latest is more important than every picture, UDP should be your first choice.
But if you're dealing with frames larger than 64K, you'll have to do some form of re-framing yourself. Don't be concerned with fragmented frames, as either you'll have to deal with them or the lower layer will. And you only want completed pictures.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/754104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Prevent "mv" command from raising error if no file matches the glob, e.g. mv *.json /dir/ I want to move all JSON files created within a Jenkins job to a different folder.
It is possible that the job does not create any json file.
In that case the mv command is raising an error and so that job is failing.
How do I prevent mv command from raising error in case no file is found?
A: Welcome to SO.
Why do you not want the error?
If you just don't want to see the error, then you could always just throw it away with 2>/dev/null, but PLEASE don't do that. Not every error is the one you expect, and this is a debugging nightmare. You could write it to a log with 2>$logpath and then build in logic to read that to make certain it's ok, and ignore or respond accordingly --
mv *.json /dir/ 2>$someLog
executeMyLogParsingFunction # verify expected err is the ONLY err
If it's because you have set -e or a trap in place, and you know it's ok for the mv to fail (which might not be because there is no file!), then you can use this trick -
mv *.json /dir/ || echo "(Error ok if no files found)"
or
mv *.json /dir/ ||: # : is a no-op synonym for "true" that returns 0
see https://www.gnu.org/software/bash/manual/html_node/Conditional-Constructs.html
(If it's failing simply because the mv is returning a nonzero as the last command, you could also add an explicit exit 0, but don't do that either - fix the actual problem rather than patching the symptom. Any of these other solutions should handle that, but I wanted to point out that unless there's a set -e or a trap that catches the error, it shouldn't cause the script to fail unless it's the very last command.)
Better would be to specifically handle the problem you expect without disabling error handling on other problems.
shopt -s nullglob # globs with no match do not eval to the glob as a string
for f in *.json; do mv "$f" /dir/; done # no match means no loop entry
c.f. https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html
or if you don't want to use shopt,
for f in *.json; do [[ -e "$f" ]] && mv "$f" /dir/; done
Note that I'm only testing existence, so that will include any match, including directories, symlinks, named pipes... you might want [[ -f "$f" ]] && mv "$f" /dir/ instead.
c.f. https://www.gnu.org/software/bash/manual/html_node/Bash-Conditional-Expressions.html
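To see concretely what nullglob changes, here is a small self-contained demonstration (it works in an empty temporary directory, so the glob can never match):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"   # empty directory: *.json cannot match anything

shopt -u nullglob            # default behavior
matches=(*.json)
echo "default:  ${#matches[@]} element(s): ${matches[*]}"   # 1 element: the literal string *.json

shopt -s nullglob            # unmatched globs vanish entirely
matches=(*.json)
echo "nullglob: ${#matches[@]} element(s)"                  # 0 elements
```

With nullglob set, the loop body above simply never runs when there are no matches, which is exactly the behavior wanted for the mv case.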
A: This is expected behavior -- it's why the shell leaves *.json unexpanded when there are no matches, to allow mv to show a useful error.
If you don't want that, though, you can always check the list of files yourself, before passing it to mv. As an approach that works with all POSIX-compliant shells, not just bash:
#!/bin/sh
# using a function here gives us our own private argument list.
# that's useful because minimal POSIX sh doesn't provide arrays.
move_if_any() {
dest=$1; shift # shift makes the old $2 be $1, the old $3 be $2, etc.
# so, we then check how many arguments were left after the shift;
# if it's only one, we need to also check whether it refers to a filesystem
# object that actually exists.
if [ "$#" -gt 1 ] || [ -e "$1" ] || [ -L "$1" ]; then
mv -- "$@" "$dest"
fi
}
# put destination_directory/ in $1 where it'll be shifted off
# $2 will be either nonexistent (if we were really running in bash with nullglob set)
# ...or the name of a legitimate file or symlink, or the string '*.json'
move_if_any destination_directory/ *.json
...or, as a more bash-specific approach:
#!/bin/bash
files=( *.json )
if (( ${#files[@]} > 1 )) || [[ -e ${files[0]} || -L ${files[0]} ]]; then
mv -- "${files[@]}" destination/
fi
A: Loop over all json files and move each of them, if it exists, in a oneliner:
for X in *.json; do [[ -e $X ]] && mv "$X" /dir/; done
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63670916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Refer to an object initialized Asynchronously with an HTTP service I have a class that contains an array of an object called LearningElementDTO, which is initialized in the ngOnInit() method asynchronously with an observable.
The problem I have is that when I try to refer to this array at the loading of the component, I get the error:
Cannot read property '0' of undefined
The problem here is that I refer to LearningElementDTO[0] before it is initialized.
private learningElements: LearningElementDTO[];
constructor(private service: LearningService) { }
ngOnInit() {
this.loadData();
// here is where the undefined problem happens
console.log(this.learningElements[0].name);
}
private loadData(): void {
this.service.getLearningElements().subscribe(
(reponse: any) => {
this.learningElements = reponse;
}
);
}
}
Is there any workaround to avoid referring to objects that are initialized in this way?
A: You can use pipe for this:
private learningElements: LearningElementDTO[];
constructor(private service: LearningService) { }
ngOnInit() {
this.loadData().subscribe(reponse => {
console.log(this.learningElements[0].name);
});
}
private loadData(): Observable<LearningElementDTO[]>{
return this.service.getLearningElements().pipe(
map((response: any) => {
this.learningElements = response;
return response;
})
);
}
}
A: When your console.log is executed, the response hasn't come back yet. You need to move this code into your subscribe function after you assign the server response to your local variable this.learningElements
this.service
.getLearningElements()
.subscribe(response => {
this.learningElements = response;
console.log(this.learningElements[0].name);
});
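The timing problem is easy to reproduce outside Angular. A minimal sketch in plain JavaScript, with a Promise standing in for the Observable returned by the (hypothetical) service call:

```javascript
// Stand-in for this.service.getLearningElements(): resolves asynchronously.
function getLearningElements() {
  return Promise.resolve([{ name: "intro" }]);
}

let learningElements; // stays undefined until the callback runs

getLearningElements().then((response) => {
  learningElements = response;
  // Only here is it safe to touch learningElements[0]
  console.log("inside callback:", learningElements[0].name);
});

// Runs BEFORE the callback above, even though the data is already "ready":
console.log("right after the call:", learningElements); // undefined
```

The same ordering applies to subscribe: any code that needs the data has to live inside (or be chained after) the callback.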
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58533649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Using construtors with Integer I'm a newbie in Java and I've got some questions about using constructors.
In what situations I should use new Integer() statement? Look at the code:
Integer a = 129;//1
Integer b = new Integer(129);//2
List<Integer> list= new ArrayList<Integer>();
list.add(new Integer(1));//3
list.add(2);//4
Which row is an example of bad programming practice?
A: Using new Integer() will guarantee that you have a new Integer object reference.
Using the value directly will not guarantee that, since auto boxing int to Integer may not do that object instantiation.
I would say that you will only need new Integer(1) in really strange edge cases; most of the time you never need to do new Integer at all...
Also please bear in mind that auto boxing / unboxing may produce some errors in some edge cases.
Integer x = null;
int y = x; // Null Pointer Exception
Long iterations where auto(un)boxing is happening may have a performance cost that an untrained eye might not notice
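The identity difference described above is easy to observe. A small sketch; the true/false results assume the default VM IntegerCache range of -128..127, which is configurable:

```java
public class BoxingDemo {
    public static void main(String[] args) {
        Integer a = 127, b = 127;            // autoboxing goes through Integer.valueOf
        System.out.println(a == b);          // true: both refer to the cached object

        Integer c = 128, d = 128;            // outside the default cache range
        System.out.println(c == d);          // false: two distinct objects

        Integer e = new Integer(127);        // the constructor always allocates
        System.out.println(e == a);          // false, even though the values match
        System.out.println(e.equals(a));     // true: compare values with equals()
    }
}
```

This is why `==` on boxed values is a classic bug source; `equals()` (or a plain `int`) is the safe comparison.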
A: Use autoboxing as a default pattern - it's been about forever and makes life ever-so-slightly easier.
Autoboxing is the automatic conversion that the Java compiler makes between the primitive types and their corresponding object wrapper classes. For example, converting an int to an Integer, ..
While there are slight difference with new Integer (see other answers/comments/links), I generally do not consider using new Integer a good approach and strictly avoid it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19037640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can Java Web Start be configured to update the current server version instead of the latest? Summarizing my research so far, it seems as if Java Web Start uses the timestamp provided by the web server to determine whether a certain jar has been updated or not. The update behaviour can be influenced by the jnlp element "update", whose attributes "check" and "policy" define how often an update check is done and whether the user is prompted to confirm the update.
However, I did not find any way to define a mechanism other than timestamp comparison to determine whether an application has been updated. Actually we're having a discussion at the moment about whether it makes more sense (for us) if not the newest but the current server version is downloaded to the client. This could, e.g., also be an older server version that has been restored on the server because a formerly active newer server version has been rolled back.
In the case of a server-side application rollback, every user currently has to clear their Java cache manually, which is of course possible but not very convenient.
Can Java Web Start be configured/forced to always download the application version from the server if this is "different" from the locally cached version?
A: I don't know if it exactly meets your need, but have a look at webstart's Version Download Protocol.
To sum it up:
With versioned download you can specify each jar-version to be used in the jnlp-file like this:
<jar href="jackson-core.jar" version="2.0.2" />
and deploy your jar-file on the server with a filename of jackson-core__V2.0.2.jar.
With this protocol webstart will only use jar-files whose version exactly matches the given version from the jnlp-file. Another advantage is, that when the specified version is already present in the local cache webstart will not try to download the version again - regardless of timestamps etc.
Advantages:
*
*Full control over versions used via jnlp-file.
*Less download-requests for jars present in cache
Disadvantages:
*
*New versions require a change in the jnlp-file
*Not suitable for SNAPSHOT-builds since the file's timestamp is completely ignored and version-numbers don't change for SNAPSHOT-builds.
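For reference, the two mechanisms discussed above sit side by side in the jnlp file. A minimal sketch (codebase, href and the version number are placeholders):

```xml
<jnlp spec="6.0+" codebase="http://example.com/app" href="app.jnlp">
  <!-- timestamp-based update behavior -->
  <update check="always" policy="always"/>
  <resources>
    <!-- versioned download: the server must expose jackson-core__V2.0.2.jar -->
    <jar href="jackson-core.jar" version="2.0.2"/>
  </resources>
</jnlp>
```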
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24575403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Issue with foreach Laravel This question is a continuation of my first question
So I have an array
Illuminate\Support\Collection Object
(
[items:protected] => Array
(
[0] => stdClass Object
(
[id] => 79
[name] => shelin
[status] => 0
)
[1] => stdClass Object
(
[id] => 80
[name] => shanu
[status] => 2
)
[2] => stdClass Object
(
[id] => 81
[name] => linto
[status] => 2
)
[3] => stdClass Object
(
[id] => 82
[name] => joseph
[status] => 0
)
)
)
I sort this array
$sorted = $collection->sortByDesc('status');
my view
return view('voyager::users.viewusersAppraisals')->with('values', $sorted);
Now, I get an array like this:
Illuminate\Support\Collection Object
(
[items:protected] => Array
(
[2] => stdClass Object
(
[id] => 81
[name] => linto
[status] => 2
)
[1] => stdClass Object
(
[id] => 80
[name] => shanu
[status] => 2
)
[0] => stdClass Object
(
[id] => 79
[name] => shelin
[status] => 0
)
[3] => stdClass Object
(
[id] => 82
[name] => joseph
[status] => 0
)
)
)
and my foreach loop
@foreach($values as $data)
<tr>
<td>{{$data->name}}</td>
</tr>
@endforeach
I expect output like so
linto
shanu
shelin
joseph
But I get output like so
joseph
linto
shanu
shelin
Any help would be appreciated. Thanks in advance.
A: Your variable must be getting overwritten somewhere in the code you have not shown.
Also, please dd($sorted) your result after executing the Eloquent query to see whether you are getting the data from the DB in the right format for your need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52158811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Excel: How to format a 10 digit number into date time I have a number like so 2016031802 and I want it to show up as 2016/03/18 2:00 so that I can use it in a graph of something/time where time is incremented hourly. How can I go about formatting this number so that it shows up in date time format?
I am using Excel 2011.
A: Use this:
=DATE(LEFT(A1,4),MID(A1,5,2),MID(A1,7,2))+TIME(RIGHT(A1,2),0,0)
Then format the cells with a custom format of:
yyyy/mm/dd h:mm
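The formula just slices the digits by position. The same split, sketched in Python to show which digits go where:

```python
from datetime import datetime

def parse_stamp(n):
    # Same slicing the Excel formula does:
    # LEFT(A1,4) -> year, MID(A1,5,2) -> month, MID(A1,7,2) -> day, RIGHT(A1,2) -> hour
    s = str(n)
    return datetime(int(s[0:4]), int(s[4:6]), int(s[6:8]), int(s[8:10]))

print(parse_stamp(2016031802))  # 2016-03-18 02:00:00
```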
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36092310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: AVR SBI for SPI Ports H+ atmega2560 I'm not that strong in AVR assembly and I'm having problems selecting "CS" from PORTH on the ATmega2560 ... can you help me?
the code:
#define DDR_CS _SFR_IO_ADDR(DDRH), 5 // MMC CS pin (DDR, PORT)
#define PORT_CS _SFR_IO_ADDR(PORTH), 5
#define DDR_CK _SFR_IO_ADDR(DDRB), 1 // MMC SCLK pin (DDR, PORT)
#define PORT_CK _SFR_IO_ADDR(PORTB), 1
#define DDR_DI _SFR_IO_ADDR(DDRB), 2 // MMC DI pin (DDR, PORT)
#define PORT_DI _SFR_IO_ADDR(PORTB), 2
#define PIN_DO _SFR_IO_ADDR(PINB), 3 // MMC DO pin (PIN, PORT)
#define PORT_DO _SFR_IO_ADDR(PORTB), 3
;---------------------------------------------------------------------------;
.nolist
#include <avr/io.h>
.list
.text
;---------------------------------------------------------------------------;
; Initialize MMC port
;
; void init_spi (void);
.global init_spi
.func init_spi
init_spi:
sbi DDR_CS ; CS: output
sbi DDR_DI ; DI: output
sbi DDR_CK ; SCLK: output
sbi PORT_DO ; DO: pull-up
ret
.endfunc
The error that I'm geting:
asmfunc.S:44: Error: number must be positive and less than 32
I found this post and I think this is the answer, but I don't know how to write it in the correct syntax.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40807274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: how to implement background services in react native? How to implement background service in my app? I want to run the app in the background always without user interaction
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75262208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Wallet style with multiple cells border Swift How do I make only the last cell have rounded corners and a black border, with the rest of the cells having only left and right borders?
This is the design of the cell. The pink part is the section header, the white part is the cell. In the image I have 6 cells and I want the 6th one to have round corner and black border. Cell 1-5 will only have left and right border.
My table view will contain a few sets of todos; please see the image below.
Thank you.
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
.
.
.
cell.view.clipsToBounds = true
if indexPath.row == todoList.count - 1 {
cell.view.layer.cornerRadius = 10
cell.view.layer.maskedCorners = [.layerMinXMaxYCorner,.layerMaxXMaxYCorner]
cell.view.layer.borderColor = UIColor.black.cgColor // not working: it gives every cell a border
cell.view.layer.borderWidth = 1
} else {
//only want left and right with black border
}
.
.
.
}
A: @PpppppPppppp, I managed to get the result with some hacks. Do post if you found another way to do it. Here's the final result:
Instead of setting left and right borders for cell, set black colour to cell's contentView and place a view inside with leading and trailing constraints to make it look like it has a border.
Then provide a viewForHeaderInSection and a viewForFooterInSection with masked corners as required in your UI. Some hacks required in the footer to hide the top border.
I didn't use any custom UITableViewCell or UITableViewHeaderFooterView since this is only for demo. Find the whole code for the table view below.
extension ViewController: UITableViewDataSource, UITableViewDelegate {
func numberOfSections(in tableView: UITableView) -> Int {
return 4
}
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return 6
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
cell.textLabel?.text = "index: \(indexPath.row)"
return cell
}
func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat {
return 50
}
func tableView(_ tableView: UITableView, viewForHeaderInSection section: Int) -> UIView? {
let header = UIView(frame: .init(x: 0, y: 0, width: tableView.bounds.width, height: 70))
header.backgroundColor = .white
let innderView = UIView(frame: .init(x: 0, y: 20, width: header.bounds.width, height: 50))
header.addSubview(innderView)
innderView.backgroundColor = .lightGray
innderView.layer.cornerRadius = 8
innderView.layer.borderColor = UIColor.black.cgColor
innderView.layer.borderWidth = 2
innderView.layer.maskedCorners = [.layerMinXMinYCorner, .layerMaxXMinYCorner]
return header
}
func tableView(_ tableView: UITableView, heightForHeaderInSection section: Int) -> CGFloat {
return 70
}
func tableView(_ tableView: UITableView, viewForFooterInSection section: Int) -> UIView? {
let footer = UIView(frame: .init(x: 0, y: 0, width: tableView.bounds.width, height: 20))
let innerView = UIView(frame: .init(x: 2, y: 0, width: footer.bounds.width-4, height: footer.bounds.height-2))
footer.addSubview(innerView)
innerView.backgroundColor = .white
innerView.layer.cornerRadius = 8
innerView.layer.maskedCorners = [.layerMinXMaxYCorner, .layerMaxXMaxYCorner]
footer.backgroundColor = .black
footer.layer.cornerRadius = 8
footer.layer.maskedCorners = [.layerMinXMaxYCorner, .layerMaxXMaxYCorner]
return footer
}
func tableView(_ tableView: UITableView, heightForFooterInSection section: Int) -> CGFloat {
return 20
}
}
A: I do think @Jithin's answer, adding a subview, is the easiest and best approach, but if you really want to draw your own border line, you can use UIBezierPath to achieve it (which I think is a little bit overkill for this).
extension ViewController: UITableViewDataSource {
func tableView(_ tableView: UITableView, willDisplayHeaderView view: UIView, forSection section: Int) {
let cornerRadius: CGFloat = 10.0
let lineWidth: CGFloat = 2
// deduct the line width to keep the line stay side the view
let point1 = CGPoint(x: 0.0 + lineWidth / 2, y: view.frame.height)
let point2 = CGPoint(x: 0.0 + lineWidth / 2, y: 0.0 + cornerRadius + lineWidth / 2)
let point3 = CGPoint(x: 0.0 + cornerRadius + lineWidth / 2, y: 0.0 + lineWidth / 2)
let point4 = CGPoint(x: view.frame.width - cornerRadius - lineWidth / 2, y: 0.0 + lineWidth / 2)
let point5 = CGPoint(x: view.frame.width - lineWidth / 2, y: 0.0 + cornerRadius + lineWidth / 2)
let point6 = CGPoint(x: view.frame.width - lineWidth / 2, y: view.frame.height - lineWidth / 2)
// draw the whole line with upper corner radius
let path = UIBezierPath()
path.move(to: point1)
path.addLine(to: point2)
path.addArc(withCenter: CGPoint(x: point3.x, y: point2.y),
radius: cornerRadius,
startAngle: .pi,
endAngle: -.pi/2,
clockwise: true)
path.addLine(to: point4)
path.addArc(withCenter: CGPoint(x: point4.x, y: point5.y),
radius: cornerRadius,
startAngle: -.pi/2,
endAngle: 0,
clockwise: true)
path.addLine(to: point6)
path.addLine(to: point1)
let topBorder = CAShapeLayer()
topBorder.path = path.cgPath
topBorder.lineWidth = lineWidth
topBorder.strokeColor = UIColor.purple.cgColor
topBorder.fillColor = nil
// add the line to header view
view.layer.addSublayer(topBorder)
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: "testingCell", for: indexPath) as! TableViewCell
cell.cellLabel.text = "\(mockData[indexPath.section][indexPath.row])"
cell.backgroundColor = .green
if indexPath.row == mockData[indexPath.section].count - 1 {
cell.setAsLastCell()
// we can add a mask to cut those area outside our border line
let maskPath = UIBezierPath(roundedRect: cell.bounds, byRoundingCorners: [.bottomLeft, .bottomRight], cornerRadii: CGSize(width: 10, height: 10))
let maskLayer = CAShapeLayer()
maskLayer.path = maskPath.cgPath
cell.layer.mask = maskLayer
} else {
cell.setAsNormalCell()
cell.layer.mask = nil
}
return cell
}
}
And here is the UITableViewCell:
class TableViewCell: UITableViewCell {
@IBOutlet weak var cellLabel: UILabel!
let leftBorder = CALayer()
let rightBorder = CALayer()
let bottomBorder = CAShapeLayer()
let cornerRadius: CGFloat = 10
let lineWidth: CGFloat = 2
override func awakeFromNib() {
super.awakeFromNib()
}
override func layoutSubviews() {
super.layoutSubviews()
leftBorder.frame = CGRect(x: 0, y: 0, width: lineWidth, height: self.frame.height)
leftBorder.backgroundColor = UIColor.blue.cgColor
self.layer.addSublayer(leftBorder)
rightBorder.frame = CGRect(x: self.frame.width - lineWidth, y: 0.0, width: lineWidth, height: self.frame.height)
rightBorder.backgroundColor = UIColor.blue.cgColor
self.layer.addSublayer(rightBorder)
// same idea as drawing line in the header view
let point1 = CGPoint(x: 0.0 + lineWidth / 2, y: 0.0)
let point2 = CGPoint(x: 0.0 + lineWidth / 2, y: self.frame.height - cornerRadius - lineWidth / 2)
let point3 = CGPoint(x: cornerRadius + lineWidth / 2, y: self.frame.height - lineWidth / 2)
let point4 = CGPoint(x: self.frame.width - cornerRadius - lineWidth / 2, y: self.frame.height - lineWidth / 2)
let point5 = CGPoint(x: self.frame.width - lineWidth / 2, y: self.frame.height - cornerRadius - lineWidth / 2)
let point6 = CGPoint(x: self.frame.width - lineWidth / 2, y: 0.0)
let path = UIBezierPath()
path.move(to: point1)
path.addLine(to: point2)
path.addArc(withCenter: CGPoint(x: point3.x, y: point2.y),
radius: cornerRadius,
startAngle: .pi,
endAngle: .pi/2,
clockwise: false)
path.addLine(to: point4)
path.addArc(withCenter: CGPoint(x: point4.x,y: point5.y),
radius: cornerRadius,
startAngle: .pi/2,
endAngle: 0,
clockwise: false)
path.addLine(to: point6)
bottomBorder.path = path.cgPath
bottomBorder.strokeColor = UIColor.red.cgColor
bottomBorder.lineWidth = lineWidth
bottomBorder.fillColor = nil
self.layer.addSublayer(bottomBorder)
}
func setAsNormalCell() {
leftBorder.isHidden = false
rightBorder.isHidden = false
bottomBorder.isHidden = true
}
func setAsLastCell() {
leftBorder.isHidden = true
rightBorder.isHidden = true
bottomBorder.isHidden = false
}
}
And of course, the above code is just for testing purposes and maybe a bit messy, but I hope it can explain a bit about drawing a line.
The result:
A: You can give corner radius to your tableview.
tableView.layer.cornerRadius = 10
tableView.layer.borderColor = UIColor.black.cgColor
tableView.layer.borderWidth = 1
A: I have a UICollectionView extension; however, it should work the same for UITableView.
@objc func addBorder(fromIndexPath:IndexPath, toIndexPath:IndexPath, borderColor:CGColor, borderWidth:CGFloat){
let fromAttributes = self.layoutAttributesForItem(at: fromIndexPath)!
let toAttributes = self.layoutAttributesForItem(at: toIndexPath)!
let borderFrame = CGRect(x: fromAttributes.frame.origin.x
,y: fromAttributes.frame.origin.y
,width: fromAttributes.frame.size.width
,height: toAttributes.frame.origin.y + toAttributes.frame.size.height - fromAttributes.frame.origin.y)
let borderTag = ("\(fromIndexPath.row)\(fromIndexPath.section)\(toIndexPath.row)\(toIndexPath.section)" as NSString).integerValue
if let borderView = self.viewWithTag(borderTag){
borderView.frame = borderFrame
}
else{
let borderView = UIView(frame: borderFrame)
borderView.tag = borderTag
borderView.backgroundColor = UIColor.clear
borderView.isUserInteractionEnabled = false
borderView.layer.borderWidth = borderWidth
borderView.layer.borderColor = borderColor
self.addSubview(borderView)
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63146929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I change the directory that "cd \" goes to in git? When I enter cd \ git goes into a different directory than I would like to.
How can I change it so that cd \ takes me to
C:\Users\J P\Dropbox\Git Bash
A: cd \ takes you to the root directory of the current drive. That is a function of Windows, not a function of git.
If you want to change it, you'll have to use Windows to do that, not git.
One route might be to use a separate drive letter (e.g. Z:) bound to C:\Users\J P\Dropbox\Git Bash. In DOS the SUBST command did that. It appears to work with XP, and here's a way to make it persistent. The easiest appears to be:
net use z: "\\computerName\c$\Users\J P\Dropbox\Git Bash" /persistent:yes
Then if you change to drive z:, cd \ will take you to z:'s root, which will be the right place.
There's probably a different/better Windows way to do that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28294426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Gorilla WebSocket WriteMessage errors - Go Lang I am currently experimenting with Gorilla WebSocket package. When sending a message using WriteMessage, if an error is returned, what should I do? Should I start the Closing Handshake or assume that if there is a problem it will be caught using the ReadMessage method and simply log the error?
A: If WriteMessage returns an error, then the application should close the connection. This releases resources used by the connection and causes the reader to return with an error.
It is not possible to send a closing handshake after WriteMessage returns an error. If WriteMessage returns an error, then all subsequent writes will also return an error.
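The resulting pattern is small. A sketch in Go; note that messageWriter below is a hand-rolled stand-in interface covering just the two methods used here, not a type from the gorilla package itself (though gorilla's *websocket.Conn has methods of these shapes):

```go
package main

import (
	"errors"
	"fmt"
)

// messageWriter is a stand-in for the subset of the connection API used here.
type messageWriter interface {
	WriteMessage(messageType int, data []byte) error
	Close() error
}

// sendOrClose writes one message and tears the connection down on failure,
// since after a write error every later write will fail too.
func sendOrClose(c messageWriter, msgType int, data []byte) error {
	if err := c.WriteMessage(msgType, data); err != nil {
		c.Close() // release resources; the read loop will now return an error
		return err
	}
	return nil
}

// A fake connection so the pattern can be exercised without a network.
type fakeConn struct {
	failWrites bool
	closed     bool
}

func (f *fakeConn) WriteMessage(t int, d []byte) error {
	if f.failWrites {
		return errors.New("write: broken pipe")
	}
	return nil
}
func (f *fakeConn) Close() error { f.closed = true; return nil }

func main() {
	bad := &fakeConn{failWrites: true}
	err := sendOrClose(bad, 1, []byte("hello"))
	fmt.Println(err != nil, bad.closed) // true true
}
```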
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35350101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: xamarin forms Scrollview VerticalScrollBarVisibility does not visible for android only its working in IOS I have tried Xamarin.Forms ScrollView VerticalScrollBarVisibility="Always", but it is not visible on Android; it does show on iOS devices. I need the scrollbar visible on Android devices too. Can anybody help me? Thank you in advance.
my Code:<ScrollView Orientation="Vertical" VerticalOptions="FillAndExpand" HorizontalOptions="FillAndExpand" VerticalScrollBarVisibility="Always"><Grid></Grid></ScrollView>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70800485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Browser Issue/Difference for pdfs defined within The following html code has been used to experiment with how different browsers handle a pdf embedded through the html object control. Below is a very basic html page.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<body>
<object data="http://www.dol.gov/ebsa/pdf/401kfefm.pdf" type="application/pdf" width="500" height="300">
<p>Missing PDF plugin for this browser.
<a href="http://www.dol.gov/ebsa/pdf/401kfefm.pdf">Click here to download the PDF file.</a></p>
</object>
</body>
</html>
The pdf file loads properly in all of the browsers I have tested: Firefox 5, IE 8, Chrome 12.x, Safari 5. However, the size of the control seems to vary between IE and the other browsers. Among the other three browsers the size is consistent, but I would ideally like the control to be the same size in all browsers.
The picture below shows the size difference between Chrome and IE.
Thanks for the help.
A: Something is probably going wrong with the pixel size. Try setting '500px' instead of '500', or a percentage ('20%').
Hope this helps
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6497134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Appcelerator tableView font color How can you set the color of the font for the rows in tableView?
I DO NOT want to set it row by row like this:
var table_data = [
{title:'Row 1', color: 'black'},
];
I have tried adding font:{color:'black'} to the table var but it does not seem to work. Like this:
var table1 = Titanium.UI.createTableView({
data:table_data,
separatorColor:'black',
font:{color:'black'}
});
I want to be able to set it so any row in the table has a set color. Specifically as I will be adding items to the table and I want them to be 'black' not the default white/grey. So when I add new items they will be black...
I am sure this is simple, but I can't seem to find anything that helps, hence the question here.
Thanks in advance.
A: Here you go. Add a label in the table view row and set it according to your own needs:
var self = Ti.UI.createWindow({
backgroundColor : 'white',
title : 'Saved Locations'
});
var data = [];
var tabLoc = Ti.UI.createTableView({
});
self.add(tabLoc);
var row = Titanium.UI.createTableViewRow({
height : '60dp',
className : "tableRow",
});
var labTitle = Ti.UI.createLabel({
color : 'black',
font : {
fontSize : '12dp',
fontWeight : 'bold'
},
height : Ti.UI.SIZE,
text : 'There is no location yet saved',
textAlign : 'center'
});
row.add(labTitle);
data.push(row)
tabLoc.setData(data);
self.open()
Thanks
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18052165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Scripts not running in FF and Chrome I am using JavaScript alert messages to show validation messages. In Firefox and Chrome it works fine the first time; on the second occurrence of the same alert, the browser shows a message like "Prevent this page from creating additional dialogs" with a check box. After that check box is selected, scripts stop executing on the next button click. How can I block that message?
A: Use a JavaScript Modal popup! eg. JQuery UI Modal Popup
A: It's a browser property for the client, for users who do not want to view these alerts; you can't remove that check box and message.
If this message is shown, then what's the problem? Leave it to the user.
Why would you want to force him to view these alerts?
It must be the user's wish to see or not see them.
For a better user experience, and for your purpose, you can use fancybox or facebox.
Fancybox fiddle: check this http://jsfiddle.net/BJNYr/
A: This is a browser matter, so you as a developer can't do anything about that behavior.
Here is a similar question
already answered here
A: Unfortunately you can't be sure that the user's browser settings have JavaScript alert popups (the ones created with the alert('...') function) enabled.
You should use a JavaScript dialog plugin instead.
For example:
http://fancybox.net/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23076084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: renaming a file while creating zip file through Maven-assembly plugin I am using the maven-assembly-plugin to create the zip. How can I rename some files while zipping, using the same plugin?
Update:
This is the profile in pom
<profile>
<id>D1</id>
<activation>
<property>
<name>D1</name>
<value>true</value>
</property>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.2.2</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
<configuration>
<descriptors>
<descriptor>assembly/online-distribution.D1.xml</descriptor>
</descriptors>
<appendAssemblyId>false</appendAssemblyId>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
This is Assembly.xml
<?xml version="1.0" encoding="UTF-8" ?>
<assembly
xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-
plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
<formats>
<format>tar.gz</format>
</formats>
<id>online</id>
<includeBaseDirectory>false</includeBaseDirectory>
<dependencySet>
<outputDirectory>resources</outputDirectory>
<unpack>true</unpack>
<includes>
<include>${project.groupId}:core-config:jar</include>
</includes>
<unpackOptions>
<includes>
<include>coresrv/env-config.D1.properties</include>
</includes>
</unpackOptions>
</dependencySet>
<files>
<file>
<source>${project.groupId}/core-config.jar/coresrv/env-config.D1.properties</source>
<outputDirectory>/</outputDirectory>
<destName>env-config.properties</destName>
</file>
</files>
</assembly>
I am getting that jar, unpacking it, then renaming a file and zipping it again.
Thanks
A: Answering an old post for posterity... and next time I ask Google and get sent here.
Renaming used to be a pain in Maven; this plugin does what it says on the tin:
copy-rename-maven-plugin
(available in Maven central)
Easy to use:
<plugin>
<groupId>com.coderplus.maven.plugins</groupId>
<artifactId>copy-rename-maven-plugin</artifactId>
<version>1.0.1</version>
<executions>
<execution>
<id>copy-properties-file</id>
<phase>prepare-package</phase>
<goals>
<goal>copy</goal>
</goals>
<configuration>
<sourceFile>source.props</sourceFile>
<destinationFile>destination.properties</destinationFile>
</configuration>
</execution>
</executions>
</plugin>
Note: the rename goal will move file(s) in target/
A: I faced the same problem: I needed to unpack an archive while renaming some files, and as a solution we can use two executions of maven-assembly-plugin.
During the first execution we use the format dir instead of one of the archive formats and prepare the file content; the result is a folder with all the needed files.
During the second execution we use the folder from the previous execution as the source in fileSets with files, where we can rename any file using files.file.destName; as the format for the second execution we can use an archive format like zip to create the final archive.
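A rough sketch of that two-execution setup (the descriptor file names and phases below are placeholders, not taken from the question; the renaming itself happens inside the first descriptor via files/file/destName):

```xml
<!-- Two executions of maven-assembly-plugin: prepare-dir.xml uses
     <format>dir</format> and renames files via <files>/<file>/<destName>;
     final-zip.xml packs the resulting folder into an archive. -->
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <id>prepare-dir</id>
      <phase>prepare-package</phase>
      <goals><goal>single</goal></goals>
      <configuration>
        <descriptors><descriptor>assembly/prepare-dir.xml</descriptor></descriptors>
      </configuration>
    </execution>
    <execution>
      <id>final-zip</id>
      <phase>package</phase>
      <goals><goal>single</goal></goals>
      <configuration>
        <descriptors><descriptor>assembly/final-zip.xml</descriptor></descriptors>
      </configuration>
    </execution>
  </executions>
</plugin>
```

In final-zip.xml, the fileSets source would point at the directory produced by the first execution (typically somewhere under target/).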
A: You can use
<outputFileNameMapping>...</outputFileNameMapping>
which sets the mapping pattern for all dependencies included in this assembly. The
default value is:
${artifact.artifactId}-${artifact.version}${dashClassifier?}.${artifact.extension}.
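For example, inside a dependencySet (the groupId and artifactId below are placeholders; outputFileNameMapping applies to every dependency matched by that set):

```xml
<dependencySets>
  <dependencySet>
    <outputDirectory>lib</outputDirectory>
    <includes>
      <include>com.example:some-artifact</include>
    </includes>
    <!-- Drop the version from the packaged file name -->
    <outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping>
  </dependencySet>
</dependencySets>
```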
A: You just need to add it in the assembly plugin executions like below:
<executions>
<execution>
<configuration>
<finalName> you can give any name you want </finalName>
</configuration>
</execution>
</executions>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16542433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: CMake not finding boost_python lib with brew (macOS) I think I'm going crazy. For some weird reason, I cannot get cmake to find boost_python. I've always used the same CMakeList and the same steps for installing Boost with Boost-Python support on macOS. On GNU/Linux and Windows I usually just build the library manually and it works just fine. Several months ago it also just worked perfectly fine on macOS by issuing the following commands:
brew install python2 boost boost-python
(I'm specifically using python2 and not python3)
I've no idea what's causing this issue because I've never had it before... perhaps it can't find the boost-python library from brew? (/usr/local/Cellar/boost-python/1.67.0/lib/). But then again, I never bothered with changing search paths for cmake or anything like that. This CMakeList always worked both under Linux and macOS.
my CMakeList.txt
cmake_minimum_required(VERSION 3.6)
project(game-client)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++1z -Wall -Wextra -Wno-unused-parameter -pthread")
# configure boost
set(Boost_USE_STATIC_LIBS ON)
set(Boost_USE_MULTITHREADED ON)
find_package(Boost COMPONENTS filesystem system python REQUIRED)
if (NOT Boost_FOUND)
MESSAGE(FATAL_ERROR "Could not find boost library")
endif ()
# configure python
find_package(PythonLibs 2.7 REQUIRED)
if (NOT PYTHONLIBS_FOUND)
MESSAGE(FATAL_ERROR "Could not find python library")
endif ()
include_directories(${Boost_INCLUDE_DIRS})
include_directories(${PYTHON_INCLUDE_DIRS})
# configure preprocessor flags for internal classlogs
add_definitions(-DDUMP_GOOD_PACKETS)
# add_definitions(-DCLASSLOG)
set(SOURCE_FILES src/main.cpp src/Config.cpp include/Config.hpp src/SequenceTable.cpp include/SequenceTable.hpp
include/Singleton.hpp include/Logger.hpp src/NetworkStream.cpp include/NetworkStream.hpp
include/packets/incoming/in.hpp include/packets/outcoming/out.hpp src/Buffer.cpp include/Buffer.hpp
include/Packet.hpp src/PacketHandler.cpp include/PacketHandler.hpp src/Core.cpp
include/Core.hpp src/Cipher.cpp include/Cipher.hpp src/KeyAgreement.cpp include/KeyAgreement.hpp
src/AuthInput.cpp include/AuthInput.hpp src/MainInput.cpp include/MainInput.hpp include/PythonManager.hpp
src/PythonManager.cpp include/PythonInstance.hpp src/PythonInstance.cpp include/packets/common/common.hpp
src/Entity.cpp include/Entity.hpp include/Item.hpp src/Item.cpp src/Environment.cpp include/Environment.hpp)
add_executable(game-client ${SOURCE_FILES})
target_link_libraries(game-client ${Boost_LIBRARIES} ${PYTHON_LIBRARIES} cryptopp)
cmake output:
memcpys-MBP:build memcpy$ cmake ..
-- The C compiler identification is Clang 6.0.0
-- The CXX compiler identification is Clang 6.0.0
-- Check for working C compiler: /usr/local/opt/llvm/bin/clang
-- Check for working C compiler: /usr/local/opt/llvm/bin/clang -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/local/opt/llvm/bin/clang++
-- Check for working CXX compiler: /usr/local/opt/llvm/bin/clang++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:2044 (message):
Unable to find the requested Boost libraries.
Boost version: 1.67.0
Boost include path: /usr/local/include
Could not find the following static Boost libraries:
boost_python
Some (but not all) of the required Boost libraries were found. You may
need to install these additional Boost libraries. Alternatively, set
BOOST_LIBRARYDIR to the directory containing Boost libraries or BOOST_ROOT
to the location of Boost.
Call Stack (most recent call first):
CMakeLists.txt:10 (find_package)
CMake Error at CMakeLists.txt:12 (MESSAGE):
Could not find boost library
-- Configuring incomplete, errors occurred!
See also "/Users/memcpy/git-repos/game-client/build/CMakeFiles/CMakeOutput.log".
I ran cmake with the following options, as suggested in the comments:
cmake -DBoost_DEBUG=ON ..
-- The C compiler identification is Clang 6.0.0
-- The CXX compiler identification is Clang 6.0.0
-- Check for working C compiler: /usr/local/opt/llvm/bin/clang
-- Check for working C compiler: /usr/local/opt/llvm/bin/clang -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/local/opt/llvm/bin/clang++
-- Check for working CXX compiler: /usr/local/opt/llvm/bin/clang++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1121 ] _boost_TEST_VERSIONS = 1.67.0;1.67;1.66.0;1.66;1.65.1;1.65.0;1.65;1.64.0;1.64;1.63.0;1.63;1.62.0;1.62;1.61.0;1.61;1.60.0;1.60;1.59.0;1.59;1.58.0;1.58;1.57.0;1.57;1.56.0;1.56;1.55.0;1.55;1.54.0;1.54;1.53.0;1.53;1.52.0;1.52;1.51.0;1.51;1.50.0;1.50;1.49.0;1.49;1.48.0;1.48;1.47.0;1.47;1.46.1;1.46.0;1.46;1.45.0;1.45;1.44.0;1.44;1.43.0;1.43;1.42.0;1.42;1.41.0;1.41;1.40.0;1.40;1.39.0;1.39;1.38.0;1.38;1.37.0;1.37;1.36.1;1.36.0;1.36;1.35.1;1.35.0;1.35;1.34.1;1.34.0;1.34;1.33.1;1.33.0;1.33
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1123 ] Boost_USE_MULTITHREADED = ON
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1125 ] Boost_USE_STATIC_LIBS = ON
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1127 ] Boost_USE_STATIC_RUNTIME =
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1129 ] Boost_ADDITIONAL_VERSIONS =
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1131 ] Boost_NO_SYSTEM_PATHS =
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1199 ] Declared as CMake or Environmental Variables:
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1201 ] BOOST_ROOT =
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1203 ] BOOST_INCLUDEDIR =
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1205 ] BOOST_LIBRARYDIR =
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1207 ] _boost_TEST_VERSIONS = 1.67.0;1.67;1.66.0;1.66;1.65.1;1.65.0;1.65;1.64.0;1.64;1.63.0;1.63;1.62.0;1.62;1.61.0;1.61;1.60.0;1.60;1.59.0;1.59;1.58.0;1.58;1.57.0;1.57;1.56.0;1.56;1.55.0;1.55;1.54.0;1.54;1.53.0;1.53;1.52.0;1.52;1.51.0;1.51;1.50.0;1.50;1.49.0;1.49;1.48.0;1.48;1.47.0;1.47;1.46.1;1.46.0;1.46;1.45.0;1.45;1.44.0;1.44;1.43.0;1.43;1.42.0;1.42;1.41.0;1.41;1.40.0;1.40;1.39.0;1.39;1.38.0;1.38;1.37.0;1.37;1.36.1;1.36.0;1.36;1.35.1;1.35.0;1.35;1.34.1;1.34.0;1.34;1.33.1;1.33.0;1.33
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1282 ] Include debugging info:
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1284 ] _boost_INCLUDE_SEARCH_DIRS = PATHS;C:/boost/include;C:/boost;/sw/local/include
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1286 ] _boost_PATH_SUFFIXES = boost-1_67_0;boost_1_67_0;boost/boost-1_67_0;boost/boost_1_67_0;boost-1_67;boost_1_67;boost/boost-1_67;boost/boost_1_67;boost-1_66_0;boost_1_66_0;boost/boost-1_66_0;boost/boost_1_66_0;boost-1_66;boost_1_66;boost/boost-1_66;boost/boost_1_66;boost-1_65_1;boost_1_65_1;boost/boost-1_65_1;boost/boost_1_65_1;boost-1_65_0;boost_1_65_0;boost/boost-1_65_0;boost/boost_1_65_0;boost-1_65;boost_1_65;boost/boost-1_65;boost/boost_1_65;boost-1_64_0;boost_1_64_0;boost/boost-1_64_0;boost/boost_1_64_0;boost-1_64;boost_1_64;boost/boost-1_64;boost/boost_1_64;boost-1_63_0;boost_1_63_0;boost/boost-1_63_0;boost/boost_1_63_0;boost-1_63;boost_1_63;boost/boost-1_63;boost/boost_1_63;boost-1_62_0;boost_1_62_0;boost/boost-1_62_0;boost/boost_1_62_0;boost-1_62;boost_1_62;boost/boost-1_62;boost/boost_1_62;boost-1_61_0;boost_1_61_0;boost/boost-1_61_0;boost/boost_1_61_0;boost-1_61;boost_1_61;boost/boost-1_61;boost/boost_1_61;boost-1_60_0;boost_1_60_0;boost/boost-1_60_0;boost/boost_1_60_0;boost-1_60;boost_1_60;boost/boost-1_60;boost/boost_1_60;boost-1_59_0;boost_1_59_0;boost/boost-1_59_0;boost/boost_1_59_0;boost-1_59;boost_1_59;boost/boost-1_59;boost/boost_1_59;boost-1_58_0;boost_1_58_0;boost/boost-1_58_0;boost/boost_1_58_0;boost-1_58;boost_1_58;boost/boost-1_58;boost/boost_1_58;boost-1_57_0;boost_1_57_0;boost/boost-1_57_0;boost/boost_1_57_0;boost-1_57;boost_1_57;boost/boost-1_57;boost/boost_1_57;boost-1_56_0;boost_1_56_0;boost/boost-1_56_0;boost/boost_1_56_0;boost-1_56;boost_1_56;boost/boost-1_56;boost/boost_1_56;boost-1_55_0;boost_1_55_0;boost/boost-1_55_0;boost/boost_1_55_0;boost-1_55;boost_1_55;boost/boost-1_55;boost/boost_1_55;boost-1_54_0;boost_1_54_0;boost/boost-1_54_0;boost/boost_1_54_0;boost-1_54;boost_1_54;boost/boost-1_54;boost/boost_1_54;boost-1_53_0;boost_1_53_0;boost/boost-1_53_0;boost/boost_1_53_0;boost-1_53;boost_1_53;boost/boost-1_53;boost/boost_1_53;boost-1_52_0;boost_1_52_0;boost/bo
ost-1_52_0;boost/boost_1_52_0;boost-1_52;boost_1_52;boost/boost-1_52;boost/boost_1_52;boost-1_51_0;boost_1_51_0;boost/boost-1_51_0;boost/boost_1_51_0;boost-1_51;boost_1_51;boost/boost-1_51;boost/boost_1_51;boost-1_50_0;boost_1_50_0;boost/boost-1_50_0;boost/boost_1_50_0;boost-1_50;boost_1_50;boost/boost-1_50;boost/boost_1_50;boost-1_49_0;boost_1_49_0;boost/boost-1_49_0;boost/boost_1_49_0;boost-1_49;boost_1_49;boost/boost-1_49;boost/boost_1_49;boost-1_48_0;boost_1_48_0;boost/boost-1_48_0;boost/boost_1_48_0;boost-1_48;boost_1_48;boost/boost-1_48;boost/boost_1_48;boost-1_47_0;boost_1_47_0;boost/boost-1_47_0;boost/boost_1_47_0;boost-1_47;boost_1_47;boost/boost-1_47;boost/boost_1_47;boost-1_46_1;boost_1_46_1;boost/boost-1_46_1;boost/boost_1_46_1;boost-1_46_0;boost_1_46_0;boost/boost-1_46_0;boost/boost_1_46_0;boost-1_46;boost_1_46;boost/boost-1_46;boost/boost_1_46;boost-1_45_0;boost_1_45_0;boost/boost-1_45_0;boost/boost_1_45_0;boost-1_45;boost_1_45;boost/boost-1_45;boost/boost_1_45;boost-1_44_0;boost_1_44_0;boost/boost-1_44_0;boost/boost_1_44_0;boost-1_44;boost_1_44;boost/boost-1_44;boost/boost_1_44;boost-1_43_0;boost_1_43_0;boost/boost-1_43_0;boost/boost_1_43_0;boost-1_43;boost_1_43;boost/boost-1_43;boost/boost_1_43;boost-1_42_0;boost_1_42_0;boost/boost-1_42_0;boost/boost_1_42_0;boost-1_42;boost_1_42;boost/boost-1_42;boost/boost_1_42;boost-1_41_0;boost_1_41_0;boost/boost-1_41_0;boost/boost_1_41_0;boost-1_41;boost_1_41;boost/boost-1_41;boost/boost_1_41;boost-1_40_0;boost_1_40_0;boost/boost-1_40_0;boost/boost_1_40_0;boost-1_40;boost_1_40;boost/boost-1_40;boost/boost_1_40;boost-1_39_0;boost_1_39_0;boost/boost-1_39_0;boost/boost_1_39_0;boost-1_39;boost_1_39;boost/boost-1_39;boost/boost_1_39;boost-1_38_0;boost_1_38_0;boost/boost-1_38_0;boost/boost_1_38_0;boost-1_38;boost_1_38;boost/boost-1_38;boost/boost_1_38;boost-1_37_0;boost_1_37_0;boost/boost-1_37_0;boost/boost_1_37_0;boost-1_37;boost_1_37;boost/boost-1_37;boost/boost_1_37;boost-1_36_1;boost_1_36_1;boost/boost-1_36_1;boost
/boost_1_36_1;boost-1_36_0;boost_1_36_0;boost/boost-1_36_0;boost/boost_1_36_0;boost-1_36;boost_1_36;boost/boost-1_36;boost/boost_1_36;boost-1_35_1;boost_1_35_1;boost/boost-1_35_1;boost/boost_1_35_1;boost-1_35_0;boost_1_35_0;boost/boost-1_35_0;boost/boost_1_35_0;boost-1_35;boost_1_35;boost/boost-1_35;boost/boost_1_35;boost-1_34_1;boost_1_34_1;boost/boost-1_34_1;boost/boost_1_34_1;boost-1_34_0;boost_1_34_0;boost/boost-1_34_0;boost/boost_1_34_0;boost-1_34;boost_1_34;boost/boost-1_34;boost/boost_1_34;boost-1_33_1;boost_1_33_1;boost/boost-1_33_1;boost/boost_1_33_1;boost-1_33_0;boost_1_33_0;boost/boost-1_33_0;boost/boost_1_33_0;boost-1_33;boost_1_33;boost/boost-1_33;boost/boost_1_33
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1306 ] location of version.hpp: /usr/local/include/boost/version.hpp
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1330 ] version.hpp reveals boost 1.67.0
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1416 ] guessed _boost_COMPILER =
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1426 ] _boost_MULTITHREADED = -mt
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1502 ] _boost_RELEASE_ABI_TAG = -
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1504 ] _boost_DEBUG_ABI_TAG = -d
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1567 ] _boost_LIBRARY_SEARCH_DIRS_RELEASE = /usr/local/include/lib;/usr/local/include/../lib;/usr/local/include/stage/lib;PATHS;C:/boost/lib;C:/boost;/sw/local/lib
-- _boost_LIBRARY_SEARCH_DIRS_DEBUG = /usr/local/include/lib;/usr/local/include/../lib;/usr/local/include/stage/lib;PATHS;C:/boost/lib;C:/boost;/sw/local/lib
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1756 ] Searching for FILESYSTEM_LIBRARY_RELEASE: boost_filesystem-mt-1_67;boost_filesystem-mt;boost_filesystem
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:400 ] Boost_LIBRARY_DIR_RELEASE = /usr/local/lib _boost_LIBRARY_SEARCH_DIRS_RELEASE = /usr/local/lib;NO_DEFAULT_PATH;NO_CMAKE_FIND_ROOT_PATH
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1809 ] Searching for FILESYSTEM_LIBRARY_DEBUG: boost_filesystem-mt-d-1_67;boost_filesystem-mt-d;boost_filesystem-mt;boost_filesystem
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:400 ] Boost_LIBRARY_DIR_DEBUG = /usr/local/lib _boost_LIBRARY_SEARCH_DIRS_DEBUG = /usr/local/lib;NO_DEFAULT_PATH;NO_CMAKE_FIND_ROOT_PATH
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1756 ] Searching for SYSTEM_LIBRARY_RELEASE: boost_system-mt-1_67;boost_system-mt;boost_system
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:400 ] Boost_LIBRARY_DIR_RELEASE = /usr/local/lib _boost_LIBRARY_SEARCH_DIRS_RELEASE = /usr/local/lib;NO_DEFAULT_PATH;NO_CMAKE_FIND_ROOT_PATH
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1809 ] Searching for SYSTEM_LIBRARY_DEBUG: boost_system-mt-d-1_67;boost_system-mt-d;boost_system-mt;boost_system
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:400 ] Boost_LIBRARY_DIR_DEBUG = /usr/local/lib _boost_LIBRARY_SEARCH_DIRS_DEBUG = /usr/local/lib;NO_DEFAULT_PATH;NO_CMAKE_FIND_ROOT_PATH
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1756 ] Searching for PYTHON_LIBRARY_RELEASE: boost_python-mt-1_67;boost_python-mt;boost_python
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:400 ] Boost_LIBRARY_DIR_RELEASE = /usr/local/lib _boost_LIBRARY_SEARCH_DIRS_RELEASE = /usr/local/lib;NO_DEFAULT_PATH;NO_CMAKE_FIND_ROOT_PATH
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1809 ] Searching for PYTHON_LIBRARY_DEBUG: boost_python-mt-d-1_67;boost_python-mt-d;boost_python-mt;boost_python
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:400 ] Boost_LIBRARY_DIR_DEBUG = /usr/local/lib _boost_LIBRARY_SEARCH_DIRS_DEBUG = /usr/local/lib;NO_DEFAULT_PATH;NO_CMAKE_FIND_ROOT_PATH
-- [ /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:1883 ] Boost_FOUND = 1
CMake Error at /usr/local/Cellar/cmake/3.11.4/share/cmake/Modules/FindBoost.cmake:2044 (message):
Unable to find the requested Boost libraries.
Boost version: 1.67.0
Boost include path: /usr/local/include
Could not find the following static Boost libraries:
boost_python
Some (but not all) of the required Boost libraries were found. You may
need to install these additional Boost libraries. Alternatively, set
BOOST_LIBRARYDIR to the directory containing Boost libraries or BOOST_ROOT
to the location of Boost.
Call Stack (most recent call first):
CMakeLists.txt:10 (find_package)
CMake Error at CMakeLists.txt:12 (MESSAGE):
Could not find boost library
-- Configuring incomplete, errors occurred!
See also "/Users/memcpy/git-repos/game-client/build/CMakeFiles/CMakeOutput.log".
A: @Tsyvarev, this. I needed to change the COMPONENTS entry to "python27". This behavior definitely changed over time; I don't remember having to do this for Boost releases >= 1.65, <= 1.67. Thanks.
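For reference, the change amounts to something like this (a sketch, assuming the brew-installed libraries are named libboost_python27; adjust the suffix to your Python version):

```cmake
# Boost >= 1.67 suffixes the python library with the Python version
# (e.g. libboost_python27.a), so the component must be requested as python27
find_package(Boost COMPONENTS filesystem system python27 REQUIRED)
```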
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50919160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: MLFlow creates a new experiment run when logging manually along with autolog I am using MLFlow to log metrics and artefacts in the AzureML workspace. With autolog, TensorFlow training metrics are available in the experiment run in the AzureML workspace. Along with the auto-logging of metrics, I want to log extra metrics and plots in the same experiment run. Doing this with MLFlow creates a new experiment run.
Auto logging:
mlflow.autolog()
Manual logging:
mlflow.log_metric(f"label-A", random.randint(80, 90))
Expected:
Manually logged metrics are available in the same experiment run.
A: Instead of using the module-level call mlflow.log_metric to log the metrics, use the MlflowClient client, which takes run_id as a parameter.
Following code logs the metrics in the same run_id passed as the parameter.
from mlflow.tracking import MlflowClient
from azureml.core import Run
run_id = Run.get_context(allow_offline=True).id
MlflowClient().log_metric(run_id, "precision", 0.91)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71409262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why is my SVG sprite id reference not loading the corresponding svg? I am using the SVG Sprite system in vanilla JS to load two SVGs on to my page. I have contained both SVGs in one icons.svg file using https://svgsprit.es/ service:
<svg width="0" height="0" class="hidden">
<symbol viewBox="0 0 0 0" id="delete">
<symbol viewBox="0 0 0 0" id="delete">
<symbol fill="#000000" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" id="edit">
<path d="M 18.414062 2 C 18.158062 2 17.902031 2.0979687 17.707031 2.2929688 L 15.707031 4.2929688 L 14.292969 5.7070312 L 3 17 L 3 21 L 7 21 L 21.707031 6.2929688 C 22.098031 5.9019687 22.098031 5.2689063 21.707031 4.8789062 L 19.121094 2.2929688 C 18.926094 2.0979687 18.670063 2 18.414062 2 z M 18.414062 4.4140625 L 19.585938 5.5859375 L 18.292969 6.8789062 L 17.121094 5.7070312 L 18.414062 4.4140625 z M 15.707031 7.1210938 L 16.878906 8.2929688 L 6.171875 19 L 5 19 L 5 17.828125 L 15.707031 7.1210938 z"></path>
</symbol>
</symbol>
</symbol>
<symbol viewBox="0 0 0 0" id="edit">
<symbol fill="#000000" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" id="edit">
<path d="M 18.414062 2 C 18.158062 2 17.902031 2.0979687 17.707031 2.2929688 L 15.707031 4.2929688 L 14.292969 5.7070312 L 3 17 L 3 21 L 7 21 L 21.707031 6.2929688 C 22.098031 5.9019687 22.098031 5.2689063 21.707031 4.8789062 L 19.121094 2.2929688 C 18.926094 2.0979687 18.670063 2 18.414062 2 z M 18.414062 4.4140625 L 19.585938 5.5859375 L 18.292969 6.8789062 L 17.121094 5.7070312 L 18.414062 4.4140625 z M 15.707031 7.1210938 L 16.878906 8.2929688 L 6.171875 19 L 5 19 L 5 17.828125 L 15.707031 7.1210938 z"></path>
</symbol>
</symbol>
</svg>
When I add the HTML through dynamic JS I use:
const container = document.createElement('div');
//add edit icon
const editIcon = document.createElementNS("http://www.w3.org/2000/svg", "svg");
editIcon.classList.add('icon');
const use = document.createElementNS("http://www.w3.org/2000/svg", "use");
use.setAttributeNS('http://www.w3.org/1999/xlink', 'xlink:href', 'images/icons.svg#edit');
editIcon.appendChild(use);
//add delete icon
const deleteIcon = document.createElementNS("http://www.w3.org/2000/svg", "svg");
deleteIcon.classList.add('icon');
const use2 = document.createElementNS("http://www.w3.org/2000/svg", "use");
use2.setAttributeNS('http://www.w3.org/1999/xlink', 'xlink:href', 'images/icons.svg#delete');
deleteIcon.appendChild(use2);
container.appendChild(editIcon);
container.appendChild(deleteIcon);
but only the edit icon appears successfully? I notice that the delete SVG has two symbol tags each with an id - am I referencing it wrong in my JS?
**and yes I know xlink:href is deprecated! this is just a small project for learning so browser compatibility is not highly important
A: Your sprite file is wonky. You should not have multiple nested <symbol> elements.
<symbol viewBox="0 0 0 0" id="delete">
<symbol viewBox="0 0 0 0" id="delete">
Each icon should only have one.
The reason your "delete" icon is not showing is because, when the browser tries to find the "delete" symbol, it has two that have id="delete". That is illegal for a start, because id attributes must be unique.
It will choose one of them. In this case it doesn't matter which one it chooses. That's because all that either "delete" symbol contains is a <symbol> element. Which is effectively nothing, because <symbol> elements by themselves are not rendered. They are only rendered when referenced by a <use>.
You got lucky with the "edit" symbol, because you have three of those. But luckily your browser is probably picking the first id match it finds. And for id="edit" the first one is three levels down inside the nested <symbol id="delete"> ones.
In other words, your sprite file looks like this to the browser
<svg width="0" height="0" class="hidden">
<symbol viewBox="0 0 0 0" id="delete">
</symbol>
<symbol fill="#000000" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" id="edit">
<path d="M 18.414062 2 C 18.158062 2 17.902031 2.0979687 17.707031 2.2929688 L 15.707031 4.2929688 L 14.292969 5.7070312 L 3 17 L 3 21 L 7 21 L 21.707031 6.2929688 C 22.098031 5.9019687 22.098031 5.2689063 21.707031 4.8789062 L 19.121094 2.2929688 C 18.926094 2.0979687 18.670063 2 18.414062 2 z M 18.414062 4.4140625 L 19.585938 5.5859375 L 18.292969 6.8789062 L 17.121094 5.7070312 L 18.414062 4.4140625 z M 15.707031 7.1210938 L 16.878906 8.2929688 L 6.171875 19 L 5 19 L 5 17.828125 L 15.707031 7.1210938 z"></path>
</symbol>
</svg>
Fix the nested symbol problem. It looks like you are passing, to that utility, SVG files that already contain only symbols. So it is simply wrapping symbols in other symbols.
I expect you should be passing renderable SVGs to that utility.
If your SVG files don't render anything when opened with a browser, they are probably already a "sprite sheet". Only use SVGs that display something when opened in a browser.
For your immediate problem, try this manually fixed file instead.
<svg width="0" height="0" class="hidden">
<symbol fill="#000000" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" id="delete">
<path d="M 18.414062 2 C 18.158062 2 17.902031 2.0979687 17.707031 2.2929688 L 15.707031 4.2929688 L 14.292969 5.7070312 L 3 17 L 3 21 L 7 21 L 21.707031 6.2929688 C 22.098031 5.9019687 22.098031 5.2689063 21.707031 4.8789062 L 19.121094 2.2929688 C 18.926094 2.0979687 18.670063 2 18.414062 2 z M 18.414062 4.4140625 L 19.585938 5.5859375 L 18.292969 6.8789062 L 17.121094 5.7070312 L 18.414062 4.4140625 z M 15.707031 7.1210938 L 16.878906 8.2929688 L 6.171875 19 L 5 19 L 5 17.828125 L 15.707031 7.1210938 z"></path>
</symbol>
<symbol fill="#000000" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" id="edit">
<path d="M 18.414062 2 C 18.158062 2 17.902031 2.0979687 17.707031 2.2929688 L 15.707031 4.2929688 L 14.292969 5.7070312 L 3 17 L 3 21 L 7 21 L 21.707031 6.2929688 C 22.098031 5.9019687 22.098031 5.2689063 21.707031 4.8789062 L 19.121094 2.2929688 C 18.926094 2.0979687 18.670063 2 18.414062 2 z M 18.414062 4.4140625 L 19.585938 5.5859375 L 18.292969 6.8789062 L 17.121094 5.7070312 L 18.414062 4.4140625 z M 15.707031 7.1210938 L 16.878906 8.2929688 L 6.171875 19 L 5 19 L 5 17.828125 L 15.707031 7.1210938 z"></path>
</symbol>
</svg>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69634962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Reorder Time/Location data to list point to point trips I have a df that contains the first and last recorded time at a particular location. Example raw data linked in code below.
df <- read.csv('https://raw.githubusercontent.com/smitty1788/Personal-Website/master/example.csv', header = T)
address fuel name Long Lat Time
1 625-627 S St NW, Washington, DC 20001, USA 87 EC6502 -77.02081 38.91411 5/18/2017 13:36
2 625-627 S St NW, Washington, DC 20001, USA 87 EC6502 -77.02081 38.91411 5/18/2017 15:28
3 1301-1327 Howard Rd SE, Washington, DC 20020, USA 87 EC6502 -76.99312 38.86101 5/18/2017 16:03
4 1301-1327 Howard Rd SE, Washington, DC 20020, USA 87 EC6502 -76.99312 38.86101 5/18/2017 20:17
5 821 Whittier Pl NW, Washington, DC 20012, USA 81 EC6502 -77.02542 38.97149 5/18/2017 21:03
6 821 Whittier Pl NW, Washington, DC 20012, USA 81 EC6502 -77.02542 38.97149 5/19/2017 8:35
7 1327 Allison St NW, Washington, DC 20011, USA 81 EC6502 -77.03118 38.94508 5/19/2017 8:50
8 1327 Allison St NW, Washington, DC 20011, USA 81 EC6502 -77.03118 38.94508 5/19/2017 8:55
9 815 Whittier Pl NW, Washington, DC 20012, USA 81 EC6502 -77.02481 38.97148 5/19/2017 9:11
10 1655-1699 N Rhodes St, Arlington, VA 22201, USA 100 EP0253 -77.08 38.89306 5/18/2017 13:36
11 1655-1699 N Rhodes St, Arlington, VA 22201, USA 100 EP0253 -77.08 38.89306 5/18/2017 15:02
12 2617 N Stuart St, Arlington, VA 22207, USA 100 EP0253 -77.11257 38.9066 5/18/2017 15:28
13 2617 N Stuart St, Arlington, VA 22207, USA 100 EP0253 -77.11257 38.9066 5/18/2017 16:54
14 1432-1488 N Quincy St, Arlington, VA 22201, USA 100 EP0253 -77.10842 38.8887 5/18/2017 17:14
15 1432-1488 N Quincy St, Arlington, VA 22201, USA 100 EP0253 -77.10842 38.8887 5/18/2017 18:30
16 1020-1028 N Stafford St, Arlington, VA 22201, USA 84 EP0253 -77.11047 38.88278 5/18/2017 23:15
17 1020-1028 N Stafford St, Arlington, VA 22201, USA 84 EP0253 -77.11047 38.88278 5/19/2017 13:53
The data would indicate that there was a trip between rows 2 and 3, 4 and 5, 6 and 7 and so on for each individual plate in column "name".
I am trying to figure out an efficient way to reorganize the data so that one row would show the starting location and ending location (end_address, end_fuel, end_long, end_lat, end_time). Essentially, each row is one trip made. Ideally the new df would be organized like this
name, st_address, st_fuel, st_long, st_lat, st_time, end_address, end_fuel, end_long, end_lat, end_time
Would someone be able to help me identify a way to do this? Thanks!
A: A dplyr solution which relies on group_by to identify vehicle names.
library(dplyr)
# code each pair with a trip id by dividing by 2 - code each trip as 1 = from, 0 = to
df <- df %>%
group_by(name) %>%
mutate(trip_id = (1 + seq_along(address)) %/% 2,
from_to = (seq_along(address) %% 2))
# separate into from and to
df_from <- df %>% filter(from_to %% 2 == 1) %>% select(-from_to)
df_to <- df %>% filter(from_to %% 2 == 0) %>% select(-from_to)
# join the result
result <- inner_join(df_from, df_to, by = c("name", "trip_id"))
A: library(tidyverse)
library(lubridate)
df <- read.csv('https://raw.githubusercontent.com/smitty1788/Personal-Website/master/example.csv',
header = T)
# Remove 1st and Last row of each group
df_clean <- df %>%
mutate(Time = mdy_hm(Time)) %>%
group_by(name) %>%
arrange(name, Time) %>%
filter(row_number() != 1,
row_number() != n())
df_tripID <- df_clean %>%
group_by(name) %>%
mutate(trip_id = (1 + seq_along(address)) %/% 2,
from_to = (seq_along(address) %% 2))
# separate into from and to
df_from <- df_tripID %>%
filter(from_to %% 2 == 1) %>%
select(-from_to)
df_to <- df_tripID %>%
filter(from_to %% 2 == 0) %>%
select(-from_to)
# join the result
car2go_trips <- inner_join(df_from, df_to, by = c("name", "trip_id"))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44119246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Oracle SQL - Define the year element of a date dependent on the current month I am trying to create a view in SQL Developer based on this statement:
SELECT * FROM ORDERS WHERE START_DATE > '01-JUL-2020'
The year element of the date needs to be set to the year of the current date if the current month is between July and December; otherwise it needs to be the previous year.
The statement below returns the required year but I don't know how to incorporate it (or a better alternative) into the statement above:
select
case
when month(sysdate) > 6 then
year(sysdate)
else
year(sysdate)-1
end year
from dual
Thanks
A: Oracle doesn't have a built-in month function so I'm assuming that is a user-defined function that you've created. Assuming that's the case, it sounds like you want
where start_date > (case when month(sysdate) > 6
then trunc(sysdate,'yyyy') + interval '6' month
else trunc(sysdate,'yyyy') - interval '6' month
end)
A: Just subtract six months and compare the dates:
SELECT *
FROM ORDERS
WHERE trunc(add_months(sysdate, -6), 'YYYY') = trunc(start_date, 'YYYY')
This compares the year of the date six months ago to the year on the record -- which seems to be the logic you want.
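The "subtract six months and take the year" trick matches the CASE logic in the question; as a quick sanity check outside the database, here is a Python sketch (the add_months helper is a simplified, hypothetical stand-in for Oracle's ADD_MONTHS and ignores end-of-month clamping):

```python
from datetime import date

def add_months(d, months):
    # Simplified stand-in for Oracle's ADD_MONTHS (ignores end-of-month clamping)
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, min(d.day, 28))

def fiscal_year(d):
    # The CASE logic from the question: current year for Jul-Dec, previous year otherwise
    return d.year if d.month > 6 else d.year - 1

# Taking the year of the date shifted back six months gives the same result
for d in (date(2020, 1, 15), date(2020, 6, 30), date(2020, 7, 1), date(2020, 12, 31)):
    assert add_months(d, -6).year == fiscal_year(d)
print("equivalent")
```

So trunc(add_months(sysdate, -6), 'YYYY') picks out exactly the July-to-June year described in the question.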
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66367298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Attempt to index field (a nil value) I am trying to make a simple object container in Lua (to practice the language a little)
Container = {}
Container.__index = Container
function Container.create( maxNumber )
local c = {} -- our new object
setmetatable(c, Container)
c.maxNumberOfRecords = maxNumber
c.cont = {}
return c
end
function Container:add(index, val)
self.cont[index] = val
end
function Container:getAt(index)
return self.cont[index]
end
return Container
but I always get the error Attempt to index field 'cont' (a nil value) when I try to add to the container. Can anyone tell me what the problem is?
I am totally new to Lua, but I based the code on the documentation at http://lua-users.org/wiki/SimpleLuaClasses
A: Can you show an example of code that does not work? It looks OK to me:
> Container = require "Container"
> c = Container.create(5)
> c:add(2, 42)
> =c:getAt(2)
42
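One guess at how the error can still appear with this class (an assumption, since the question does not show the calling code): invoking add on the Container table itself, rather than on an instance returned by create, means self.cont is nil:

```lua
local Container = require "Container"

local c = Container.create(5)   -- instance: create() gives it a 'cont' table
c:add(1, "works")               -- self == c, so self.cont exists

-- Calling the method on the class table instead of the instance
-- triggers the error, because Container itself has no 'cont' field:
-- Container:add(1, "fails")    --> attempt to index field 'cont' (a nil value)
```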
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24478922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Explanation for why allocating a second time changes performance I was testing some micro benchmarks on dense matrix multiplication (as a curiosity), and I noticed some very strange performance results.
Here is a minimal working example:
#include <benchmark/benchmark.h>
#include <random>
constexpr long long n = 128;
struct mat_bench_fixture : public benchmark::Fixture
{
double *matA, *matB, *matC;
mat_bench_fixture()
{
matA = new double[n * n];
matB = new double[n * n];
matC = new double[n * n];
benchmark::DoNotOptimize(matA);
benchmark::DoNotOptimize(matB);
benchmark::DoNotOptimize(matC);
#if 0
delete[] matA;
delete[] matB;
delete[] matC;
benchmark::DoNotOptimize(matA);
benchmark::DoNotOptimize(matB);
benchmark::DoNotOptimize(matC);
matA = new double[n * n];
matB = new double[n * n];
matC = new double[n * n];
benchmark::DoNotOptimize(matA);
benchmark::DoNotOptimize(matB);
benchmark::DoNotOptimize(matC);
#endif
}
~mat_bench_fixture()
{
delete[] matA;
delete[] matB;
delete[] matC;
}
void SetUp(const benchmark::State& s) override
{
// generate random data
std::mt19937 gen;
std::uniform_real_distribution<double> dis(0, 1);
for (double* i = matA; i != matA + n * n; ++i)
{
*i = dis(gen);
}
for (double* i = matB; i != matB + n * n; ++i)
{
*i = dis(gen);
}
}
};
BENCHMARK_DEFINE_F(mat_bench_fixture, impl1)(benchmark::State& st)
{
for (auto _ : st)
{
for (long long row = 0; row < n; ++row)
{
for (long long col = 0; col < n; ++col)
{
matC[row * n + col] = 0;
for (long long k = 0; k < n; ++k)
{
matC[row * n + col] += matA[row * n + k] * matB[k * n + col];
}
}
}
benchmark::DoNotOptimize(matA);
benchmark::DoNotOptimize(matB);
benchmark::DoNotOptimize(matC);
benchmark::ClobberMemory();
}
}
BENCHMARK_REGISTER_F(mat_bench_fixture, impl1);
BENCHMARK_MAIN();
There is an #if 0 block in the constructor of the fixture which can be toggled to #if 1 for the two different scenarios I'm testing. What I've noticed is that when I force a re-allocation of all the buffers, the benchmark somehow runs about 15% faster on my system, and I have no explanation why. I was hoping someone could enlighten me on this. I would also like to know if there are any additional micro-benchmarking "best practices" suggestions for avoiding weird performance anomalies like this in the future.
How I'm compiling (assuming Google Benchmark has been installed somewhere it can be found):
$CC -o mult_test mult_test.cpp -std=c++14 -pthread -O3 -fno-omit-frame-pointer -lbenchmark
I've been running this with:
./mult_test --benchmark_repetitions=5
I'm doing all my testing in Ubuntu 18.04 x64 (Kernel version 4.15.0-30-generic)
I've tried several different variations of this code, and they all give the same basic result over multiple runs (it's surprising how consistent the results are for me):
*Move allocation/initialization inside of the benchmark "SetUp" phase (non-timed part) so that the allocation/deallocation happens every new sample point
*Switched compilers between GCC 7.3.0 and Clang 6.0.0
*Tried different Computers with different CPU's (Intel i5-6600K, and one with dual socket Xeon E5-2630 v2)
*Tried different methods for implementing the benchmark framework (i.e. not using Google Benchmark at all and manually implementing timing via std::chrono)
*Forcing all buffers to be aligned to several different boundaries (64 bytes, 128 bytes, 256 bytes)
*Forcing a fixed number of iterations in each sample timing period
*Tried running with a much higher number of repetitions (20+)
*Forced a constant CPU clock frequency using the performance governor
*Tried different compiler flags for optimization options (removed no-omit-frame-pointer, tried -march=native)
*I've tried using std::vector for managing the storage, using new[]/delete[] pairs, and malloc/free. They all give similar results.
I've compared the assembly of the hot portion of the code, and it's identical between the two test cases (screenshot from perf for one of the cases):
40:
mov 0xc0(%r15),%rcx
mov 0xd0(%r15),%rdx
add $0x8,%rcx
mov 0xc8(%r15),%r9
add %r8,%r9
xor %r10d,%r10d
nop
60:
mov %r10,%r11
shl $0x7,%r11
mov %r9,%r13
xor %esi,%esi
nop
70:
lea (%rsi,%r11,1),%rax
movq $0x0,(%rdx,%rax,8)
xorpd %xmm0,%xmm0
mov $0xffffffffffffff80,%rdi
mov %r13,%rbx
nop
90:
movsd 0x3f8(%rcx,%rdi,8),%xmm1
mulsd -0x400(%rbx),%xmm1
addsd %xmm0,%xmm1
movsd %xmm1,(%rdx,%rax,8)
movsd 0x400(%rcx,%rdi,8),%xmm0
mulsd (%rbx),%xmm0
addsd %xmm1,%xmm0
movsd %xmm0,(%rdx,%rax,8)
add $0x800,%rbx
add $0x2,%rdi
jne 90
add $0x1,%rsi
add $0x8,%r13
cmp $0x80,%rsi
jne 70
add $0x1,%r10
add $0x400,%rcx
cmp $0x80,%r10
jne 60
add $0xffffffffffffffff,%r12
jne 40
Here is a representative screenshot of perf stat for not performing a re-allocation:
Running ./mult_test
Run on (4 X 4200 MHz CPU s)
CPU Caches:
L1 Data 32K (x4)
L1 Instruction 32K (x4)
L2 Unified 256K (x4)
L3 Unified 6144K (x1)
----------------------------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------------------------
mat_bench_fixture/impl1 2181531 ns 2180896 ns 322
mat_bench_fixture/impl1 2188280 ns 2186860 ns 322
mat_bench_fixture/impl1 2182988 ns 2182150 ns 322
mat_bench_fixture/impl1 2182715 ns 2182025 ns 322
mat_bench_fixture/impl1 2175719 ns 2175653 ns 322
mat_bench_fixture/impl1_mean 2182246 ns 2181517 ns 322
mat_bench_fixture/impl1_median 2182715 ns 2182025 ns 322
mat_bench_fixture/impl1_stddev 4480 ns 4000 ns 322
Performance counter stats for './mult_test --benchmark_repetitions=5':
3771.370173 task-clock (msec) # 0.994 CPUs utilized
223 context-switches # 0.059 K/sec
0 cpu-migrations # 0.000 K/sec
242 page-faults # 0.064 K/sec
15,808,590,474 cycles # 4.192 GHz (61.31%)
20,201,201,797 instructions # 1.28 insn per cycle (69.04%)
1,844,097,332 branches # 488.973 M/sec (69.04%)
358,319 branch-misses # 0.02% of all branches (69.14%)
7,232,957,363 L1-dcache-loads # 1917.859 M/sec (69.24%)
3,774,591,187 L1-dcache-load-misses # 52.19% of all L1-dcache hits (69.35%)
558,507,528 LLC-loads # 148.091 M/sec (69.46%)
93,136 LLC-load-misses # 0.02% of all LL-cache hits (69.47%)
<not supported> L1-icache-loads
736,008 L1-icache-load-misses (69.47%)
7,242,324,412 dTLB-loads # 1920.343 M/sec (69.34%)
581 dTLB-load-misses # 0.00% of all dTLB cache hits (61.50%)
1,582 iTLB-loads # 0.419 K/sec (61.39%)
307 iTLB-load-misses # 19.41% of all iTLB cache hits (61.29%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses
3.795924436 seconds time elapsed
Here is a representative screenshot of perf stat for forcing a re-allocation:
Running ./mult_test
Run on (4 X 4200 MHz CPU s)
CPU Caches:
L1 Data 32K (x4)
L1 Instruction 32K (x4)
L2 Unified 256K (x4)
L3 Unified 6144K (x1)
----------------------------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------------------------
mat_bench_fixture/impl1 1862961 ns 1862919 ns 376
mat_bench_fixture/impl1 1861986 ns 1861947 ns 376
mat_bench_fixture/impl1 1860330 ns 1860305 ns 376
mat_bench_fixture/impl1 1859711 ns 1859652 ns 376
mat_bench_fixture/impl1 1863299 ns 1863273 ns 376
mat_bench_fixture/impl1_mean 1861658 ns 1861619 ns 376
mat_bench_fixture/impl1_median 1861986 ns 1861947 ns 376
mat_bench_fixture/impl1_stddev 1585 ns 1591 ns 376
Performance counter stats for './mult_test --benchmark_repetitions=5':
3724.287293 task-clock (msec) # 0.995 CPUs utilized
11 context-switches # 0.003 K/sec
0 cpu-migrations # 0.000 K/sec
246 page-faults # 0.066 K/sec
15,612,924,579 cycles # 4.192 GHz (61.34%)
23,344,859,019 instructions # 1.50 insn per cycle (69.07%)
2,130,528,330 branches # 572.063 M/sec (69.07%)
331,651 branch-misses # 0.02% of all branches (69.08%)
8,369,233,786 L1-dcache-loads # 2247.204 M/sec (69.18%)
4,206,241,296 L1-dcache-load-misses # 50.26% of all L1-dcache hits (69.29%)
308,687,646 LLC-loads # 82.885 M/sec (69.40%)
94,288 LLC-load-misses # 0.03% of all LL-cache hits (69.50%)
<not supported> L1-icache-loads
475,066 L1-icache-load-misses (69.50%)
8,360,570,315 dTLB-loads # 2244.878 M/sec (69.37%)
364 dTLB-load-misses # 0.00% of all dTLB cache hits (61.53%)
213 iTLB-loads # 0.057 K/sec (61.42%)
144 iTLB-load-misses # 67.61% of all iTLB cache hits (61.32%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses
3.743017809 seconds time elapsed
Here is a minimal working example which doesn't have any external dependencies, and allows for testing memory alignment issues:
#include <random>
#include <chrono>
#include <iostream>
#include <cstdlib>
constexpr long long n = 128;
constexpr size_t alignment = 64;
inline void escape(void* p)
{
asm volatile("" : : "g"(p) : "memory");
}
inline void clobber()
{
asm volatile("" : : : "memory");
}
struct mat_bench_fixture
{
double *matA, *matB, *matC;
mat_bench_fixture()
{
matA = (double*) aligned_alloc(alignment, sizeof(double) * n * n);
matB = (double*) aligned_alloc(alignment, sizeof(double) * n * n);
matC = (double*) aligned_alloc(alignment, sizeof(double) * n * n);
escape(matA);
escape(matB);
escape(matC);
#if 0
free(matA);
free(matB);
free(matC);
escape(matA);
escape(matB);
escape(matC);
matA = (double*) aligned_alloc(alignment, sizeof(double) *n * n);
matB = (double*) aligned_alloc(alignment, sizeof(double) *n * n);
matC = (double*) aligned_alloc(alignment, sizeof(double) *n * n);
escape(matA);
escape(matB);
escape(matC);
#endif
}
~mat_bench_fixture()
{
free(matA);
free(matB);
free(matC);
}
void SetUp()
{
// generate random data
std::mt19937 gen;
std::uniform_real_distribution<double> dis(0, 1);
for (double* i = matA; i != matA + n * n; ++i)
{
*i = dis(gen);
}
for (double* i = matB; i != matB + n * n; ++i)
{
*i = dis(gen);
}
}
void run()
{
constexpr int iters = 400;
std::chrono::high_resolution_clock timer;
auto start = timer.now();
for (int i = 0; i < iters; ++i)
{
for (long long row = 0; row < n; ++row)
{
for (long long col = 0; col < n; ++col)
{
matC[row * n + col] = 0;
for (long long k = 0; k < n; ++k)
{
matC[row * n + col] += matA[row * n + k] * matB[k * n + col];
}
}
}
escape(matA);
escape(matB);
escape(matC);
clobber();
}
auto stop = timer.now();
std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(
stop - start)
.count() /
iters
<< std::endl;
}
};
int main()
{
mat_bench_fixture bench;
for (int i = 0; i < 5; ++i)
{
bench.SetUp();
bench.run();
}
}
To compile:
g++ -o mult_test mult_test.cpp -std=c++14 -O3
A: In my machine, I can reproduce your case by using different alignments for pointers. Try this code:
mat_bench_fixture() {
matA = new double[n * n + 256];
matB = new double[n * n + 256];
matC = new double[n * n + 256];
// align pointers to 1024
matA = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(matA) + 1023)&~1023);
matB = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(matB) + 1023)&~1023);
matC = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(matC) + 1023)&~1023);
// toggle this to toggle alignment offset of matB
// matB += 2;
}
If I toggle the commented line in this code, I got 34% difference on my machine.
Different alignment offsets cause different timings. You can play with offsetting the other 2 pointers too. Sometimes the difference is smaller, sometimes bigger, sometimes there's no change.
This must be caused by a cache issue: by having different last bits of the pointers, different collision patterns occur in the cache. And as your routine is memory intensive (all the data doesn't fit into L1), cache performance matters a lot.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51829128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Javascript Apexchart title of x axis always in the wrong position I'm having some issues with the positioning of the title of the x axis in my column chart.
Depending on the range of the y values, the title is always in a different position, like in the examples:
Example 1
Example 2
Here's my code:
var options = {
chart: {
type: 'bar'
},
series: [{
name: 'Elevation',
data: dict["values"],
color: "#e0bf51"
}],
xaxis: {
title: {
text: 'Elevation (m)',
// offsetY: +120,
floating: true,
},
categories: newkeys,
tickAmount: 10,
},
dataLabels: {
enabled: false
},
yaxis: {
title: {
text: 'Percentage (%)',
offsetX: 10,
floating: true,
},
axisBorder: {
show: true
},
labels: {
show: false,
formatter: function (val) {
return val + "%";
}
}
},
stroke: {
colors: ["transparent"],
width: 2
},
plotOptions: {
bar: {
columnWidth: "100%",
rangeBarOverlap: true,
rangeBarGroupRows: false
}
},
tooltip: {
x: {
formatter: (value) => { return String(value) + '-' + String((parseInt(value)+9)) + " m" },
},
y: {
title: {
formatter: (seriesName) => "Percentage of land:",
},
},
}
}
var chart = new ApexCharts(document.getElementById(elementId), options);
chart.render();
I tried changing the offset values, but as the title is always in a new position this approach doesn't work.
A: It is not ideal, but you can downgrade ApexCharts a bit.
This bug appeared in v3.36.1, so it was not present in v3.36.0.
let options = {
series: [{
name: 'Series',
data: [10, 20, 15]
}],
chart: {
type: 'bar',
height: 350
},
dataLabels: {
enabled: false
},
xaxis: {
categories: ['Category 1', 'Category 2', 'Category 3'],
title: {
text: 'Axis title'
}
}
};
let chart = new ApexCharts(document.querySelector('#chart'), options);
chart.render();
<script src="https://cdn.jsdelivr.net/npm/[email protected]"></script>
<div id="chart"></div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74626354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: jQuery event on change() I have a simple form:
<div class="class_a">
<fieldset>
<label><input type="radio" id="radio10" name="color" value="1" />Red</label><br />
<label><input type="radio" name="color" value="2" />Yellow</label><br />
<label><input type="radio" name="color" value="3" />Blue</label><br />
<label><input type="radio" name="color" value="4" />Purple</label><br />
</fieldset>
</div>
<div class="block-cms">
<fieldset>
<label><input type="radio" name="time" value="6" />12</label><br />
<label><input type="radio" name="time" value="7" />11</label><br />
<label><input type="radio" name="time" value="8" />10</label><br />
<label><input type="radio" name="time" value="9" />9</label><br />
</fieldset>
</div>
What I'm trying to do here is use jQuery change() to hide the second fieldset.
$("input#radio10").change(function () {
var checked = true;
checked = checked && $(this).is(':checked');
if ($('input#radio10:checked') ) {
$('.block-cms').show()
}
else {
$('.block-cms').hide();
}
});
Not sure what can be wrong here. Can anyone suggest what should be done differently, please?
A: Your id shouldn't have the #, that's for the selector, it should just be id="radio10".
Change that, and this is what you should be after:
$(".class_a :radio").change(function () {
$(".block-cms").toggle($("#radio10:checked").length > 0);
});
You can test it out here.
A: First of all the id on the element should be radio10 and not #radio10.
Then use this code
$("input[name='color']").change(function () {
if ($('input#radio10').is(':checked') ) {
$('.block-cms').show()
}
else {
$('.block-cms').hide();
}
});
A: Here's another solution (IMO having an id on an <input type="radio"> seems a bit wrong to me):
$("input[name='color']").change(function () {
if ($(this).val() == 1) {
$('.block-cms').show()
}
else {
$('.block-cms').hide();
}
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4143258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is a reliable way to know if libcurl finished downloading a file? I have written this basic class:
class SteamHTTP
{
public:
SteamHTTP();
virtual ~SteamHTTP();
void DownloadAllGames(const wxString& username, wxGauge* progress);
private:
CURL* m_curl;
std::stringstream m_currentRequestString;
private:
static size_t write_func(char *ptr, size_t size, size_t nmemb, void *userdata);
static int progress_func(void *clientp, double dltotal, double dlnow, double ultotal, double ulnow);
};
SteamHTTP::SteamHTTP()
{
m_curl = curl_easy_init();
}
SteamHTTP::~SteamHTTP()
{
curl_easy_cleanup(m_curl);
}
size_t SteamHTTP::write_func(char *data, size_t size, size_t nmemb, void *userdata)
{
SteamHTTP* ptr = reinterpret_cast<SteamHTTP*>(userdata);
ptr->m_currentRequestString << data;
return size*nmemb;
}
int SteamHTTP::progress_func(void *clientp, double dltotal, double dlnow, double ultotal, double ulnow)
{
wxGauge* ptr = reinterpret_cast<wxGauge*>(clientp);
if (dltotal > 0) // dltotal can be 0 before libcurl knows the size
    ptr->SetValue(dlnow * 100.0 / dltotal);
return 0;
}
void SteamHTTP::DownloadAllGames(const wxString& username, wxGauge* gauge)
{
std::string url;
CURLcode result;
// Build URL
url = std::string("http://steamcommunity.com/id/") + username.mbc_str() + std::string("/games?tab=all&xml=1");
// Set URL
curl_easy_setopt(m_curl, CURLOPT_URL, url.c_str());
// Follow redirection
curl_easy_setopt(m_curl, CURLOPT_FOLLOWLOCATION, 1);
// Data Callback
curl_easy_setopt(m_curl, CURLOPT_WRITEFUNCTION, SteamHTTP::write_func);
curl_easy_setopt(m_curl, CURLOPT_WRITEDATA, this);
// Progress Callback
curl_easy_setopt(m_curl, CURLOPT_PROGRESSFUNCTION, SteamHTTP::progress_func);
curl_easy_setopt(m_curl, CURLOPT_PROGRESSDATA, gauge);
curl_easy_setopt(m_curl, CURLOPT_NOPROGRESS, FALSE);
// Perform
result = curl_easy_perform(m_curl);
if (result != 0){
wxMessageBox(curl_easy_strerror(result), wxMessageBoxCaptionStr, wxICON_ERROR|wxOK);
}
}
What I struggle with is to tell reliably when libcurl is actually finished. Is there a callback for that? I would need that to parse the data downloaded.
What is the best way to tell if libcurl is done and I can process the data?
P.S.: This code is a work in progress; checks still need to be written, etc.
A: When curl_easy_perform() returns, it is done. It is as simple as that. Check the return code to figure out if it succeeded or not.
A: In the CURLOPT_PROGRESSFUNCTION callback there are a few parameters:
int function(void *clientp, double dltotal, double dlnow, double ultotal, double ulnow);
dltotal is the total number of bytes libcurl expects to download in this transfer, and dlnow is the number of bytes downloaded so far. The download is complete when dlnow == dltotal.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20923417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to view metaspace area in heap dump? I got the OutOfMemoryError : Metaspace in my tomcat.
And I got heap dump file of jvm. (I use java8)
So I want to view only metaspace area in heap dump(hprof format), but I can't know how to do that.
I use MAT(Eclipse Memory Analyzer) to analyze the heap dump file.
Can I view only metaspace area?
Thanks.
A: There isn't direct metaspace information in a HPROF file.
You might have a class loader leak or duplicate classes.
Try reading Permgen vs Metaspace in Java or
The Unknown Generation: Perm
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62206793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: A lisp function refinement I've done the Graham Common Lisp Chapter 5 Exercise 5, which requires a function that takes an object X and a vector V, and returns a list of all the objects that immediately precede X in V.
It works like:
> (preceders #\a "abracadabra")
(#\c #\d #\r)
I have done the recursive version:
(defun preceders (obj vec &optional (result nil) &key (startt 0))
(let ((l (length vec)))
(cond ((null (position obj vec :start startt :end l)) result)
((= (position obj vec :start startt :end l) 0)
(preceders obj vec result
:startt (1+ (position obj vec :start startt :end l))))
((> (position obj vec :start startt :end l) 0)
(cons (elt vec (1- (position obj vec :start startt :end l)))
(preceders obj vec result
:startt (1+ (position obj vec
:start startt
:end l))))))))
It works correctly, but my teachers gives me the following critique:
"This calls length repeatedly. Not so bad with vectors, but still unnecessary. More efficient and more flexible (for the user) code is to define this like other sequence processing functions. Use :start and :end keyword parameters, the way the other sequence functions do, with the same default initial values. length should need to be called at most once."
I am consulting the Common Lisp textbook and Google, but they seem to be of little help on this bit: I don't know what he means by "using :start and :end keyword parameters", and I have no clue how to "call length just once". I would be grateful if you guys could give me some idea on how to refine my code to meet the requirement my teacher posted.
UPDATE:
Now I have come up with the following code:
(defun preceders (obj vec
&optional (result nil)
&key (start 0) (end (length vec)) (test #'eql))
(let ((pos (position obj vec :start start :end end :test test)))
(cond ((null pos) result)
((zerop pos) (preceders obj vec result
:start (1+ pos) :end end :test test))
(t (preceders obj vec (cons (elt vec (1- pos)) result)
:start (1+ pos) :end end :test test)))))
I get this critique:
"When you have a complex recursive call that is repeated identically in more than one branch, it's often simpler to do the call first, save it in a local variable, and then use the variable in a much simpler IF or COND."
Also,for my iterative version of the function:
(defun preceders (obj vec)
(do ((i 0 (1+ i))
(r nil (if (and (eql (aref vec i) obj)
(> i 0))
(cons (aref vec (1- i)) r)
r)))
((eql i (length vec)) (reverse r))))
I get the critique
"Start the DO at a better point and remove the repeated > 0 test"
A: a typical parameter list for such a function would be:
(defun preceders (item vector
&key (start 0) (end (length vector))
(test #'eql))
...
)
As you can see it has START and END parameters.
TEST is the default comparision function. Use (funcall test item (aref vector i)).
Often there is also a KEY parameter...
LENGTH is called repeatedly for every recursive call of PRECEDERS.
I would do the non-recursive version and move two indexes over the vector: one for the first item and one for the next item. Whenever the next item is EQL to the item you are looking for, push the first item onto a result list (if it is not already a member).
For the recursive version, I would write a second function that gets called by PRECEDERS, which takes two index variables starting with 0 and 1, and use that. I would not call POSITION. Usually this function is a local function via LABELS inside PRECEDERS, but to make it a bit easier to write, the helper function can be outside, too.
(defun preceders (item vector
&key (start 0) (end (length vector))
(test #'eql))
(preceders-aux item vector start end test start (1+ start) nil))
(defun preceders-aux (item vector start end test pos0 pos1 result)
(if (>= pos1 end)
result
...
))
Does that help?
Here is the iterative version using LOOP:
(defun preceders (item vector
&key (start 0) (end (length vector))
(test #'eql))
(let ((result nil))
(loop for i from (1+ start) below end
when (funcall test item (aref vector i))
do (pushnew (aref vector (1- i)) result))
(nreverse result)))
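The asker's DO version can get the same treatment the critique suggests: start I at 1 (so every element has a predecessor and the (> i 0) test disappears) and compute LENGTH once in a DO binding. A sketch of my reading of that critique:

```lisp
(defun preceders (obj vec)
  (do ((len (length vec))            ; computed once, not every iteration
       (i 1 (1+ i))                  ; start at 1: (aref vec (1- i)) always exists
       (r nil (if (eql (aref vec i) obj)
                  (cons (aref vec (1- i)) r)
                  r)))
      ((>= i len) (nreverse r))))    ; >= also handles an empty vector
```

DO steps its variables in parallel, so the step form for R still sees the old value of I, exactly as the original code did.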
A: Since you already have a solution that's working, I'll amplify Rainer Joswig's solution, mainly to make related stylistic comments.
(defun preceders (obj seq &key (start 0) (end (length seq)) (test #'eql))
(%preceders obj seq nil start end test))
The main reason to have a separate helper function (which I call %PRECEDERS, a common convention for indicating that a function is "private") is to eliminate the optional argument for the result. Using optional arguments that way is fine in general, but optional and keyword arguments play horribly together, and having both in a single function is an extremely efficient way to create all sorts of hard-to-debug errors.
It's a matter of taste whether to make the helper function global (using DEFUN) or local (using LABELS). I prefer making it global since it means less indentation and easier interactive debugging. YMMV.
A possible implementation of the helper function is:
(defun %preceders (obj seq result start end test)
(let ((pos (position obj seq :start start :end end :test test)))
;; Use a local binding for POS, to make it clear that you want the
;; same thing every time, and to cache the result of a potentially
;; expensive operation.
(cond ((null pos) (delete-duplicates (nreverse result) :test test))
((zerop pos) (%preceders obj seq result (1+ pos) end test))
;; I like ZEROP better than (= 0 ...). YMMV.
(t (%preceders obj seq
(cons (elt seq (1- pos)) result)
;; The other little bit of work to make things
;; tail-recursive.
(1+ pos) end test)))))
Also, after all that, I think I should point out that I also agree with Rainer's advice to do this with an explicit loop instead of recursion, provided that doing it recursively isn't part of the exercise.
EDIT: I switched to the more common "%" convention for the helper function. Usually whatever convention you use just augments the fact that you only explicitly export the functions that make up your public interface, but some standard functions and macros use a trailing "*" to indicate variant functionality.
I changed things to delete duplicated preceders using the standard DELETE-DUPLICATES function. This has the potential to be much (i.e., exponentially) faster than repeated uses of ADJOIN or PUSHNEW, since it can use a hashed set representation internally, at least for common test functions like EQ, EQL and EQUAL.
A: A slightly modofied variant of Rainer's loop version:
(defun preceders (item vector
&key (start 0) (end (length vector))
(test #'eql))
(delete-duplicates
(loop
for index from (1+ start) below end
for element = (aref vector index)
and previous-element = (aref vector (1- index)) then element
when (funcall test item element)
collect previous-element)))
This makes more use of the loop directives, and among other things only accesses each element in the vector once (we keep the previous element in the previous-element variable).
A: Answer for your first UPDATE.
first question:
see this
(if (foo)
(bar (+ 1 baz))
(bar baz))
That's the same as:
(bar (if (foo)
(+ 1 baz)
baz))
or:
(let ((newbaz (if (foo)
(+ 1 baz)
baz)))
(bar newbaz))
Second:
Why not start with I = 1 ?
See also the iterative version in my other answer...
A: The iterative version proposed by Rainer is very nice; it's compact and more efficient, since you traverse the sequence only once, in contrast to the recursive version, which calls position at every iteration and thus traverses the sub-sequence every time. (Edit: I'm sorry, I was completely wrong about this last sentence, see Rainer's comment)
If a recursive version is needed, another approach is to advance the start until it meets the end, collecting the result along its way.
(defun precede (obj vec &key (start 0) (end (length vec)) (test #'eql))
(if (or (null vec) (< end 2)) nil
(%precede-recur obj vec start end test '())))
(defun %precede-recur (obj vec start end test result)
(let ((next (1+ start)))
(if (= next end) (nreverse result)
(let ((newresult (if (funcall test obj (aref vec next))
(adjoin (aref vec start) result)
result)))
(%precede-recur obj vec next end test newresult)))))
Of course this is just another way of expressing the loop version.
test:
[49]> (precede #\a "abracadabra")
(#\r #\c #\d)
[50]> (precede #\a "this is a long sentence that contains more characters")
(#\Space #\h #\t #\r)
[51]> (precede #\s "this is a long sentence that contains more characters")
(#\i #\Space #\n #\r)
Also, I'm interested Robert, did your teacher say why he doesn't like using adjoin or pushnew in a recursive algorithm?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1822382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: im making a golf scorecard program in php but want to use javascript for counting the hits Right now I have it programmed in PHP so that you press a button after every hit you make, but since that's PHP, every button press contacts the server.
I would like to know whether it is possible to do the hit counting in JavaScript, so that the page does not round-trip to the server: the hit count would be visible on screen and updated each time via JavaScript, and after finishing the hole I can save the result to the server.
I made the following as a test, but the page still updates via the server.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
<?
if ($aantalslagen_plus==""){
$aantalslagen_plus="0";
}
?>
<script type="text/javascript">
var a = <? echo $aantalslagen_plus; ?>
function bereken() {
a += 1;
document.getElementById('aantalslagen').value = a;
};
</script>
</head>
<body>
<form method="post" action="" . $_SERVER['PHP_SELF'] . "">
<br><br><br>
<table width="422" height="179" border="0">
<tr>
<td>totaal aantal slagen: </td>
<td> </td>
</tr>
<?php
$aantalslagen_plus = htmlentities($_POST['aantalslagen_plus']);
?>
<tr>
<td>totaal aantal slagen php:</td>
<td>
<? echo "<input type=\"text\" VALUE=\"$aantalslagen_plus\" name=\"aantalslagen_plus\" id=\"aantalslagen\" size=\"10\">"; ?>
<td><? echo $aantalslagen_plus; ?></td>
</tr>
</table>
<br>
<button onclick="bereken();"> + </button>
</form>
</body>
</html>
A: Just initialize aantalslagen to 0 in JavaScript and don't mess with any <form> or POST request; you don't need them, since nothing here stores the value anyway (it will reset when the user refreshes the page). If you do want the value to survive a refresh, I suggest you look into localStorage.
//The input element:
var input = document.getElementById("aantalslagen-input");
//The td element:
var td = document.getElementById("aantalslagen-td");
//The button element:
var button = document.getElementById("aantalslagen-button");
//The variable:
var aantalslagen = 0;
//When the button is clicked...
button.addEventListener("click", function() {
//Increment aantalslagen:
aantalslagen += 1;
//Update it in the input and td:
input.value = aantalslagen;
td.textContent = aantalslagen;
});
<br><br><br>
<table width="422" height="179" border="0">
<tr>
<td>totaal aantal slagen: </td>
<td> </td>
</tr>
<tr>
<td>totaal aantal slagen php:</td>
<td><input type="text" VALUE="0" name="aantalslagen_plus" id="aantalslagen-input" size="10">
<td id="aantalslagen-td">0</td>
</tr>
</table>
<br>
<button id="aantalslagen-button"> + </button>
A: Your <button> is inside of a <form> element and the default behaviour is for the form to submit to the server.
Remove the <form> element since it no longer seems to serve any purpose.
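An alternative to removing the <form>, kept here as a sketch: have the click handler call event.preventDefault() so the browser never submits. The stand-in event object below is only for illustration outside a browser:

```javascript
// Handler factory: increments a counter and stops the default form submit.
function makeHitHandler(counter) {
  return function (event) {
    event.preventDefault(); // without this, the <form> posts to the server
    counter.value += 1;
  };
}

// In a browser you would attach it with:
//   button.addEventListener("click", makeHitHandler(counter));
// Here we call it with a minimal fake event to show the behaviour:
const counter = { value: 0 };
let prevented = false;
makeHitHandler(counter)({ preventDefault: () => { prevented = true; } });
console.log(counter.value, prevented);
```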
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32185105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How to avoid splitting float numbers while using Wordninja? I am working on cleaning textual data. I have a column that shows descriptions of products. Some words are run together without spaces, so I used wordninja to split them.
It worked well, but it also split the float numbers. I want to keep those numbers exactly as they are.
Is there a way to avoid splitting floats while using wordninja?
import wordninja
def split_adjacent_words(description):
words = wordninja.split(description)
return " ".join(words)
df['description_clean'] = df.loc[df['description'].notna(), 'description'].apply(split_adjacent_words)
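One workaround (a sketch, not a wordninja feature): pull numeric tokens out first, split only the non-numeric stretches, then stitch everything back together in order. The `splitter` parameter stands in for wordninja.split here, so the example runs without the library installed:

```python
import re

# Matches integers and floats like "2.5"; these are kept verbatim.
NUMBER = re.compile(r'\d+(?:\.\d+)?')

def split_keeping_numbers(text, splitter):
    parts, last = [], 0
    for m in NUMBER.finditer(text):
        if m.start() > last:
            parts.extend(splitter(text[last:m.start()]))  # split words only
        parts.append(m.group())                           # keep "2.5" intact
        last = m.end()
    if last < len(text):
        parts.extend(splitter(text[last:]))
    return " ".join(parts)

# Stand-in splitter for demonstration (wordninja.split would go here):
demo = lambda s: [w for w in re.findall(r'[A-Za-z]+', s)]
print(split_keeping_numbers("bottlesize2.5litres", demo))
```

In the real pipeline you would pass wordninja.split as the splitter and use this function inside the apply.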
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74804599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Removing contents of two files at the same time - UNIX I have two files called "fileA.txt" and "fileB.txt".
fileA.txt has the following contents :
1 Arizona ABDJAQ 224
2 Ohio AKOGFR 458
3 Wisconsin EFGTAP 871
4 Colorado NAHBAX 991
The four columns above are "ID", "State", "Pattern", "Number"
fileB.txt has the following contents:
1 Arizona NKIGAB 763
2 Ohio BAVYAD 918
3 Wisconsin AUOBAQ 547
4 Colorado INABEA 622
Again the four columns are "ID", "State", "Pattern", "Number"
Now this is what I want to do:
I want to scan through "fileA.txt" first and remove all records whose "Pattern" column has just one "A". Keep all records that have 2 "A"'s in them. So I would remove Ohio and Wisconsin (ID "2" and ID "3"). At the same time, I want to simultaneously remove these IDs from "fileB.txt" as well (in spite of the fact that in fileB, Ohio and Wisconsin have 2 "A"'s in the pattern).
After this step, my "fileA.txt" should look like :
1 Arizona ABDJAQ 224
4 Colorado NAHBAX 991
and my "fileB.txt" should look like :
1 Arizona NKIGAB 763
4 Colorado INABEA 622
Next, I want to scan "fileB.txt" to remove any records with patterns having one "A" and delete the corresponding record from "fileA.txt" (in this case Arizona, because it has only one "A" in fileB, so we remove Arizona from both fileB and fileA).
After this step, I would be left with only one record in each file :
"fileA.txt" will have:
4 Colorado NAHBAX 991
and "fileB.txt" will have
4 Colorado INABEA 622
So, to put it in short, I want to scan both files and keep only those records which have 2 "A"'s in their pattern in BOTH files.
Is there a one-line Unix command or a relatively easy approach to do this?
I appreciate the help!
A: I have written a one-liner in Python (280 characters of code) for this.
python -c"import re,sys;o=lambda f,m:open(f,m);x=lambda h:[i for i in o(h,'r').readlines()];y=lambda s:len(re.findall(r'(\w+)',s)[2].split('A'))>2;z=lambda f,s:o(f,'a'if len(s)else'w').write(s);a,b=sys.argv[1:3];w=zip(x(a),x(b));z(a,'');z(b,'');[(z(a,c),z(b,d))for(c,d)in w if y(c)and y(d)]" a.txt b.txt
Note: this code does not close file descriptors. I assume that the OS does that.
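For readability, the same filtering logic can be written long-hand. This is an illustrative sketch (not the original one-liner): keep only the line pairs whose third column contains at least two "A"'s in both files:

```python
def keep_pairs(lines_a, lines_b):
    """Keep only line pairs whose Pattern column has >= 2 'A's in both files."""
    kept_a, kept_b = [], []
    for la, lb in zip(lines_a, lines_b):
        pat_a = la.split()[2]  # third column is "Pattern"
        pat_b = lb.split()[2]
        if pat_a.count('A') >= 2 and pat_b.count('A') >= 2:
            kept_a.append(la)
            kept_b.append(lb)
    return kept_a, kept_b

file_a = [
    "1 Arizona ABDJAQ 224",
    "2 Ohio AKOGFR 458",
    "3 Wisconsin EFGTAP 871",
    "4 Colorado NAHBAX 991",
]
file_b = [
    "1 Arizona NKIGAB 763",
    "2 Ohio BAVYAD 918",
    "3 Wisconsin AUOBAQ 547",
    "4 Colorado INABEA 622",
]
a, b = keep_pairs(file_a, file_b)
print(a)  # ['4 Colorado NAHBAX 991']
print(b)  # ['4 Colorado INABEA 622']
```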
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12610471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: JQuery + Fullcalendar - large number of iterations with jquery object I'm trying to render a large number of events (about 50, and it might be more).
for (var eventIndex = 0; eventIndex < resp.select_events.length; eventIndex++){
var event = resp.select_events[eventIndex];
c.fullCalendar('renderEvent',{
id: event.id,
title: eventName,
start: event.event_date,
description: eventDesc,
write: event.write
},true);
}
It takes several seconds, and sometimes the browser asks me to abort script execution. So I think I need a way to do it asynchronously, in parallel with the execution of the last part of the script. Can you advise a tool or something like that? Thanks.
A: Modifying the loop itself may help to some degree. Read this article http://jsperf.com/fastest-array-loops-in-javascript/11
Also this https://blogs.oracle.com/greimer/entry/best_way_to_code_a
In general, the fastest way for your loop is a while loop in reverse, with a simplified test condition:
var i = arr.length; while (i--) {/*....*/}
A: Do you have to render them one by one? Why don't you first set up an array, like a JSON array, and add it to eventSources? The best way for your calendar to render a large number of events is to let FullCalendar do the job for you. You are trying, in my perspective, to do what FullCalendar internally already does. Check the example below; this applies if you have to do this client side, though I would do this server side.
var jsonarray = [];
for (var eventIndex = 0; eventIndex < resp.select_events.length; eventIndex++){
/* c.fullCalendar('renderEvent',{
id: event.id,
title: eventName,
start: event.event_date,
description: eventDesc,
write: event.write
},true);*/
var event = resp.select_events[eventIndex];
var myevent = {
"id": event.id,
"title": eventName,
"start": event.event_date,
"description": eventDesc,
"write": event.write
};
jsonarray.push(myevent);
}
c.fullCalendar('addEventSource', jsonarray);
Let me know if you have any doubts.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20260777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Combining two MS Access queries I have this query:
SELECT "I1" & "," & "I2" AS Item_set, Round(Sum([T1].Fuzzy_Value)/Count(*),15) AS Support
FROM (SELECT *
FROM Prune AS t
WHERE t.Trans_ID IN
(SELECT t1.Trans_ID FROM (
SELECT * FROM Prune WHERE [Nama]="I1") AS t1
INNER JOIN (SELECT * FROM Prune WHERE [Nama]="I2") AS t2 ON t1.Trans_ID = t2.Trans_ID)
AND t.Nama IN ("I1","I2")) AS T1;
And ttrans query
SELECT Count([Trans_ID].[Trans_ID]) AS Expr1
FROM Trans_ID;
I need to change Count (*) from :
SELECT "I1" & "," & "I2" AS Item_set, Round(Sum([T1].Fuzzy_Value)/Count(*),15)
into ttrans query.
I've tried using
SELECT "I1" & "," & "I2" AS Item_set, Round(Sum([T1].Fuzzy_Value)/ttrans.Expr1,15) AS Support
FROM (SELECT *
FROM Prune AS t
WHERE t.Trans_ID IN
(SELECT t1.Trans_ID FROM (
SELECT * FROM Prune WHERE [Nama]="I1") AS t1
INNER JOIN (SELECT * FROM Prune WHERE [Nama]="I2") AS t2 ON t1.Trans_ID = t2.Trans_ID)
AND t.Nama IN ("I1","I2")) AS T1, ttrans;
But I got error like this :
You tried to execute a query that does not include the specified expression
'Round(sum([T1].Fuzzy_Value/ttrans.Expr1,15)' as part of an aggregate function
any idea how to fix it?
Note : I'm trying to find 2 combination of all item in transaction database and get a result like this
ITEM Support
I1, I2 0.xxxxxxxxx
where support is (total transaction containing item I1 and I2 / total transaction) -> note that I'm using ttrans query to get total transaction value
note2: I'm using MS Access
note3:
Ttrans table will look like this
Expr1
270200
A: Try:
SELECT "I1" & "," & "I2" AS Item_set, Round(Sum([T1].Fuzzy_Value)/ttrans.Expr1,15) AS Support
FROM (SELECT *
FROM Prune AS t
WHERE t.Trans_ID IN
(SELECT t1.Trans_ID FROM (
SELECT * FROM Prune WHERE [Nama]="I1") AS t1
INNER JOIN (SELECT * FROM Prune WHERE [Nama]="I2") AS t2 ON t1.Trans_ID = t2.Trans_ID)
AND t.Nama IN ("I1","I2")) AS T1, ttrans
GROUP BY "I1" & "," & "I2"
A: Somehow I found the answer. I tried using:
SELECT "I1" & "," & "I2" AS Item_set, Round(Sum([T1].Fuzzy_Value)/Sum(ttrans.Expr1),15)
It worked wonders.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4439369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Controlling the PTZ function of IP camera using C++ I am working on a project that requires the control of the PTZ function of my IP camera through the UI. I am currently using a D-Link DCS-5020L cloud camera, Microsoft Visual Studio 2017 and OpenCV 3.3 for my setup.
I am still new to c++ and OpenCV but my project requires the use of it. I am able to access the camera feed but I'm not sure how to control the functions of the camera using C++ code through OpenCV or if OpenCV is even needed.
Is there a C++ code to control the PTZ functions of the IP camera?
This is my code for attaining the video output, if necessary.
// VIDEO CAPTURE //
Mat frame;
VideoCapture cap("http://username:password@IPADDRESS:PORT/video.cgi?resolution=640x360&req_fps=30&.mjpg");
if (!cap.isOpened()) //EXIT PROGRAM IF FAILED
{
cout << "CAMERA UNAVAILABLE" << endl;
return -1;
}
while (1)
{
bool bSuccess = cap.read(frame); //READ NEW FRAME FROM VIDEO
if (!bSuccess) //BREAK LOOP IF FAILED
{
cout << "UNABLE TO DISPLAY VIDEO" << endl;
break;
}
}
Any help is appreciated. Thank you.
A: Usually, PTZ functions are implemented in software on the server running in the cam.
Some older cameras used to ship with an activeX control.
These functions can be accessed by getting or posting to a URL relative to the camera.
For your camera, you should be able to post the controls to the following URL:
http://<ip>/pantiltcontrol.cgi
Available controls:
POST parameters
PanSingleMoveDegree (default 5)
TiltSingleMoveDegree (default 5)
PanTiltSingleMove
Values for PanTiltSingleMove (based on the web UI controls):
Top 1
Top right 2
Right 5
Bottom right 8
Bottom 7
Bottom left 6
Left 3
Top left 0
Home (reset) 4
So a typical post example using curl to change the pan-tilt, should be similar to this:
curl --user <username>:<password> --user-agent "user" --data "PanSingleMoveDegree=5&TiltSingleMoveDegree=5&PanTiltSingleMove=5" http://<ip>/pantiltcontrol.cgi
For a quick test using your web browser, you should be able to do the same thing using a GET request with the following structured URL:
http://<username>:<password>@<ip>/pantiltcontrol.cgi?PanSingleMoveDegree=5&TiltSingleMoveDegree=5&PanTiltSingleMove=5
Now, back to your question: all you need to control PTZ in C++ is to make web requests to the mentioned URLs, so this should be your starting point.
Many answers on this topic are already on Stack Overflow. This is the first result I got while googling "c++ http get post":
How do you make a HTTP request with C++?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46557735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Template function that calls itself results in "instantiation depth exceeds maximum of 900" I am trying to create a function that accepts a lambda function callback and some candidate_primes that can be a set or a vector, but it gives the error:
template instantiation depth exceeds maximum of 900
template<class PrimeIter, class Function>
void iter_feasable_primes(
const PrimeIter& candidate_primes, const uint32_t larger_prime, uint8_t index, Function cb
) {
std::vector<uint32_t> next_candidate_primes;
for (const uint32_t p : candidate_primes) {
// Updates next_candidate_primes
}
if (index == 0) {
// No valid tuples of primes were found
return;
}
for (const uint32_t p : next_candidate_primes) {
iter_feasable_primes(next_candidate_primes, p, index - 1, [&](std::vector<uint32_t> smaller_primes) {
smaller_primes.push_back(p);
cb(smaller_primes);
});
}
}
I think the problem is the function uses a lambda to call itself, and the compiler doesn't understand the function itself just has a single lambda callback type.
How can I solve this?
I tried to make index a template parameter to no avail, as the compiler doesn't seem to understand that it'll never get to index = -1, and complains to me that -1 isn't defined. But ideally I don't want index to be a template parameter.
A: Indeed, each lambda has a unique type, so every recursive call tries to instantiate the function template again, without end.
One way to solve that is to give the callback a single fixed type: either a custom functor, or a type-erased type such as std::function
template<class PrimeIter>
void iter_feasable_primes(
const PrimeIter& candidate_primes,
uint32_t larger_prime,
uint8_t index,
std::function<void(std::vector<uint32_t>)> cb)
{
std::vector<uint32_t> next_candidate_primes;
for (const uint32_t p : candidate_primes) {
// Updates next_candidate_primes
}
if (index == 0) {
// No valid tuples of primes were found
return;
}
for (const uint32_t p : next_candidate_primes) {
iter_feasable_primes(next_candidate_primes, p, index - 1, [&](std::vector<uint32_t> smaller_primes) {
smaller_primes.push_back(p);
cb(smaller_primes);
});
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63245234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: (CSS) Ways to bind element to bottom? Is there another way to put an HTML element at the bottom except:
position: absolute;
bottom: 0;
and at the parent element:
position: relative
...or setting the margin so that the element fits exactly at the bottom?
A: Well, you can also use position: fixed; bottom: 0;, which will stick the element to the bottom of the window. That means it won't even scroll with the rest of the page.
When you use that for a full-width footer or the like (the most likely use case), you'd then need to add a margin to the rest of the page content so that it doesn't get hidden behind (or hide) the footer.
Other than that you're pretty much stuck with the options you mentioned.
Full documentation on the position property can be found here:
https://developer.mozilla.org/en-US/docs/Web/CSS/position
A: You can also use position: fixed
.elem {
position: fixed;
bottom: 0;
}
Or you can use Flex then set your element to align-self: flex-end
e.g.
.container {
height: 150px;
width: 150px;
display: flex;
display: -webkit-flex;
-webkit-flex-direction: row;
flex-direction: row;
border: 1px solid red;
}
.container div {
-webkit-align-self: flex-end;
align-self: flex-end
}
Fiddle
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31461406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Basic file IO in C I am working through the excellent The C Programming Language at the moment, and have got stuck while trying to open and read a file. The program compiles, but seg faults on execution:
$ ./a.out
Segmentation fault
Here is the code:
#include <stdio.h>
main()
{
FILE *fp;
fp=fopen("/home/c-sandbox/index.html", "r");
fprintf(fp, "Testing...\n");
fclose(fp);
}
Note that the path points to a real file containing the string "hello, world".
Any ideas on where I am going wrong?
A: Make sure fp is not NULL before trying to write to it. For example:
if(fp == NULL)
{
fprintf(stderr, "Cannot open file\n");
return EXIT_FAILURE; // defined in stdlib.h
}
*You need to open the file with something other than "r", which only allows file reading. Read the man page for fopen to find out which mode would work best for you. Example:
*"w" - Truncate to zero length or create file for writing.
*"a" - Append; open or create file for writing at end-of-file.
A: You opened the file for reading only, and are attempting to write to it.
Use "a" if you want to append to the end of the existing file.
Edit: As others have noted, you're also not checking to see if the file was opened. fopen will return NULL if it fails and set the global variable errno to a value that indicates why it failed. You can get a human-readable explanation using strerror(errno)
if( fp == NULL ) {
printf( "Error opening file: %s\n", strerror( errno ) );
}
A: You are opening it in readonly mode! Need to use w or a for writing/appending to the file :)
fopen("/home/c-sandbox/index.html", "w");
A: You should check that fopen does not return NULL. I suspect it is returning NULL and either the fprintf and/or fclose calls are getting messed up.
A: #include <stdio.h>

int main(void)
{
    FILE *fp;
    char line[256];

    fp = fopen("/home/c-sandbox/index.html", "r");
    if (!fp)
    {
        perror("The following error occurred");
        return 1;
    }
    fgets(line, sizeof line, fp);
    printf("%s", line);
    fclose(fp);

    fp = fopen("/home/c-sandbox/index.html", "a");
    if (!fp)
    {
        perror("The following error occurred");
        return 1;
    }
    fprintf(fp, "Testing...\n");
    fclose(fp);
    return 0;
}
This first reads the "hello, world" string present in the file; after reading, it appends "Testing..." to the same file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8804299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Condensing Wide Data Based on Column Name Is there an elegant way to do what I'm trying to do in Pandas? My data looks something like:
df = pd.DataFrame({
'alpha': [1, np.nan, np.nan, np.nan],
'bravo': [np.nan, np.nan, np.nan, -1],
'charlie': [np.nan, np.nan, np.nan, np.nan],
'delta': [np.nan, 1, np.nan, np.nan],
})
print(df)
alpha bravo charlie delta
0 1.0 NaN NaN NaN
1 NaN NaN NaN 1.0
2 NaN NaN NaN NaN
3 NaN -1.0 NaN NaN
and I want to transform that into something like:
position value
0 alpha 1
1 delta 1
2 NaN NaN
3 bravo -1
So for each row in the original data I want to find the non-NaN value and retrieve the name of the column it was found in. Then I'll store the column and value in new columns called 'position' and 'value'.
I can guarantee that each row in the original data contains exactly zero or one non-NaN values.
My only idea is to iterate over each row but I know that idea is bad and there must be a more pandorable way to do it. I'm not exactly sure how to word my problem so I'm having trouble Googling for ideas. Thanks for any advice!
A: We can use DataFrame.melt to un pivot your data, then use sort_values and drop_duplicates:
df = (
df.melt(var_name='position')
.sort_values('value')
.drop_duplicates('position', ignore_index=True)
)
position value
0 bravo -1.0
1 alpha 1.0
2 delta 1.0
3 charlie NaN
Another option would be to use DataFrame.bfill over the column axis. Since you noted that:
can guarantee that each row in the original data contains exactly zero or one non-NaN values
values = df.bfill(axis=1).iloc[:, 0]
positions = df.notna().idxmax(axis=1).where(values.notna())
dfn = pd.DataFrame({'position': positions, 'value': values})

  position  value
0    alpha    1.0
1    delta    1.0
2      NaN    NaN
3    bravo   -1.0
A: Another way to do this. Actually, I just noticed, that it is quite similar to Erfan's first proposal:
# get the index as a column
df2= df.reset_index(drop=False)
# melt the columns keeping index as the id column
# and sort the result, so NaNs appear at the end
df3= df2.melt(id_vars=['index'])
df3.sort_values('value', ascending=True, inplace=True)
# now take the values of the first row per index
df3.groupby('index')[['variable', 'value']].agg('first')
Or shorter:
(
df.reset_index(drop=False)
.melt(id_vars=['index'])
.sort_values('value')
.groupby('index')[['variable', 'value']].agg('first')
)
The result is:
variable value
index
0 alpha 1.0
1 delta 1.0
2 alpha NaN
3 bravo -1.0
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64850612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: PDF link in HTML opens up blank page I'm sharing links to PDF files that reside in an external server.
Here is how the links look like:
<a href="http://static.googleusercontent.com/external_content/untrusted_dlcp/www.google.com/en//help/hc/pdfs/mobile/AndroidUsersGuide-40-en.pdf">link 1</a>
<a href="http://mywindowsazureblob.blob.core.windows.net/levelblob/files/93c9f263-fbba-4e51-9726-95884aca6f2f.pdf">link 2</a>
In the above sample page I made (azure domain renamed), the first link opens in Chrome's PDF viewer, the 2nd one however opens in a blank page.
What can be the reason?
A: I think I've found the solution.
When uploading a file to Azure blob storage, the server doesn't set the content type of the file according to its extension/content, so when the file is downloaded by the client, the wrong content type misleads the browser.
The default Azure blob content type is application/octet-stream.
Check here and here for more.
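The fix is to set the blob's Content-Type explicitly when uploading, or to correct it afterwards via the Azure SDK or portal (the exact API call depends on your SDK version, so treat that part as an assumption). The correct type can be derived from the file extension with Python's standard library, for example:

```python
import mimetypes

def content_type_for(filename):
    """Guess the Content-Type from the file extension; fall back to Azure's default."""
    ctype, _ = mimetypes.guess_type(filename)
    return ctype or "application/octet-stream"

print(content_type_for("93c9f263-fbba-4e51-9726-95884aca6f2f.pdf"))
# application/pdf
print(content_type_for("unknown.xyzzy"))
# application/octet-stream
```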
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14107838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: progressdialog bar not working I'm uploading files and I'm trying to use a progress bar while uploading.
First of all, I start the upload with this code:
@Override
public void onClick(View v) {
if(v== ivAttachment){
//on attachment icon click
showFileChooser();
}
if(v== bUpload){
//on upload button Click
if(selectedFilePath != null){
dialog = new ProgressDialog(upload.this);
dialog.setMax(100);
dialog.setMessage("Subiendo Archivo...");
dialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
dialog.setProgress(0);
dialog.show();
//dialog.show(upload.this,"","Subiendo Archivo...",true);
new Thread(new Runnable() {
@Override
public void run() {
//creating new thread to handle Http Operations
uploadFile(selectedFilePath);
}
}).start();
}else{
Toast.makeText(upload.this,"Escoge un archivo",Toast.LENGTH_SHORT).show();
}
}
}
Then uploadFile(selectedFilePath) starts. Here is my code:
//android upload file to server
public int uploadFile(final String selectedFilePath){
int serverResponseCode = 0;
File sourceFile = new File(selectedFilePath);
int totalSize = (int)sourceFile.length();
HttpURLConnection connection;
DataOutputStream dataOutputStream;
String lineEnd = "\r\n";
String twoHyphens = "--";
String boundary = "*****";
int bytesRead,bytesAvailable,bufferSize;
byte[] buffer;
int maxBufferSize = 1 * 1024 * 1024;
File selectedFile = new File(selectedFilePath);
String[] parts = selectedFilePath.split("/");
final String fileName = parts[parts.length-1];
if (!selectedFile.isFile()){
dialog.dismiss();
runOnUiThread(new Runnable() {
@Override
public void run() {
tvFileName.setText("Source File Doesn't Exist: " + selectedFilePath);
}
});
return 0;
}else{
try{
FileInputStream fileInputStream = new FileInputStream(selectedFile);
URL url = new URL(SERVER_URL);
connection = (HttpURLConnection) url.openConnection();
connection.setDoInput(true);//Allow Inputs
connection.setDoOutput(true);//Allow Outputs
connection.setUseCaches(false);//Don't use a cached Copy
connection.setRequestMethod("POST");
connection.setRequestProperty("Connection", "Keep-Alive");
connection.setRequestProperty("ENCTYPE", "multipart/form-data");
connection.setRequestProperty("Content-Type", "multipart/form-data;boundary=" + boundary);
connection.setRequestProperty("uploaded_file",selectedFilePath);
//creating new dataoutputstream
dataOutputStream = new DataOutputStream(connection.getOutputStream());
//writing bytes to data outputstream
dataOutputStream.writeBytes(twoHyphens + boundary + lineEnd);
dataOutputStream.writeBytes("Content-Disposition: form-data; name=\"uploaded_file\";filename=\""
+ selectedFilePath + "\"" + lineEnd);
dataOutputStream.writeBytes(lineEnd);
//returns no. of bytes present in fileInputStream
bytesAvailable = fileInputStream.available();
//selecting the buffer size as minimum of available bytes or 1 MB
bufferSize = Math.min(bytesAvailable,maxBufferSize);
//setting the buffer as byte array of size of bufferSize
buffer = new byte[bufferSize];
//reads bytes from FileInputStream(from 0th index of buffer to buffersize)
bytesRead = fileInputStream.read(buffer,0,bufferSize);
int totalBytesWritten = 0;
//loop repeats till bytesRead = -1, i.e., no bytes are left to read
while (bytesRead > 0) {
//write the bytes read from inputstream
dataOutputStream.write(buffer, 0, bufferSize);
bytesAvailable = fileInputStream.available();
bufferSize = Math.min(bytesAvailable, maxBufferSize);
bytesRead = fileInputStream.read(buffer, 0, bufferSize);
totalBytesWritten += bytesRead;
if (dialog != null) {
dialog.setProgress((int)(totalBytesWritten/ totalSize * 100));
}
}
dataOutputStream.writeBytes(lineEnd);
dataOutputStream.writeBytes(twoHyphens + boundary + twoHyphens + lineEnd);
serverResponseCode = connection.getResponseCode();
String serverResponseMessage = connection.getResponseMessage();
Log.i(TAG, "Server Response is: " + serverResponseMessage + ": " + serverResponseCode);
//response code of 200 indicates the server status OK
if(serverResponseCode == 200){
runOnUiThread(new Runnable() {
@Override
public void run() {
tvFileNames.setText("Sigue el link para ver tu archivo:");
tvFileName.setText(Html.fromHtml("<a href=\"https://cloud.dattasolutions.com.mx/app/uploads/" + fileName + "\"><font color=\"red\">" + fileName + "</font></a> "));
tvFileName.setMovementMethod(LinkMovementMethod.getInstance());
// tvFileName.setTextColor(Color.BLUE);
//tvFileName.setText("Escoger otro archivo");
ivAttachment.setVisibility(View.VISIBLE);
vView1.setVisibility(View.GONE);
tvFileNames.setVisibility(View.VISIBLE);
ivAttachment.setImageResource(R.drawable.attach_icon);
android.view.ViewGroup.LayoutParams layoutParams = ivAttachment.getLayoutParams();
layoutParams.width = 300;
layoutParams.height = 300;
ivAttachment.setLayoutParams(layoutParams);
//POSISION TEXTO
RelativeLayout.LayoutParams params= new RelativeLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT,ViewGroup.LayoutParams.WRAP_CONTENT);
params.addRule(RelativeLayout.BELOW, R.id.ivAttachment);
params.setMargins(0, 73, 0 ,0);
RelativeLayout.LayoutParams params1= new RelativeLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT,ViewGroup.LayoutParams.WRAP_CONTENT);
params1.addRule(RelativeLayout.BELOW, R.id.tv_file_names);
params1.setMargins(0, 123, 0 ,0);
tvFileNames.setLayoutParams(params);
tvFileName.setLayoutParams(params1);
Toast.makeText(getApplicationContext(),
"Archivo subido correctamente!", Toast.LENGTH_LONG)
.show();
}
});
}
//closing the input and output streams
fileInputStream.close();
dataOutputStream.flush();
dataOutputStream.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(upload.this,"File Not Found",Toast.LENGTH_SHORT).show();
}
});
} catch (MalformedURLException e) {
e.printStackTrace();
Toast.makeText(upload.this, "URL error!", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
e.printStackTrace();
Toast.makeText(upload.this, "Cannot Read/Write File!", Toast.LENGTH_SHORT).show();
}
dialog.dismiss();
return serverResponseCode;
}
}
the problem is here:
int totalBytesWritten = 0;
//loop repeats till bytesRead = -1, i.e., no bytes are left to read
while (bytesRead > 0) {
//write the bytes read from inputstream
dataOutputStream.write(buffer, 0, bufferSize);
bytesAvailable = fileInputStream.available();
bufferSize = Math.min(bytesAvailable, maxBufferSize);
bytesRead = fileInputStream.read(buffer, 0, bufferSize);
totalBytesWritten += bytesRead;
if (dialog != null) {
dialog.setProgress((int)(totalBytesWritten/ totalSize * 100));
}
exactly here:
if (dialog != null) {
dialog.setProgress((int)(totalBytesWritten/ totalSize * 100));
}
The progress bar is at 100% all the time.
A: In onCreate (or in your onClick if you want) you should new up a Handler:
Handler mMainHandler = new Handler(Looper.getMainLooper());
then you can use
private void updateUI() {
    mMainHandler.post(new Runnable() {
        @Override
        public void run() {
            // touch the dialog here, e.g. dialog.setProgress(...)
        }
    });
}
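Separately from the threading issue, note that totalBytesWritten / totalSize * 100 in the question performs integer division first, so the intermediate ratio truncates to 0 or 1 before the multiplication ever happens. A sketch of a safe percentage helper (the class and method names are illustrative, not from the original code):

```java
public class Progress {
    // Multiply first (as long, to avoid overflow on large files), then divide:
    // written * 100 / total, NOT (written / total) * 100.
    static int percent(long written, long total) {
        if (total <= 0) return 0;
        return (int) (written * 100L / total);
    }

    public static void main(String[] args) {
        System.out.println(percent(512, 2048));   // 25
        System.out.println(percent(2048, 2048));  // 100
    }
}
```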
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46557071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can i call the whole html page to a request handler JavaScript file? I have an html page with a simple form with a post method and a text area field.
I want to read this whole HTML page into a local variable and use the HTTP write method to return it to the browser.
This is the code snippet of the html.
<!DOCTYPE html>
<html id="upload-htm" lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Upload file</title>
</head>
<body>
<form action="/upload" method="post">
<textarea name="inpt-text" id="in-txt" cols="30" rows="10"></textarea>
<button type="submit">upload file</button>
</form>
</body>
<script src="./requestHandlers.js"></script>
</html>
Then i also have a requestHandler.js file. My request handler should display the response it gets to the browser.
function start(res) {
console.log("Request handler 'start' was called");
let body = document.getElementById('upload-htm');
res.writeHead(200, {
"content-type": "text/plain"
});
res.write(body);
res.end();
}
function upload(res) {
console.log("Request handler 'upload' was called");
res.writeHead(200, {
"content-type": "text/plain"
});
res.write(`Hello upload Mr. Paullaster`);
res.end();
}
export {
start,
upload
};
When i run the code i get the following error from request handler start
PS C:\Users\paullaster-geek\OneDrive\Desktop\Projects\Dive node> node -r esm
index.js
Response ready
Request for /upload recieved
About to route a request for /upload
Request handler 'upload' was called
Request for /start recieved
About to route a request for /start
Request handler 'start' was called
ReferenceError: document is not defined
at Object.start (C:\Users\paullaster-geek\OneDrive\Desktop\Projects\Dive
node\requestHandlers.js:5:15)
at route (C:\Users\paullaster-geek\OneDrive\Desktop\Projects\Dive
node\router.js:4:28)
at Server.onRequest (C:\Users\paullaster-geek\OneDrive\Desktop\Projects\Dive
node\server.js:11:8)
at Server.emit (events.js:314:20)
at Server.EventEmitter.emit (domain.js:486:12)
at parserOnIncoming (_http_server.js:781:12)
at HTTPParser.parserOnHeadersComplete (_http_common.js:119:17)
PS C:\Users\paullaster-geek\OneDrive\Desktop\Projects\Dive node>
A: A website usually consists of two major portions, the server side logic (back-end) and the client side logic (front-end). Both logics usually execute separately from each other, and therefore can only communicate via network channels, and not via logic. This means they do not share variables or data as conventional program logic does.
From the code you have shared, it appears that you are attempting to execute client side logic on your server side application. The document object in JavaScript is used to access and manipulate the DOM within an HTML page. The server side application does not have access to this document object, and therefore its access is invalid. In the server side application's context, document object is undefined.
You can make this work by calling these functions via client side logic. Try registering a function call for upload and start functions against a button click. That will most likely work.
A: Node.js has a built-in file system module (fs) whose methods can be used to serve static pages. For example:
const { createReadStream } = require('fs');
const readStream = createReadStream('./staticfile/index.html');
res.writeHead(200, { "content-type": "text/html" });
readStream.pipe(res);
This sends the HTML file in chunks of data.
Refer here: https://nodejs.org/api/fs.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63125655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Which http request method should I use in Laravel routes? I'm developing a rest api in Laravel 5.7. I know how to make api controllers and how to define appropriate api resource routes referring to corresponding methods in the controller like below:
Route::apiResource('platforms', 'PlatformController');
However, I'm not sure which HTTP request method I should use when defining other methods in my controller beyond the five RESTful controller methods (index, store, show, update and delete). For example, when I define a controller function for toggling a Boolean value in the database, 'GET', 'POST' and 'PUT' all work. So, which one is the best choice?
A: Here are the basic route descriptions.
you can know more from
https://laravel.com/docs/5.7/routing
┌────────┬─────────┬──────────────────────────────────┬────────────────────────┐
│ HTTP │ CRUD │ ENTIRE COLLECTION (e.g /USERS) │ SPECIFIC ITEM │
│ METHOD │ │ │ (e.g. /USERS/123) │
├────────┼─────────┼──────────────────────────────────┼────────────────────────┤
│ POST │ Create │ 201 (Created), 'Location' │ Avoid using POST │
│ │ │ with header link to /users/{id} │ on single resource │
│ │ │ containing new ID. │ │
├────────┼─────────┼──────────────────────────────────┼────────────────────────┤
│ GET │ Read │ 200 (OK), list of users. Use │ 200 (OK), single user │
│ │ │ pagination, sorting and │ 404 (Not Found), If ID │
│ │ │ filtering to navigate big lists. │ not found or invalid. │
├────────┼─────────┼──────────────────────────────────┼────────────────────────┤
│ PUT │ Update/ │ 404 (Not Found), unless you want │ 200 (OK), or 204 (No │
│ │ Replace │ to update every resource in the │ Content). Use 404 (Not │
│ │ │ entire collection of resource. │ Found). If ID not │
│ │ │ │ found or invalid. │
├────────┼─────────┼──────────────────────────────────┼────────────────────────┤
│ PATCH │ Partial │ 404 (Not Found), unless you want │ 200 (OK), or 204 (No │
│ │ Update/ │ to modify the collection itself. │ Content). Use 404 (Not │
│ │ Modify │ │ Found). If ID not │
│ │ │ │ found or invalid. │
├────────┼─────────┼──────────────────────────────────┼────────────────────────┤
│ DELETE │ Delete │ 404 (Not Found), unless you want │ 200 (OK), 404 (Not │
│        │         │ to delete the whole collection - │ Found). If ID not      │
│ │ │ use with caution. │ found or invalid │
└────────┴─────────┴──────────────────────────────────┴────────────────────────┘
A: Here are the basic rules for using HTTP methods:
GET : when you need to fetch or retrieve information
POST : when you need to create or insert information
PUT : when you need to update an existing record
For more information you can use this link:
https://restfulapi.net/http-methods/
A: Adding to the answer by Lokesh in reference to Laravel.
"index" method uses GET REQUEST as it retrieves records from the database.
"store" method uses POST REQUEST as it stores records in the database.
"update" method uses PUT REQUEST as it updates record in the database.
"show" method uses GET REQUEST as it retrieves single record from the database.
"delete" method uses DELETE REQUEST as it retrieves single record from the database.
Therefore you would want to use POST/PUT REQUEST if you want to change the record in the database. While toggling the status, the standard option is to use PUT, as your are updating a record.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54643849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to Diff Files Directly from the Linux Kernel GIT Repository? I'd like to be able to diff files / directories directly from the Linux Kernel GIT repository without having to download full source.
Specifically, I'm interested in two potential solutions:
*
*The ability to do diff's via a web browser ( firefox )
*A GUI utility for Ubuntu that can do remote diffs.
*A tutorial on how to set up option #2
Edit
As an example of what I'm looking for, I used to use CrossVC for the above tasks on a CVS repo.
A: Gitweb at kernel.org allows you to view the diff between arbitrary commits; see for example the following link for the diff between v2.6.32-rc6 and v2.6.32-rc7:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;hp=refs/tags/v2.6.32-rc6;h=refs/tags/v2.6.32-rc7
(use patch link to get plain patch that you can apply), and between arbitrary versions of file / between arbitrary versions of arbitrary files, e.g.: diff to current link in history view.
Unfortunately, neither the official gitweb version (distributed together with Git itself) nor the fork used by kernel.org generates links between arbitrary commits, so you would have to handcraft (create by hand) the URLs to give to gitweb. In the case of the commitdiff view (action), the parameters you need are 'h' (hash) and 'hp' (hash parent); in the case of the blobdiff view they are 'hb' (hash base) and 'hpb' (hash parent base), plus 'f' (filename) and 'fp' (file parent).
Templates
*
*For diff between two arbitrary commits (equivalent of git diff A B from command line)
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;hp=A;h=B
*For diff between two versions of the same file (equivalent of git diff A B <filename>).
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blobdiff;f=<filename>;hpb=A;hp=B
Note that with core gitweb (but not, currently, the fork used by kernel.org) you can use the path_info URL form, e.g.:
http://repo.or.cz/w/git.git/blobdiff/A..B:/<filename>
How to find it
*
*Find in a web interface a commit which is a merge commit, for example
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=1c5aefb5b12a90e29866c960a57c1f8f75def617
*Find a link to diff between a commit and a second parent, for example
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/diff/?id=1c5aefb5b12a90e29866c960a57c1f8f75def617&id2=54a217887a7b658e2650c3feff22756ab80c7339
*Replace SHA-1 of compared commits with revision names or revision identifiers you want to compare, for example to generate diff between v3.15-rc8 and v3.15-rc7
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/diff/?id=v3.15-rc8&id2=v3.15-rc7
or to generate patch (rawdiff)
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/rawdiff/?id=v3.15-rc8&id2=v3.15-rc7
A: The system which creates the diff (whether that is your webserver or your local system) must have a full copy (clone) of the git repo.
So you cannot do "remote diffs".
So, if you want to avoid doing a git clone of the whole kernel, why not just point your web browser to http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=summary?
A: Since 2013, the reworked kernel.org website uses cgit to browse repositories.
As an example of cgit URL for a diff between two tags:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/diff/?id=v3.19-rc2&id2=v3.19-rc1&dt=2
That is also why Git 2.38 (Q3 2022) modified gitweb: gitweb had a legacy URL shortener that was specific to the way projects were hosted on kernel.org. It used to work, but no longer does, and has been removed.
See commit 75707da (26 Jul 2022) by Julien Rouhaud (rjuju).
(Merged by Junio C Hamano -- gitster -- in commit dcdcc37, 05 Aug 2022)
gitweb: remove title shortening heuristics
Signed-off-by: Julien Rouhaud
Those heuristics are way outdated and too specific to the kernel project to be useful outside of kernel.org.
Since kernel.org doesn't use gitweb anymore and at least one project complained about incorrect behavior, entirely remove them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1737306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Converting PuTTY's svn repository to mercurial repository I'm going to manage a Korean localized version of PuTTY in mercurial.
The requirements for the mercurial repository:
*
*We should be able to keep track of the latest revisions from the PuTTY svn repository.
*No pushing is required.
My plan is to have the original trunk and branches as named branches in the mercurial repository, and to add my own branch.
I'm going to use hgsubversion for continuous pulling after initial conversion.
The problem is, the PuTTY repository (http://svn.tartarus.org/sgt/) is not in the standard layout and, worse, also contains other projects. DVCS conversion tools work well with standard-layout repositories, but not with non-standard ones.
So I have to map the directories to make it "standard" like:
*
*/putty => /trunk
*/putty-0.xx => /tags/0.xx
*/putty-branch-0.xx => /branches/0.xx
*ignore all other directories
If the trunk had every revision required for the releases, converting only the trunk would be okay.
But unfortunately, version 0.62 was released from the putty-branch-0.61 branch! So I could not get the latest revisions for it from the trunk alone. :(
I'm trying to use svnsync, svnadmin dump and svndumpfilter to convert the original svn repository to a standard layout before the mercurial conversion, but manually mapping the directories as I want is not possible with them. (Or maybe I don't know how to do it with them.)
Any suggestions and comments?
A: I took a look at the repository. You are correct that svndumpfilter cannot be used to rename a file throughout the history, so I wrote a small script that does the renaming in the dump file. The only tricky part was to add the creation of the tags and branches folders. To use the script, you should make a cronjob or similar that:
*
*downloads the latest Putty SVN dump file:
$ wget http://www.chiark.greenend.org.uk/~sgtatham/putty/putty-svn.dump.gz
*fixes the dump file with the script:
$ zcat putty-svn.dump.gz | fix-dump.py > fixed.dump
*loads it into a new empty repository:
$ svnadmin create putty
$ svnadmin load putty < fixed.dump
*converts the Subversion repository into a Mercurial repository:
$ hg convert file://$PWD/putty
As far as I can see, the branches and tags are created correctly.
You ask for continuous pulling (incremental conversion). Luckily, both hg convert and hgsubversion support this. You'll need to redo steps 1–3 every day before you can convert the changesets into Mercurial. This will work since the first three steps are deterministic. That means that your putty SVN repository behaves as if the Putty developers worked directly in it using the proper branch and tag names you maintain there.
The script is below:
#!/usr/bin/python
import sys, re
moves = [(r"^Node(-copyfrom|)?-path: %s" % pattern, r"Node\1-path: %s" % repl)
for (pattern, repl) in [(r"putty-branch-(0\...)", r"branches/\2"),
(r"putty-(0\...)", r"tags/\2"),
(r"putty(/|\n)", r"trunk\2")]]
empty_dir_template = """\
Node-path: %s
Node-kind: dir
Node-action: add
Prop-content-length: 10
Content-length: 10
PROPS-END\n\n"""
created_dirs = False
for line in sys.stdin:
if not created_dirs and line == "Node-path: putty\n":
sys.stdout.write(empty_dir_template % "tags")
sys.stdout.write(empty_dir_template % "branches")
created_dirs = True
for pattern, repl in moves:
line, count = re.subn(pattern, repl, line, 1)
if count > 0: break
sys.stdout.write(line)
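To sanity-check the renaming rules before feeding a real dump through the script, you can exercise the same regex table on a few sample Node-path lines (a small test sketch, reusing the expressions from fix-dump.py above):

```python
import re

# The same renaming table as in fix-dump.py.
moves = [(r"^Node(-copyfrom|)?-path: %s" % pattern, r"Node\1-path: %s" % repl)
         for (pattern, repl) in [(r"putty-branch-(0\...)", r"branches/\2"),
                                 (r"putty-(0\...)", r"tags/\2"),
                                 (r"putty(/|\n)", r"trunk\2")]]

def rename(line):
    """Apply the first matching rename rule to a dump-file line."""
    for pattern, repl in moves:
        line, count = re.subn(pattern, repl, line, count=1)
        if count > 0:
            break
    return line
```

For example, rename("Node-path: putty-branch-0.61\n") should yield "Node-path: branches/0.61\n", and trunk paths like "Node-path: putty/ssh.c\n" become "Node-path: trunk/ssh.c\n".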
A: I have decided to keep track of ONLY the released source code, not every revision.
So the result is here: https://bitbucket.org/daybreaker/iputty/changesets .
To do this, I have followed these steps (for example):
svn ls -R svn://svn.tartarus.org/sgt/putty-0.58 > 58.txt
svn ls -R svn://svn.tartarus.org/sgt/putty-0.59 > 59.txt
svn ls -R svn://svn.tartarus.org/sgt/putty-0.60 > 60.txt
svn ls -R svn://svn.tartarus.org/sgt/putty-0.61 > 61.txt
svn ls -R svn://svn.tartarus.org/sgt/putty-0.62 > 62.txt
hg init iputty
cd iputty
svn export --force svn://svn.tartarus.org/sgt/putty-0.58 .
hg branch original
hg add
hg commit -m 'Imported PuTTY 0.58 release.'
svn export --force svn://svn.tartarus.org/sgt/putty-0.59 .
diff -U3 ../58.txt ../59.txt
hg add (added files from diff)
hg rm (removed files from diff)
hg commit -m 'Imported PuTTY 0.59 release.'
(repeat this for the remaining releases)
hg up -r(rev# of 0.60 release)
svn export --force (URL of my own modified PuTTY repository) .
hg branch default
hg commit -m 'Imported the most recent dPuTTY source code. blah blah'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8625727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Getting an in-memory representation of the JPEG rendering of an Image I need to know how to get an array of bytes from a loaded image, in java. BufferedImage seems not to supply any methods that produce an array of bytes, so what do I use?
A: BufferedImage bufferedImage; //assumed that you have created it already
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
ImageIO.write(bufferedImage,"jpg", byteStream);
byte[] byteArray = byteStream.toByteArray();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1352229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: sed only replaces first leading white space match for only a particular file - dealing with CR-only line endings
Editor's note:
The title was amended later with the benefit of hindsight; there were two distinct problems:
(a) it turned out that the input file had \r-only (CR-only) line endings (classic Mac OS-style)
(b) attempts to use \t and \r in sed regexes failed, because BSD Sed (as used on OSX) doesn't support such escapes.
I'm working on an Automator program that uses Python to find-and-replace certain words in a text file. The program uses a dictionary, and there are instances in which the value used as a replacement is '' (meaning, nothing). I don't think that the program is causing this issue, but I just mention this by way of context. (The problem, I think, lies with sed, so I was reluctant to tag Python.)
Some lines in the file have leading white space that is created inadvertently after certain words at the beginning of a line are replaced by nothing. I want to get rid of it, and I think sed is the best tool for the job in this case.
Let's say this is what the text file looks like:
Display
Display
BOX,
So I'm running the edited file through sed using this:
sed -e 's/^[ \t]*//g'
This is the result:
Display
Display
BOX,
Only the first match is edited. Why?
By way of a test, I created a brand new plain text file like this:
hello
hello
hello
Then I ran the command above on it. That actually worked as expected. Why?
Is it possible that there is some other form of space being used (a non-printable character?) that was created by the Python program? But then why would sed work at least once?
By the way, I am open to another portable solution or tool compatible with OS X for trimming leading white space from every line in a plain text file.
Edit: Here is some of the xxd output of the file (replaced most actual content with X):
0000000: 2044 6973 706c 6179 2043 616c 6962 7261 X X
0000010: 7469 6f6e 2046 6978 7475 7265 2046 4952 X X X
0000020: 4d57 4152 4520 4b49 545e 4d20 4469 7370 X X^M X
0000030: 6c61 7920 4361 6c69 6272 6174 696f 6e20 X X
0000040: 4669 7874 7572 6520 524d 6163 426f 6f6b X X
0000050: 2041 6972 2028 3131 2d69 6e63 682c 204d X X
0000060: 6964 2032 3031 3229 2050 4f52 5420 4b49 X X) X X
0000070: 545e 4d42 4f58 2c20 5245 434f 5645 5259 T^MBOX, X
A: tl;dr
None of the solutions below update the input file in place; the stand-alone sed commands could be adapted with -i '' to do that; the awk solutions require saving to a different file first.
*
*The OP's input appears to be a file with classic Mac OS \r-only line breaks
Thanks, @alvits.
.
*sed invariably reads such a file as a whole, which is typically undesired and gets in the way of the OP's line-leading whitespace-trimming approach.
*awk is therefore the better choice, because it allows specifying what constitutes a line break (via the so-called input record separator):
Update: Replaced the original awk command with a simpler and faster alternative, adapted from peak's solution:
awk -v RS='\r' '{ sub(/^[ \t]+/, ""); print }'
If it's acceptable to also trim trailing whitespace, if any, from each line and to normalize whitespace between words on a line to a single space each, you can simplify to:
awk -v RS='\r' '{ $1=$1; print }'
Note that the output lines will be \n-separated, as is typically desired.
For an explanation and background information, including how to preserve \r as line breaks, read on.
Note: The first part of the answer applies generally, but assumes that the input has \n-terminated lines; the OP's special case, where lines are apparently \r-only-terminated, is handled in the 2nd part.
BSD Sed, as used on OSX, only supports \n as a control-character escape sequence; thus, \t for matching tab chars. is not supported.
To still match tabs, you can splice an ANSI C-quoted string yielding an actual tab char. into your Sed script ($'\t'):
sed 's/^[ '$'\t'']*//'
In this simple case you could use an ANSI C-quoted string for the entire Sed script (sed -e $'s/^[ \t]*//'), but this can get tricky with more complex scripts, because such strings have their own escaping rules.
*
*Note that option g was removed, because it is pointless, given that the regex is anchored to the start of the input (^).
*For a summary of the differences between GNU and BSD Sed, see this answer of mine.
As @alvits points out in a comment, the input file may actually have \r instances instead of the \n instances that Sed requires to separate lines.
I.e., the file may have Pre-OSX Mac OS line terminators: an \r by itself terminates a line.
An easy way to verify that is to pass the input file to cat -et: \r instances are visualized as ^M, whereas \n instances are visualized as $ (additionally, \t instances are visualized as ^I).
If only ^M instances, but no $ instances are in the output, the implication is that lines aren't terminated with \n (also), and the entire input file is treated as a single string, which explains why only the first input "line" was processed: the ^ only matched at the very beginning of the entire string.
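Besides cat -et, you can count the raw line-ending bytes directly; a small Python sketch (illustrative, not from the original answer):

```python
def ending_counts(data: bytes):
    """Count CRLF, bare-CR and bare-LF line endings in raw file bytes."""
    crlf = data.count(b"\r\n")
    cr = data.count(b"\r") - crlf   # CRs not followed by LF (classic Mac OS)
    lf = data.count(b"\n") - crlf   # LFs not preceded by CR (Unix)
    return {"crlf": crlf, "cr": cr, "lf": lf}
```

A file reporting only "cr" endings is the \r-only case discussed here; feed it `open(path, "rb").read()`.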
Since a Sed solution (without preprocessing) causes the entire file to be read as a whole, awk is the better choice:
To create \n-separated output, as is customary on Unix-like platforms:
awk -v RS='\r' '{ sub(/^[ \t]+/, ""); print }'
*
*-v RS='\r' tells Awk to split the input into records by \r instances (special variable RS contains the input record separator).
*sub(/^[ \t]+/, "") searches for the first occurrence of regex ^[ \t]+ on the input line and replaces it with "", i.e., it effectively trims a leading run of spaces and tabs from each input line. Note that sub() without an explicit 3rd argument implicitly operates on $0, the whole input line.
*print then prints the potentially modified input line.
*By virtue of \n being Awk's default output record separator (OFS), the output records will be \n-terminated.
If you really want to retain \r as the line separator:
awk 'BEGIN { RS=ORS="\r" } { sub(/^[ \t]+/, ""); print }'
*
*RS=ORS="\r" sets both the input and the output record separator to \r.
If it's acceptable to also trim trailing whitespace, if any, from each line and to normalize whitespace between words on a line to a single space each, you can simplify the \n-terminated variant to:
awk -v RS='\r' '{ $1=$1; print }'
*
*Not using -F (and neither setting FS, the input field separator, in the script) means that Awk splits the input record into fields by runs of whitespace (spaces, tabs, newlines).
*$1=$1 is a dummy assignment whose purpose is to trigger rebuilding of the input line, which happens whenever a field variable is assigned to.
The line is rebuilt by joining the fields with OFS, the output-field separator, which defaults to a single space.
In effect, leading and trailing whitespace is thereby trimmed, and each run of line-interior whitespace is normalized to a single space.
If you do want to stick with sed1
- even if that means reading the whole file at once:
sed $'s/^[ \t]*//; s/\r[ \t]*/\\\n/g' # note the $'...' to make \t, \r, \n work
This will output \n-terminated lines, as is customary on Unix.
If, by contrast, you want to retain \r as the line separators, use the following - but note that BSD Sed will invariably add a \n at the very end.
sed $'s/^[ \t]*//; s/\r[ \t]*/\r/g'
[1] peak's answer originally showed a pragmatic multi-utility alternative more clearly: replace all \r instances with \n instances using tr, and pipe the result to the BSD-Sed-friendly version of the original sed command:
tr '\r' '\n' < file | sed $'s/^[ \t]*//'
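For completeness, the same trim-and-convert transformation (CR-separated input, \n-separated trimmed output) is easy to express in Python as well (an illustrative sketch, not from the original answers):

```python
def trim_mac_lines(text: str) -> str:
    """Split classic Mac OS \\r-terminated text into lines, strip leading
    spaces and tabs from each line, and join with Unix \\n line endings."""
    return "\n".join(line.lstrip(" \t") for line in text.split("\r"))
```

Applied to the OP's sample, " Display\r\tDisplay\rBOX," becomes "Display\nDisplay\nBOX,".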
A: If (as seems to be the case) the input file uses \r as the "end-of-line" character, then whatever else is done, it would probably make sense to convert the '\r' to '\n' or CRLF, depending on the platform. Assuming that '\n' is acceptable, and if there is any point in saving the original file with the CR replaced by LF, you could use tr:
tr '\r' '\n' < INFILE > OUTFILE
With a bash-like shell, you could then invoke sed like so:
sed -e $'s/^[ \t]*//' OUTFILE
The tr and sed commands could of course be strung together (tr ... | sed ...) but that incurs the overhead of a pipeline.
If you have no interest in saving the original file with the CR replaced by LF, then you may wish to consider the following one-stop awk variation:
awk -v RS='[\r]' '{s=$0; sub(/^[ \t]*/,"",s); print s}'
This variation is both fast and safe as no parsing into fields is involved.
(As pointed out elsewhere, one advantage of using awk is that ORS can be used to set the output-record-separator if the default setting is unsatisfactory.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35327981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: keep rows that start with certain text strings Background
I have the following df
import pandas as pd
df = pd.DataFrame({'Text' : ['\n[SPORTS FAN]\nHere',
'Nothing here',
'\n[BASEBALL]\nTHIS SOUNDS right',
'\n[SPORTS FAN]\nLikes sports',
'Nothing is here',
'\n[NOT SPORTS]\nTHIS SOUNDS good',
'\n[SPORTS FAN]\nReally Big big fan',
'\n[BASEBALL]\nRARELY IS a fan'
],
'P_ID': [1,2,3,4,5,6,7,8],
'P_Name' : ['J J SMITH',
'J J SMITH',
'J J SMITH',
'J J SMITH',
'MARY HYDER',
'MARY HYDER',
'MARY HYDER',
'MARY HYDER']
})
Output
P_ID P_Name Text
0 1 J J SMITH \n[SPORTS FAN]\nHere
1 2 J J SMITH Nothing here
2 3 J J SMITH \n[BASEBALL]\nTHIS SOUNDS right
3 4 J J SMITH \n[SPORTS FAN]\nLikes sports
4 5 MARY HYDER Nothing is here
5 6 MARY HYDER \n[NOT SPORTS]\nTHIS SOUNDS good
6 7 MARY HYDER \n[SPORTS FAN]\nReally Big big fan
7 8 MARY HYDER \n[BASEBALL]\nRARELY IS a fan
Goal
Keep rows that start with '\n[SPORTS FAN]\ and \n[BASEBALL]\n
Desired Output
P_ID P_Name Text
0 1 J J SMITH \n[SPORTS FAN]\nHere
2 3 J J SMITH \n[BASEBALL]\nTHIS SOUNDS right
3 4 J J SMITH \n[SPORTS FAN]\nLikes sports
6 7 MARY HYDER \n[SPORTS FAN]\nReally Big big fan
7 8 MARY HYDER \n[BASEBALL]\nRARELY IS a fan
Question
How do I achieve my desired output?
A: Try this:
df_new = df.loc[df['Text'].str.startswith('\n[SPORTS FAN]') | df['Text'].str.startswith('\n[BASEBALL]')]
No regex required
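As a side note, Python's built-in str.startswith accepts a tuple of prefixes (and pandas' Series.str.startswith supports the same in recent versions), so the two conditions can be collapsed into one. A dependency-free sketch of the idea:

```python
# Tuple of prefixes to keep; str.startswith accepts a tuple natively.
prefixes = ('\n[SPORTS FAN]', '\n[BASEBALL]')

texts = ['\n[SPORTS FAN]\nHere',
         'Nothing here',
         '\n[BASEBALL]\nTHIS SOUNDS right']

kept = [t for t in texts if t.startswith(prefixes)]

# The pandas equivalent (recent versions) would be:
#   df.loc[df['Text'].str.startswith(prefixes)]
```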
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57498726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Scala Mongo driver getting results using Future I am intending to fetch all records that match a criteria from Mongo using Scala Mongo Driver.
Using Observables, you can access the stream by creating a subscription:
val MaxBuffer: Long = 100
var docs: Queue[Document] = Queue.empty
var sub: Option[Subscription] = None
val q: Observable[Document]
def fetchMoreRecords: Unit = sub.get.request(MaxBuffer)
q.subscribe(new Observer[Document] {
override def onSubscribe(subscription: Subscription): Unit = {
sub = Some(subscription)
fetchMoreRecords
}
override def onError(e: Throwable): Unit = fail(out, e)
override def onComplete(): Unit = {
println("Stream is complete")
complete(out)
}
override def onNext(result: Document): Unit = {
if (docs.size == MaxBuffer) {
fail(out, new RuntimeException("Buffer overflow"))
} else {
docs = docs :+ result
}
}
})
(this code is incomplete)
I would need a function like:
def isReady: Future[Boolean] = {}
Which completes whenever onNext was called at least once.
The bad way to do this would be:
def isReady: Future[Boolean] = {
Future {
def wait: Unit = {
if (docs.nonEmpty) {
true
} else { wait }
}
wait
}
}
What would be the best way to achieve this?
A: You want to use Promise:
val promise = Promise[Boolean]()
...
override def onNext() = {
...
promise.tryComplete(Success(true))
}
override def onError(e: Throwable) =
promise.tryComplete(Failure(e))
val future = promise.future
You should do something to handle the case when there are no results (as it is now, the future will never be satisfied)...
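For readers more familiar with Python, the same Promise idea can be sketched with concurrent.futures.Future (an illustrative cross-language sketch, not part of the Scala Mongo driver API):

```python
from concurrent.futures import Future

# A concurrent.futures.Future plays the role of Scala's Promise here:
# it is completed at most once, and readers block on .result().
promise = Future()

def on_next(result):
    # Complete on the first result; later calls are no-ops,
    # mirroring promise.tryComplete(Success(true)) in the answer.
    if not promise.done():
        promise.set_result(True)

def on_error(exc):
    if not promise.done():
        promise.set_exception(exc)

on_next("first document")
on_next("second document")  # ignored: the future is already completed
```

promise.result() then returns True as soon as the first document has arrived, without busy-waiting.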
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47753955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Overload bracket access and assignment C++ I'm writing a hash table for my data structs class, and I'd like to add a little syntactic sugar to my implementation.
template <typename HashedObj, typename Object>
Object & Dictionary<HashedObj, Object>::operator[](HashedObj & key)
{
return items.lookup(key);
}
That works fine when I do something like cout << dict["mykey"]. But how can I do assignment with the brackets? Something like:
dict["mykey"] = "something";
And no, this isn't part of my homework assignment (no pun intended), I just want to learn C++ a little better.
A: It is not clear what exactly you are asking here. The code that you presented already supports assignment. Just do it and it will work (or at least it should compile). It makes absolutely no difference which side of the assignment operator your overloaded [] is used on. It will work in exactly the same way on the left-hand side (LHS) as it does on the right-hand side (RHS) of the assignment (or as an operand of <<, as in your original post). Your [] returns a reference to an Object, and then the actual assignment is handled by the assignment operator of your Object type, meaning that [] itself is not really involved in the actual assignment.
The real question here is how you want your [] to act in certain special cases. What is going to happen if your key is not present in the table? Reference to what Object is your lookup going to return in this case?
It is impossible to figure out from what you posted. I see it returns a reference, so returning NULL is out of the question. Does it insert a new, empty Object for the given key? If so, then you don't have to do anything. Your [] is already perfectly ready to be used on the LHS of an assignment. (This is how [] in std::map works, BTW.)
In case your lookup returns a reference to a special "guard" Object, you have to take special steps. You probably don't want to assign anything to a "guard" object, so you have to "disable" its assignment operator somehow and you are done. The rest should work as is.
If your lookup throws an exception in case of a non-existent key, then you have to decide whether this is what you want when the [] is used on the LHS of an assignment. If so, then you don't need to do anything. If not, then it will take some extra work...
So, again, what happens if you pass a non-existent key to lookup?
P.S. Additionally, it would normally make more sense to declare the [] (and lookup) with either a const HashedObj& parameter or just a HashedObj parameter. A non-const reference, as in your example, looks strange and might lead to problems in some (actually, in most) cases. I'm surprised it works for you now...
A: You need to overload it twice: one overload that is const, which serves as the data-access part, and one that returns a non-const reference, which acts as the "setter".
A: What you're looking for is functionality similar to the overloaded bracket operator in std::map. In std::map the bracket operator performs a lookup and returns a reference to an object associated with a particular key. If the map does not contain any object associated with the key, the operator inserts a new object into the map using the default constructor.
So, if you have std::map<K,V> mymap, then calling mymap[someKey] will either return a reference to the value associated with someKey, or else it will create a new object of type V by calling V() (the default constructor of V) and then return a reference to that new object, which allows the caller to assign a value to the object.
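For a cross-language comparison (not C++), Python's collections.defaultdict shows the same "default-insert on a missing key, then let the caller assign" behavior that std::map's operator[] provides:

```python
from collections import defaultdict

# defaultdict(str) default-constructs an empty str for a missing key,
# much like std::map<K, V>::operator[] calling V().
d = defaultdict(str)

val = d["mykey"]          # looking up a missing key inserts "" and returns it
d["other"] = "something"  # plain assignment through the same [] syntax
```

The key point mirrored here is that the lookup itself creates the entry, so the subsequent assignment always has a slot to write into.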
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1618524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Convert varchar column to datetime2 without affecting column data? I need to change column datatype without affecting data in that column. I have table structure below:
CREATE TABLE [dbo].[PROJECT_LOG](
[ID] [int] IDENTITY(1,1) NOT NULL,
[UNIT] [int] NULL,
[NAME] [nvarchar](100) NULL,
[STATUS] [numeric](1, 0) NULL,
[LOG] [nvarchar](200) NULL,
[LAST_UPDATE] [nvarchar](100) NULL
) ON [PRIMARY]
GO
Now, I have around 200 records below:
I want to change column LAST_UPDATE (current mm/dd/yyyy format) to datetime2. Can anyone help me with this?
I tried using a convert query for this, as suggested below and in the answer here.
-- Add new column.
ALTER TABLE [dbo].[PROJECT_LOG] ADD LAST_UPDATE_TIME DATETIME;
-- Convert value.
UPDATE [dbo].[PROJECT_LOG]
SET LAST_UPDATE_TIME = CONVERT(nvarchar,
REPLACE(LEFT(LAST_UPDATE, 11), '/', ' ') + ' ' + RIGHT(LAST_UPDATE, 8), 101);
While executing the query, it throws an error:
A: Changing the table
The approach:
*
*Add a substitute column with the correct type (date recommended instead of datetime2(7))
*Update this column with Convert( date, LAST_UPDATE, 101 )
*Drop the original column
*Rename the new column to the name of the original column
Important note: Check all the import scripts to this table to fix the functions used to set LAST_UPDATE.
Alternative
*
*Add a derived (computed) column named LAST_UPDATE_DATE of type date
*Derived column formula: AS Convert( date, LAST_UPDATE, 101 ) [PERSISTED]
*Keep both values as imported and as needed
Important note: If you get any other date format other than US then this formula breaks as it explicitly expects the 101 US format.
View as crazy alternative
Build a view on top of this table that does the transformation. In SQL Server 2008 there is no TRY_CAST function to fail gracefully.
Use the view for downstream work.
Why date?
Type date costs 3 bytes and is perfect for date only values.
datetime2(0) costs 6 bytes, the default datetime2(7) costs 8 bytes.
References:
Cast and Convert https://learn.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-ver15
Datetime2 https://learn.microsoft.com/en-us/sql/t-sql/data-types/datetime2-transact-sql?view=sql-server-ver15
Try_Cast https://learn.microsoft.com/en-us/sql/t-sql/functions/try-cast-transact-sql?view=sql-server-ver15
A: Since you are using the 2008 version, TRY_CAST will not be helpful.
One safe method is to implement the update with a loop, going through the entire table row by row with a try/catch block inside the loop, where you can handle any failure at the time of casting; in the catch block you can update the value to NULL to identify the rows where casting failed.
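The row-by-row try/catch idea is the same pattern that TRY_CAST automates in later SQL Server versions; outside SQL, the shape of it looks like this (a Python sketch for illustration only):

```python
from datetime import datetime

def try_parse_us_date(value):
    """Return a date for a valid mm/dd/yyyy string, else None.

    This mirrors TRY_CAST semantics: a failed conversion yields
    NULL/None instead of aborting the whole update.
    """
    try:
        return datetime.strptime(value, "%m/%d/%Y").date()
    except (TypeError, ValueError):
        return None
```

Rows whose LAST_UPDATE text parses cleanly get a real date; anything malformed is flagged as None/NULL for later inspection.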
| {
"language": "en",
"url": "https://stackoverflow.com/questions/67981177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Web page auto dimming when hover on main menu I have a webpage running Magento, and it is using vmegamenu.
Now my customer wants to dim the page when someone hovers over the main menu or the dropdown box.
Example : http://jsfiddle.net/rkLL6yd8/3/
CSS
.link {
z-index: 700;
list-style-type: none;
padding: 0.5em;
background: black;
display: inline-block;
cursor: pointer;
color: white;
}
.dim {
width: 100%;
height: 100%;
z-index: -6;
display: none;
content: "";
position: absolute;
top: 0;
left: 0;
background: rgba(0, 0, 0, 0.4);
}
body {
background-color: orange;
}
JavaScript (jQuery)
$('.link').hover(function() {
$('.dim').fadeIn(200);
}, function() {
$('.dim').fadeOut(200);
});
HTML
<div class="dim"></div>
<ul>
<div class="link">
<li>Home</li>
</div>
<div class="link">
<li>Main</li>
</div>
<div class="link">
<li>Home</li>
</div>
<div class="link">
<li>Home</li>
</div>
</ul>
Some text here
The website is here : www.profileauto.co.uk
I have the current files for vmegamenu
vmegamenu.phtml
<div class="nav-vcontainer hidden-xs hidden-sm">
<div class="nav-inner">
<div class="vmegamenu-title"><h2><i class="fa fa-bars"></i><?php echo $this->__('Product Categories') ?></h2></div>
<div class="vmegamenu-contain">
<div id="nav_vmegamenu" class="nav_vmegamenu">
<?php $megamenu = $this->getLayout()->createBlock('megamenu/megamenu'); ?>
<?php //echo $megamenu->drawMegamenuHome(); ?>
<?php echo $megamenu->drawMegamenuMain(); ?>
<?php //echo $megamenu->drawMegamenuExtra(); ?>
<?php //echo $megamenu->drawMegamenuLink(); ?>
</div>
</div>
</div>
<script type="text/javascript">
//<![CDATA[
var MEGAMENU_EFFECT = <?php echo (int)Mage::getStoreConfig('megamenu/general/effect')?>;
//]]>
</script>
</div>
And this file
vmegamenu.css
.nav-vcontainer {
border-top: 0;
margin-top: -88px;
margin-bottom: 20px;
}
.catalog-category-view .main .nav-vcontainer,
.catalog-product-view .main .nav-vcontainer {
margin-top: -58px;
}
.cms-index-index .nav-vcontainer {
margin: -48px 0 0 ;
}
.nav-inner {
position:relative;
}
.vmegamenu-contain { border: 1px solid #f0f0f0; border-top: 0; margin: 0 -1px;}
.nav-vcontainer:hover .vmegamenu-contain { display: block;}
.nav-vcontainer .vmegamenu-title { }
.nav-vcontainer .vmegamenu-title h2 { color: #fff; font-size: 14px; text-transform: uppercase; padding: 15px; background: #181818; font-weight: normal; margin: 0; font-family: Montserrat; height: 48px;}
.nav-vcontainer .vmegamenu-title h2 i { margin-top: 2px; font-size: 14px; float: right;}
.nav_vmegamenu {
position:relative;
margin: 0 auto;
padding: 0;
background: #fff;
border-top: 0;
}
.nav_vmegamenu div.megamenu .level-top { position: relative;}
.nav_vmegamenu div.megamenu .hot,
.nav_vmegamenu div.megamenu .new {
display: none;
}
.nav_vmegamenu div.megamenu { position:relative; padding: 0; }
.nav_vmegamenu div.megamenu .level-top a {
padding: 11px 15px;
text-decoration: none;
display:block;
line-height: 30px;
color: #555;
border: none;
text-transform: capitalize;
margin: 0;
position: relative;
border-top: 1px solid #f0f0f0;
font-size: 14px;
background: url(../images/icon-menu.png) 0 0 no-repeat;
padding-left: 45px;
}
.nav_vmegamenu div.megamenu.nav-1 .level-top a { border: 0; background-position: 15px 18px; padding-top: 10px; padding-bottom: 10px; }
.nav_vmegamenu div.megamenu.nav-2 .level-top a { background-position: 15px -36px; padding-top: 10px; padding-bottom: 10px; }
.nav_vmegamenu div.megamenu.nav-3 .level-top a { background-position: 15px -86px; }
.nav_vmegamenu div.megamenu.nav-4 .level-top a { background-position: 15px -137px; }
.nav_vmegamenu div.megamenu.nav-5 .level-top a { background-position: 15px -190px; }
.nav_vmegamenu div.megamenu.nav-6 .level-top a { background-position: 17px -242px; }
.nav_vmegamenu div.megamenu.nav-7 .level-top a { background-position: 17px -294px; }
.nav_vmegamenu div.megamenu.nav-8 .level-top a { background-position: 15px -346px; }
.nav_vmegamenu div.megamenu.nav-9 .level-top a { background-position: 15px -398px; }
.nav_vmegamenu div.megamenu.nav-10 .level-top a { background-position: 15px -450px; }
.nav_vmegamenu div.megamenu .level-top a > .fa,
.nav_vmegamenu div.megamenu .level-top span.block-title > .fa {
position: absolute; right: 15px; top: 20px;
}
.nav_vmegamenu div.megamenu.megamenu_no_child .level-top a >.fa { display: none;}
.nav_vmegamenu div.megamenu .level-top a .fa-angle-down:before {
content: "\f0da";
color: #bababa;
font-size: 12px;
}
.nav_vmegamenu div.megamenu div.dropdown {
position: absolute;
background-color:#fff;
text-align:left;
margin: 0 0 0 20px;
opacity: 0;
z-index: -1;
top: 0;
left: 100% !important;
border: 1px solid #f0f0f0;
/*box-shadow: 1px 1px 1px 1px #eee;*/
border-top: 0;
width: 935px;
}
.nav_vmegamenu .block1 {
padding: 25px;
}
.nav_vmegamenu .block2 {
}
.nav_vmegamenu div.megamenu:hover div.dropdown {
opacity: 1;
margin: 0;
z-index: 1000;
}
.nav_vmegamenu div.megamenu.active .level-top a,
.nav_vmegamenu div.megamenu.act .level-top a,
.nav_vmegamenu div.megamenu.active .level-top span.block-title,
.nav_vmegamenu div.megamenu .level-top a:hover,
.nav_vmegamenu div.megamenu .level-top span.block-title:hover,
.nav_vmegamenu #pt_menu_link ul li a.act,
.nav_vmegamenu #pt_menu_link ul li a:hover,
.nav_vmegamenu div.megamenu.act {
background-color: #00adee;
color: #fff;
}
.nav_vmegamenu div.megamenu .level-top a:hover .fa-angle-down:before,
.nav_vmegamenu div.megamenu.active .level-top a .fa:before,
.nav_vmegamenu div.megamenu.act .level-top a .fa:before{
color: #fff;
}
.nav_vmegamenu div.dropdown a {
display:block;
line-height: 30px;
}
.nav_vmegamenu .itemMenu h4.level1,
.nav_vmegamenu .itemMenu a.level1{
font-size: 14px;
text-transform: uppercase;
font-weight: bold;
color: #555555;
font-family: Montserrat;
}
.megamenu .itemMenu a.level2:before,
.megamenu .itemMenu a.level3:before,
.megamenu .itemMenu a.level4:before{
content:"\f111";
margin-right: 10px;
color: #bababa;
font-family: FontAwesome;
font-size: 5px;
position: relative;
top: -2px;
}
.megamenu .itemMenu a:hover:before {
content: "";
margin: 0;
}
.megamenu .itemMenu a.level2:hover,
.megamenu .itemMenu a.level3:hover,
.megamenu .itemMenu a.level4:hover {
padding-left: 10px;
}
.nav_vmegamenu .itemSubMenu a.itemMenuName {
color: #555555;
text-transform: none;
font-weight: normal;
font-size: 13px;
border-top: 1px solid #ededed;
line-height: 38px;
text-transform: capitalize;
}
.nav_vmegamenu div.column {
float:left;
width:200px; /* column width */
margin-right: 5px;
padding-right: 5px;
}
.nav_vmegamenu div.column.last {
border-right: 0 none;
margin-right: 0;
padding-right: 0;
width: 250px;
}
.nav_vmegamenu div.itemSubMenu {
padding-top: 5px;
margin-bottom: 15px;
}
.nav_vmegamenu .block2{
float: left;
}
.nav_vmegamenu div.dropdown .block1{
overflow: hidden;
float: left;
}
.nav_vmegamenu div.dropdown .block1 .column{
margin-bottom: -99999px;
padding-bottom: 99999px;
}
.nav_vmegamenu div.dropdown .blockright img{
max-width: 100%;
}
.nav_vmegamenu div.megamenu .level-top p{
margin: 0;
padding: 0;
}
.nav_vmegamenu #pt_menu_link{
padding: 0;
}
.nav_vmegamenu #pt_menu_link .level-top ul li{
float: left;
list-style: none;
}
.nav_vmegamenu #pt_menu_link .level-top ul li a{
float: left;
padding: 0 10px;
display: block;
}
.nav_vmegamenu .clearBoth {
clear:both;
}
I am trying to achieve what this website has done.
www.johnlewis.co.uk
When you hover over the main menu, the whole page dims and only the main menu and its drop-down remain visible.
Please help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38279492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Call Range.Find method with MatchCase parameter I have some PowerShell code that is trying to find a particular string in a range of an Excel sheet. It uses the Find method from the Excel libraries. Here is the code:
$default = [Type]::Missing
$objRange.find($Search,$default,$default,$xlLookAt::xlPart,$xlSearchOrder::xlByRows,$xlSearchDirection::xlNext, $true ,$default,$default)
What does not work is the $true value I am passing to specify MatchCase mode. I have tried True, $true, -1, "true", etc. True works on the ISE command line, but the PowerShell ISE editor does not like it.
What do I pass for MatchCase parameter for True, or for False?
Here is the documentation of the Range.Find method.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21321436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Authentication failing for Modulr API - Python The API docs are here
The only code example is in Java, here
Every time I try to authenticate I get:
{
"error": "Authorization field missing, malformed or invalid"
}
I have been through the auth docs many times over and still no luck.
Here is my code:
import requests
import secrets
import codecs
from wsgiref.handlers import format_date_time
from datetime import datetime
from time import mktime
import hashlib
import hmac
import base64
import urllib.parse
key = '<API_KEY>'
secret = '<API_SECRET>'
# Getting current time
now = datetime.now()
stamp = mktime(now.timetuple())
# Formats time into this format --> Mon, 25 Jul 2016 16:36:07 GMT
formated_time = format_date_time(stamp)
# Generates a secure random string for the nonce
nonce = secrets.token_urlsafe(30)
# Combines date and nonce into a single string that will be signed
signature_string = 'date' + ':' + formated_time + '\n' + 'x-mod-nonce' + ':' + nonce
# Expected output example --> date: Mon, 25 Jul 2016 16:36:07 GMT\nx-mod-nonce: 28154b2-9c62b93cc22a-24c9e2-5536d7d
# Encodes secret and message into a format that can be signed
secret = bytes(secret, encoding='utf-8')
message = bytes(signature_string,encoding='utf-8')
# Signing process
digester = hmac.new(secret, message, hashlib.sha1)
# Converts to hex
hex_code = digester.hexdigest()
# Decodes the signed string in hex into base64
b64 = codecs.encode(codecs.decode(hex_code, 'hex'), 'base64').decode()
# Encodes the string so it is safe for URL
url_safe_code = urllib.parse.quote(b64,safe='')
# Adds the key and signed response
authorization = f'Signature keyId="{key}",algorithm="hmac-sha1",headers="date x-mod-nonce",signature="{url_safe_code}"'
account_id = 'A120BU48'
url = f'https://api-sandbox.modulrfinance.com/api-sandbox/accounts/{account_id}'
headers = {
'Authorization': authorization, # Authorisation header
'Date' : formated_time, # Date header
'x-mod-nonce': nonce, # Adds nonce
'accept': 'application/json',
}
response = requests.get(url,headers=headers)
print(response.text)
I am not sure where the process is going wrong. As far as I can tell, the signature is being generated correctly: when I plug in the test data from the authentication example, I get the expected string.
If you want to try with real API keys, register for access here
The docs for the API endpoint I am trying to call is here
A: The docs you linked have a space between the colon and the values.
signature_string = 'date' + ':' + formated_time + '\n' + 'x-mod-nonce' + ':' + nonce
should be:
signature_string = 'date' + ': ' + formated_time + '\n' + 'x-mod-nonce' + ': ' + nonce
or (simpler):
signature_string = 'date: ' + formated_time + '\n' + 'x-mod-nonce: ' + nonce
Update
I registered to see what is going on. I also ran your code on the example given in the documentation and saw that the signature is not entirely correct.
In addition to the change I suggested above, a further change was necessary.
After changing the line
b64 = codecs.encode(codecs.decode(hex_code, 'hex'), 'base64').decode()
to
b64 = codecs.encode(codecs.decode(hex_code, 'hex'), 'base64').decode().strip()
the signature of the example matched.
After this I was able to connect to the API with my own keys.
Here is the complete working code:
import codecs
import hashlib
import hmac
import secrets
import urllib.parse
from datetime import datetime
from time import mktime
from wsgiref.handlers import format_date_time
import requests
key = '<key>'
secret = '<secret>'
account_id = '<account id>'
url = f'https://api-sandbox.modulrfinance.com/api-sandbox/accounts/{account_id}'
# Getting current time
now = datetime.now()
stamp = mktime(now.timetuple())
# Formats time into this format --> Mon, 25 Jul 2016 16:36:07 GMT
formatted_time = format_date_time(stamp)
# Generates a secure random string for the nonce
nonce = secrets.token_urlsafe(30)
# Combines date and nonce into a single string that will be signed
signature_string = 'date' + ': ' + formatted_time + '\n' + 'x-mod-nonce' + ': ' + nonce
# Encodes secret and message into a format that can be signed
secret = bytes(secret, encoding='utf-8')
message = bytes(signature_string, encoding='utf-8')
# Signing process
digester = hmac.new(secret, message, hashlib.sha1)
# Converts to hex
hex_code = digester.hexdigest()
# Decodes the signed string in hex into base64
b64 = codecs.encode(codecs.decode(hex_code, 'hex'), 'base64').decode().strip()
# Encodes the string so it is safe for URL
url_safe_code = urllib.parse.quote(b64, safe='')
# Adds the key and signed response
authorization = f'Signature keyId="{key}",algorithm="hmac-sha1",headers="date x-mod-nonce",signature="{url_safe_code}"'
headers = {
'Authorization': authorization, # Authorisation header
'Date': formatted_time, # Date header
'x-mod-nonce': nonce, # Adds nonce
'accept': 'application/json',
}
response = requests.get(url, headers=headers)
print(response.text)
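The reason the added .strip() is needed can be shown in isolation: Python's 'base64' codec (unlike base64.b64encode) appends a trailing newline to its output, and that newline would otherwise end up inside the Authorization header. A minimal demonstration, using a made-up secret and message rather than real Modulr values:

```python
import base64
import codecs
import hashlib
import hmac

# Hypothetical values for illustration only -- not real Modulr credentials.
secret = b"demo-secret"
message = b"date: Mon, 25 Jul 2016 16:36:07 GMT\nx-mod-nonce: abc123"

digester = hmac.new(secret, message, hashlib.sha1)

# The route taken in the question: hex digest -> 'base64' codec.
# The 'base64' codec appends a trailing newline to its output.
via_codecs = codecs.encode(codecs.decode(digester.hexdigest(), "hex"), "base64").decode()

# The more direct route: base64-encode the raw digest, which adds no newline.
via_b64 = base64.b64encode(digester.digest()).decode()

print(via_codecs.endswith("\n"))      # True -- the stray newline
print(via_codecs.strip() == via_b64)  # True -- identical once stripped
```

Using base64.b64encode on digester.digest() directly would avoid the problem (and the hex round-trip) entirely.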
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56805662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Duplicate function name in Class View when defined in a namespace I'm using Visual Studio Professional 2013 Update 4.
I've defined a couple functions in my header file:
class CFileReader;
class CFileWriter;
namespace FileFixer
{
bool makeFixedFileName ( const wchar_t* inFile , wchar_t* outFile , size_t maxLen );
bool fixFile ( CFileReader& fileReader , CFileWriter& fileWriter );
}
And in the source file:
#include "FileReader.h"
#include "FileWriter.h"
namespace FileFixer
{
bool makeFixedFileName ( const wchar_t* inFile , wchar_t* outFile , size_t maxLen )
{
// Do something here ...
return true;
}
bool fixFile ( CFileReader& fileReader , CFileWriter& fileWriter )
{
// Do something more here ...
return true;
}
}
In the Class View pane the function names are repeated, but the first one has a small white arrow behind the 3D purple box icon, like this:
If I right-click on either of them the menu is the same, and if I double-click the behaviour is also identical. I didn't find anything in the online help here. What is this for?
A: Ideally, it should hardly matter to you. But I see the arrow in my code base for functions implemented outside the class declaration (i.e. in the implementation file). All inline methods are shown without the arrow symbol.
I am not sure why they are duplicated in your case. Maybe the namespace implementation has something to do with it.
But, again - why does it matter?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30138272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Invoking std::thread with pointer to freestanding function I tried to invoke std::thread perfect forwarding constructor (template< class Function, class... Args > explicit thread( Function&& f, Args&&... args );) with a pointer to function (NOT a pointer to member function), as shown in the following M(N)WE:
#include <thread>
#include <string>
static void foo(std::string query, int & x)
{
while(true);
}
int main() {
int i = 1;
auto thd = std::thread(&foo, std::string("bar"), i);
thd.join();
}
Live demo: https://godbolt.org/g/Cwi6wd
Why does the code not compile on GCC, Clang and MSVC, complaining about a missing overload of invoke (or similar names)?
A function argument is a pointer to a function, so it should be a Callable, right?
Please note: I know that using a lambda would solve the problem; I want to understand why the problem arises.
A: std::thread stores copies of the arguments it is passed, which, as Massimiliano Janes pointed out, are evaluated in the context of the caller into temporaries. For all intents and purposes, it's better to consider them const objects.
Since x is a non-const reference, it cannot bind to the argument being fed to it by the thread.
If you want x to refer to i, you need to use std::reference_wrapper.
#include <thread>
#include <string>
#include <functional>
static void foo(std::string , int & )
{
while(true);
}
int main() {
int i = 1;
auto thd = std::thread(foo, std::string("bar"), std::ref(i));
thd.join();
}
Live Example
The utility std::ref will create it on the fly.
A: The std::thread constructor performs a decay_copy on its arguments before invoking the callable, perfect-forwarding the result to it; in your foo, you're trying to bind an lvalue reference (int& x) to an rvalue reference (to the temporary), hence the error; either take an int, an int const&, or an int&& instead (or pass a reference wrapper).
A: Following on from StoryTeller's answer, a lambda may offer a clearer way to express this:
I think there are a couple of scenarios:
If we really do want to pass a reference to i in our outer scope:
auto thd = std::thread([&i]
{
foo("bar", i);
});
And if foo taking a reference just happens to be an historical accident:
auto thd = std::thread([]() mutable
{
int i = 1;
foo("bar", i);
});
In the second form, we have localised the variable i and reduced the risk that it will be read or written to outside the thread (which would be UB).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47070592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: App crashes on higher versions of android I have been creating an app, built with target SDK version 15 and minimum SDK version 8. Everything runs perfectly when I run it on devices running version 8. But when I try it on anything higher than version 10, it crashes with a NullPointerException.
The logcat gives me this:
10-06 19:23:12.927: E/AndroidRuntime(589): FATAL EXCEPTION: main
10-06 19:23:12.927: E/AndroidRuntime(589): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.fansheroid.facts.chicks/com.fansheroid.facts.chicks.MainActivity}: java.lang.NullPointerException
10-06 19:23:12.927: E/AndroidRuntime(589): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1955)
10-06 19:23:12.927: E/AndroidRuntime(589): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1980)
10-06 19:23:12.927: E/AndroidRuntime(589): at android.app.ActivityThread.access$600(ActivityThread.java:122)
10-06 19:23:12.927: E/AndroidRuntime(589): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1146)
10-06 19:23:12.927: E/AndroidRuntime(589): at android.os.Handler.dispatchMessage(Handler.java:99)
10-06 19:23:12.927: E/AndroidRuntime(589): at android.os.Looper.loop(Looper.java:137)
10-06 19:23:12.927: E/AndroidRuntime(589): at android.app.ActivityThread.main(ActivityThread.java:4340)
10-06 19:23:12.927: E/AndroidRuntime(589): at java.lang.reflect.Method.invokeNative(Native Method)
10-06 19:23:12.927: E/AndroidRuntime(589): at java.lang.reflect.Method.invoke(Method.java:511)
10-06 19:23:12.927: E/AndroidRuntime(589): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
10-06 19:23:12.927: E/AndroidRuntime(589): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
10-06 19:23:12.927: E/AndroidRuntime(589): at dalvik.system.NativeStart.main(Native Method)
10-06 19:23:12.927: E/AndroidRuntime(589): Caused by: java.lang.NullPointerException
10-06 19:23:12.927: E/AndroidRuntime(589): at org.json.JSONTokener.nextCleanInternal(JSONTokener.java:116)
10-06 19:23:12.927: E/AndroidRuntime(589): at org.json.JSONTokener.nextValue(JSONTokener.java:94)
10-06 19:23:12.927: E/AndroidRuntime(589): at org.json.JSONObject.<init>(JSONObject.java:154)
10-06 19:23:12.927: E/AndroidRuntime(589): at org.json.JSONObject.<init>(JSONObject.java:171)
10-06 19:23:12.927: E/AndroidRuntime(589): at com.fansheroid.facts.chicks.MainActivity.getTumblrs(MainActivity.java:156)
10-06 19:23:12.927: E/AndroidRuntime(589): at com.fansheroid.facts.chicks.MainActivity.onCreate(MainActivity.java:62)
10-06 19:23:12.927: E/AndroidRuntime(589): at android.app.Activity.performCreate(Activity.java:4465)
10-06 19:23:12.927: E/AndroidRuntime(589): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1049)
10-06 19:23:12.927: E/AndroidRuntime(589): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1919)
10-06 19:23:12.927: E/AndroidRuntime(589): ... 11 more
I have been trying to figure out the problem for the past two days, but it just doesn't make sense to me.
EDIT ADDED CODE
public ArrayList<Tumblr> getTumblrs() throws ClientProtocolException,
IOException, JSONException {
String searchUrl = "http://api.tumblr.com/v2/blog/factsandchicks.com/posts?api_key=API_KEY";
ArrayList<Tumblr> tumblrs = new ArrayList<Tumblr>();
HttpClient client = new DefaultHttpClient();
HttpGet get = new HttpGet(searchUrl);
ResponseHandler<String> responseHandler = new BasicResponseHandler();
String responseBody = null;
try {
responseBody = client.execute(get, responseHandler);
} catch (Exception ex) {
ex.printStackTrace();
}
JSONObject jsonObject = new JSONObject(responseBody);
JSONArray posts = jsonObject.getJSONObject("response").getJSONArray(
"posts");
for (int i = 0; i < posts.length(); i++) {
JSONArray photos = posts.getJSONObject(i).getJSONArray("photos");
for (int j = 0; j < photos.length(); j++) {
JSONObject photo = photos.getJSONObject(j);
String url = photo.getJSONArray("alt_sizes").getJSONObject(0)
.getString("url");
Tumblr tumblr = new Tumblr(url);
tumblrs.add(tumblr);
}
}
return tumblrs;
}
Line 156:
JSONObject jsonObject = new JSONObject(responseBody);
A: Ah, look at this:
E/AndroidRuntime(589): Caused by: java.lang.NullPointerException 10-06 19:23:12.927:
E/AndroidRuntime(589): at org.json.JSONTokener.nextCleanInternal(JSONTokener.java:116) 10-06 19:23:12.927:
E/AndroidRuntime(589): at org.json.JSONTokener.nextValue(JSONTokener.java:94) 10-06 19:23:12.927:
E/AndroidRuntime(589): at org.json.JSONObject.(JSONObject.java:154) 10-06 19:23:12.927:
E/AndroidRuntime(589): at org.json.JSONObject.(JSONObject.java:171) 10-06 19:23:12.927:
E/AndroidRuntime(589): at com.fansheroid.facts.chicks.MainActivity.getTumblrs(MainActivity.java:156) 10-06 19:23:12.927:
E/AndroidRuntime(589): at com.fansheroid.facts.chicks.MainActivity.onCreate(MainActivity.java:62)
It basically says the JSON parser fails: you're probably passing NULL on line 156 of MainActivity.java. You should check the value at that line and see if your device upgrade hasn't somehow wiped out a value or failed to retrieve a value correctly.
A: try {
responseBody = client.execute(get, responseHandler);
} catch (Exception ex) {
ex.printStackTrace();
}
That section of code is where your response is failing. You assign null to responseBody; chances are that your call there is failing, so responseBody is still null when it hits line 156.
Try looking in your debug output for the ex.printStackTrace(); output to see what is going on.
A: Solved the problem by using HttpResponse and HttpEntity instead of ResponseHandler.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12763126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Android charting library with inbuilt support to plot graph for days, weeks, months and years break down from input data Is there any charting library for Android that has built-in support to plot graphs for a days, weeks, months and years breakdown of input data?
How can I develop a graph which looks like this?
A: Looks like no library has built-in support to show data grouped by day, week, month and year, etc. So we are doing it ourselves.
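The "doing it ourselves" part is mostly date bucketing; the charting library only receives the pre-grouped totals. A language-agnostic sketch of the idea (shown in Python here with made-up sample data; on Android the same logic would live in Java/Kotlin):

```python
from collections import defaultdict
from datetime import date

# Made-up sample data: (date, value) pairs as they might come from a database.
samples = [
    (date(2022, 5, 30), 10),
    (date(2022, 5, 31), 5),
    (date(2022, 6, 1), 7),
    (date(2022, 6, 2), 3),
]

def group_by(samples, key):
    """Sum values into buckets produced by key(date)."""
    buckets = defaultdict(int)
    for d, v in samples:
        buckets[key(d)] += v
    return dict(buckets)

by_day = group_by(samples, lambda d: d.isoformat())
by_week = group_by(samples, lambda d: "%d-W%02d" % d.isocalendar()[:2])
by_month = group_by(samples, lambda d: d.strftime("%Y-%m"))

print(by_month)  # {'2022-05': 15, '2022-06': 10}
```

The chart then just plots whichever of the three dictionaries matches the selected tab (day/week/month).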
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72426937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Byte buffer transfer via UDP Can you provide an example of a byte buffer transferred between two java classes via UDP datagram?
A: How's this?
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
public class Server {
public static void main(String[] args) throws IOException {
DatagramSocket socket = new DatagramSocket(new InetSocketAddress(5000));
byte[] message = new byte[512];
DatagramPacket packet = new DatagramPacket(message, message.length);
socket.receive(packet);
System.out.println(new String(packet.getData(), packet.getOffset(), packet.getLength()));
}
}
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
public class Client {
public static void main(String[] args) throws IOException {
DatagramSocket socket = new DatagramSocket();
socket.connect(new InetSocketAddress("localhost", 5000));
byte[] message = "Oh Hai!".getBytes();
DatagramPacket packet = new DatagramPacket(message, message.length);
socket.send(packet);
}
}
A: @none
The DatagramSocket classes sure need a polish-up; DatagramChannel is slightly better for clients, but confusing for server programming. For example:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
public class Client {
public static void main(String[] args) throws IOException {
DatagramChannel channel = DatagramChannel.open();
ByteBuffer buffer = ByteBuffer.wrap("Oh Hai!".getBytes());
channel.send(buffer, new InetSocketAddress("localhost", 5000));
}
}
Bring on JSR-203 I say
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Query takes almost two seconds but matches only two rows - why isn't the index helping? Table:
CREATE TABLE `Alarms` (
`AlarmId` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`DeviceId` BINARY(16) NOT NULL,
`Code` BIGINT(20) UNSIGNED NOT NULL,
`Ended` TINYINT(1) NOT NULL DEFAULT '0',
`NaturalEnd` TINYINT(1) NOT NULL DEFAULT '0',
`Pinned` TINYINT(1) NOT NULL DEFAULT '0',
`Acknowledged` TINYINT(1) NOT NULL DEFAULT '0',
`StartedAt` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
`EndedAt` TIMESTAMP NULL DEFAULT NULL,
`MarkedForDeletion` TINYINT(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`AlarmId`),
KEY `Key1` (`Ended`,`Acknowledged`),
KEY `Key2` (`Pinned`),
KEY `Key3` (`DeviceId`,`Pinned`),
KEY `Key4` (`DeviceId`,`StartedAt`,`EndedAt`),
KEY `Key5` (`DeviceId`,`Ended`,`EndedAt`),
KEY `Key6` (`MarkedForDeletion`),
KEY `KeyB` (`MarkedForDeletion`,`DeviceId`,`StartedAt`,`EndedAt`,`Acknowledged`,`Pinned`)
) ENGINE=INNODB;
It currently has about three million rows in it.
Query:
SELECT
COUNT(`AlarmId`) AS `n`
FROM `Alarms`
WHERE `StartedAt` < FROM_UNIXTIME(1519101900)
AND (`EndedAt` IS NULL OR `EndedAt` > FROM_UNIXTIME(1519101900))
AND `DeviceId` = UNHEX('00030000000000000000000000000000')
AND `MarkedForDeletion` = FALSE
AND (
(`Alarms`.`EndedAt` IS NULL AND `Alarms`.`Acknowledged` = FALSE)
OR ( `Alarms`.`EndedAt` IS NOT NULL AND `Alarms`.`Pinned` = TRUE)
)
Query plan:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE Alarms range Key2,Key3,Key4,Key5,Key6,KeyB KeyB 21 1574778 Using where; Using index
Elapsed time: 1,763,222μs
In this particular case the query (correctly) doesn't even match many rows (the result is n = 2).
Taking what I learnt from working with index merges (though I still haven't got that right), I tried reorganising the conditions a bit (the original was generated by some C++, based on input conditions, hence the strange operator distribution):
SELECT COUNT(`AlarmId`) AS `n`
FROM `Alarms`
WHERE
(
`EndedAt` IS NULL
AND `Acknowledged` = FALSE
AND `StartedAt` < FROM_UNIXTIME(1519101900)
AND `MarkedForDeletion` = FALSE
AND `DeviceId` = UNHEX('00030000000000000000000000000000')
) OR (
`EndedAt` > FROM_UNIXTIME(1519101900)
AND `Pinned` = TRUE
AND `StartedAt` < FROM_UNIXTIME(1519101900)
AND `MarkedForDeletion` = FALSE
AND `DeviceId` = UNHEX('00030000000000000000000000000000')
);
…but the result is the same.
So why does it take so long? How can I modify it / the indexes to make it work instantly?
A: I think the problem is that I was trying to use a range condition halfway through the index.
I added a key on:
(`MarkedForDeletion`,`DeviceId`,`Acknowledged`,`Ended`,`StartedAt`)
Then rewrote the query to this:
SELECT COUNT(`AlarmId`) AS `n`
FROM `Alarms`
WHERE
(
`Ended` = FALSE
AND `Acknowledged` = FALSE
AND `StartedAt` < FROM_UNIXTIME(1519101900)
AND `MarkedForDeletion` = FALSE
AND `DeviceId` = UNHEX('00030000000000000000000000000000')
) OR (
`EndedAt` > FROM_UNIXTIME(1519101900)
AND `Pinned` = TRUE
AND `StartedAt` < FROM_UNIXTIME(1519101900)
AND `MarkedForDeletion` = FALSE
AND `DeviceId` = UNHEX('00030000000000000000000000000000')
);
Now I get an index merge and the query is instant.
A:
* OR is notoriously hard to optimize.
* MySQL almost never uses two indexes in a single query.
To avoid both of those, turn the OR into a UNION. Each SELECT can use its own index. So, build an optimal INDEX for each.
Actually, since you are only doing COUNT, you may as well evaluate two separate counts and add them.
SELECT ( SELECT COUNT(*)
FROM `Alarms`
WHERE `EndedAt` IS NULL
AND `Acknowledged` = FALSE
AND `StartedAt` < FROM_UNIXTIME(1519101900)
AND `MarkedForDeletion` = FALSE
AND `DeviceId` = UNHEX('00030000000000000000000000000000' )
) +
( SELECT COUNT(*)
FROM `Alarms`
WHERE `EndedAt` > FROM_UNIXTIME(1519101900)
AND `Pinned` = TRUE
AND `StartedAt` < FROM_UNIXTIME(1519101900)
AND `MarkedForDeletion` = FALSE
AND `DeviceId` = UNHEX('00030000000000000000000000000000')
) AS `n`;
INDEX(DeviceId, Acknowledged, MarkedForDeletion, EndedAt, StartedAt) -- for first
INDEX(DeviceId, Pinned, MarkedForDeletion, EndedAt, StartedAt) -- for second
INDEX(DeviceId, Pinned, MarkedForDeletion, StartedAt, EndedAt) -- for second
Well, that won't work if there is overlap. So, let's go back to the UNION pattern:
SELECT COUNT(*) AS `n`
FROM
(
( SELECT AlarmId
FROM `Alarms`
WHERE `EndedAt` IS NULL
AND `Acknowledged` = FALSE
AND `StartedAt` < FROM_UNIXTIME(1519101900)
AND `MarkedForDeletion` = FALSE
AND `DeviceId` = UNHEX('00030000000000000000000000000000')
)
UNION DISTINCT
( SELECT AlarmId
FROM `Alarms`
WHERE `EndedAt` > FROM_UNIXTIME(1519101900)
AND `Pinned` = TRUE
AND `StartedAt` < FROM_UNIXTIME(1519101900)
AND `MarkedForDeletion` = FALSE
AND `DeviceId` = UNHEX('00030000000000000000000000000000')
)
);
Again, add those indexes.
The first few columns in each INDEX can be in any order, since they are tested with = (or IS NULL). The last one or two are "range" tests. Only the first range will be used for filtering, but I included the other column so that the index would be "covering".
My formulations may be better than "index merge".
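The overlap caveat above can be demonstrated concretely. A small sketch using SQLite from Python (not MySQL, and the table is a made-up miniature, but the set semantics of UNION are the same): adding two COUNTs double-counts rows matching both predicates, while COUNT over a UNION (which is DISTINCT by default) does not.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE alarms (id INTEGER PRIMARY KEY, acked INT, pinned INT)")
con.executemany(
    "INSERT INTO alarms VALUES (?, ?, ?)",
    [(1, 0, 0), (2, 0, 1), (3, 1, 1)],  # row 2 matches BOTH predicates
)

# Form 1: add two independent counts -- double-counts the overlap.
added = con.execute(
    "SELECT (SELECT COUNT(*) FROM alarms WHERE acked = 0)"
    " + (SELECT COUNT(*) FROM alarms WHERE pinned = 1)"
).fetchone()[0]

# Form 2: COUNT over a UNION -- the UNION deduplicates by id first.
unioned = con.execute(
    "SELECT COUNT(*) FROM ("
    "  SELECT id FROM alarms WHERE acked = 0"
    "  UNION"
    "  SELECT id FROM alarms WHERE pinned = 1)"
).fetchone()[0]

print(added, unioned)  # 4 3 -- row 2 was counted twice in the first form
```

In the question's actual query the two predicates (EndedAt IS NULL vs. EndedAt > ...) are mutually exclusive, which is why the added-counts form happens to be safe there.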
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48890660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Ajax post works on localhost, but not on nginx server I was experimenting with building a small CMS for a project. Part of it is that when you click the save button, your current edits are saved.
$('.save-button').click(function() {
var speisekarte_content = $(".speisekarte-content").html();
console.log(speisekarte_content);
var ajaxurl = 'save.php',
data = {
'content': speisekarte_content
};
$.post(ajaxurl, data, function(response) {
alert("action performed successfully");
});
});
The save.php looks like this:
<?php
$post_data = $_POST['content'];
if (function_exists('fopen')) {
if (!empty($post_data)) {
$filename = 'speisekarte-content.php';
$handle = fopen($filename, "w");
fwrite($handle, $post_data);
fclose($handle);
echo $file;
}
};
?>
So it basically just writes the content that should be saved to a file called speisekarte-content.php … This worked perfectly on localhost, until I uploaded it to my nginx server, where it stopped working as expected.
This is the error log that I found in the javascript console:
POST
http://www.myurl.com/editable/save.php net::ERR_TIMED_OUT
k.cors.a.crossDomain.send @ jquery.min.js:4
n.extend.ajax @ jquery.min.js:4
n.(anonymous function) @ jquery.min.js:4
(anonymous function) @ main.js:99
The nginx error logs are the following
2015/06/22 08:43:45 [error] 6804#0: *63817 FastCGI sent in stderr: "PHP message: PHP Notice: Undefined index: content in /var/www/myurl.com/html/editable/save.php on line 2
PHP message: PHP Warning: fopen(autosave.php): failed to open stream: Permission denied in /var/www/myurl.com/html/editable/save.php on line 27
PHP message: PHP Warning: fwrite() expects parameter 1 to be resource, boolean given in /var/www/myurl.com/html/editable/save.php on line 28
PHP message: PHP Warning: fclose() expects parameter 1 to be resource, boolean given in /var/www/myurl.com/html/editable/save.php on line 29
PHP message: PHP Notice: Undefined variable: file in /var/www/myurl.com/html/editable/save.php on line 30" while reading response header from upstream, client: 176.0.1.54, server: myurl.com, request: "POST /editable/save.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.myurl.com", referrer: "http://www.myurl.com/editable/"
Is this related to nginx or my code?
A: Your log says that your application doesn't have the right to write in its current location.
You should check the permissions on your files.
Can you post the output of an ls -al in your app's folder?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30980101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Uncaught RangeError: noUiSlider I'm trying to add a few sliders to a website and I got stuck. I've downloaded the noUiSlider script directly from their website and before that, I added jQuery 1.8.0 to my website. This is what I produced:
index.html
<head>
...
<link rel="stylesheet" href="Content/noUiSlider/jquery.nouislider.css" type="text/css">
<script src="Scripts/jquery-1.8.0.min.js"></script>
<script src="Scripts/jquery.nouislider.js"></script>
<script src="Scripts/website.js"></script>
...
</head>
<body>
...
<div id="sslider" class="noUi-target noUi-ltr noUi-horizontal noUi-background"></div>
...
</body>
website.js
(function () {
$("#sslider").noUiSlider({
start: 5,
range: {
'min': 1,
'max': 80
}
});
})();
When I debug in Chrome, I'm catching exactly this:
Can you help me?
A: I copied your example into a test file, and it works fine. I also can't see how the error message in your example could be generated by the current version of noUiSlider, so I'd suggest using the latest version of noUiSlider and the latest version of jQuery.
It would also be a good idea to only run this JS after the page has loaded, like this:
$(function() {
$("#sslider").noUiSlider({
start: 5,
range: {
'min': 1,
'max': 80
}
});
});
A: Ok, problem solved. I had downloaded jquery.nouislider.js from NuGet (in Visual Studio 2013), and it provided an older or broken file. I thought the file came from the website, but then I browsed NuGet and noticed that I had already downloaded it from there.
The problem is not with the code but with the file. Lesson: before asking a question, compare the official files with those you already have installed.
A: Try to follow the official docs:
$("#slider").noUiSlider({
start: [5],
connect: true,
range: {
'min': 1,
'max': 80
}
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22906791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Settings Array in C++ My aim is to set up a data structure to store the settings of my application.
In PHP I would just write...
$settings = array(
"Fullscreen" => true,
"Width" => 1680,
"Height" => 1050,
"Title" => "My Application",
);
Now I tried to create a similar structure in C++, but it can't handle different data types yet. By the way, if there is a better way of storing such settings data, please let me know.
struct Setting{ string Key, Value; };
Setting Settings[] = {
{"Fullscreen", "true"}, // it's acceptable to store the boolean as a string
{"Width", "1680"}, // it's not for integers as I want to use them later
{"Height", 1050}, // would be nice but of course an error
{"Title", "My Application"} // strings aren't the problem with this implementation
};
How can I model a structure of an associative array with flexible datatypes?
A: An associative data structure with varying data types is exactly what a struct is...
struct SettingsType
{
bool Fullscreen;
int Width;
int Height;
std::string Title;
} Settings = { true, 1680, 1050, "My Application" };
Now, maybe you want some sort of reflection because the field names will appear in a configuration file? Something like:
SettingsSerializer x[] = { { "Fullscreen", &SettingsType::Fullscreen },
{ "Width", &SettingsType::Width },
{ "Height", &SettingsType::Height },
{ "Title", &SettingsType::Title } };
will get you there, as long as you give SettingsSerializer an overloaded constructor with different behavior depending on the pointer-to-member type.
A: C++ is a strongly typed language. The containers hold exactly one type of object so by default what you are trying to do cannot be done with only standard C++.
On the other hand, you can use libraries like boost::variant or boost::any that provide types that can hold one of multiple (or any) type, and then use a container of that type in your application.
Rather than an array, you can use std::map to map from the name of the setting to the value:
std::map<std::string, boost::variant<bool,int,std::string> >
A: #include <map>
#include <string>
std::map<std::string,std::string> settings;
settings.insert({"Fullscreen", "true"});
settings.insert({"Width", "1680"});
settings.insert({"Height", "1050"});
settings.insert({"Title", "My Application"});
Could be one way of doing it if you want to stick with the STL.
A: One solution could be to define the ISetting interface like:
class ISetting{
public:
virtual void save( IStream* stream ) = 0;
virtual ~ISetting(){}
};
after that you can use a map in order to store your settings:
std::map< std::string, ISetting* > settings;
One example of the boolean setting is:
class BooleanSetting : public ISetting{
private:
bool m_value;
public:
BooleanSetting(bool value){
m_value = value;
}
void save( IStream* stream ) {
(*stream) << m_value;
}
virtual ~BooleanSetting(){}
};
in the end:
settings["booleansetting"]=new BooleanSetting(true);
settings["someothersetting"]=new SomeOtherSetting("something");
A: One possible solution is to create a Settings class which can look something like
class Settings {
public:
Settings(std::string filename);
bool getFullscreen() { return Fullscreen; }
// ...etc.
private:
bool Fullscreen;
int Width;
int Height;
std::string Title;
};
This assumes that the settings are stored in some file. The constructor can be implemented to read the settings using whatever format you choose. Of course, this has the disadvantage that you have to modify the class to add any other settings.
A: To answer your question, you could use boost::any or boost::variant to achieve what you would like. I think variant is better to start with.
typedef boost::variant<
std::string,
int,
bool
> SettingVariant;
std::map<std::string, SettingVariant> settings;
To not answer your question, using typeless containers isn't what I would recommend. Strong typing gives you a way to structure code in a way that the compiler gives you errors when you do something subtly wrong.
struct ResolutionSettings {
bool full_screen;
size_t width;
size_t height;
std::string title;
};
Then just a simple free function to get the default settings.
ResolutionSettings GetDefaultResolutionSettings() {
ResolutionSettings settings;
settings.full_screen = true;
settings.width = 800;
settings.height = 600;
settings.title = "My Application";
return settings;
}
If you're reading settings off disk, then that is a slightly different problem. I would still write strongly typed settings structs, and have your weakly typed file reader use boost::lexical_cast to validate that the string conversion worked.
ResolutionSettings settings;
std::string str = "800";
settings.width = boost::lexical_cast<size_t>(str);
You can wrap all the disk reading logic in another function that isn't coupled with any of the other functionality.
ResolutionSettings GetResolutionSettingsFromDisk();
I think this is the most direct and easiest to maintain (especially if you're not super comfortable in C++).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12116549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How to loop in reactJS in table cell How should I loop roles for each project in a row?
I want to set roles in each cell for each project. I currently have a problem with syntax as this is my first time developing in reactJS.
render() {
...
<Table
resourceName="projects"
columns={[
{
Header: 'Name',
accessor: 'name',
},
{
Header: 'Client',
accessor: 'client.name',
},
{
Header: 'Type',
accessor: 'type',
},
{
id: 'roles',
Header: 'Roles',
accessor: 'roles',
Cell: (props: { value: Array<{name: string}> }) => {
console.log(args);
return ',';
},
},
]}
/>
</Box>
);
}
A: You should do it in accessor, not in Cell.
accessor: d => d.roles.map(role => role.name).join(', ')
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55594136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Total number of Subsets I am getting curious about this question..
How many ways can you choose a subset from the set P = {1, 2, 3, .. n}? The subset S should meet the following condition:
When you choose x (x ∈ P, x is an element of the set P) to create S, you cannot choose a * x and b * x for S.
Constraints :
1 <= n <= 1000
2 <= a < b <= n
b % a != 0 ( b is not divisible by a)
Example :
n = 3 , a = 2, b = 3
so the total number of subsets is 5, i.e., {}, {1}, {2}, {3}, {2, 3},
because if a particular subset contains 1, then 1*2 = 2 and 1*3 = 3 can't be there,
so {1,2}, {1,3} and {1,2,3} can't be there.
A: Updated
This is related to sequence A051026 : Number of primitive subsequences of {1, 2, ..., n} in OEIS, the Online Encyclopedia of Integer Sequences.
I don't think there is any easy way to calculate the terms. Even the recursive computations are not trivial, except when n is prime where:
a(n) = 2 * a(n-1) - 1
Both the problem here and "A051026" can be thought of subproblems of a generalization of the above sequence. "A051026" is the instance with (a,b,..) = (2,3,4,5...), e.g. "all the integers >= 2".
A: I believe it will be easier to calculate the complement, that is, the number of subsets of S that are not allowed. These are the subsets of S that contain at least one forbidden pair (some x together with a*x or b*x). After you calculate that number M', simply subtract it from the total number of subsets of S, which is 2^n.
Now to calculate the number of subsets of S that are not allowed you will have to apply the inclusion-exclusion principle. The solution is not very easy to implement but I don't think there is an alternative approach.
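For small n the condition can simply be brute-forced, which is handy for checking the worked example above (and for validating any inclusion-exclusion solution on small inputs). A sketch in Python; the function name is my own, not from the question:

```python
from itertools import combinations

def count_valid_subsets(n, a, b):
    """Count subsets S of {1..n} where no x in S has a*x or b*x also in S.

    Exponential brute force; only usable for small n, but good enough to
    verify the worked example (n=3, a=2, b=3 gives 5 subsets).
    """
    items = range(1, n + 1)
    total = 0
    for r in range(n + 1):
        for subset in combinations(items, r):
            s = set(subset)
            # a subset is valid if no chosen x drags a*x or b*x along
            if all(a * x not in s and b * x not in s for x in s):
                total += 1
    return total

print(count_valid_subsets(3, 2, 3))  # -> 5: {}, {1}, {2}, {3}, {2, 3}
```

A real solution for n up to 1000 would still need inclusion-exclusion or dynamic programming, but a checker like this makes it easy to test that solution against small cases.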
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15455034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How Do I Get the Instantiated DataTemplate object for an item displayed in a ContentControl? I have a ContentControl whose content is being set in a ViewModel through binding. I have a couple of things that I would like to programmatically set on the view/template that is being applied to the data object. If I understand it right, the "Template" property is for the ContentControl, not the actual Content of the ContentControl. How would I access the actual view object when WPF creates it and applies it? For the example below, I want to make an adjustment to the vw:InfoType1View or vw:InfoType2View object when it gets instantiated.
<ContentControl Name="mainContentArea" Content="{Binding CurrentInfo}">
<ContentControl.Resources>
<ResourceDictionary>
<DataTemplate DataType="{x:Type vm:InfoType1}">
<vw:InfoType1View />
</DataTemplate>
</ResourceDictionary>
<ResourceDictionary>
<DataTemplate DataType="{x:Type vm:InfoType2}">
<vw:InfoType2View />
</DataTemplate>
</ResourceDictionary>
</ContentControl.Resources>
</ContentControl>
A: I am assuming you are using the MVVM pattern? In this case, you really shouldn't be programmatically making changes to your view!
Anyhow, you could handle the Loaded event, or the LayoutUpdated event (hard to determine which you need without more code). You can then navigate the visual tree, using my Linq-to-VisualTree for example, to navigate the elements in the view that was constructed - and make your changes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7422735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Memory Leak VBA - Arrays with Dictionaries of Dictionaries I'm running out of memory (I have 16GB) in a script I am running. Here is a little background:
I am generating an array that is ~150k rows by 8 column as the basis for my calculations.
Then, for each "row" of values (nothing is ever written to the worksheet), I am making a LOT of calculations, which I am storing in various data structures (I have 3 main structures at this point). One of these structures is composed of the following:
A Variant Array with 150k Dictionaries. Each Dictionary has ~4 Key-Item pairs. Each item in each dictionary is another dictionary, containing exactly 9 key-item pairs. All keys are Strings and all items are Doubles.
Essentially, this is the same thing as:
Dim Array(1 to 150000, 1 to 4, 1 to 9) as Double
Except I want to be able to reference the values with text strings -- hence the dictionaries.
An example would be
Value = Array(2401)("Key1")("Key2")
I wouldn't think this would be too much for VBA to handle -- we're talking 150,000 * 4 * 9 individual doubles = 5.4M doubles for each of the 3 main data structures. I don't have that much experience with programming and memory management, but surely that wouldn't consume 16GB of memory!
As such, I'm thinking there's a problem in how I'm generating these data structures that is causing a memory leak somewhere.
Essentially the loop looks like this
Dim TempDict1 As Dictionary
Dim TempDict2 As Dictionary
Dim FinalArray() As Variant
ReDim FinalArray(1 To 150000) As Variant
Dim Calculations As Double
For i = 1 To 150000
Set TempDict1 = New Dictionary
For j = 1 To 5
Set TempDict2 = New Dictionary
For k = 1 To 9
Calculations = 2 * 2
TempDict2.Add Key:=KeyK, Item:=Calculations
Next k
TempDict1.Add Key:=KeyJ, Item:=TempDict2
' TempDict2.RemoveAll (This causes an error)
Next j
Set FinalArray(i) = TempDict1
' TempDict1.RemoveAll (This causes an error)
Next i
What am I doing wrong here? I've tried destroying the temporary dictionaries after adding them to the parent item, but that actually gives me a type error.
UPDATE: I've tried setting the temporary dictionaries to nothing instead of removing all. This doesn't cause any errors, but it still consumes a lot of memory. With 37k iterations, it consumes 8.4GB of memory.
A: I'm not sure if this is some sort of bug or memory leak, but the fact is that this is an extremely memory-inefficient way to store data.
As such, I've edited the code to use 3 dimensional arrays with separate dictionaries to associate the indices of arrays with text strings, which is working great.
I was able to store 8 different 3D arrays (150k by 8 by 13) and 7 different 2D arrays (150k by 13), both of doubles. Overhead doesn't seem to be too high. Given a size of 8 bytes per double, theoretical memory usage is 1.11GB, which isn't too bad.
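As a quick arithmetic check of that estimate (a Python sketch; the array counts and dimensions are taken from the sentence above):

```python
# Sanity-check the memory estimate: eight 3D arrays of 150k x 8 x 13 doubles
# plus seven 2D arrays of 150k x 13 doubles, at 8 bytes per double.
BYTES_PER_DOUBLE = 8
rows = 150_000

doubles_3d = 8 * rows * 8 * 13   # eight 3D arrays
doubles_2d = 7 * rows * 13       # seven 2D arrays
total_bytes = (doubles_3d + doubles_2d) * BYTES_PER_DOUBLE

print(round(total_bytes / 1e9, 2))  # -> 1.11 (decimal gigabytes)
```

This confirms the ~1.11GB figure, ignoring any per-array overhead that VBA adds on top of the raw doubles.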
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29843103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: SwiftUI List based on @FetchedRequest and filtering computed properties crashes I have such code that has @FetchRequest loading Contacts then I have computed properties doing filter based on @State variable
@FetchRequest var fetchRequest: FetchedResults<Contact>
private var contacts : Array<Contact> {
Array(fetchRequest).filter { contact in
if self.sectionSelection == 1 {
return contact.type == "person"
} else if self.sectionSelection == 2 {
return contact.type == "company"
} else {
return true
}
}
}
@State private var sectionSelection : Int = 0
But the List crashes with index out of range error!
List {
ForEach(0..<contacts.count) { i in
ZStack {
NavigationLink(destination: ContactDetails(contact: contacts[i])) {
| {
"language": "en",
"url": "https://stackoverflow.com/questions/59263830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Extra build/missing object files with header-tracking Makefile I have written a (GNU make) Makefile designed to perform automatic dependency tracking in header includes. Everything works great except that upon typing make a second time, the entire code base rebuilds. Only typing make the third time and successive times gives the message that nothing is to be done.
SRCDIR := src
INCDIR := inc
ifeq ($(DEBUG),1)
OBJDIR := debug_obj
BINDIR := debug_bin
else
OBJDIR := obj
BINDIR := bin
endif
BINS := prog1 prog2 prog3 prog4
SRCS := $(wildcard $(SRCDIR)/*.cpp)
OBJS := $(patsubst $(SRCDIR)/%,$(OBJDIR)/%,$(SRCS:.cpp=.o))
DEPS := $(OBJS:.o=.d)
CC := g++
COMMON_FLAGS := -Wall -Wextra -Werror -std=c++11 -pedantic
ifeq ($(DEBUG),1)
CXX_FLAGS := $(COMMON_FLAGS) -Og -g
else
CXX_FLAGS := $(COMMON_FLAGS) -O3 -D NDEBUG
endif
all: $(addprefix $(BINDIR)/,$(BINS)) | $(BINDIR)
$(OBJDIR) $(BINDIR):
@ mkdir -p $@;
$(BINDIR)/%: $(OBJDIR)/%.o | $(BINDIR)
$(CC) $(CPP_FLAGS) $< -o $@;
$(OBJDIR)/%.o: $(SRCDIR)/%.cpp | $(OBJDIR)
$(CC) $(CPP_FLAGS) -MMD -MP -c $< -o $@;
-include $(DEPS)
.PHONY: all clean
clean:
- rm -f $(OBJS);
- rm -f $(DEPS);
- rm -f $(addprefix $(BINDIR)/,$(BINS));
- rmdir $(OBJDIR) $(BINDIR) 2> /dev/null || true
Clearly some dependency had changed, so I tried running make -n -d | grep 'newer' following the first invocation of make, which shows this:
Prerequisite `obj/prog1.o' is newer than target `bin/prog1'.
Prerequisite `obj/prog2.o' is newer than target `bin/prog2'.
Prerequisite `obj/prog3.o' is newer than target `bin/prog3'.
Prerequisite `obj/prog4.o' is newer than target `bin/prog4'.
And ls -la obj/* showed the existence of the dependency (*.d) files but not the object (*.o) files. I assume that this is related to how g++ -MMD -MP works, but despite the apparent absence of object files, binaries are present after the first make.
The answer to this question suggests that both are generated at the same time, and man g++ does not dispute this as far as I can tell.
I've read a couple other questions and answers related to automatic dependency tracking, but I don't see this issue arising. Why is this happening? Can you suggest a fix?
Update
A more careful look at the first invocation of make shows this unexpected (to me) line at the end:
rm obj/prog1.o obj/prog2.o obj/prog3.o obj/prog4.o
That answers one question but raises another.
Update
I also found this in the debugging output.
Considering target file `prog1'.
File `prog1' does not exist.
make: *** No rule to make target `prog1'. Stop.
No implicit rule found for `prog1'.
Finished prerequisites of target file `prog1'.
Must remake target `prog1'.
For which I note that prog1 is missing the bin/ prefix. Nothing explains why the first run removes the object files, but the second run leaves them, however. That seems to be at the heart of the issue.
A: make was treating the object files as intermediates and deleting them accordingly. Adding:
.SECONDARY: $(OBJS)
solved the problem. I do not know why it was doing this on the first invocation but not the second. Comments are welcome.
A: The reason that the .o files are not present is that they're considered intermediate files so make deletes them. However, that shouldn't cause any problems in your build, because as long as make can envision the intermediate file it will realize it doesn't need to be rebuilt if its prerequisites are older than its parents (in this case, as long as prog1 is newer than prog1.cpp for example).
I was not able to reproduce your experience with the second build rebuilding everything. More details will be needed. The output you showed is not interesting because that's just saying that make does NOT need to rebuild the .o file (it's newer than the prerequisite). You need to find the lines in the output that explain why make does need to rebuild the .o file. If you provide that info we may be able to help.
Just a couple of comments on your makefile: first, I don't think it's a good idea to force the mkdir rule to always succeed. If the mkdir fails you WANT your build to fail. Probably you did this so it would not be a problem if the directory already exists, but that's not needed because the mkdir -p invocation will never fail just because the directory exists (but it will fail if the directory can't be created for other reasons such as permissions). Also you can combine those into a single rule with multiple targets:
$(BINDIR) $(OBJDIR):
@mkdir -p $@
Next, you don't need the semicolons in your command lines and in fact, adding them will cause your builds to be slightly slower.
Finally, a small nit, but the correct order of options in the compile line is -c -o $@ $<; the source file is not (this is a common misconception) an argument to the -c option. The -c option, like -E, -s, etc. tells the compiler what output to create; in the case of -c it means compile into an object file. Those options do not take arguments. The filename is a separate argument.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16385498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to show only one category from the start jquery I don't know a lot of jQuery, I only know CSS and HTML but I'm guessing that what I need to do is pretty simple.
I grabbed a code somewhere and modified it to make a category filter.
Here's an image of what I've done.
Everything works fine.
The only thing I want to do is: when it loads at first, show only a specific category... Not all.
Here's the original code I grabbed:
(as mine has some spanish words and long svg paths which are not relevant)
(function($) {
'use strict';
var $filters = $('.filter [data-filter]'),
$boxes = $('.boxes [data-category]');
$filters.on('click', function(e) {
e.preventDefault();
var $this = $(this);
$filters.removeClass('active');
$this.addClass('active');
var $filterColor = $this.attr('data-filter');
if ($filterColor == 'all') {
$boxes.removeClass('is-animated')
.fadeOut().finish().promise().done(function() {
$boxes.each(function(i) {
$(this).addClass('is-animated').delay((i++) * 200).fadeIn();
});
});
} else {
$boxes.removeClass('is-animated')
.fadeOut().finish().promise().done(function() {
$boxes.filter('[data-category = "' + $filterColor + '"]').each(function(i) {
$(this).addClass('is-animated').delay((i++) * 200).fadeIn();
});
});
}
});
})(jQuery);
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
html {
font: 18px/1.65 sans-serif;
text-align: center;
}
li {
list-style-type: none;
}
a {
text-decoration: none;
display: block;
color: #333;
}
h2 {
color: #333;
padding: 10px 0;
}
.filter {
margin: 30px 0 10px;
}
.filter a {
display: inline-block;
padding: 10px;
border: 2px solid #333;
position: relative;
margin-right: 20px;
margin-bottom: 20px;
}
.boxes {
display: flex;
flex-wrap: wrap;
}
.boxes a {
width: 23%;
border: 2px solid #333;
margin: 0 1% 20px 1%;
line-height: 60px;
}
.all {
background: khaki;
}
.green {
background: lightgreen;
}
.blue {
background: lightblue;
}
.red {
background: lightcoral;
}
.filter a.active:before {
content: '';
position: absolute;
left: 0;
top: 0;
display: inline-block;
width: 0;
height: 0;
border-style: solid;
border-width: 15px 15px 0 0;
border-color: #333 transparent transparent transparent;
}
.is-animated {
animation: .6s zoom-in;
}
@keyframes zoom-in {
0% {
transform: scale(.1);
}
100% {
transform: none;
}
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="cta filter">
<a class="green" data-filter="green" href="#" role="button">Show Green Boxes</a>
<a class="blue" data-filter="blue" href="#" role="button">Show Blue Boxes</a>
<a class="red" data-filter="red" href="#" role="button">Show Red Boxes</a>
</div>
<div class="boxes">
<a class="red" data-category="red" href="#">Box1</a>
<a class="green" data-category="green" href="#">Box2</a>
<a class="blue" data-category="blue" href="#">Box3</a>
<a class="green" data-category="green" href="#">Box4</a>
<a class="red" data-category="red" href="#">Box5</a>
<a class="green" data-category="green" href="#">Box6</a>
<a class="blue" data-category="blue" href="#">Box7</a>
<a class="red" data-category="red" href="#">Box8</a>
<a class="green" data-category="green" href="#">Box9</a>
<a class="blue" data-category="blue" href="#">Box10</a>
<a class="red" data-category="red" href="#">Box11</a>
<a class="green" data-category="green" href="#">Box12</a>
<a class="blue" data-category="blue" href="#">Box13</a>
<a class="green" data-category="green" href="#">Box14</a>
<a class="red" data-category="red" href="#">Box15</a>
<a class="blue" data-category="blue" href="#">Box16</a>
</div>
A: I only added two lines to your minimal demo.
// Onload Show green ones:
$('a[class="green"]').click();
And it shows the green items onload.
(function($) {
'use strict';
var $filters = $('.filter [data-filter]'),
$boxes = $('.boxes [data-category]');
$filters.on('click', function(e) {
e.preventDefault();
var $this = $(this);
$filters.removeClass('active');
$this.addClass('active');
var $filterColor = $this.attr('data-filter');
if ($filterColor == 'all') {
$boxes.removeClass('is-animated')
.fadeOut().finish().promise().done(function() {
$boxes.each(function(i) {
$(this).addClass('is-animated').delay((i++) * 200).fadeIn();
});
});
} else {
$boxes.removeClass('is-animated')
.fadeOut().finish().promise().done(function() {
$boxes.filter('[data-category = "' + $filterColor + '"]').each(function(i) {
$(this).addClass('is-animated').delay((i++) * 200).fadeIn();
});
});
}
});
// Onload Show green ones:
$('a[class="green"]').click();
})(jQuery);
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
html {
font: 18px/1.65 sans-serif;
text-align: center;
}
li {
list-style-type: none;
}
a {
text-decoration: none;
display: block;
color: #333;
}
h2 {
color: #333;
padding: 10px 0;
}
.filter {
margin: 30px 0 10px;
}
.filter a {
display: inline-block;
padding: 10px;
border: 2px solid #333;
position: relative;
margin-right: 20px;
margin-bottom: 20px;
}
.boxes {
display: flex;
flex-wrap: wrap;
}
.boxes a {
width: 23%;
border: 2px solid #333;
margin: 0 1% 20px 1%;
line-height: 60px;
}
.all {
background: khaki;
}
.green {
background: lightgreen;
}
.blue {
background: lightblue;
}
.red {
background: lightcoral;
}
.filter a.active:before {
content: '';
position: absolute;
left: 0;
top: 0;
display: inline-block;
width: 0;
height: 0;
border-style: solid;
border-width: 15px 15px 0 0;
border-color: #333 transparent transparent transparent;
}
.is-animated {
animation: .6s zoom-in;
}
@keyframes zoom-in {
0% {
transform: scale(.1);
}
100% {
transform: none;
}
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="cta filter">
<a class="green" data-filter="green" href="#" role="button">Show Green Boxes</a>
<a class="blue" data-filter="blue" href="#" role="button">Show Blue Boxes</a>
<a class="red" data-filter="red" href="#" role="button">Show Red Boxes</a>
</div>
<div class="boxes">
<a class="red" data-category="red" href="#">Box1</a>
<a class="green" data-category="green" href="#">Box2</a>
<a class="blue" data-category="blue" href="#">Box3</a>
<a class="green" data-category="green" href="#">Box4</a>
<a class="red" data-category="red" href="#">Box5</a>
<a class="green" data-category="green" href="#">Box6</a>
<a class="blue" data-category="blue" href="#">Box7</a>
<a class="red" data-category="red" href="#">Box8</a>
<a class="green" data-category="green" href="#">Box9</a>
<a class="blue" data-category="blue" href="#">Box10</a>
<a class="red" data-category="red" href="#">Box11</a>
<a class="green" data-category="green" href="#">Box12</a>
<a class="blue" data-category="blue" href="#">Box13</a>
<a class="green" data-category="green" href="#">Box14</a>
<a class="red" data-category="red" href="#">Box15</a>
<a class="blue" data-category="blue" href="#">Box16</a>
</div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43219496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to solve "Socket read timed out" when using hikari connection pool I am developing an application using play framework (version 2.8.0), java(version 1.8) with an oracle database(version 12C).
There is at most one hit to the database in a day, and I am getting the below error.
java.sql.SQLRecoverableException: IO Error: Socket read timed out
at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:919)
at oracle.jdbc.driver.PhysicalConnection.close(PhysicalConnection.java:2005)
at com.zaxxer.hikari.pool.PoolBase.quietlyCloseConnection(PoolBase.java:138)
at com.zaxxer.hikari.pool.HikariPool.lambda$closeConnection$1(HikariPool.java:447)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Socket read timed out
at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:174)
at oracle.net.ns.NIOHeader.readHeaderBuffer(NIOHeader.java:82)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:139)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:101)
at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:80)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForReading(T4CMAREngineNIO.java:98)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:534)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:485)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
at oracle.jdbc.driver.T4C7Ocommoncall.doOLOGOFF(T4C7Ocommoncall.java:62)
at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:908)
... 6 common frames omitted
db {
default {
driver=oracle.jdbc.OracleDriver
url="jdbc:oracle:thin:@XXX.XXX.XXX.XX:XXXX/XXXXXXX"
username="XXXXXXXXX"
password="XXXXXXXXX"
hikaricp {
dataSource {
cachePrepStmts = true
prepStmtCacheSize = 250
prepStmtCacheSqlLimit = 2048
}
}
}
}
It seems to be caused by an inactive database connection. How can I solve this?
Please let me know if any other information is required.
A: You can enable TCP keepalive for JDBC, either by setting a driver directive or by adding "ENABLE=BROKEN" into the connection string.
* Usually Cisco/Juniper cuts off a TCP connection when it is inactive for more than one hour.
* The Linux kernel starts sending keepalive probes only after two hours (tcp_keepalive_time), so if you decide to turn TCP keepalive on, you will also need root access to change this kernel tunable to a lower value (10-15 minutes).
* Moreover, HikariCP should not keep any connection open for longer than 30 minutes by default.
So if your FW, Linux kernel and HikariCP all use default settings, then this error should not occur in your system.
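For the connection-string option mentioned above, the ENABLE=BROKEN keyword goes inside a full connect descriptor rather than the short host:port/service form. A sketch of what that might look like; the host, port, and service name here are placeholders, not values from the question:

```
jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myservice)))
```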
See HikariCP official documentation
maxLifetime:
This property controls the maximum lifetime of a connection in the
pool. An in-use connection will never be retired, only when it is
closed will it then be removed. On a connection-by-connection basis,
minor negative attenuation is applied to avoid mass-extinction in the
pool. We strongly recommend setting this value, and it should be
several seconds shorter than any database or infrastructure imposed
connection time limit. A value of 0 indicates no maximum lifetime
(infinite lifetime), subject of course to the idleTimeout setting. The
minimum allowed value is 30000ms (30 seconds). Default: 1800000 (30
minutes)
A: I have added the below configuration for hikaricp in the configuration file and it is working fine.
## Database Connection Pool
play.db.pool = hikaricp
play.db.prototype.hikaricp.connectionTimeout=120000
play.db.prototype.hikaricp.idleTimeout=15000
play.db.prototype.hikaricp.leakDetectionThreshold=120000
play.db.prototype.hikaricp.validationTimeout=10000
play.db.prototype.hikaricp.maxLifetime=120000
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62632709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Setting the width of columns with an array as parameter I have the following function:
function LoadColWidth(thewidtharray) {
for (i = 0; i < thewidtharray.length; i++)
{
$('#MyGrid .tr:first').eq(i).width(thewidtharray[i]);
}
};
var NarrowWidth = new Array(70, 72, 97, 72, 97, 72, 96, 76, 76, 76, 76, 75);
I'm calling LoadColWidth with different arrays as the parameter, and the goal is to resize the width of the columns. I'm struggling with the jQuery call: it's supposed to loop through each column by index but it's not working. Any suggestions?
Thanks.
A: Select the <td> elements in the first row, and iterate over them using the each() method.
Inside the .each(), use the index parameter it provides to select from your Array of widths.
// Use the "index" parameter--------------v
$('#MyGrid tr:first > td').each(function( i ) {
$(this).width( thewidtharray[i] );
});
Or here's an alternative if you're using jQuery 1.4.1 or later:
// Use the "index" parameter---------------v
$('#MyGrid tr:first > td').width(function( i ) {
return thewidtharray[i];
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5070757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Xamarin Community Toolkit CameraView video file location supplied becomes invalid IOS and Android I'm using the CameraView to capture a series of short videos. I want these videos to be viewable at a later time from within the app. I'm using the MediaCaptured event to get the path of captured videos and saving these paths in a SQLite db.
The first problem is that on iOS the path is valid while the app is open, but if I close the app and open it again the path is no longer valid. I've worked around this by copying the video to the AppDataDirectory, but this seems bad because I haven't figured out how to delete the original, so now two copies of the video exist.
The second problem is that on both iOS and Android, after some amount of time (a few days or a week or more) these paths become invalid for some unknown reason.
What is the correct way to deal with this?
private void MediaCaptured(object obj)
{
MediaCapturedEventArgs args = obj as MediaCapturedEventArgs;
string sPath = "";
switch (Device.RuntimePlatform)
{
case Device.iOS:
//On iOS args.Video.File returns a path that isn't valid when the app is restarted. To get around this issue I am copying the file to the App Data Directory.
//The drawback is there are now two video files and I can't delete the original.
var pathSplit = args.Video.File.Split('/');
sPath = Path.Combine(FileSystem.AppDataDirectory, pathSplit[pathSplit.Length - 1]);
File.Copy(args.Video.File, sPath);
            //TODO Should probably be deleting the original video but not sure how (or if it's possible).
break;
case Device.Android:
sPath = args.Video.File;
break;
}
SavePathToDB(sPath);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72778941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Java and 2 threads I am trying to learn Java's threads in order to do an assignment, but I do not understand how to make each thread run its own code. I also get an error:
Program.java:1: error: Program is not abstract and does not override abstract me
thod run() in Runnable
public class Program implements Runnable {
^
1 error
Because it is required by the assignment, I have to do everything within the same file, so I tried the code below:
public class Program implements Runnable {
Thread thread1 = new Thread () {
public void run () {
System.out.println("test1");
}
};
Thread thread2 = new Thread () {
public void run () {
System.out.println("test2");
}
};
public void main (String[] args) {
thread1.start();
thread2.start();
}
}
Could you please fix it for me and show how to have 2 threads which do different tasks from each other? I have already seen examples that print threads' names, but I did not find them helpful.
Thank you.
A: Your Program class is defined as implementing the Runnable interface. It therefore must override and implement the run() method:
public void run () {
}
Since your two Thread objects are using anonymous inner Runnable classes, you do not need, and should remove, the implements Runnable from your Program class definition:
public class Program {
...
A: try this:
class Program {
public static void main(String[] args) {
Thread thread1 = new Thread() {
@Override
public void run() {
System.out.println("test1");
}
};
Thread thread2 = new Thread() {
@Override
public void run() {
System.out.println("test2");
}
};
thread1.start();
thread2.start();
    }
}
Or you can create a separate class implementing Runnable and overriding its run() method.
Then in your main method, create an instance of Thread with an object of your class as the argument:
class SomeClass implements Runnable {
    @Override
    public void run() {
        ...
    }
}
and in main:
Thread thread = new Thread(new SomeClass());
A: When you implement an interface (such as Runnable) you must implement its methods, in this case run.
Otherwise for your app to compile and run just erase the implements Runnable from your class declaration:
public class Program {
public static void main (String[] args) {
Thread thread1 = new Thread () {
public void run () {
System.out.println("test1");
}
};
Thread thread2 = new Thread () {
public void run () {
System.out.println("test2");
}
};
thread1.start();
thread2.start();
}
}
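If Java 8 or later is available, the two anonymous Thread subclasses above can also be written more compactly by passing lambda Runnables to the Thread constructor. A minimal runnable sketch (the join() calls are an addition, so main waits for both threads before exiting):

```java
public class Program {
    public static void main(String[] args) throws InterruptedException {
        // Each lambda is the Runnable body of one thread.
        Thread thread1 = new Thread(() -> System.out.println("test1"));
        Thread thread2 = new Thread(() -> System.out.println("test2"));

        thread1.start();
        thread2.start();

        // Wait for both threads to finish; without this, main may
        // return before the println calls have run.
        thread1.join();
        thread2.join();
    }
}
```

Note that the two lines may appear in either order, since the threads run concurrently and the scheduler decides which prints first.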
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9909216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |