Q:
underscore in Rust: "consider using"
Rust newbie here.
When providing a parameter and leaving it unused in a function declaration (e.g. when learning Rust...), the compiler warns that the variable is unused in the scope and suggests putting an underscore before its name. Doing so makes the warning disappear.
warning: unused variable: `y`
--> src/main.rs:23:29
|
23 | fn another_function(x: i32, y: i32) {
| ^ help: consider using `_y` instead
|
= note: #[warn(unused_variables)] on by default
Why? How is the variable treated differently then?
A:
It's just a convention: Rust doesn't emit a warning when a variable whose name starts with an underscore goes unused, because sometimes you need a binding that won't be referenced anywhere else in your code.
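For example, a minimal sketch of the convention in action:
fn another_function(x: i32, _y: i32) {
    // `_y` is deliberately unused; the leading underscore tells the compiler
    // (and readers) that this is intentional, so no warning is emitted.
    println!("x is {}", x);
}
fn main() {
    another_function(5, 6);
}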
Q:
Is it legal to have a pointer to a reserved vector element?
I'm curious if this sort of thing is legal:
std::vector<some_class_type> vec;
vec.reserve(10);
some_class_type* ptr = vec.data() + 3; // that object doesn't exist yet
Note that I'm not attempting to access the value pointed to.
This is what the standard says about data(), but I'm not sure if it's relevant:
Returns: A pointer such that [data(),data() + size()) is a valid
range. For a non-empty vector, data() == &front().
A:
The example you provided does not exhibit any immediate undefined behavior. According to the standard, since the number of elements you are reserving is greater than the current capacity of the vector, a reallocation will occur. And since the allocation occurs at the point where reserve is called, the pointer returned by data() is itself valid.
23.3.6.3/2 (Emphasis mine)
Effects: A directive that informs a vector of a planned change in size, so that it can manage the storage allocation accordingly. After reserve(), capacity() is greater or equal to the argument of reserve if reallocation happens; and equal to the previous value of capacity() otherwise. Reallocation happens at this point if and only if the current capacity is less than the argument of reserve(). If an exception is thrown other than by the move constructor of a non-CopyInsertable type, there are no effects.
If, however, you attempt to dereference the pointer before adding enough elements for it to fall within [data(), data() + size()), or if you add more than capacity() elements, then undefined behavior occurs.
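As a hedged illustration (following the reading above, and reusing the names from the question), the pointer may be formed right after reserve() but dereferenced only once enough elements exist:
#include <vector>

struct some_class_type { int value; };

int main() {
    std::vector<some_class_type> vec;
    vec.reserve(10);
    some_class_type* ptr = vec.data() + 3;  // OK to form per the answer above; do not dereference yet
    for (int i = 0; i < 4; ++i)
        vec.push_back(some_class_type{i});  // size() stays <= capacity(): no reallocation
    return ptr->value;                      // now ptr < data() + size(), so this is fine
}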
Q:
Using Entity Framework 4.1 with SharePoint 2010?
First, I am not completely familiar with SharePoint development. I have been tasked to build an application with it.
I need some advice on using entity framework 4.1 with SharePoint. I have some code already written that makes use of EF 4.1 and the repository pattern (which I don’t want to give up). I want to have control over my data model and allow the SP applications to take care of workflow and document management.
What is the best approach to using these together? I have read about creating a web service layer that SP will communicate with. I am hoping there are some resources out there that I couldn’t find.
A:
SharePoint 2010 is based on the .NET Framework 3.5, which requires IIS to use the ASP.NET 2.0 runtime. Entity Framework 4.1 uses the .NET Framework 4.0, which requires IIS to use the ASP.NET 4.0 runtime. As a result you cannot run EF 4.1 natively in SharePoint 2010. What you can do is use web services, as you mentioned in your question, to wrap your EF 4.1 objects.
Another option is a Silverlight-hosted application that uses the SP 2010 Client Object Model for the SP functionality you require. You could also use a combination of HTML/jQuery to access your business objects from web services.
John
Q:
Array of struct initialization in C
I cannot find the solution to this. I can initialize an array of struct like this:
typedef struct S_A {
int x;
} T_A;
T_A ta1[3];
ta1[0] = (T_A){0};
ta1[1] = (T_A){1};
ta1[2] = (T_A){2};
T_A ta2[3] = { {0}, {1}, {2} };
But how can I do a one-line initialization after declaration?
T_A ta3[3];
ta3 = (?){ {?}, {?}, {?} };
ta3 = (T_A[3]){ { 0 }, { 1 }, { 2 } }; // error
ta3 = (T_A*) { { 0 }, { 1 }, { 2 } }; // error
A:
Arrays are special in C. You can initialize an array only once, at its definition; you can't then "re-initialize" it. Assignments to arrays don't work: in most contexts an array decays to a pointer, and you can't assign to that. You can't:
int arr[3];
// arr = {1,2,3}; // will not work
// arr = anything; // will not work
You can only memcpy to them, for example with a compound literal:
memcpy(ta3, (T_A[3]){ { 0 }, { 1 }, { 2 } }, sizeof(ta3));
Or, without a compound literal, initialize a temporary variable and memcpy:
const T_A temp[3] = { { 0 }, { 1 }, { 2 } };
memcpy(ta3, temp, sizeof(ta3));
A:
In C, an initialization is something that you do simultaneously with the declaration. You cannot do it afterwards.
This can be seen in the grammar that you can find here: https://www.lysator.liu.se/c/ANSI-C-grammar-y.html
Whenever you use the = after you have finished the declaration, it's an assignment and not an initialization, and assignments have different rules. One of these rules is that the common way of initializing arrays - = {1,2,3} - simply is not allowed. You have to use memcpy or something like that.
When it comes to non-arrays, like int, double, etc. and their pointers, it is still true that you cannot formally initialize them after declaration, but for those, assignment has the same syntax, so it can be confusing.
However, there is a trick that can be used for arrays. Wrap the array in a struct and do like this:
struct wrapper{
int arr[3];
} x;
x = (struct wrapper){{1,2,3}};
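For completeness, a runnable sketch of the wrapper trick:
#include <stdio.h>

struct wrapper {
    int arr[3];
};

int main(void) {
    struct wrapper x;
    x = (struct wrapper){{1, 2, 3}};  /* whole-struct assignment is allowed */
    printf("%d %d %d\n", x.arr[0], x.arr[1], x.arr[2]);  /* prints: 1 2 3 */
    return 0;
}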
Q:
Qt: Listing file aliases in the directory
I am trying to list all alias files on Mac OS X using Qt. I am using the following function:
QString QDir::fromNativeSeparators ( const QString & pathName )
I specify the relative path to a directory, but it lists only the normal files and skips the aliases.
Please let me know if there is a better way to list both the normal and alias files using Qt.
A:
Use QDirIterator; the Qt documentation is here:
http://doc.qt.io/qt-4.8/qdiriterator.html
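A minimal sketch (the directory path and filter flags here are assumptions to adapt):
#include <QDirIterator>
#include <QDebug>

void listEntries(const QString &path) {
    // QDir::Hidden and QDir::System widen the net beyond plain visible files;
    // Finder aliases are ordinary files at the filesystem level, so
    // QDir::Files should already match them.
    QDirIterator it(path, QDir::Files | QDir::Hidden | QDir::System);
    while (it.hasNext())
        qDebug() << it.next();
}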
Q:
Command to close an application of console?
I need to close the console when the user selects a menu option.
I tried using close() but it did not work.
How can I do this?
A:
Environment.Exit and Application.Exit
Environment.Exit(0) is cleaner.
http://geekswithblogs.net/mtreadwell/archive/2004/06/06/6123.aspx
A:
By close, do you mean you want the current instance of the console app to close, or do you want the application process to terminate? Don't miss that all-important exit code:
Environment.Exit(0);
Or to close the current instance of the form:
this.Close();
Useful link.
A:
You can try this:
Application.Exit();
Q:
symbols in legend lose edge
I make my own symbols for plotting:
onesymb = Graphics[{EdgeForm[Directive[AbsoluteThickness[2], Black]], Red, Rectangle[]}, ImageSize -> 18]
twosymb = Graphics[{EdgeForm[Directive[AbsoluteThickness[2], Black]], White, Polygon[Dynamic@ Flatten[Table[{{Cos[i*2 \[Pi]/ns], Sin[i*2 \[Pi]/ns]},
0.4 {Cos[(i + 0.5)*2 \[Pi]/ns], Sin[(i + 0.5)*2 \[Pi]/ns]}}, {i, ns + 1}], 1]]}, ImageSize -> 26]
but when I use them:
ListPlot[{{{0.5, 0.5}, {2, 2}}, {{1, 1}, {1, 4}}}, PlotMarkers -> {onesymb,twosymb}, PlotRange -> {{0, 3}, {0, 3}}, PlotLegends -> Placed[LineLegend[{{"one", "two"}}, LabelStyle -> {Black, FontFamily -> "Helvetica", FontSize -> 20}, LegendMarkerSize -> 20], Right]]
I notice that, in the legend, the symbols lose their (black) edge. How can I repair this in Mathematica 12? Thanks.
A:
Use EdgeForm[Directive[AbsoluteThickness[2], Black, Opacity[1]]] when you define your markers.
You can also use EdgeForm[Directive[AbsoluteThickness[2], Opacity[1, Black]]].
Q:
How do I track a remote public Subversion repository with a local Git repository?
I've been trying to figure this one out by reading the git-svn man-page but am having trouble understanding how to achieve it.
What I want to do is:
Checkout a public subversion trunk
Create a local git repo to track this
Create a 'vendor' branch which I will not commit to and will only use for pulling changes from the subversion repo
Create git branches for patches I wish to work on
Pull updates to the 'vendor' branch and merge into my branches.
Submit patches with git-format-patch
How do I achieve this?
A:
Suggested workflow for git-svn:
Pull using git-svn mentioned above.
By default, you will be in master. DO NOT WORK DIRECTLY ON THIS BRANCH.
Do all your development in a feature branch (use git checkout -b <feature> from master).
Commit changes to your feature branch.
Do git rebase master in your feature branch.
5a. If you want to pull in svn updates at this time, then switch to master and do git svn rebase. Then switch back to feature branch, do git rebase master.
Switch to master branch and do git merge <feature>.
Check that merge goes smoothly, then do git svn dcommit to push changes to svn.
Remember, once you do git svn dcommit all your changes will become visible to others. So be absolutely sure you have done things properly before doing git svn dcommit.
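A command-line sketch of the whole workflow (the repository URL and branch names are placeholders):
git svn clone -s http://svn.example.com/project project  # -s assumes the standard trunk/branches/tags layout
cd project
git checkout -b feature master            # step 3: work on a feature branch
# ...edit files, then commit (step 4):
git commit -am "Implement feature"
git rebase master                         # step 5
git checkout master && git svn rebase     # step 5a: pull in new svn revisions
git checkout feature && git rebase master
git checkout master && git merge feature  # step 6
git svn dcommit                           # step 7: publish the changes to svn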
Q:
plot some data such as pairs in matlab
I want to plot some data, but I can't.
It is assumed that we have 820 rows in 2 columns, representing the x and y coordinates.
My code is as follows:
load('MyMatFileName.mat');
[m , n]=size(inputs);
s = zeros(m,2);
for m=1:820
if inputs(m,end-1)<2 & inputs(m,end)<2
x = inputs(m,end-1)
y = inputs(m,end)
plot(x,y,'r','LineWidth',1.5)
hold on
end
end
A:
I've edited your code and added comments to explain the changes you could make, but see below I've also re-written your code to be more like how it should be done:
load('MyMatFileName.mat'); % It's assumed "inputs" is in here
% You don't use the second size output, so use a tilde ~
[m, ~] = size(inputs);
%s = zeros(m,2); % You never use s...
% Use a different variable for the loop, as m is already the size variable
% I've assumed you wanted ii=1:m rather than m=1:820
figure
hold on % Use hold on and hold off around all of your plotting
for ii=1:m
if inputs(ii,end-1)<2 && inputs(ii,end)<2 % Use && (short-circuit) for scalar comparisons; index with ii, not m
x = inputs(ii,end-1); % semicolon suppresses console output
y = inputs(ii,end);
% Include the dot . to specify you want a point, not a line!
plot(x, y, 'r.','LineWidth',1.5)
end
end
hold off
A better way of doing this whole operation in Matlab would be to vectorise your code:
load('MyMatFileName.mat');
[m, ~] = size(inputs);
x = inputs(inputs(:,end-1) < 2 & inputs(:,end) < 2, end-1);
y = inputs(inputs(:,end-1) < 2 & inputs(:,end) < 2, end);
plot(x, y, 'r.', 'linewidth', 1.5);
Note that this will plot points, if you want to plot the line, use
plot(x, y, 'r', 'linewidth', 1.5); % or plot(x, y, 'r-', 'linewidth', 1.5);
Q:
What stops the middle point of a power line from falling?
Say you have a system that is a uniformly weighted string with slack suspended from two points; i.e. a power line.
There are three forces acting on any given point on this string: string tension going left, string tension going right, and gravity.
Consider the point exactly in the middle of the string. The tension forces act tangent to the string, which (in this case) is directly left and right. So these forces have no upwards component, so no matter how large they are, they won't be able to counteract gravity.
But the string is not moving, and the middle point is not actually accelerating downwards. So what am I missing? What's counteracting gravity?
A:
The part at the exact middle of the string has zero mass.
That seems silly, but consider: if you look at a very small section of the string in the middle - say 1 mm - then the pieces of string on either side exert forces with tiny, but nonzero, upward components. If you halve the length we are considering to 0.5 mm, then the upward component of the forces is smaller, but so is the weight! Halve that again to 0.25 mm, and the same happens. By the time you're actually considering the part of the string that is at the exact middle of the wire, the tension forces are, as you say, perfectly horizontal, but that piece of the wire has zero mass and weight, so there's no need for any vertical force to support it.
In reality, that's a little silly, because wires are not ideal objects. But the same principle applies.
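To make the scaling argument quantitative, here is a sketch using the standard catenary solution $y = a\cosh(x/a)$ with parameter $a = T_0/(\lambda g)$, where $T_0$ is the horizontal tension and $\lambda$ the mass per unit length. The slope at horizontal distance $x$ from the lowest point is $y'(x) = \sinh(x/a) \approx x/a$ for small $x$, so the net upward pull of tension on a small central segment $[-\ell/2, \ell/2]$ is about $2\,T_0 \cdot (\ell/2)/a = \lambda g \ell$, which is exactly that segment's weight. Both terms vanish linearly as $\ell \to 0$, so the balance holds at every scale, with nothing left over to support at the single central point.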
Q:
how to manage concurrency for inserting in C#?
Many optimistic concurrency examples refer to updates by using a database timestamp or flag.
However, I want to handle optimistic concurrency for INSERTS
To illustrate here is a fake scenario:
Multiple users can insert a journal entry at the same time, but only one journal entry is allowed per date.
Without any concurrency control, since multiple users can create journal entries, I can end up with multiple journal entries on the same date.
How do I prevent this from happening at the application layer WITHOUT the use of the database (i.e. a database unique key constraint)
A:
Whether or not you define an explicit unique key constraint, that's exactly what you're asking for.
You can write an IF block in your SQL code to check for the existence of a journal entry of the specified date.
But an explicit unique key constraint is going to give better performance.
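A sketch of that IF block in T-SQL (the table and column names are assumptions):
IF NOT EXISTS (SELECT 1 FROM JournalEntry WHERE EntryDate = @EntryDate)
BEGIN
    INSERT INTO JournalEntry (EntryDate, Description)
    VALUES (@EntryDate, @Description);
END
Note that without a unique constraint (or a suitable locking hint / serializable isolation), two concurrent sessions can both pass the EXISTS check before either inserts, which is why the constraint remains the safer choice.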
Q:
How do I use local state along with redux store state in the same react component?
I have a table that displays contacts and I want to sort the contacts by first name. The contacts array comes from the redux store, and then comes through the props, but I want the local state to hold how those contacts are sorted, since it's local UI state. How do I achieve this? So far I have placed contacts into componentWillReceiveProps, but for some reason it doesn't receive the props when the store changes. How do I update the local state each time the redux store state changes?
const Table = React.createClass({
getInitialState () {
return {contacts: []}
},
componentWillReceiveProps () {
this.setState({ contacts: this.props.data.contacts})
},
sortContacts (parameter, e){
...
},
render () {
return (
<table>
<thead>
<tr>
<th onClick={this.sortContacts.bind(this, "firstName")}>First Name</th>
</tr>
</thead>
<tbody>
{contactRows}
</tbody>
</table>
)
}
})
update of current code that includes filtering
import React, {Component} from 'react'
import TableRow from './TableRow'
class Table extends Component {
constructor (props) {
super(props)
this.state = { sortBy: "firstName" }
}
sortContacts (parameter) {
console.log('in sortContacts')
this.setState({ sortBy: parameter })
}
sortedContacts () {
console.log('in sortedContacts')
const param = this.state.sortBy
return (
this.props.data.contacts.sort(function (a, b){
if (!a.hasOwnProperty(param)){
a[param] = " ";
}
if (!b.hasOwnProperty(param)){
b[param] = " ";
}
const nameA = a[param].toLowerCase(), nameB = b[param].toLowerCase();
if (nameA > nameB) {
return 1;
} else {
return -1;
}
})
)
}
filteredSortedContacts () {
console.log('in filteredSortedContacts')
const filterText = this.props.data.filterText.toLowerCase()
let filteredContacts = this.sortedContacts()
if (filterText.length > 0) {
filteredContacts = filteredContacts.filter(function (contact){
return (
contact.hasOwnProperty('lastName') &&
contact.lastName.toLowerCase().includes(filterText)
)
})
}
return filteredContacts
}
contactRows () {
console.log('in contactRows')
return this.filteredSortedContacts().map((contact, idx) =>
<TableRow contact={contact} key={idx}/>
)
}
render () {
return (
<div className="table-container">
<table className="table table-bordered">
<thead>
<tr>
<th className="th-cell" onClick={this.sortContacts.bind(this, "firstName")}>First Name</th>
<th onClick={this.sortContacts.bind(this, "lastName")}>Last Name</th>
<th>Date of Birth</th>
<th>Phone</th>
<th>Email</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
{this.contactRows()}
</tbody>
</table>
</div>
)
}
}
export default Table
The issue I'm seeing now is that contactRows, filteredSortedContacts, sortedContacts are being called multiple times, once for each TableRow. I don't see how this can be happening if I'm only calling contactRows once in the body.
A:
Your approach to use both redux store and local store is correct.
Just do not try to duplicate the state from redux store in your component. Keep referring to it via props.
Instead, create a sortedContacts function that computes the value on the fly by applying the locally-stored sortBy param to the redux-stored contacts.
class Table extends React.Component {
constructor(props) {
super(props);
this.state = {
sortBy: 'id' // default sort param
}
}
sortContacts(param) {
this.setState({ sortBy: param})
}
sortedContacts() {
return [...this.props.contacts].sort(...); // copy before sorting: Array.prototype.sort mutates in place
}
render() {
return (
<table>
<thead>
<tr>
<th onClick={() => this.sortContacts("firstName")}>First Name</th>
</tr>
</thead>
<tbody>
{this.sortedContacts()}
</tbody>
</table>
)
}
}
A:
The componentWillReceiveProps() method is not called for the initial render. What you could do, if you only intend to use the data from props as the initial data, is something like:
getInitialState () {
return {
contacts: this.props.data.contacts
}
}
In the React docs they suggest you name the props initialContacts, just to make it really clear that the props' only purpose is to initialize something internally.
Now if you want it to update when this.props.contacts change, you could use componentWillReceiveProps() like you did. But I'm not sure it's the best idea. From the docs:
Using props to generate state in getInitialState often leads to
duplication of "source of truth", i.e. where the real data is. This is
because getInitialState is only invoked when the component is first
created.
Whenever possible, compute values on-the-fly to ensure that they don't
get out of sync later on and cause maintenance trouble.
Q:
How to add MVC3 controller from database first EF model?
Trying to add a controller from my EF model; not sure what I'm doing wrong.
I created a model from my database, but when I try to add a controller I get an error:
"Unable to retrieve metadata for "JobsApp.Category". Unable to determine the principal end of an association between the types "JobsApp.Job" and "JobsApp.Category". The principal end of this association must be explicitly configured using either the relationship fluent API or data annotations."....
A:
It looks like you have some problems with your model classes - you're going to need to solve those first.
A question (and answer) about the "principal end of this association must be explicitly configured using either the relationship fluent API or data annotations" exception you're seeing is on SO.
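As a hedged sketch of the two fixes the exception message names (the property names are assumptions based on the type names in the error), either data annotations or the fluent API can pin down the principal end:
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;

public class Job
{
    public int Id { get; set; }

    [Required] // a Job cannot exist without a Category => Category is the principal end
    public virtual Category Category { get; set; }
}

// Or, equivalently, with the fluent API in your DbContext:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Job>()
                .HasRequired(j => j.Category)
                .WithMany(c => c.Jobs);
}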
Q:
Python Requests Library post requests failing on local development server?
OK, so I have been looking at my code for far too long, and I know through a number of tests that I must be facing an issue beyond the scope of my knowledge.
In short, I am trying to send data that I have received from an Arduino (connected to my laptop, and communicating via serial port) to a server that is running on my laptop.
I am trying to send various pieces of information in a POST requests using the Requests Library as follows:
import requests
import json
url = 'http://<usernames computer>.local/final/'
headers = {'Content-type': 'application/json'}
data = [
('state','true'),
('humidity', 45),
('temperature',76)
]
r = requests.post(url, data, headers = headers)
print r.text
This code works. I know this because I tested it at http://www.posttestserver.com/. All of the data is sent properly.
But I am trying to send it to a server side script that looks like this:
<?php
$state = $_POST["state"];
$myfile = fopen("./data/current.json", "w") or die("Unable to open file!");
$txt = "$state";
fwrite($myfile, $txt);
fclose($myfile);
echo "\nThe current state is:\n $state\n";
?>
However when I run the code, my script spits out:
<br />
<b>Notice</b>: Undefined index: state in
<b>/Applications/XAMPP/xamppfiles/htdocs/final/index.php</b> on line
<b>2</b><br />
The current state is:
<This is where something should come back, but does not.>
What could be going wrong? Thanks for your help!
A:
$state = $_POST["state"];
You are sending the data as type application/json, but PHP won't automatically deserialize the body into JSON for you. Also, Python Requests will not auto-serialize:
[
('state','true'),
('humidity', 45),
('temperature',76)
]
into JSON.
What you will want to do is serialize the request body on the client side:
data = {
    'state': 'true',  # use a dict so the JSON body is an object with a "state" key
    'humidity': 45,
    'temperature': 76
}
r = requests.post(url, json=data, headers=headers)
Now on the server side, de-serialize it:
if ($_SERVER["CONTENT_TYPE"] == "application/json") {
$postBody = file_get_contents('php://input');
$data = json_decode($postBody, true); // true => associative array, so $data["state"] works
$state = $data["state"];
//rest of your code...
}
Q:
How to use captcha in auth middleware in Laravel 5.4
I used the Auth middleware in Laravel 5.4 for my login page. Now I've added a captcha package from https://packagist.org/packages/bonecms/laravel-captcha to add a captcha to the login page, but it does not validate the captcha correctly. What is the problem? How should I change my controller?
this is my view:
<form class="form-horizontal" method="POST" action="{{ route('login') }}" style="margin:0 auto;padding: 0 !important;">
{{ csrf_field() }}
<div class="form-group{{ $errors->has('email') ? ' has-error' : '' }}">
<label for="email" class="col-md-4 control-label" id="emailadd">ایمیل</label>
<div class="col-md-8">
<input id="email" type="email" class="form-control" name="email" value="{{ old('email') }}" required autofocus>
@if ($errors->has('email'))
<span class="help-block">
<strong>{{ $errors->first('email') }}</strong>
</span>
@endif
</div>
</div>
<div class="form-group{{ $errors->has('password') ? ' has-error' : '' }}">
<label for="password" class="col-md-4 control-label" id="passwordbox">رمز عبور</label>
<div class="col-md-8">
<input id="password" type="password" class="form-control" name="password" required>
@if ($errors->has('password'))
<span class="help-block">
<strong>{{ $errors->first('password') }}</strong>
</span>
@endif
</div>
</div>
<div class="form-group{{ $errors->has('captcha') ? ' has-error' : '' }}">
<label for="captcha" class="col-md-4 control-label" id="emailadd">کد امنیتی</label>
<div class="col-md-8">
<div style="display:block;margin:10px auto;">@captcha</div>
<input type="text" id="captcha" name="captcha" class="form-control" required>
@if ($errors->has('captcha'))
<span class="help-block">
<strong>{{ $errors->first('captcha') }}</strong>
</span>
@endif
</div>
</div>
<div class="form-group{{ $errors->has('email') ? ' has-error' : '' }}">
<div class="col-md-12">
<button type="submit" class="btn" style="width:100%;background-color:#00aeef;color:#ffffff;margin: 10px 0px">
login
</button>
</div>
</div>
</form>
this is my controller:
namespace App\Http\Controllers\Auth;
use App\Http\Controllers\Controller;
use App\User;
use Illuminate\Foundation\Auth\AuthenticatesUsers;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Lang;
class LoginController extends Controller
{
/*
|--------------------------------------------------------------------------
| Login Controller
|--------------------------------------------------------------------------
|
| This controller handles authenticating users for the application and
| redirecting them to your home screen. The controller uses a trait
| to conveniently provide its functionality to your applications.
|
*/
use AuthenticatesUsers;
/**
* Where to redirect users after login.
*
* @var string
*/
protected $redirectTo = '/';
/**
* Create a new controller instance.
*
* @return void
*/
public function __construct()
{
$this->middleware('guest')->except('logout');
}
protected function sendFailedLoginResponse(Request $request)
{
$this->validate($request, [
'captcha' => 'required|captcha'
],
[ 'captcha.required' => 'کد امنیتی را وارد نکرده اید.',
'captcha.captcha' => 'کد امنیتی اشتباه است',
]);
if ( ! User::where('email', $request->email)->first() ) {
return redirect()->back()
->withInput($request->only($this->username(), 'remember'))
->withErrors([
$this->username() => Lang::get('آدرس ایمیل اشتباه است'),
]);
}
if ( ! User::where('email', $request->email)->where('password', bcrypt($request->password))->first() ) {
return redirect()->back()
->withInput($request->only($this->username(), 'remember'))
->withErrors([
'password' => Lang::get('رمز عبور اشتباه است'),
]);
}
}
}
A:
The login route goes to the LoginController::login() method (provided by the AuthenticatesUsers trait). Add your captcha validation rules there instead of in sendFailedLoginResponse(), which only runs after an authentication attempt has already failed.
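A sketch of that change: in Laravel 5.4 the AuthenticatesUsers trait's login() calls validateLogin(), so overriding that method in LoginController is a clean hook (the captcha rule and messages are reused from your code):
protected function validateLogin(Request $request)
{
    $this->validate($request, [
        $this->username() => 'required|string',
        'password' => 'required|string',
        'captcha' => 'required|captcha',
    ], [
        'captcha.required' => 'کد امنیتی را وارد نکرده اید.',
        'captcha.captcha' => 'کد امنیتی اشتباه است',
    ]);
}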
Q:
Is it possible to allocate a 2D array as shared memory with IPC?
I want to allocate shared memory as a 2D array using IPC. I tried the following:
id_shmem = shmget(ipc_key, sizeof(int)*rows*columns, IPC_CREAT|0666);
matrix = (int **)shmat(id_shmem, 0, 0);
The problem is that whenever I try to write something into the matrix, I get a segment fault.
A:
An int** is not a 2D array; it is an array of pointers. You should not store pointers in shared memory, as a shared memory segment may be attached at different addresses in different processes. Use a simple, flat 1D array instead, which can "emulate" a 2D array with some index magic, i.e.
x,y -> y*width+x
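A minimal sketch of that approach (the key, dimensions, and permissions are placeholders):
#include <sys/ipc.h>
#include <sys/shm.h>

enum { ROWS = 4, COLS = 5 };

void set_cell(key_t ipc_key, int y, int x, int value) {
    int id = shmget(ipc_key, sizeof(int) * ROWS * COLS, IPC_CREAT | 0666);
    int *matrix = (int *)shmat(id, NULL, 0);  /* one flat block, attachable in any process */
    matrix[y * COLS + x] = value;             /* the index magic: y*width + x */
    shmdt(matrix);
}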
A:
Common practice with structures in shared memory is to store offsets, not pointers. This gets around the fact that the memory could be mapped at different virtual addresses in different processes.
Another common approach is to let the first process request an OS-provided mapping and then somehow pass the resulting virtual address to all other processes that need to attach to the same memory, having them request a fixed mapping at that address.
Q:
userns container fails to start, how to track down the reason?
When creating a userns (unprivileged) LXC container on Ubuntu 14.04 with the following command line:
lxc-create -n test1 -t download -- -d $(lsb_release -si|tr 'A-Z' 'a-z') -r $(lsb_release -sc) -a $(dpkg --print-architecture)
and (without touching the created configuration file) then attempting to start it with:
lxc-start -n test1 -l DEBUG
it fails. The log file shows me:
lxc-start 1420149317.700 INFO lxc_start_ui - using rcfile /home/user/.local/share/lxc/test1/config
lxc-start 1420149317.700 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment.
lxc-start 1420149317.701 INFO lxc_confile - read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 1420149317.701 INFO lxc_confile - read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 1420149317.701 WARN lxc_log - lxc_log_init called with log already initialized
lxc-start 1420149317.701 INFO lxc_lsm - LSM security driver AppArmor
lxc-start 1420149317.701 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment.
lxc-start 1420149317.702 DEBUG lxc_conf - allocated pty '/dev/pts/2' (5/6)
lxc-start 1420149317.702 DEBUG lxc_conf - allocated pty '/dev/pts/7' (7/8)
lxc-start 1420149317.702 DEBUG lxc_conf - allocated pty '/dev/pts/8' (9/10)
lxc-start 1420149317.702 DEBUG lxc_conf - allocated pty '/dev/pts/10' (11/12)
lxc-start 1420149317.702 INFO lxc_conf - tty's configured
lxc-start 1420149317.702 DEBUG lxc_start - sigchild handler set
lxc-start 1420149317.702 DEBUG lxc_console - opening /dev/tty for console peer
lxc-start 1420149317.702 DEBUG lxc_console - using '/dev/tty' as console
lxc-start 1420149317.702 DEBUG lxc_console - 14946 got SIGWINCH fd 17
lxc-start 1420149317.702 DEBUG lxc_console - set winsz dstfd:14 cols:118 rows:61
lxc-start 1420149317.905 INFO lxc_start - 'test1' is initialized
lxc-start 1420149317.906 DEBUG lxc_start - Not dropping cap_sys_boot or watching utmp
lxc-start 1420149317.906 INFO lxc_start - Cloning a new user namespace
lxc-start 1420149317.906 INFO lxc_cgroup - cgroup driver cgmanager initing for test1
lxc-start 1420149317.907 ERROR lxc_cgmanager - call to cgmanager_create_sync failed: invalid request
lxc-start 1420149317.907 ERROR lxc_cgmanager - Failed to create hugetlb:test1
lxc-start 1420149317.907 ERROR lxc_cgmanager - Error creating cgroup hugetlb:test1
lxc-start 1420149317.907 INFO lxc_cgmanager - cgroup removal attempt: hugetlb:test1 did not exist
lxc-start 1420149317.908 INFO lxc_cgmanager - cgroup removal attempt: perf_event:test1 did not exist
lxc-start 1420149317.908 INFO lxc_cgmanager - cgroup removal attempt: blkio:test1 did not exist
lxc-start 1420149317.908 INFO lxc_cgmanager - cgroup removal attempt: freezer:test1 did not exist
lxc-start 1420149317.909 INFO lxc_cgmanager - cgroup removal attempt: devices:test1 did not exist
lxc-start 1420149317.909 INFO lxc_cgmanager - cgroup removal attempt: memory:test1 did not exist
lxc-start 1420149317.909 INFO lxc_cgmanager - cgroup removal attempt: cpuacct:test1 did not exist
lxc-start 1420149317.909 INFO lxc_cgmanager - cgroup removal attempt: cpu:test1 did not exist
lxc-start 1420149317.910 INFO lxc_cgmanager - cgroup removal attempt: cpuset:test1 did not exist
lxc-start 1420149317.910 INFO lxc_cgmanager - cgroup removal attempt: name=systemd:test1 did not exist
lxc-start 1420149317.910 ERROR lxc_start - failed creating cgroups
lxc-start 1420149317.910 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment.
lxc-start 1420149317.910 ERROR lxc_start - failed to spawn 'test1'
lxc-start 1420149317.910 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment.
lxc-start 1420149317.910 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment.
lxc-start 1420149317.910 ERROR lxc_start_ui - The container failed to start.
lxc-start 1420149317.910 ERROR lxc_start_ui - Additional information can be obtained by setting the --logfile and --logpriority options.
Now I see two errors here, the latter probably being a result of the former, which is:
lxc_start - failed creating cgroups
However, I see /sys/fs/cgroup mounted:
$ mount|grep cgr
none on /sys/fs/cgroup type tmpfs (rw)
and cgmanager is installed:
$ dpkg -l|awk '$1 ~ /^ii$/ && /cgmanager/ {print $2 " " $3 " " $4}'
cgmanager 0.24-0ubuntu7 amd64
libcgmanager0:amd64 0.24-0ubuntu7 amd64
Note: My host still defaults to upstart.
In case there's any doubt, the kernel support cgroups:
$ grep CGROUP /boot/config-$(uname -r)
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
CONFIG_NET_CLS_CGROUP=m
CONFIG_NETPRIO_CGROUP=m
A:
Turns out, surprise surprise, this is a Ubuntu-specific thing.
The cause
The problem: although the kernel has cgroups enabled (check with grep CGROUP /boot/config-$(uname -r)) and cgmanager is running, there is no cgroup specific to my user. You can check that with:
$ cat /proc/self/cgroup
11:hugetlb:/
10:perf_event:/
9:blkio:/
8:freezer:/
7:devices:/
6:memory:/
5:cpuacct:/
4:cpu:/
3:name=systemd:/
2:cpuset:/
if your UID is given in each of the relevant lines, it's alright, but if no cgroups have been defined there will only be a slash after the second colon on each line.
My problem was specific to starting an unprivileged container. I could start privileged containers just fine.
It turned out that my problem was closely related to this thread on the lxc-users mailing list.
Remedy
On Ubuntu 14.04 upstart is the default, as opposed to systemd. Hence certain components that would be installed on a systemd-based distro do not get installed by default.
There were two packages in addition to cgmanager which I had to install in order to get beyond the error shown in my question: cgroup-bin and libpam-systemd. Quite frankly I am not 100% certain that the former is strictly needed, so you could try to leave it out and comment here.
After the installation of the packages and a reboot, you should then see your UID (id -u, here 1000) in the output:
$ cat /proc/self/cgroup
11:hugetlb:/user/1000.user/1.session
10:perf_event:/user/1000.user/1.session
9:blkio:/user/1000.user/1.session
8:freezer:/user/1000.user/1.session
7:devices:/user/1000.user/1.session
6:memory:/user/1000.user/1.session
5:cpuacct:/user/1000.user/1.session
4:cpu:/user/1000.user/1.session
3:name=systemd:/user/1000.user/1.session
2:cpuset:/user/1000.user/1.session
After that, the error upon attempting to start the guest container becomes (trimmed for brevity):
lxc-start 1420160065.383 INFO lxc_cgroup - cgroup driver cgmanager initing for test1
lxc-start 1420160065.419 ERROR lxc_start - failed to create the configured network
lxc-start 1420160065.446 ERROR lxc_start - failed to spawn 'test1'
lxc-start 1420160065.451 ERROR lxc_start_ui - The container failed to start.
So still no success, but we're one step closer.
The above-linked lxc-users thread points to /etc/systemd/logind.conf not mentioning three controllers: net_cls, net_prio and debug. For me only the last one was missing. After the change you'll have to re-login, though, as the changes take effect upon creation of your login session.
This blog post by one of the authors of LXC gives the next step:
Your user, while it can create new user namespaces in which it’ll be
uid 0 and will have some of root’s privileges against resources tied
to that namespace will obviously not be granted any extra privilege on
the host.
One such thing is creating new network devices on the host or changing
bridge configuration. To workaround that, we wrote a tool called
“lxc-user-nic” which is the only SETUID binary part of LXC 1.0 and
which performs one simple task. It parses a configuration file and
based on its content will create network devices for the user and
bridge them. To prevent abuse, you can restrict the number of devices
a user can request and to what bridge they may be added.
An example is my own /etc/lxc/lxc-usernet file:
stgraber veth lxcbr0 10
This declares that the user “stgraber” is allowed up to 10 veth type
devices to be created and added to the bridge called lxcbr0.
Between what’s offered by the user namespace in the kernel and that
setuid tool, we’ve got all that’s needed to run most distributions
unprivileged.
If your user has sudo rights and you're using Bash, use this:
echo "$(whoami) veth lxcbr0 10"|sudo tee -a /etc/lxc/lxc-usernet
and make sure the type (veth) matches the one in the container config and the bridge (lxcbr0) is configured and up.
And now we get another set of errors:
lxc-start 1420192192.775 INFO lxc_start - Cloning a new user namespace
lxc-start 1420192192.775 INFO lxc_cgroup - cgroup driver cgmanager initing for test1
lxc-start 1420192192.923 NOTICE lxc_start - switching to gid/uid 0 in new user namespace
lxc-start 1420192192.923 ERROR lxc_start - Permission denied - could not access /home/user. Please grant it 'x' access, or add an ACL for the container root.
lxc-start 1420192192.923 ERROR lxc_sync - invalid sequence number 1. expected 2
lxc-start 1420192192.954 ERROR lxc_start - failed to spawn 'test1'
lxc-start 1420192192.959 ERROR lxc_start_ui - The container failed to start.
Brilliant, that can be fixed. Another lxc-users thread by the same protagonists as in the first thread paves the way.
For now a quick test sudo chmod -R o+X $HOME will have to do, but ACLs are a viable option here as well. YMMV.
Q:
Cayley table for 2-bit integers ${Z_4}$
Let us consider the multiplication operation, denoted by $ \odot $ on the set of 2-bit integers ${Z_4}$ defined as follows:
$$\begin{aligned}
a \odot b &= (ab \bmod 5) \bmod 4 && \text{if } a \ne 0,\ b \ne 0 \\
0 \odot a &= a \odot 0 = (4a \bmod 5) \bmod 4 \\
0 \odot 0 &= 1
\end{aligned}$$
The task is
Compute the Cayley table for $ \odot $
Show that $({Z_4}, \odot )$ is isomorphic with multiplicative group of the field ${Z_5}$
For the first part I have constructed the Cayley table. Is it correct?
\begin{array}{c|cccc}
\odot & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{3} \\ \hline
\textbf{0} & 1 & 0 & 3 & 2 \\
\textbf{1} & 0 & 1 & 2 & 3 \\
\textbf{2} & 3 & 2 & 0 & 1 \\
\textbf{3} & 2 & 3 & 1 & 0
\end{array}
How can I show that ${Z_4}$ is isomorphic to the multiplicative group of the field ${Z_5}$?
A:
Your multiplication table is correct. As for showing that your group is isomorphic to the multiplicative group of $\Bbb Z_5$, note that its multiplication table is given by
\begin{array}{c|cccc}
& 1 & 2 & 3 & 4 \\ \hline
1 & 1 & 2 & 3 & 4 \\
2 & 2 & 4 & 1 & 3 \\
3 & 3 & 1 & 4 & 2 \\
4 & 4 & 3 & 2 & 1
\end{array}
Note how similar this is to your multiplication table. You want a function to map $\{0,1,2,3\}$ to $\{1,2,3,4\}$ such that your multiplication table looks like mine. Isomorphic groups have identical multiplication tables (modulo "naming" of variables and rearranging of rows/columns accordingly). This is, in fact, why we call them isomorphic groups - up to naming, we can't really distinguish them apart since they have the same action. Hint: notice that $1\odot a = a = a\odot 1$ in your group and look at the diagonals in our multiplication tables - these will give you a couple of good ideas as to how to map things.
Q:
Build Json string to tree view using jQuery and asp.net
I am using ASP.NET and AJAX to generate JSON data.
When I try to draw the tree using the code below (Code 1), it does not work; on the other hand,
the Code 2 section works when I put the node structure in statically.
Note: using library treant.js
link: http://fperucic.github.io/treant-js/
Code 1:
$(function () {
$.ajax({
type: "POST",
url: "Default.aspx/Hello",
data: "{}",
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function (data)
{
var Details = data.d;
if (Details != "")
{
var tree_design = '';
//sessionStorage.setItem("str_data", tree_design);
var currDepth = 0;
var totalData = $.map(Details, function (n, i) { return i; }).length;
var lastNodeIndex = parseInt(totalData) - 2;
//-----------------------------------------//
$.each(Details, function (index, item)
{
if (Details[parseInt(index) + 1] === undefined || Details[parseInt(index) + 1] == null || Details[parseInt(index) + 1] == "")
{
//alert("undefined");
}
else
{
//console.log(index);
//console.log(item.Name);
//console.log(item.Depth);
//alert(item.Depth);
//console.log(item.Lft);
//-----------------------------------//
// Level down? (or the first)
if (((parseInt(item.Depth) > parseInt(currDepth)) || parseInt(index) == 0) && parseInt(item.Depth) != 0) {
tree_design += 'children: [';
}
//----------------------------------//
// Level up?
if (parseInt(item.Depth) < parseInt(currDepth)) {
tree_design += '' + '}],'.repeat(parseInt(currDepth) - parseInt(item.Depth));
}
//----------------------------------//
if (parseInt(item.Depth) != 0)
{
tree_design += '{ connectors: { style: { stroke: "#000000" } },';
}
//---------Print Node Text-------------//
tree_design += 'text: { name: "' + item.Name + '" },HTMLclass: "blue",image: "images/no_member.png",';
//---------------------------------------//
//console.log(Details[parseInt(index) + 1].Depth);
var nextEleDepth = Details[parseInt(index) + 1].Depth;
//console.log(nextEleDepth);
// Check if there's chidren
if (parseInt(index) != lastNodeIndex && (parseInt(nextEleDepth) <= parseInt(item.Depth)))
{
tree_design += '},'; // If not, close the <li>
}
//---------------------------------------//
// Adjust current depth
currDepth = parseInt(item.Depth);
//---------------------------------------//
//console.log(parseInt(index)+"=="+lastNodeIndex);
// Are we finished?
if (parseInt(index) == lastNodeIndex) {
//console.log("Are we finished");
tree_design += '' + '}],'.repeat(currDepth);
}
//------------------------------------//
}
});
//------------------Draw Tree---------------------------//
//console.log(tree_design);
var chart_config = {
chart: {
container: "#basic-example",
nodeAlign: "BOTTOM",
connectors: {
type: "step"
},
node: {
HTMLclass: "nodeExample1"
}
},
nodeStructure: {
tree_design
}
};
//console.log(tree_design);
new Treant(chart_config);
//-------------------------------------------------------//
}
}
});
});
Code 2: Working
$(function () {
$.ajax({
type: "POST",
url: "Default.aspx/Hello",
data: "{}",
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function (data)
{
var Details = data.d;
if (Details != "")
{
var tree_design = '';
//sessionStorage.setItem("str_data", tree_design);
var currDepth = 0;
var totalData = $.map(Details, function (n, i) { return i; }).length;
var lastNodeIndex = parseInt(totalData) - 2;
//-----------------------------------------//
$.each(Details, function (index, item)
{
if (Details[parseInt(index) + 1] === undefined || Details[parseInt(index) + 1] == null || Details[parseInt(index) + 1] == "")
{
//alert("undefined");
}
else
{
//console.log(index);
//console.log(item.Name);
//console.log(item.Depth);
//alert(item.Depth);
//console.log(item.Lft);
//-----------------------------------//
// Level down? (or the first)
if (((parseInt(item.Depth) > parseInt(currDepth)) || parseInt(index) == 0) && parseInt(item.Depth) != 0) {
tree_design += 'children: [';
}
//----------------------------------//
// Level up?
if (parseInt(item.Depth) < parseInt(currDepth)) {
tree_design += '' + '}],'.repeat(parseInt(currDepth) - parseInt(item.Depth));
}
//----------------------------------//
if (parseInt(item.Depth) != 0)
{
tree_design += '{ connectors: { style: { stroke: "#000000" } },';
}
//---------Print Node Text-------------//
tree_design += 'text: { name: "' + item.Name + '" },HTMLclass: "blue",image: "images/no_member.png",';
//---------------------------------------//
//console.log(Details[parseInt(index) + 1].Depth);
var nextEleDepth = Details[parseInt(index) + 1].Depth;
//console.log(nextEleDepth);
// Check if there's chidren
if (parseInt(index) != lastNodeIndex && (parseInt(nextEleDepth) <= parseInt(item.Depth)))
{
tree_design += '},'; // If not, close the <li>
}
//---------------------------------------//
// Adjust current depth
currDepth = parseInt(item.Depth);
//---------------------------------------//
//console.log(parseInt(index)+"=="+lastNodeIndex);
// Are we finished?
if (parseInt(index) == lastNodeIndex) {
//console.log("Are we finished");
tree_design += '' + '}],'.repeat(currDepth);
}
//------------------------------------//
}
});
//------------------Draw Tree---------------------------//
//console.log(tree_design);
var chart_config = {
chart: {
container: "#basic-example",
nodeAlign: "BOTTOM",
connectors: {
type: "step"
},
node: {
HTMLclass: "nodeExample1"
}
},
nodeStructure: {
text: { name: "BOD" },HTMLclass: "blue",image: "images/no_member.png",children: [{ connectors: { style: { stroke: "#000000" } },text: { name: "Engr. Kawser Hasan (Managing Director)" },HTMLclass: "blue",image: "images/no_member.png",},{ connectors: { style: { stroke: "#000000" } },text: { name: "Engr. Md. Abdullah Al Baki (Director)" },HTMLclass: "blue",image: "images/no_member.png",}]
}
};
//console.log(tree_design);
new Treant(chart_config);
//-------------------------------------------------------//
}
}
});
});
Code-3: Server Scripting (ASP.Net C#)
using CDB.System.Common.Layout.Company;
using PRP.PPL.System.include.config.connection;
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.Services;
using System.Web.UI;
using System.Web.UI.WebControls;
namespace PPL.Data.HRD.Organogram.Tree1
{
public partial class _default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
[WebMethod]
public static Details[] Hello()
{
string sql;
db_ppl Connstring = new db_ppl();
sql = @"SELECT node.category_id, node.name, COUNT(parent.category_id) - 1 AS depth, node.lft, node.rgt
FROM nested_category AS node CROSS JOIN
nested_category AS parent
WHERE (node.lft BETWEEN parent.lft AND parent.rgt)
GROUP BY node.category_id, node.name, node.lft, node.rgt
ORDER BY node.lft";
List<Details> details_data = new List<Details>();
using (SqlConnection con = Connstring.getcon)
{
using (SqlCommand cmd = new SqlCommand(sql, con))
{
con.Open();
SqlDataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
Details col_data = new Details();
col_data.category_id = reader.GetInt32(0);
col_data.Name = reader.GetString(1);
col_data.Depth = reader.GetInt32(2);
col_data.Lft = reader.GetInt32(3);
col_data.Rgt = reader.GetInt32(4);
details_data.Add(col_data);
}
}
}
return details_data.ToArray();
}
//---------------For Details Data----------------//
public class Details
{
public Int32 category_id { get; set; }
public string Name { get; set; }
public Int32 Depth { get; set; }
public Int32 Lft { get; set; }
public Int32 Rgt { get; set; }
}
}
}
Code-4: asp.Net Part
<%@ Page Title="" Language="C#" MasterPageFile="~/CDB/System/Common/Layout/Master/Panel.Master" AutoEventWireup="true" CodeBehind="default.aspx.cs" Inherits="PPL.Data.HRD.Organogram.Tree1._default" %>
<asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server">
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
<style type="text/css">
#basic-example
{
overflow: unset !important;
}
</style>
<link href="../../../../../CDB/System/Assets/plugins/Organogram/Treant.css" rel="stylesheet" />
<link href="../../../../../CDB/System/Assets/plugins/Organogram/basic-example.css" rel="stylesheet" />
<script src="../../../../../CDB/System/Assets/plugins/Organogram/raphael.js"></script>
<script src="../../../../../CDB/System/Assets/plugins/Organogram/Treant.js"></script>
<script src="default.js"></script>
<div class="content-wrapper" style="padding:0;margin:0;">
<!-- Main content -->
<section class="content">
<div class="row">
<!-- left column -->
<div class="col-md-12">
<!-- general form elements -->
<div class="box box-primary">
<div class="box-header with-border">
<h3 class="box-title">Book Progress</h3>
</div>
<!-- /.box-header -->
<div class="box-body">
<div class="row">
<div class="col-sm-12">
<div class="form-group" style="overflow:scroll;">
<div class="chart" id="basic-example"></div>
</div>
</div>
</div>
<div class="box-footer">
</div>
</div>
</div>
<!-- /.box -->
</div>
</div>
</section>
</div>
</asp:Content>
Output: (screenshot of the rendered tree omitted)
A:
var tree_design='text: { name: "BOD" },HTMLclass: "blue",image: "images/no_member.png",children: [{ connectors: { style: { stroke: "#000000" } },text: { name: "Engr. Kawser Hasan (Managing Director)" },HTMLclass: "blue",image: "images/no_member.png",},{ connectors: { style: { stroke: "#000000" } },text: { name: "Engr. Md. Abdullah Al Baki (Director)" },HTMLclass: "blue",image: "images/no_member.png",}]';
nodeStructure: {
tree_design
}
When you put tree_design inside nodeStructure it is a string, but nodeStructure expects an object.
Either convert tree_design into an object or do it this way:
var tree_design='{ name: "BOD" },HTMLclass: "blue",image: "images/no_member.png",children: [{ connectors: { style: { stroke: "#000000" } },text: { name: "Engr. Kawser Hasan (Managing Director)" },HTMLclass: "blue",image: "images/no_member.png",},{ connectors: { style: { stroke: "#000000" } },text: { name: "Engr. Md. Abdullah Al Baki (Director)" },HTMLclass: "blue",image: "images/no_member.png",}]';
nodeStructure: {
text: tree_design
}
Q:
How to create a bootable OS X Snow Leopard USB drive?
How can I achieve this from either macOS Catalina or Windows 10? I can't seem to find a guide online. I've downloaded the Mac OS X Snow Leopard Install DVD from the Internet Archive. Keep in mind that this is an .iso file, not a .dmg.
I don't need a full step-by-step guide, just need to know how to format the drive (MBR/GPT?) and how to copy the .iso file to it.
A:
Creating a USB Flash Drive Installer from a Snow Leopard ISO File.
This was tested using an iMac (21.5-inch, Mid 2011) 2.5 GHz Intel Core i5 with MacOS High Sierra 10.13.6 installed. The procedure should be the same for Catalina. The flash drive needs to be at least 8 GB in size. The Snow Leopard ISO file was the same as discussed in this answer, which should be the same ISO you have linked to in your question.
Use the Finder application to mount the Snow Leopard ISO file.
Use the Disk Utility application to erase the flash drive, as shown below. I chose the default name Untitled. Finally, click on the Erase button.
Highlight the name Untitled, then click on the Restore button on the top of the Disk Utility application window. In the popup window, select restore from Mac OS X Install DVD, as shown below. Finally, click on the Restore button in the popup window.
Q:
iOS Blocks - defining UIView animation-like blocks
I'm trying to create a custom block like the UIView animation blocks. Basically I want to be able to pass either a method or any number of instructions and also provide a completion handler. My question is how would I specify the arguments part of the block definition?
A:
You can have a method declaration such as:
- (void) performAnimationWithCompletion:(void (^)(BOOL finished))completion {
[UIView animateWithDuration:0.5 animations:^{
// your own animation code
// ...
} completion:^(BOOL finished) {
// your own completion code
// if completion block defined, call it
if(completion){
completion(YES);
}
}];
}
Then, you can call it with:
[instance performAnimationWithCompletion:^(BOOL complete){
// define block code to be executed on completion
}];
Q:
Windows vs Windows Server
How does Windows Server differ from any other Windows, say Windows XP?
Does it have an inbuilt server of some kind? I can always start a server on any Windows machine, so I don't understand what the Windows Server OS is for.
A:
Windows Server is built on the same codebase as the workstation OS. 2003 is based on XP, 2008 on Vista, 2008 R2 on 7. There are still plenty of differences though.
Licensing - This is probably one of the larger differences. Consumer versions of Windows are only licensed for 5 connections. Professional versions of Windows workstations are licensed for 10 connections. You may be able to bypass the technical restrictions imposed by these connection limits, but you won't be able to do it in an ethical way. If you're running IIS on XP Pro, just hope your website is never popular enough to exceed its 10-connection limit.
Security - Windows Server has extra security built into it. Some of these things can be done with the workstation OSes, others can not.
High Availability - You aren't going to be able to cluster workstation versions of Windows to maintain high availability. Only Windows Server Enterprise and Datacenter give you this capability.
Additional Services - Services like DHCP server, DNS server, Active Directory, File Server Resource Manager, and HTTP print server are available in the server OS, not in the workstation OS. You could possibly add some of these services to a workstation OS through third parties but they likely won't be as easy to use, might not be as powerful, and could violate the workstation license
Support - If you have your business running on a workstation OS, don't expect Microsoft to support it when it fails. Server OSes do not come with support, but at least you can purchase support tickets for them. If you call them up wondering why your Samba install on XP is no longer authenticating, they will let you know it's an unsupported scenario and refuse to help.
I'm sure there's many many many more reasons. It could all probably be summed up like this though: If you're going to set up a server, use server grade products, not the same stuff your grandma uses.
A:
Jason Berg made excellent points, so I will try not to go in to so much detail on them.
The main differences come down to what they are fundamentally designed to do.
Windows XP, Windows Vista and Windows 7 out of the box are designed to be easy to use for a desktop environment and have many user oriented features.
On the other hand, Windows Server 2003, 2003 R2, 2008 and 2008 R2 are designed purely as servers - they are not designed to look (or sound) pretty, they are just designed so that you can configure them and leave them running uninterrupted - optimised purely for background tasks and services.
There is nothing stopping you from turning off many of the services inside desktop Windows in order to bring performance close to that of the Server edition, or vice versa - but it still is not 100% the same.
As for running services and applications on XP - you can always install a third-party DNS service or use Apache or other programs - they work very well... However, I am not sure of the licensing constraints of using this edition of Windows for public access - I am guessing it is not allowed, but more than this, if you then wanted to play a game or do some video editing - unless you start to mess around with CPU priorities, the server/service may suffer - Server OSes are just designed out of the box to serve, and they do it very well.
A:
One often misunderstood difference is that some versions of 32-bit Windows Server support PAE, allowing use of "all" 4GB or more physical memory. For example, this would allow three "2GB" processes to run "all in RAM" with 6GB of memory. (It would not allow one "6GB" process, because it's still a 32-bit OS. And the "scare quotes" are used because memory usage is not that simple.)
Such support is disabled in all non-Server versions, like XP, because of driver compatibility. Some drivers break with PAE, and consumers would complain. Those running Server would tend to be pickier and "know better".
This is now mostly moot since workstation/consumer versions of 64-bit Windows are common with good driver support, other reasons to require 32-bit Windows are on the wane, and the latest Windows Server (2008 R2) is 64-bit-only.
Q:
Login page - Goes straight to members page and redirects to login page
I am having a minor problem with the login page: clicking the login button goes straight to the members page and then redirects back to the login page. Please take a look at my website: http://www.nigelsham.co.uk > click "login".
If you enter the login details:
username: 007
password: Password123!
It shows the users data in members page.
Here is the code:
Login page - http://pastebin.com/8XJepv7d
Members page PHP code:
<?php
session_start();
if (!$_SESSION["myusername"]) {
$ogmeta = '<meta http-equiv="refresh" content="0;url=http://nigelsham.co.uk/login.php">';
echo $ogmeta;
} else{
$user = $_SESSION['myusername'];
$pass = $_SESSION['mypassword'];
$email = $_SESSION['myemail'];
}
?>
Any suggestions?
I have spent a lot of time fixing the SESSIONS, do you think SESSIONS could be part of the problem? Or could it be the $ogmeta variable?
A:
It seems to me that the problem is that the login button should go directly to the login page, right?
So change the href of the login button to redirect to login.php.
Q:
Adding together System.Drawing.Points
I have come across the following code that uses the constructor of the System.Drawing.Size class to add two System.Drawing.Point objects.
// System.Drawing.Point mpWF contains window-based mouse coordinates
// extracted from LParam of WM_MOUSEMOVE message.
// Get screen origin coordinates for WPF window by passing in a null Point.
System.Windows.Point originWpf = _window.PointToScreen(new System.Windows.Point());
// Convert WPF doubles to WinForms ints.
System.Drawing.Point originWF = new System.Drawing.Point(Convert.ToInt32(originWpf.X),
Convert.ToInt32(originWpf.Y));
// Add WPF window origin to the mousepoint to get screen coordinates.
mpWF = originWF + new Size(mpWF);
I consider the use of the + new Size(mpWF) in the last statement a hack because when I was reading the above code, it slowed me down as I did not immediately understand what was going on.
I tried deconstructing that last statement as follows:
System.Drawing.Point tempWF = (System.Drawing.Point)new Size(mpWF);
mpWF = originWF + tempWF; // Error: Addition of two Points not allowed.
But it didn't work as addition is not defined for two System.Drawing.Point objects. Is there any other way to perform addition on two Point objects that is more intuitive than the original code?
A:
Create an Extension Method for it:
public static class ExtensionMethods
{
public static Point Add(this Point operand1, Point operand2)
{
return new Point(operand1.X + operand2.X, operand1.Y + operand2.Y);
}
}
Usage:
var p1 = new Point(1, 1);
var p2 = new Point(2, 2);
var result = p1.Add(p2);
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get value Unity3D InputField GUI
So how can I get the value of the text that is inside the Input field (textbox) of Unity3D?
Here is what I tried in c#
public void UserInput (string UserInput)
{
UserName = UserInput;
}
public void PassInput (string PassInput)
{
Password = PassInput;
}
In Unity I put this. http://gyazo.com/b90e2d05b806b0fde90122e7a2302463
Please Help. Thank You!
A:
It doesn't work because you're not actually calling the InputField. The correct way to get that value would be
public void UserInput(InputField userField)
{
string username = userField.text;
}
public void PassInput (InputField passField)
{
string password = passField.text;
}
Of course, like this you have to pass the InputField as a parameter; if you only need this method to assign a value to a string, you could just pass the InputField.text as a parameter.
To make your setup work, you have to assign "Scripts" to an object and drag that object in the editor (where you dragged Scripts). You also have to modify slightly your functions, like this:
using UnityEngine.UI;
[SerializeField] private InputField _userField;
[SerializeField] private InputField _passField;
// Other code.
public void UserInput()
{
string username = _userField.text;
// Code that uses the username variable.
}
public void PassInput ()
{
string password = _passField.text;
// Code that uses the password variable.
}
This way you can actually drag the InputFields from the scene hierarchy to the "Scripts" object directly in the editor.
Hope this helps!
| {
"pile_set_name": "StackExchange"
} |
Q:
Can I use Twilio Video JS SDK 2.0 in jQuery?
The official documentation does not have any example of implementation in jQuery or plain JavaScript. We do not use Node for our projects unless specifically demanded, which is rare.
Do we really have to limit our projects to v1? Or can someone point me to a related tutorial?
A:
Twilio developer evangelist here.
If you're not using Node to install dependencies and build a client bundle then you can still use Twilio Video SDK version 2. You can do so by loading the SDK from the CDN here:
<script src="//media.twiliocdn.com/sdk/js/video/releases/2.0.0-beta11/twilio-video.min.js"></script>
When you include that on the page you will then find the global Twilio object. You can use all the video functions from the Video namespace.
const Video = Twilio.Video;
Video.connect(token);
You can see examples of this in the Video SDK documentation.
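For a slightly fuller picture, here is a minimal sketch of joining a room from that global object (the room name 'my-room' is just a placeholder, and token is an Access Token generated server-side):

const Video = Twilio.Video;

Video.connect(token, { name: 'my-room' }).then(function(room) {
  console.log('Connected to room: ' + room.name);
  // React to other participants joining.
  room.on('participantConnected', function(participant) {
    console.log(participant.identity + ' joined');
  });
});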
Let me know if this helps at all.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is this incorrect? Can I use the match from v-for inside the v-bind on the same element?
I am trying to render some HTML for each match in matches, however, I'm not quite sure if <match v-for='match in matches' v-bind:match='match'></match> is actually correct.
More specifically, I am not sure if I can use the v-bind:match='match' on the same element as the loop v-for='match in matches'. Does the information contained in match actually get sent as a prop to the component?
A:
Yes it does.
This is a working Example:
Vue.component('match',
{
props :['match'],
template : `<div><span>{{match.matchName}}</span></div>`
})
var mapp = new Vue({
el: "#app",
data: {
matches: [
{ matchName: "First Match"},
{ matchName: "Second Match"},
{ matchName: "Yet another Match"}
]
}
})
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.9/vue.js"></script>
<div id="app">
<match v-for='match in matches' v-bind:match='match'></match>
</div>
| {
"pile_set_name": "StackExchange"
} |
Q:
Perl + nginx 403 errors again
It seems a never-ending battle getting Perl and nginx to play nicely :( I've set up a new dev server. I won't bore you with all of the details, but suffice to say I have installed (via apt-get):
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install nginx
sudo apt-get install php5-cli php5-cgi spawn-fcgi php-pear
sudo apt-get install mysql-server php5-mysql
sudo apt-get install fcgiwrap
I have then configured my site, using:
server {
listen 80;
server_name site.net.net www.site.net.net;
access_log /srv/www/site.net.net/logs/access.log;
error_log /srv/www/site.net.net/logs/error.log;
root /srv/www/site.net.net/www;
location / {
index index.html index.htm;
}
location ~ \.php$ {
try_files $uri =404;
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /srv/www/site.net.net/www$fastcgi_script_name;
}
location ~ \.cgi$ {
try_files $uri =404;
gzip off;
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/var/run/fcgiwrap.socket;
fastcgi_index index.cgi;
fastcgi_param SCRIPT_FILENAME /srv/www/site.net.net/www/cgi-bin/$fastcgi_script_name;
}
}
I've sym-linked the config files into sites-enabled, so that it's visible to nginx. I then restarted nginx, and tried:
index.html - works fine
index.php - works fine
index.cgi - 403 error
I managed to fumble my way through it last time, but I can't figure out what I did differently (I know it was a real pig to get configured the first time around).
The only thing showing in the error_log, is:
2015/07/31 15:52:25 [error] 10434#0: *7 open() "/srv/www/site.net/www/favicon.ico" failed (2: No such file or directory), client: 81.174.134.xx, server: sitenet, request: "GET /favicon.ico HTTP/1.1", host: "site.net"
So not much help :/
Any suggestions from the experts?
UPDATE:
If I update the error log to "debug" level, i.e:
error_log /srv/www/steampunkjunkiesdev.net/logs/error.log debug;
...below is what is outputted (from just 1 request). Not sure if there is anything helpful in there though?
Server: nginx/1.6.2
Date: Fri, 31 Jul 2015 15:11:49 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Content-Encoding: gzip
2015/07/31 16:11:49 [debug] 3557#0: *1 write new buf t:1 f:0 0000000000F049D8, pos 0000000000F049D8, size: 185 file: 0, size: 0
2015/07/31 16:11:49 [debug] 3557#0: *1 http write filter: l:0 f:0 s:185
2015/07/31 16:11:49 [debug] 3557#0: *1 http output filter "/favicon.ico?"
2015/07/31 16:11:49 [debug] 3557#0: *1 http copy filter: "/favicon.ico?"
2015/07/31 16:11:49 [debug] 3557#0: *1 image filter
2015/07/31 16:11:49 [debug] 3557#0: *1 xslt filter body
2015/07/31 16:11:49 [debug] 3557#0: *1 http postpone filter "/favicon.ico?" 0000000000F04AF8
2015/07/31 16:11:49 [debug] 3557#0: *1 http gzip filter
2015/07/31 16:11:49 [debug] 3557#0: *1 malloc: 0000000000EED690:12288
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip alloc: n:1 s:5936 a:8192 p:0000000000EED690
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip alloc: n:512 s:2 a:1024 p:0000000000EEF690
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip alloc: n:512 s:2 a:1024 p:0000000000EEFA90
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip alloc: n:512 s:2 a:1024 p:0000000000EEFE90
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip alloc: n:256 s:4 a:1024 p:0000000000EF0290
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip in: 0000000000EF6EE8
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip in_buf:0000000000F04AA8 ni:00000000006DC600 ai:116
2015/07/31 16:11:49 [debug] 3557#0: *1 malloc: 0000000000EF06A0:4096
2015/07/31 16:11:49 [debug] 3557#0: *1 deflate in: ni:00000000006DC600 no:0000000000EF06A0 ai:116 ao:4096 fl:0 redo:0
2015/07/31 16:11:49 [debug] 3557#0: *1 deflate out: ni:00000000006DC674 no:0000000000EF06A0 ai:0 ao:4096 rc:0
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip in_buf:0000000000F04AA8 pos:00000000006DC600
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip in: 0000000000EF6EF8
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip in_buf:0000000000EF6E20 ni:00000000006DCDA0 ai:52
2015/07/31 16:11:49 [debug] 3557#0: *1 deflate in: ni:00000000006DCDA0 no:0000000000EF06A0 ai:52 ao:4096 fl:4 redo:0
2015/07/31 16:11:49 [debug] 3557#0: *1 deflate out: ni:00000000006DCDD4 no:0000000000EF0711 ai:0 ao:3983 rc:1
2015/07/31 16:11:49 [debug] 3557#0: *1 gzip in_buf:0000000000EF6E20 pos:00000000006DCDA0
2015/07/31 16:11:49 [debug] 3557#0: *1 free: 0000000000EED690
2015/07/31 16:11:49 [debug] 3557#0: *1 http chunk: 10
2015/07/31 16:11:49 [debug] 3557#0: *1 http chunk: 121
2015/07/31 16:11:49 [debug] 3557#0: *1 write old buf t:1 f:0 0000000000F049D8, pos 0000000000F049D8, size: 185 file: 0, size: 0
2015/07/31 16:11:49 [debug] 3557#0: *1 write new buf t:1 f:0 0000000000EF7058, pos 0000000000EF7058, size: 4 file: 0, size: 0
2015/07/31 16:11:49 [debug] 3557#0: *1 write new buf t:0 f:0 0000000000000000, pos 00000000006E0240, size: 10 file: 0, size: 0
2015/07/31 16:11:49 [debug] 3557#0: *1 write new buf t:1 f:0 0000000000EF06A0, pos 0000000000EF06A0, size: 121 file: 0, size: 0
2015/07/31 16:11:49 [debug] 3557#0: *1 write new buf t:0 f:0 0000000000000000, pos 00000000004B2EB8, size: 7 file: 0, size: 0
2015/07/31 16:11:49 [debug] 3557#0: *1 http write filter: l:1 f:1 s:327
2015/07/31 16:11:49 [debug] 3557#0: *1 http write filter limit 0
2015/07/31 16:11:49 [debug] 3557#0: *1 writev: 327
2015/07/31 16:11:49 [debug] 3557#0: *1 http write filter 0000000000000000
2015/07/31 16:11:49 [debug] 3557#0: *1 http copy filter: 0 "/favicon.ico?"
2015/07/31 16:11:49 [debug] 3557#0: *1 http finalize request: 0, "/favicon.ico?" a:1, c:1
2015/07/31 16:11:49 [debug] 3557#0: *1 set http keepalive handler
2015/07/31 16:11:49 [debug] 3557#0: *1 http close request
2015/07/31 16:11:49 [debug] 3557#0: *1 http log handler
2015/07/31 16:11:49 [debug] 3557#0: *1 free: 0000000000EF06A0
2015/07/31 16:11:49 [debug] 3557#0: *1 free: 0000000000000000
2015/07/31 16:11:49 [debug] 3557#0: *1 free: 0000000000F03B20, unused: 8
2015/07/31 16:11:49 [debug] 3557#0: *1 free: 0000000000EF6A40, unused: 2156
2015/07/31 16:11:49 [debug] 3557#0: *1 free: 0000000000F09E70
2015/07/31 16:11:49 [debug] 3557#0: *1 hc free: 0000000000000000 0
2015/07/31 16:11:49 [debug] 3557#0: *1 hc busy: 0000000000000000 0
2015/07/31 16:11:49 [debug] 3557#0: *1 reusable connection: 1
2015/07/31 16:11:49 [debug] 3557#0: *1 event timer add: 14: 65000:1438355574073
2015/07/31 16:11:49 [debug] 3557#0: *1 post event 0000000000F4A118
2015/07/31 16:11:49 [debug] 3557#0: *1 delete posted event 0000000000F4A118
2015/07/31 16:11:49 [debug] 3557#0: *1 http keepalive handler
2015/07/31 16:11:49 [debug] 3557#0: *1 malloc: 0000000000F09E70:1024
2015/07/31 16:11:49 [debug] 3557#0: *1 recv: fd:14 -1 of 1024
2015/07/31 16:11:49 [debug] 3557#0: *1 recv() not ready (11: Resource temporarily unavailable)
2015/07/31 16:11:49 [debug] 3557#0: *1 free: 0000000000F09E70
A:
Eugh, I feel like a total idiot now! In my config, I had:
location ~ \.cgi$ {
gzip off;
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/var/run/fcgiwrap.socket;
fastcgi_index index.cgi;
fastcgi_param SCRIPT_FILENAME /srv/www/site.net/www/cgi-bin$fastcgi_script_name;
}
However, that SCRIPT_FILENAME was wrong... should be:
location ~ \.cgi$ {
gzip off;
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/var/run/fcgiwrap.socket;
fastcgi_index index.cgi;
fastcgi_param SCRIPT_FILENAME /srv/www/site.net/www/$fastcgi_script_name;
}
(notice that /cgi-bin has been removed from the second-to-last line - $fastcgi_script_name already contains the /cgi-bin/ prefix from the request URI, so the old config doubled it up). So simple when you can see it! Just thought I'd share it, with the hope it will help someone else out at some point in the future. Duh!
| {
"pile_set_name": "StackExchange"
} |
Q:
MovieClips in Array display null and aren't showing up with stage.addChild(Array[i])
I am new to ActionScript 3. I need to know why I keep getting the error "Parameter child must be non-null", and why my code won't display 5 enemyBlock objects on the stage but only just one.
any tips and help will be much appreciated. thanks in advance.
Returns:
TypeError: Error #2007: Parameter child must be non-null.
at flash.display::DisplayObjectContainer/addChild()
at flash.display::Stage/addChild()
at BlockDrop_fla::MainTimeline/EnemyBlockPos()
at BlockDrop_fla::MainTimeline/frame2()
// declare varibles
var isEnemyMoving:Boolean = false;
var enemyArray:Array;
var enemyBlock:MovieClip = new EnemyBlock(); // assign EnemyBlock class to enemyBlock
var enemyBlockMC:MovieClip;
var count:int = 5;
var mapWidth:Number = 800;
var mapHeight:Number = 600;
function EnemyBlockPos() :void {
// assign new MovieClip not null
enemyBlockMC = new MovieClip;
enemyArray = new Array();
for(var i=1; i<= count; i++){
// add class to MC
enemyBlockMC.addChild(enemyBlock);
// randomize position
enemyBlock.x = Math.round(Math.random()*mapWidth);
enemyBlock.y = Math.round(Math.random()*mapHeight);
// set motion
enemyBlock.movement = 5;
// add MC to array
enemyArray.push(enemyBlockMC);
}
for (var w = 1; w <= enemyArray.length; w++) {
addChild(enemyArray[w]);
}
} // endOf EnemyBlockPos
A:
Ooh dude I think I have it.
Your approach is fine, but I think I see where the error occurs. As far as I can see, each time you loop you add the one enemyBlock to the one enemyBlockMC - then you add that same enemyBlockMC to the array (e.g.) 5 times.
Therefore you'll have 5 of the same references to the enemyBlockMC in enemyArray.
- So you'll be getting the same enemyBlockMC on each iteration of your second for loop.
If you intended to have 5 different enemyBlock's on the stage you need to do something like this:
for(var i:int =0; i<= count - 1; i++){
// add class to MC
/*
Move this line of code into the for loop, creating a new version every time.
*/
enemyBlockMC = new MovieClip;
/*
Also move this into your loop, ensuring you make a new EnemyBlock() every time
*/
var enemyBlock:MovieClip = new EnemyBlock(); // assign EnemyBlock class to enemyBlock
enemyBlockMC.addChild(enemyBlock);
// randomize position
enemyBlock.x = Math.round(Math.random()*mapWidth);
enemyBlock.y = Math.round(Math.random()*mapHeight);
// set motion
enemyBlock.movement = 5;
// add MC to array
enemyArray.push(enemyBlockMC);
}
That way, every time you push enemyBlockMC into your enemyArray, it is a new version of enemyBlock wrapped inside a MovieClip.
With that said, you'll have n enemyBlocks which are all new instances. Therefore, when you addChild(enemyArray[w]) in your second for loop, you'll add a new instance every time.
In essence (to clarify): enemyArray[0] is an entirely different object to enemyArray[2].
One more thing - the Error #2007 itself most likely comes from your second loop: for (var w = 1; w <= enemyArray.length; w++) runs one past the end of the array (indices are 0-based), so enemyArray[enemyArray.length] is undefined and addChild() receives null. Use for (var w:int = 0; w < enemyArray.length; w++) instead.
Hope it makes sense. - If you need me to explain it again, just ask.
Is that what you were going for?
Sorry about the code formatting -- o_O
| {
"pile_set_name": "StackExchange"
} |
Q:
Checking if the username is available with AJAX
I'm using a basic registration form with AJAX, but the form is not connecting to the database. I'm obviously overlooking something.
So here's the field I want to validate.
Username:<input type="text" name="user" id="user" maxlength="30">
<span id="msgbox" style="display:none"/></input>
Then I use jQuery, here's the code:
$(document).ready(function() {
$("#user").blur(function() {
//remove all the class add the messagebox classes and start fading
$("#msgbox").removeClass().addClass('messagebox').text('Checking...').fadeIn("slow");
//check the username exists or not from ajax
$.post("user_availability.php",{ user_name:$(this).val() },
function(data) {
if(data=='no') { //if username not avaiable
$("#msgbox").fadeTo(200,0.1,function() {//start fading the messagebox
//add message and change the class of the box and start fading
$(this).html('This User name Already exists').addClass('messageboxerror').fadeTo(900,1);
});
} else {
$("#msgbox").fadeTo(200,0.1,function() { //start fading the messagebox
//add message and change the class of the box and start fading
$(this).html('Username available to register').addClass('messageboxok').fadeTo(900,1);
});
} // else
} // function
); // $.post
}); // blur
}); // ready
And I have this code, user_availability.php:
mysql_connect('localhost', $user, $password)
    or die('Error connecting to server: ' . mysql_error());
$db_selected = mysql_select_db($database);
// Escape the input before using it in the query.
$user_name = mysql_real_escape_string($_POST['user_name']);
$sql = "select * from members where username='$user_name'";
$result = mysql_query($sql);
while($row = mysql_fetch_assoc($result))
{
$existing_users[] = $row['username'];
}
if (in_array($user_name, $existing_users))
{
echo "no"; //user name is not availble
}
else
{
echo "yes"; //user name is available
}
I get no database errors. The form will be more substantial, but I can't get it to work with this field. Any suggestions?
A:
Just a nitpick on the select statement. Since you only need to get how many rows exist for the entered username you don't need to do a "select *" as depending on the number of columns in your users table you could be returning quite a bit of data. I would revise your query to be like so:
$sql = "select COUNT(username) from members where username=$user_name";
Then you check to make sure that the result of the query equals zero. If it does then the username is available.
I know that the above wasn't an answer to your question but just thought I would point it out since this looks to be a function that is going to get called a lot depending on the traffic on your registration form and the creativity of your visitors.
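A minimal sketch of the availability check using that query (sticking to the same old mysql_* API as the question):

$sql = "select COUNT(username) from members where username='$user_name'";
$result = mysql_query($sql);
$row = mysql_fetch_row($result);
if ((int)$row[0] === 0) {
    echo "yes"; // username is available
} else {
    echo "no";  // username already exists
}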
| {
"pile_set_name": "StackExchange"
} |
Q:
Why, when I enter less than 201, does it also run the last if statement? I don't understand
I do not understand why this code is not working properly: when I enter less than 200, the last if statement is also executed. Can anyone tell me what the problem is?
That is the main problem with this code: it is not working properly.
public class main {
public static void main(String args[]){
Scanner input=new Scanner(System.in);
double unit ;
double extra;
double total_unit;
System.out.println("enter total unit");
unit=input.nextInt();
if(unit >=1 && unit <=200){
unit=unit *8;
System.out.println(" bill of 200 units is "+ unit);
}
if(unit >=201 && unit <=300){
extra = unit - 200;
extra= extra * 10;
total_unit = 200 * 8 + extra;
System.out.println("Total bill is: " + total_unit);
}
if(unit >=301 && unit<=400){
extra=unit-300;
extra=extra *15;
total_unit=200*8 +100*10+ extra;
System.out.println("total bill of more than 300 units is "+total_unit);
}
if(unit >=401 && unit<=500){
extra=unit-400;
extra=extra*20;
total_unit=200*8+ 100*10 + 100*15 + extra ;
System.out.println("total bill between 401 to 500 units" + total_unit);
}
if(unit>501){
extra=unit-500;
System.out.println("unit consumed " + extra + " that above");
extra=extra *25;
System.out.println("------------unit above 500 bill-------- \n" +extra);
total_unit=200*8 + 100*10 +100*15 +100*20 + extra;
System.out.println("---------total bill----------\n " + total_unit);
}
}
}
A:
I have written the explanation of why it is executing the second condition as comments in the code - read them. If you have any questions, post a comment.
// Here you're reading the unit value, e.g. 190
unit=input.nextInt();
if (unit >= 1 && unit <= 200) {
// Here you're changing the unit value to 190 * 8, so unit is now 1520
unit = unit * 8;
System.out.println(" bill of 200 units is " + unit);
}
// Your entered value was 190, but unit has been changed to 1520, so this condition is true - that is the reason the second block also executes.
if (unit > 501) {
extra = unit - 500;
System.out.println("unit consumed " + extra + " that above");
extra = extra * 25;
System.out.println("------------unit above 500 bill-------- \n"
+ extra);
total_unit = 200 * 8 + 100 * 10 + 100 * 15 + 100 * 20 + extra;
System.out
.println("---------total bill----------\n " + total_unit);
}
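A minimal sketch of a fix: compute the bill into total_unit instead of overwriting unit, and use else-if so only one bracket can run:

double total_unit = 0;
if (unit >= 1 && unit <= 200) {
    total_unit = unit * 8;
} else if (unit >= 201 && unit <= 300) {
    total_unit = 200 * 8 + (unit - 200) * 10;
} else if (unit >= 301 && unit <= 400) {
    total_unit = 200 * 8 + 100 * 10 + (unit - 300) * 15;
}
// ...and so on for the remaining brackets.
System.out.println("Total bill is: " + total_unit);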
| {
"pile_set_name": "StackExchange"
} |
Q:
How to reset permissions in drupal 7 manually through mysql?
I have developed a site in Drupal 7 and forgot its admin password. Now when I request a new password it sends me a link to a "one time" login screen. When I click on the "LOGIN" button it takes me to the following page, where it is stated that: "You are not authorized to access this page."
http://somedomain.com/?q=user/1/edit&pass-reset-token=K33tsqRwVOhtt0om0H9hGa4TfUGeyQCIbvCA7qV-2tk
It's my first site in Drupal, and I believe that my modifying permissions for pages is the basic cause of the problem. Is there a way I can reset permissions, without losing any data, through MySQL or something like this?
A:
You can reset your password without being logged in to Drupal but not from MySQL alone. Check out the instructions in the documentation at https://drupal.org/node/1023428
| {
"pile_set_name": "StackExchange"
} |
Q:
Can i use Babel Require Hook in production for ES6?
I have a node app that uses ES6. I'm using Babel for transpiling.
As far as I understand, I can either use a build system like Gulp, for example, or use the require hook.
The require hook approach seems appealing to me since I will be able to keep my source files in ES6 but still execute them using node server.js without the use of some build system.
This seems extremely useful for development at least; my main concern is about the use of this approach in production.
Does it have a performance penalty every time a user makes a request?
How does this work exactly?
For context, I'm using it with an Express app.
A:
Does it have a performance penalty every time a user makes a request?
No, not in the sense that it would have to re-transpile the code on every request.
How does this work exactly?
The require hook basically hi-jacks all subsequent calls to require(), and performs the ES6->ES5 transpiling prior to that module being executed.
I've been using the babel require hook in production for a while now (with moderate traffic levels -- peak traffic in the mid-hundreds req/sec range), and it's never been an issue.
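For reference, the hook is registered once at the entry point, before any ES6 module is required. A minimal sketch - note that the exact package name depends on your Babel version (babel/register for Babel 5, babel-core/register for Babel 6):

// server.js (entry point, written in ES5)
require('babel/register'); // hi-jacks subsequent require() calls
require('./app');          // app.js and everything it requires can use ES6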
| {
"pile_set_name": "StackExchange"
} |
Q:
Hidden Markov Models - Identifying Phonemes
I'm developing a project that identifies Phonemes to be able to identify whether someone is saying either "Yes" or "No".
So far in the project, I have used Zero-crossings to identify what the person is saying, this works really well and seems simple enough to understand. The project, however, needs a few enhancements and has to be developed using a Hidden Markov Model.
My question is this:
I want to develop a Hidden Markov Model, without erasing the work that I have already completed. I.e. I strip the data that do not warrant consideration by counting the number of zero-crossings as well as the summation of the blocks.
I do not understand what data I would need to train the HMM in order to be able to identify these Phonemes. E.g.
With Zero-crossings I have identified that:
Yes - Zero-crossings start low and then the value increases
No - Zero-crossings start low and then do not increase with value.
Could I train my HMM algorithm so that it interprets these values?
Or could anyone suggest a method of which I can train the HMM to be able to identify the word that is inputted in the sample?
Hope someone can help :)!
A:
Could I train my HMM algorithm so that it interprets these values?
Yes, definitely
Or could anyone suggest a method of which I can train the HMM to be able to identify the word that is inputted in the sample?
You just need to put zero crossing rate in a feature file together with MFCC features like 14th feature and use any standard HMM training toolkit like CMUSphinx or HTK to train the HMM and decode using it. For more information see
http://cmusphinx.sourceforge.net/wiki/mfcformat
or
http://speech-research.com/htkSearch/index.php?ID=297039
http://speech-research.com/SRTxt2User/index.html
| {
"pile_set_name": "StackExchange"
} |
Q:
how to pass a Scala function to another function as a function?
I'm just not understanding how to build a function from previous functions.
For example, the math.min() function takes the minimum of two numbers. What if I wanted to create a function min3Z(a: Int, b: Int, c: Int)? How would I build this from min?
A:
Your question is unclear. Are you simply trying to utilize math.min() in min3Z()? Because in that case, you can do:
def min3Z(a: Int, b: Int, c: Int) = math.min(a, math.min(b, c))
If you want to pass in an arbitrary function (for example, making it so you can specify max or min), you can specify a function as a parameter:
def min3Z(a: Int, b: Int, c: Int, f: (Int, Int) => Int) = f(a, f(b, c))
The syntax (Int, Int) => Int in Scala is the type for a function that takes two parameters of type Int and returns a result of type Int. math.min() and math.max() both fit this criteria.
This allows you to call min3Z as min3Z(1, 2, 3, math.min) or min3Z(1, 2, 3, math.max). It's even possible to make a version of min3Z() that takes an arbitrary number of Ints and an arbitrary (Int, Int) => Int function, but that's beyond the scope of your question.
| {
"pile_set_name": "StackExchange"
} |
Q:
ER diagram (DER) for student, subject and course
A student can only have one course.
A course has several subjects.
The student can take several subjects of the course.
My only doubt is: how would this look in the ER diagram?
Would it really end up this way?
Since a student has only one course, would the table be disciplinaAlunoCurso? Or just disciplinaAluno?
And what if the student changes course later - would the best option then be this: putting the three keys, one from each table, as foreign keys in the linking table?
A:
I will give an answer to point you in a direction. If this is not what you need, things get complicated, because we cannot keep debating here until we reach what you want - it would stop being a question and an answer.
You did not specify the problem well. Of course, once you specify it, the model is practically done. But without a specification it is hard to help you. I will have to speculate. Mainly because you said you do not want anything more complex. To do everything correctly, something more complex is necessary. But if you accept that not everything will be "correct", what is the acceptable limit? Only you know.
In fact, this "correct" is very much in quotes, because it is hard to say there is a right or wrong in modeling. What is correct from the academic point of view is not always very practical. Often, to optimize the database, you have to do the "wrong" thing.
When you want simple solutions that just work, doing everything by the book can be overkill.
But only you know the tipping point.
I will give some tips on what I would do, looking at it superficially and speculating a little.
Considerations
The cursoID field on aluno can even be used, but if you want to do it right you should have a table that ties the student to the course. It may be overkill to create a table just for this, but it makes more sense (this is what I said about not knowing the limit). Having a course is not an intrinsic characteristic of a student; a course is something transitory, something external and independent of the student.
For this there is the 1:1 relationship. Curiously, people think that if it is 1:1 they can always skip creating a table and merge everything. But if you think it is overkill and it is fine to keep it in the same table, I can agree. You decide. And who can say it will cause a problem?
I imagine the student can have zero or one course, not exactly one. So that is something to think about. Of course, null exists for cases like this. There is a lot of controversy over whether or not to use nulls for this (eliminating a relationship).
Does a subject exist only for that course? Part of the modeling says yes, part says no.
No matter how you relate them, you still need to link the disciplina table to curso in some way.
If the subject only exists for that course, it is a 1:1 relationship, so you can choose whether to make a table tying the subject to the course or to use a field as you did. Which is more correct is hard to say. Can a subject exist without a course?
If the subject is tied to only one course, then if the student changes course he is no longer taking that subject.
But if the subject can exist in several courses, there are two things to consider: there needs to be a link between courses and subjects; and then it does not matter (?!? it depends) in which course the student is taking it - he takes the subject.
And then the cursoId field of disciplinaAlunoCurso would not be necessary.
You raised the possibility of the student switching courses while still continuing to take the subject, so the subject cannot simply belong to one course; rather, several courses can have a relationship with the subjects.
Later you will use the available information about all the subjects he completed and all those required for the course to know whether he finished the course. But that is already going beyond modeling.
Since classes (turmas) were discussed so much in the comments, and it is the correct approach, consider creating a table of classes and relating the student and the subject to the class. This does not solve every possible situation in a school, but it already solves an important one. And then, obviously, the link will be student X subject X class. Possibly course as well. I am a living example that this is necessary, because I changed shifts and had to take several subjects of my course in classes of other courses (where I sometimes discovered that the subject was not quite the same).
Conclusion
If you analyze everything that can happen - consider that a change of course probably needs to be properly recorded, keeping a history of what the student did, that different classes may exist, and that there is a series of other elements and factors involved - it will get quite complex. If you leave all this aside, ultimately you could have a single table that takes care of everything.
But I do not guarantee any of this; it may need to be different. It is easier to model real situations. Or at least well-specified hypothetical situations.
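To make the junction-table idea concrete, here is a minimal SQL sketch of the student X subject X class link discussed above (all table and column names are just illustrative):

CREATE TABLE matricula (
    alunoId      INT NOT NULL,
    disciplinaId INT NOT NULL,
    turmaId      INT NOT NULL,
    PRIMARY KEY (alunoId, disciplinaId, turmaId),
    FOREIGN KEY (alunoId)      REFERENCES aluno (alunoId),
    FOREIGN KEY (disciplinaId) REFERENCES disciplina (disciplinaId),
    FOREIGN KEY (turmaId)      REFERENCES turma (turmaId)
);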
| {
"pile_set_name": "StackExchange"
} |
Q:
Difference between multiple values and plain tuples in Racket?
What is the difference between values and list or cons in Racket or Scheme? When is it better to use one over the other? For example, what would be the disadvantage if quotient/remainder returns (cons _ _) rather than (values _ _)?
A:
Back in 2002 George Caswell asked that question in comp.lang.scheme.
The ensuing thread is long, but has many insights. The discussion
reveals that opinions are divided.
https://groups.google.com/d/msg/comp.lang.scheme/ruhDvI9utVc/786ztruIUNYJ
My answer back then:
> What are the motivations behind Scheme's multiple return values feature?
> Is it meant to reflect the difference in intent, or is there a
> runtime-practical reason?
I imagine the reason being this.
Let's say that f is called by g, and g needs several values from f.
Without multiple value return, f packs the values in a list (or vector),
which is passed to g. g then immediately unpacks the list.
With multiple values, the values are just pushed on the stack. Thus no
packing and unpacking is done.
Whether this should be called an optimization hack or not, is up to you.
--
Jens Axel Søgaard
We don't need no side-effecting
We don't need no flow control
No global variables for execution
Hey! did you leave the args alone?

We don't need no allocation
We don't need no special-nodes
No dark bit-flipping for debugging
Hey! did you leave those bits alone?

(Chorus) -- "Another Glitch in the Call", a la Pink Floyd
A:
They are semantically the same in Scheme and Racket. In both you need to know what the return looks like to use it.
values is connected to call-with-values, and special forms like let-values are just syntactic sugar over this procedure call. The user needs to know the form of the result to use call-with-values to make use of the result. A return is often done on a stack and a call is also on a stack. The only reason to favor values in Scheme would be that there is no overhead between the producer return and the consumer call.
With cons (or list) the user needs to know what the data structure of the return looks like. As with values, you can use apply instead of call-with-values to do the same thing. As a replacement for let-values (and more) it's easy to make a destructuring-bind macro.
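To make the parallel concrete, a minimal sketch of consuming a two-value result both ways in Racket:

;; With multiple values:
(call-with-values
 (lambda () (quotient/remainder 17 5))
 (lambda (q r) (list q r)))                 ; => '(3 2)

;; Or, more conveniently:
(let-values ([(q r) (quotient/remainder 17 5)])
  (list q r))                               ; => '(3 2)

;; With a list-shaped result, apply plays the same role:
(apply (lambda (q r) (list q r)) (list 3 2)) ; => '(3 2)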
In Common Lisp it's quite different. You can use values always if you have more information to give and the user can still use it as a normal procedure if she only wants to use the first value. Thus for CL you wouldn't need to supply quotient as a variant since quotient/remainder would work just as well. Only when you use special forms or procedures that take multiple values will the fact that the procedure does return more values work the same way as with Scheme. This makes values a better choice in CL than Scheme since you get away with writing one instead of more procedures.
In CL you can access a hash like this:
(gethash 'key *hash* 't)
; ==> T; NIL
If you don't use the second value returned you don't know if T was the default value or the actual value found. Here you see the second value indicating the key was not found in the hash. Often you don't use that value if you know there are only numbers the default value would already be an indication that the key was not found. In Racket:
(hash-ref hash 'key #t)
; ==> #t
In Racket, failure-result can be a thunk, so you get by, but I bet it would return multiple values instead if values did work like in CL. I assume there is more housekeeping with the CL version, and Scheme, being a minimalistic language, perhaps didn't want to give the implementors the extra work.
A:
Edit: Missed Alexis' comment on the same topic before posting this
One oft-overlooked practical advantage of using multiple return values over lists is that Racket's compose "just works" with functions that return multiple values:
(define (hello-goodbye name)
(values (format "Hello ~a! " name)
(format "Goodbye ~a." name)))
(define short-conversation (compose string-append hello-goodbye))
> (short-conversation "John")
"Hello John! Goodbye John."
The function produced by compose will pass the two values returned by hello-goodbye as two arguments to string-append. If you're writing code in a functional style with lots of compositions, this is very handy, and it's much more natural than explicitly passing values around yourself with call-with-values and the like.
| {
"pile_set_name": "StackExchange"
} |
Q:
Non-vanishing section on compact manifolds
Now suppose we have a compact smooth manifold $M$ and a rank $k$ vector bundle on it. I want to find a non-vanishing smooth section on $M$ when $k>\dim M$. But I have met some difficulties. The main idea is similar to the proof of the weak Whitney embedding theorem. Suppose
$\eta:M\rightarrow E$ is the zero section; since $M$ is compact, it is also an embedding, and we denote the submanifold by $S=\eta(M)$. Then we want to show that there is a section $\sigma:M\rightarrow E$ whose image has empty intersection with $S$. It seems to me that the set of such $\sigma$ is very large for dimensional reasons, but I just don't know how to extract one of them. How can I do this?
A:
I actually feel that this is a very important structural theorem about vector bundles that is mysteriously hard to find in any introductory textbook.
The main theorem that I'll use can be found in Bredon's "Topology and Geometry" under number II.15.3:
Let $E\overset{\pi}{\to}M$ be a smooth vector bundle over a smooth manifold $M$. Let $M'$ be a smooth manifold and $f:M'\to E$ a smooth map. Then there is a smooth section $\sigma:M\to E$ that can be chosen arbitrarily close to the zero section, such that $\sigma\pitchfork f$.
Here $\pitchfork$ denotes transversality of maps, i.e. the images of the differentials of the two maps sum up to the entire tangent space in the codomain at every point in the intersection of their images.
Now, if $E\overset{\pi}{\to}M^m$ has rank $k$ and $m=\mathrm{dim} M$, then we have three cases:
$k>m$. Then by the above theorem, there is a section $\sigma$ that is transverse to the zero section $E_0$, denoted $\sigma_0:M\to E$. By dimension counting, this implies that $\mathrm{im}(\sigma)\cap E_0=\varnothing$. Then, by induction, $E$ contains a rank $k-m$ trivial subbundle. For this reason, vector bundles of rank higher than the dimension of the base are not very interesting from the point of view of any classification of bundles (e.g. K-theory).
$k=m$. In this case the "generic" section of $E$ is transverse to the zero section and by dimension counting has only isolated zeros. This is the case e.g. for sections of tangent bundles, or complex line bundles over Riemann surfaces, and allows one to define the degree of a section.
$k<m$. In this case the "generic" section of $E$ is transverse to the zero section and turns to zero along a submanifold of $M$ of dimension $m-k$.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is using Table variables faster than temp tables
Am I safe to assume that where I have stored procedures using the tempdb to write a temporary table, I'd be better off switching these to table variables to get better performance?
A:
Temp tables are better in performance. If you use a table variable and the data in the variable gets too big, SQL Server automatically converts the variable into a temp table.
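For reference, the two forms being compared - a minimal sketch:

-- Table variable: no column statistics, scoped to the batch.
DECLARE @Results TABLE (Id INT, Name NVARCHAR(50));

-- Temp table: lives in tempdb, has statistics, scoped to the session.
CREATE TABLE #Results (Id INT, Name NVARCHAR(50));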
It depends, like almost every Database related question, on what you try to do. So it is hard to answer without more information.
So my answer is, try it and have a look at the execution plan. Use the fastest way with the lowest costs.
MSDN - Displaying Graphical Execution Plans (SQL Server Management Studio)
| {
"pile_set_name": "StackExchange"
} |
Q:
What is the actual range of ICON A5?
Does anyone currently own or have experience flying an ICON A5 aircraft?
They claim a range of 300 NM (555.6 km). I would like to know whether this is a theoretical calculated range, or a true/actual number?
A:
I can't speak directly to the ICON A5 as I neither own nor fly one, but for every aircraft I'm familiar with the manufacturer's "book numbers" are generous theoretical values - for example, they typically assume flying perfectly straight-and-level in a no-wind condition, and getting the best possible fuel economy performance from the powerplant.
The Icon folks are openly honest about this on their specs page: Performance specifications are estimates only., and range is certainly "Performance" in my book.
Personally I have no doubt that the A5 could manage 300NM under the conditions for which its designers did the math to arrive at that number - it's a slick little plane with a fuel-sipping engine - but they may have done a "to empty tanks" calculation (leaving you to account for the VFR fuel reserves you're legally required to have), and in the real world the 15-knot headwind you run into will substantially reduce your range (or, conversely, you can fly in the other direction and increase it).
As with all aircraft your mileage will, quite literally, vary depending on the day and direction of flight.
| {
"pile_set_name": "StackExchange"
} |
Q:
Python Pandas Dataframe: length of index does not match - df['column'] = ndarray
I have a pandas Dataframe containing EOD financial data (OHLC) for analysis.
I'm using the https://github.com/cirla/tulipy library to generate technical indicator values that take a certain time period as an option. For example, ADX with timeperiod=5 shows the ADX for the last 5 days.
Because of this time period, the generated array with indicator values is always shorter in length than the Dataframe, because the prices of the first 5 days are used to generate the ADX for day 6.
pdi14, mdi14 = ti.di(
high=highData, low=lowData, close=closeData, period=14)
df['mdi_14'] = mdi14
df['pdi_14'] = pdi14
>> ValueError: Length of values does not match length of index
Unfortunately, unlike TA-LIB for example, this tulip library does not provide NaN-values for these first couple of empty days...
Is there an easy way to prepend these NaN to the ndarray?
Or insert into df at a certain index & have it create NaN for the rows before it automatically?
Thanks in advance, I've been researching for days!
A:
Maybe make the shift yourself in the code?
import numpy as np

period = 14
pdi14, mdi14 = ti.di(
    high=highData, low=lowData, close=closeData, period=period
)
# Fill the column with NaN first, then write the indicator values
# starting at the first row that actually has a value.
df['mdi_14'] = np.nan
df.loc[df.index[period - 1:], 'mdi_14'] = mdi14
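Alternatively - to answer the "prepend NaN to the ndarray" part directly - you could pad the array itself before assigning it; a minimal sketch:

padded = np.concatenate([np.full(period - 1, np.nan), mdi14])
df['mdi_14'] = padded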
I hope they will fill the first values with NaN in the lib in the future. It's dangerous to leave time series data like this without any label.
| {
"pile_set_name": "StackExchange"
} |
Q:
Слово "неприятнее" выглядит неправильным. Прав ли я?
A foreigner wrote:
В Москве климат неприятнее, чем в Сочи ("The climate in Moscow is more unpleasant than in Sochi")
To my ear, this sounds wrong. One can say хуже ("worse") or менее приятный ("less pleasant"), but not неприятнее ("more unpleasant"). Am I right? If so, is there some formal rule that confirms my opinion?
A:
There is no such prohibition. In the Russian National Corpus I found 225 occurrences of the word неприятнее, 11 of them in the pattern "неприятнее, чем". Examples:
Но сегодня было и еще… сегодня было неприятнее, чем всегда, и Алексей не мог понять почему. [Андрей Битов. Сад (1960-1963)] - "But today it was even... today it was more unpleasant than usual, and Aleksei could not understand why."
Видите, я допускаю, что моя болтливость вам может быть еще неприятнее, чем моя смерть. [В. В. Набоков. Solus Rex (1940-1942)] - "You see, I admit that my talkativeness may be even more unpleasant to you than my death."
| {
"pile_set_name": "StackExchange"
} |
Q:
Does setting exact mime types for files give improvements?
I was wondering - does it improve download performance (smaller file size or something) when I set exact MIME types on the server? E.g. for the js path on the server I will set: application/javascript.
A:
I don't think so. It just makes it easier for client code to tell what it is that's being downloaded from your server.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to pass a list from one activity to another activity and maintain the same instance the whole time?
I don't know how to solve my problems. It goes like this:
In the main activity I make my own objects and add them to a list (ArrayList), and after that, when I go to another activity, I would like to:
send a list of lists of my own objects. I know you do it with intent.putExtra, but there is no type for a list of my objects
when I pass the list of lists of objects, I think it makes a new instance of this data, but I would like to have one instance the whole time, and I would like the activities to read and manipulate the first instance.
More explanation: in my main activity I make objects and they are ready for all other activities, and all other activities can read and write my list of lists. Only one activity is active at a time.
And I am also interested in this: when I have manipulated the data in some activity and I would like to go to the main menu and pick another activity, how do I send the data from that first activity to the menu activity and pass it on to the next activity to process?
Would you help me, please. Best regards Robert
A:
I suggest you implement an application object, and have the objects that you're referring to live in the application object rather than an activity object. There can be only one application object associated with any particular app, and it gets created before any Activity objects, and is independent of whatever activity objects are created/destroyed during the lifetime of your app. Hence you don't need to worry about sharing your objects between activities, because they're all available globally in the application object.
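A minimal sketch of that idea (class and type names are just illustrative, and the class must be registered via android:name in the manifest):

import android.app.Application;
import java.util.ArrayList;
import java.util.List;

public class MyApp extends Application {
    // Lives as long as the process; shared by all activities.
    private final List<List<MyObject>> sharedLists = new ArrayList<List<MyObject>>();

    public List<List<MyObject>> getSharedLists() {
        return sharedLists;
    }
}

// In any activity:
MyApp app = (MyApp) getApplication();
List<List<MyObject>> lists = app.getSharedLists();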
| {
"pile_set_name": "StackExchange"
} |
Q:
How to use a toggle switch in a Windows Phone application
I am trying to add a toggle switch to a Windows Phone application. I searched the Internet as well but didn't find a solution for using the toggle switch. I followed the example shown below, but it's showing an error saying the toggle switch doesn't exist. Can anybody suggest what is happening here and a better way to use a toggle switch? The code I used is from C# Corner and is like this:
xmlns:tool="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit" //
<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
<tool:ToggleSwitch x:Name="tglSwitch"
Header="wifi"
Checked="tglSwitch_Checked"
Unchecked="tglSwitch_Unchecked"/>
</Grid>
A:
The ToggleSwitch is a control that can be found in the Windows Phone Toolkit library.
You can easily add that library to your project via NuGet: right click on your project -> "Manage NuGet packages" then search for "WPtoolkit".
| {
"pile_set_name": "StackExchange"
} |
Q:
How are voiced and voiceless consonants distinguished while whispering?
When I whisper, none of my consonants is voiced. But I can tell the difference between voiced and voiceless consonants. How is that possible?
A:
Whispering involves a low-amplitude non-periodic noise source at the glottis. This is harder to hear than modal phonation, but is enough to allow phoneme discrimination. In English, voicing is encoded in a number of ways, sometimes with vocal fold vibration, but also with duration, aspiration, and constriction size. Probably the most salient feature is aspiration, and this actually gives some evidence in support of the claim that voiceless fricatives are aspirated in English, since "sue" and "zoo" are still distinguishable (as long as the room is quiet). Also, there are a number of allophonic differences that are still applied, such as vowel raising before voiceless consonants (rapid has a higher, shorter vowel; rabid has a longer, lower vowel, and this is maintained under whispering). It would be interesting to see how whispering plays out in tone languages, and in languages with a contrast between voiced stops vs. voiceless stops with negligible voice lag.
| {
"pile_set_name": "StackExchange"
} |
Q:
Excel distorting markers in scatter plot
I've run into an issue with scatter plots in Excel. For whatever reason, Excel is distorting the appearance of markers on the plot, such that markers within the same series do not look the same. For example, in a series with circular markers, some of the markers appear as ellipses, some appear as smaller circles, etc.
Does anyone know why Excel is doing this? Better yet, any ideas how to stop this? For reference, I'm copying these graphs into Adobe Illustrator to create graphics for a print application.
A:
Just a guess: Try saving as (exporting to) to a PDF or other print format then copy the image from that.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why do I sometimes have to double escape spaces in rsync paths?
When referring to paths with spaces, e.g. Steve Jobs, I can either backslash escape, or surround with quotes, when the path is the local directory:
rsync Steve\ Jobs user@remote:/
or
rsync "Steve Jobs" user@remote:/
However, when Steve Jobs is the remote directory, surrounding with quotes is not sufficient. I also have to backslash escape the space:
rsync user@remote:/"Steve\ Jobs" .
Anyone know why this oddity happens?
If this is an artifact of SSH, why does SSH present this oddity?
A:
rsync is passing the string within quotes to the remote machine. If you escape a space in quotes, then that backslash becomes part of the string (it is not eaten by your shell).
The rsync manual page refers to escaping:
If you need to transfer a filename that contains whitespace, you can either specify the --protect-args (-s) option, or you'll need to escape the whitespace in a way that the remote shell will understand. For instance:
rsync -av host:'file\ name\ with\ spaces' /dest
You can see how special characters are (or are not) eaten by the shell using a simple script like this:
#!/bin/sh
n=1
echo "$@"
for i in "$@"
do
echo "$n:$i"
n=`expr $n + 1`
done
Calling that script (saved as args):
$ args "xx\ yy"; args "xx yy"
xx\ yy
1:xx\ yy
xx yy
1:xx yy
Since rsync does not add quotes (unless asked, using -s), the spaces have to be handled specially when sent in as part of a shell command to the other machine.
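So in the question's case, either double-escape as you found, or let rsync handle it with -s (--protect-args); e.g., assuming the same remote path:

$ rsync -s user@remote:"/Steve Jobs" .
$ rsync user@remote:"/Steve\ Jobs" .     # equivalent, manual escaping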
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it possible to find out whether a given group is $\mathbb{Z}_{15}$ or something else?
Suppose we are given the multiplication tables of two groups. One corresponds to $\mathbb{Z}_{15}$ and the other corresponds to $\mathbb{Z}_5 \times \mathbb{Z}_3$. I know that these two groups are isomorphic. Is it possible to find out which of these two given tables corresponds to $\mathbb{Z}_{15}$?
If yes, how? Otherwise, state explicitly what other information is needed to differentiate these two tables.
A:
No, you cannot. Precisely because they are isomorphic.
A:
There is no way to differentiate the two groups, up to isomorphism.
Since $\gcd(5, 3) = 1$, we know that
$$\mathbb Z_{5} \times \mathbb Z_{3} \cong \mathbb Z_{15}$$
The set of elements are ordered pairs in $$\mathbb Z_5 \times \mathbb Z_3 = \{(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), \ldots (2, 0), \ldots (4, 0), (4, 1), (4, 2)\}$$ You'd need to make a list of 15 elements along the column headers, and the same along the row headers, just as you get when making a table for $\mathbb Z_{15}$. The element $(0,0)$ is the identity, and recall that the operation on $\mathbb Z_{5} \times \mathbb Z_{3}$ is component-wise addition, modulo 5 for the first term, mod 3 for the second term.
For example: $(1, 2) + (2, 2) = (3_{\text{ mod }5}, 4_{\text{ mod }3}) = (3, 1).$
Each of the 15 elements in $\mathbb Z_5\times \mathbb Z_3$ can be mapped to a unique element in $\mathbb Z_{15}$. So the Cayley tables will be identical, save for the name of each element.
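Concretely, one such isomorphism is given by the Chinese Remainder Theorem: $x \mapsto (x \bmod 5,\; x \bmod 3)$ maps $\mathbb Z_{15}$ onto $\mathbb Z_5 \times \mathbb Z_3$; for example, $7 \mapsto (2, 1)$.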
| {
"pile_set_name": "StackExchange"
} |
Q:
How to check one specific result in MYSQL join from duplicate rows
I am trying to check for a specific record from a join with another table using an id, but the other table has multiple records with the same id. I want to choose the records where there is no 'everyone' in the column.
My SQL is:
SELECT b.*
, a.*
FROM engine4_blog_blogs b
LEFT
JOIN engine4_authorization_allow a
ON a.resource_id = b.blog_id
WHERE a.resource_type = 'blog'
AND a.role NOT IN ('everyone')
AND a.action = 'view'
AND b.draft = 0
AND b.search = 1
ORDER
BY b.creation_date DESC
I only want to select records not having 'everyone', and select only one row, not duplicates.
blog 2 view everyone 0 1 NULL
blog 2 view owner_member 0 1 NULL
blog 2 view owner_member_member 0 1 NULL
blog 2 view owner_network 0 1 NULL
blog 2 view registered 0 1 NULL
Thanks,
A:
If you want to choose rows that don't have "everyone", you need to move all conditions on the second table to the on clause:
SELECT b.*, a.*
FROM engine4_blog_blogs b LEFT JOIN
engine4_authorization_allow a
ON a.resource_id = b.blog_id AND
a.resource_type = 'blog' AND
a.role IN ('everyone') AND
a.action = 'view'
WHERE b.draft = 0 AND b.search = 1 AND a.resource_id IS NULL
ORDER BY b.creation_date DESC ;
If you put conditions in the WHERE clause, it filters out all non-matching rows, turning the LEFT JOIN into an INNER JOIN.
A:
you can use distinct command to avoid duplicate record.
SELECT DISTINCT b.*
, a.*
FROM engine4_blog_blogs b
LEFT
JOIN engine4_authorization_allow a
ON a.resource_id = b.blog_id
WHERE a.resource_type = 'blog'
AND a.role NOT IN ('everyone')
AND a.action = 'view'
AND b.draft = 0
AND b.search = 1
ORDER
BY b.creation_date DESC
| {
"pile_set_name": "StackExchange"
} |
Q:
Finding conditions for $\lim\limits_{n\to\infty}\sum\limits_{k=0}^{\lfloor\frac1{a_n}\rfloor}(-1)^k\binom nk(1-ka_n)^{n-1}=1$
Question: Find necessary and sufficient condition on the sequence $(a_n)_{n=1}^∞$ so that$$\lim_{n→∞}\sum_{k=0}^{\lfloor\frac1{a_n}\rfloor}(-1)^k\binom nk(1-ka_n)^{n-1}=1\tag 1$$given that $\lim\limits_{n\to\infty}a_n=0$ and $a_n\gt 0$ for all $n\in\Bbb{N}$.
After some guesswork I got to a condition that if $\sum\limits_{n\ge 1} a_n=\infty$ then eq.(1) holds. But I was not able to prove it neither could I find a counterexample for the conjecture. Searching on internet I found that this sum is very closely related to a special case of Dvoretzky covering problem but still couldn't find the necessary and sufficient condition. Until now I have tried using approximations for the Binomial Coefficient and binomial approximation to tackle the sum to no avail. I would be glad if someone could help.
Edit: I have got a counterexample for my conjecture i.e. $\sum\limits_{n\ge 1} a_n=\infty$ is alone not sufficient for eq.(1) to hold. So what should be the necessary and sufficient condition?
A:
This isn't an answer so much as me reporting what I've found simply testing different $a_n$ sequences. First, if $a_n=\frac{1}{n^p}$ ($p\in\mathbb{N}$) then the sum is always zero. Also, if $a_n$ grows as fast or faster than $\frac1n$ then the sum converges to zero. Now, one case which did go to $1$ in the limit was
$$a_n=\frac{\log(n^a)}{n}$$
for $a>1$. Unfortunately, I can't tell what happens when $a=1$ (it may very well converge to $1$) but at $a\in\{2,3,1.5,...\}$ it always seems to converge to $1$.
Again, this is not an answer, but if I were you I would investigate the function $\frac{\log(n^a)}{n}$ and see if that might be some sort of cutoff point.
| {
"pile_set_name": "StackExchange"
} |
Q:
Should Eclipse-specific files in a VCS be ignored, if using Maven?
I know why not to commit Eclipse/IDE-specific files into a VCS like Git (which I am actually using). That is one of the reasons I am using Maven: it generates these files for you, so they don't need to be under version control.
But I wonder if these files should be ignored in .gitignore, which itself is under control of the VCS/Git:
.classpath
.project
.settings/
target/
Is this okay?
What are the pros and cons, since the .gitignore file itself becomes kind of IDE-specific when the files ignored there are IDE-specific? Any best practice?
A:
With the team's I've worked on, the general rule has been that you don't check in anything that is generated by or obtained by Maven. Since the pom.xml contains everything you need to create the .project, .classpath, and .settings files, we don't check them in.
My .gitignore always contains .classpath, .settings, .project, and target for Maven projects.
Edit: As mentioned in my comment below, here is another suggestion: If you really want to avoid having Maven or IDE specific entries in your .gitignore, try writing your .gitignore so you list only what you DO want checked in.
*
!stuffIDoWantToCheckIn
| {
"pile_set_name": "StackExchange"
} |
Q:
ASP.NET Core 5.0 error CS0012: The type 'Object' is defined in assembly 'mscorlib
In Visual Studio 2015 I have a kproj; in this project I wanted to add a reference to an assembly that is not available in any public NuGet package source, so I created my own NuGet package and this way was able to add the reference to this assembly.
The problem is that now I'm getting the following exception:
ASP.NET Core 5.0 error CS0012: The type 'Object' is defined in an assembly that is not referenced. You must add a reference to assembly 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
Any ideas on how to overcome this?
A:
Solved it.
Removed the framework "aspnetcore50" from the project.json
A:
As you wrote, removing aspnetcore50 from targeted frameworks removes the problem. However, I wanted to know why and what comes with it and I found the answer.
The difference between aspnet50 and aspnetcore50 is that they use .NET Framework 4.6 and .NET Core 5 respectively. An article What is .NET Core 5 and ASP.NET 5 within .NET 2015 Preview well explains the differences, which in short are:
When you run your ASP.NET 5 application on top of the Core-CLR and therefore .NET Core 5 framework, you’ll get an end-to-end stack optimized for server/cloud workloads which means a high throughput, very small footprint in memory and one of the most important things, side-by-side execution of the .NET Core 5 framework version (KRE or K runtime environment) related to your application, no matter what other versions of .NET might be installed in the same server or machine. Additionally, and like mentioned, you could run that web application on a web service running on Mac or Linux.
On the other hand, when you run your ASP.NET 5 application on top of the regular CLR and therefore .NET Framework 4.6 you’ll get the highest level of compatibility with existing .NET libraries and less restrictions than what you get when running on top of .NET Core 5.
It also means that to take an advantage of these great features, you need to use libraries which are .NET Core 5 compatible. If you do have an already compiled DLL, which is targetting .NET Framework, most probably it won't be compatible and you will have to use .NET Framework 4.6.
The reason for it is that .NET Core 5 doesn't contain Basic Class Library, which contains such common components like Collections, IO, LINQ, etc. The BCL components are now available as separate NuGet packages, so that you can include in your project only the pieces you need.
On how different .NET Core 5 targeted libraries are you can read in Creating multi-target NuGet Packages with vNext
A:
In fact, the problem is an old lib that requires an ASP.NET 4.0 or 4.5 version (less than Core).
Microsoft provides a solution for it: install the following NuGet package.
PM> Install-package Microsoft.NETCore.Portable.Compatibility
This way you will be able to run your code with old libs.
| {
"pile_set_name": "StackExchange"
} |
Q:
Access QWizardPage created by Qt Designer
I am using Qt Designer to create a QWizardPage.
I have imported the file into Qt Creator and it runs fine so far (I can build the project and run the wizard just fine)
Now I need to reimplement the isComplete function, but I am unable to understand how to do it. The pages are named wizardPage{,1,2_1,_2}. I would like to know what's the best way to reimplement the isComplete() function now.
A:
In order to overload the isComplete() function of a QWizardPage, you need to create the QWizardPage yourself. Basically, layout a form -- like you would a dialog -- for only the page you want. Create a class for that page. This class inherits from QWizardPage and reimplements isComplete() for whatever checks you desire. Now in your wizard, find where you want the page to be. Delete all the widgets on it, then right-click and select Promote Widget.... Enter your class name and the path to the header file. Now when you compile, it should use an instance of your class for that page, including your override for isComplete().
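A minimal sketch of such a promoted page class (names are illustrative); note that an isComplete() override should be paired with emitting completeChanged() whenever its result may have changed, so the wizard re-enables its Next button:

// mywizardpage.h
#include <QWizardPage>
#include <QLineEdit>

class MyWizardPage : public QWizardPage
{
    Q_OBJECT
public:
    explicit MyWizardPage(QWidget *parent = 0) : QWizardPage(parent)
    {
        m_edit = new QLineEdit(this);
        // Re-evaluate isComplete() whenever the input changes.
        connect(m_edit, SIGNAL(textChanged(QString)),
                this, SIGNAL(completeChanged()));
    }

    bool isComplete() const
    {
        // Next/Finish stays disabled until the field is non-empty.
        return !m_edit->text().isEmpty();
    }

private:
    QLineEdit *m_edit;
};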
| {
"pile_set_name": "StackExchange"
} |
Q:
Does JUnit set randomly properties to null?
I have a series of test methods which depend on each other. I keep having problems with objects being reset to their default state or their properties being changed to null. In a normal program, nothing like this ever happens. Setting things to final seems to help, but I can't modify production code just because of tests.
The fact that setting properties to final helps also means that the cause is not in my code: if my code were changing these, it wouldn't compile (there is no reflection in my code).
Before I dig even deeper in it, could you please tell me whether JUnit should be doing something like this? If yes, what rules can I read about it?
A:
JUnit does not set anything to null randomly. It sets things to null very deterministically: before each test method is run, a new instance of the test class is created. Thus, all instance variables are set to the default value, which is null for object references.
Unit tests should not depend on each other. They must be runnable in any order. If you have parts that depend on each other, you have to put them in the same test method (or in separate non-test methods which are called from your test method in the right order).
If you want to initialize data that are required by all tests, you can annotate methods with the @Before annotation (this method will then be run before each test method) or with the @BeforeClass annotation (this method will then be run once before all test methods).
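For illustration, the usual pattern looks like this (Player is a made-up class standing in for whatever your tests share):
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PlayerTest {
    private Player player; // fresh for every test, since JUnit creates a new PlayerTest instance each time

    @Before
    public void setUp() {
        player = new Player("Alice"); // shared setup instead of relying on test order
    }

    @Test
    public void startsWithZeroScore() {
        assertEquals(0, player.getScore());
    }

    @Test
    public void scoringIncreasesScore() {
        player.score(10); // independent of the other test method
        assertEquals(10, player.getScore());
    }
}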
| {
"pile_set_name": "StackExchange"
} |
Q:
How to show Alert when UIApplication is not able to open URL?
This method opens the URL in Safari when the website string is not null and is at least 3 characters long. But when I have supplierWebsite=@"www.heritage.com", nothing happens. I know that heritage.com is not a valid website, so it is not activating in UIApplication. I would like to display at least a pop-up that tells the user that the website is not available. Is there any way I can show an alert view saying that the website is not available?
- (IBAction)doWebOpen:(UIButton *)sender {
if (self.provider.supplierWebSite && [self.provider.supplierWebSite length] > 3) {
NSString *urlString = [self.provider supplierWebSite];
NSURL *url = [NSURL URLWithString:urlString];
[[UIApplication sharedApplication] openURL:url];
}else {
NSError *err = [NSError errorWithDomain:@"com.cantopenweb" code:509 andDescription:@"This supplier does not have a website."];
[self showErrorAlert:err];
}}
A:
You could use canOpenURL method,
[[UIApplication sharedApplication] canOpenURL:[NSURL URLWithString:@"your website"]];
The method returns a BOOL, so check that for YES or NO.
If YES, it CAN else NO.
A:
Just use canOpenURL of UIApplication class, like:
if([[UIApplication sharedApplication] canOpenURL:url])
{
[[UIApplication sharedApplication] openURL:url];
}
else
{
//show alert
}
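To fill in the //show alert branch, a minimal sketch using UIAlertView (the alert API current when this question was asked; the title and message are just examples):
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Website unavailable"
                                                message:@"This supplier does not have a valid website."
                                               delegate:nil
                                      cancelButtonTitle:@"OK"
                                      otherButtonTitles:nil];
[alert show];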
canOpenURL:
Returns whether an application can open a given URL resource.
- (BOOL)canOpenURL:(NSURL *)url
Parameters
url
A URL object that identifies a given resource. The URL’s scheme—possibly a custom scheme—identifies which application can
handle the URL.
Return Value
NO if no application is available that will accept the URL; otherwise,
returns YES. Discussion
This method guarantees that if openURL: is called, another
application will be launched to handle it. It does not guarantee that
the full URL is valid. Availability
Available in iOS 3.0 and later.
Declared In UIApplication.h
| {
"pile_set_name": "StackExchange"
} |
Q:
Saving file in a different location in iphone
I am unable to save a file to a different folder (outside the app), since I have to retrieve the data from it after the application is uninstalled. I have to get the data from that file after reinstalling the application. The file is getting deleted with the application. Please help me solve this issue.
A:
Not possible. iOS apps are sandboxed. You cannot write anything outside your app sandbox.
For security reasons, iOS places each app (including its preferences and data) in a sandbox at install time. A sandbox is a set of fine-grained controls that limit the app’s access to files, preferences, network resources, hardware, and so on. As part of the sandboxing process, the system installs each app in its own sandbox directory, which acts as the home for the app and its data.
| {
"pile_set_name": "StackExchange"
} |
Q:
SSAS MDX - aggregate with multiple members
I have the MDX query below and the result is not what I expected. If my where clause includes just 1 city ([Geography].[Iorg].[City].[San Francisco]) then my aggregate result is correct for that city, but if I include 2 cities then my aggregate result is for the whole state of California, which is not what I wanted. I just want to return the result for those two cities.
{ [Geography].[Iorg].[City].[San Francisco]
,[Geography].[Iorg].[City].[San Jose]
}
The {[Geography].[State].[California]} clause is there for security, but I don't get why the result is good when 1 city is included, yet when I include two cities the result is for the whole state of California.
If I remove my [Geography].[Country Name].children ON ROWS then the result is correct but I need that in my query. Any help would be appreciated.
SELECT
CROSSJOIN ({
[Measures].[Fleet]},
{[Time Calculations].[Current Period] }) ON COLUMNS
,
[Geography].[Country Name].children
ON ROWS
FROM [DMI]
WHERE
(
[Date].[Date Hierarchy].[Date].&[2019-02-12T00:00:00] ,
{ [Geography].[Iorg].[City].[San Francisco]
,[Geography].[Iorg].[City].[San Jose]
}
,{[Geography].[State].[California]}
)
A:
You should move the city set into a subselect; a subselect filters the cube space without being aggregated against the other slicer members, so the two-city case no longer expands to the whole state:
SELECT
CROSSJOIN ({
[Measures].[Fleet]},
{[Time Calculations].[Current Period] }) ON COLUMNS
,
[Geography].[Country Name].children
ON ROWS
From (select {[Geography].[Iorg].[City].[San Francisco],
[Geography].[Iorg].[City].[San Jose]}on 1 FROM [DMI]
)
WHERE
(
[Date].[Date Hierarchy].[Date].&[2019-02-12T00:00:00]
,{[Geography].[State].[California]}
)
| {
"pile_set_name": "StackExchange"
} |
Q:
how can i make the video reload after changing the src?
Hello everyone. I want to make a script that reloads the src of a video. I wrote code to change the src, and the change itself works, but afterwards nothing happens: the new src doesn't play. I then tried to reload the video frame, but that didn't work either. This is the script:
<button type="button" onclick="myFunction1();">Click Me!</button>
<video id='hls-video' style="width: 100%; height: 100%;">
<source id="change-src" src='http://vtpii.net:8000/live/946108483118595/641391346746584/185.m3u8' title='480p' type='application/x-mpegURL'/>
</video>
<script>
var myFP = fluidPlayer('hls-video',{layoutControls:{autoPlay:true,allowTheatre: true}});
function myFunction1() {
document.getElementById("change-src").src = "http://livecdnh2.tvanywhere.ae/hls/mbc1/index.m3u8";
document.getElementById("hls-video").reload();
}
</script>
So how can I reload the video and make the new src work after the click? Thank you.
A:
The <video> tag has no reload() method. You can use the load() and play() methods in combination to get the desired behaviour, like this:
var video = document.getElementById('hls-video');
var source = document.getElementById('change-src');
function myFunction1() {
video.pause()
source.setAttribute('src', 'https://www.w3schools.com/html/mov_bbb.mp4');
video.load();
video.play();
}
<button type="button" onclick="myFunction1();">Click Me!</button>
<video id='hls-video' style="width: 100%; height: 90vh;" controls>
<source id="change-src" src='https://www.w3schools.com/html/mov_bbb.mp4' title='480p' type='video/mp4'/>
</video>
| {
"pile_set_name": "StackExchange"
} |
Q:
Oracle 12c Client to 10g Server (OPENQUERY)
Scenario: We currently have a SQL 2008 R2 server (with Oracle 10g Client) using OPENQUERY to return data from an Oracle 10g database.
Problem: We would like to upgrade to SQL Server 2014 SP1 (with Oracle 12c Client) and still pull data from the Oracle 10g server. We have run some tests on returning the data using OPENQUERY and the results are less than satisfactory. A simple select * from a table went from 9 seconds to 54 seconds!
Testing: We created a test SQL Server 2008 R2 server, but this time put the Oracle 12c client on it. The select * query ran in 26 seconds this time, but still a lot longer than the 9 seconds on the original server.
Question: Could there be a setting that isn't set on the new servers that would affect the speed this much? If so does anyone have any suggestions?
Note: I believe using a 12c client to connect to a 10g server is supported, correct?
Thanks in advance.
A:
So for some reason setting the processor affinity makes all the difference. The steps to fix this issue are below.
Using SSMS, go to Instance properties, the select "Processors" page
You will notice both "Automatically set processor affinity mask for all processors" and "Automatically set I/O affinity mask for all processors" will be selected by default
Uncheck both these settings
And then in the grid below, select the "Processor Affinity" checkbox for all processors (DO NOT check I/O affinity) and test the query.
If the issue persists, restart the SQL Service and test the query again
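If you would rather script the change than click through SSMS, something along these lines should be equivalent (a sketch; adjust the CPU range to your core count):
-- Pin SQL Server worker threads to CPUs 0..3; does not touch I/O affinity
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;
-- Revert to automatic assignment later if needed
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;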
It fixed the speed issue we were having with our Oracle queries on SQL Server 2014. We did also find that they would run faster without SP1
Hope this helps
| {
"pile_set_name": "StackExchange"
} |
Q:
How to send a POST request with a file with Content-Type: application/octet-stream in Node.js
I'm trying to upload something to facebook's server. Their official documentation states:
With the token from the dialog, you can submit the following call to our Graph API to submit your .zip. Note that we are using the video sub-domain, but that's intentional, since that URL is configured to receive larger uploads.
curl -X POST https://graph-video.facebook.com/{App ID}/assets
-F 'access_token={ASSET UPLOAD ACCESS TOKEN}'
-F 'type=BUNDLE'
-F 'asset=@./{YOUR GAME}.zip'
-F 'comment=Graph API upload'
I'm trying to convert this curl request to Node.js using the request module.
const myzip = zipDir+file.appName+".zip"
console.log(myzip)
var url = "https://graph-video.facebook.com/"+file.appId+"/assets";
const options = {
url: url,
headers: {
"Content-Type": "application/octet-stream"
}
}
var req = request.post(options, function (err, resp, body) {
console.log('REQUEST RESULTS:', err, resp.statusCode, body);
if (err) {
console.log('Error!');
reject();
} else {
console.log('URL: ' + body);
resolve();
}
});
var form = req.form();
var zipReadStream = fs.createReadStream(myzip,{encoding: "binary"})
zipReadStream.setEncoding('binary')
form.append('asset', zipReadStream);
form.append("access_token", file.token);
form.append("type", "BUNDLE");
form.append("comment", mycomment)
Although I have set the headers to "Content-Type": "application/octet-stream", I still get this error from Facebook:
OAuth "Facebook Platform" "invalid_request" "(#100) Invalid file. Expected file of one of the following types: application/octet-stream"
Also, when I try to log my request, I see the content type as 'Content-Type': 'multipart/form-data' and not as application/octet-stream, even though I have explicitly specified it.
A:
If you're uploading data as part of a form, you must use multipart/form-data for your Content-Type.
The Content-Type for a particular file on a form can be set per-file, but it seems that Facebook doesn't want that extra data for this.
Don't set Content-Type for your HTTP request and you should be fine, as the Request module will set it for you. Also, you can find the documentation here: https://github.com/request/request#multipartform-data-multipart-form-uploads
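Putting that together, a minimal sketch of the upload without the manual header (and without the binary encoding, which a file stream doesn't need either; variable names reuse those from the question):
var options = { url: url }; // no headers entry: request sets multipart/form-data plus the boundary itself

var req = request.post(options, function (err, resp, body) {
    if (err) {
        console.log('Error!', err);
    } else {
        console.log('Status: ' + resp.statusCode + ', body: ' + body);
    }
});

var form = req.form();
form.append('asset', fs.createReadStream(myzip)); // plain stream, no encoding option
form.append('access_token', file.token);
form.append('type', 'BUNDLE');
form.append('comment', mycomment);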
| {
"pile_set_name": "StackExchange"
} |
Q:
Server always responds 200 with curl, but gives error messages when accessed in browser
I'm trying to communicate with a third party server using curl.
The authentication system is as follows:
Use credentials to ask for cookie
Set cookie
Make requests
This in itself has not been a problem and I am able to make requests etc., but whatever I send, the server always responds 200 even if I send an invalid data format. If I put the same URL in the browser, it returns error messages about the invalid format.
Does anyone know why this might be? Thanks in advance,
function go($url,$postvars=null)
{
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_VERBOSE, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEJAR, '/tmp/cookie_.txt');
curl_setopt($ch, CURLOPT_COOKIEFILE, '/tmp/cookie_.txt');
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch,CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch,CURLOPT_POST,count($postvars));
$postfields = http_build_query($postvars);
curl_setopt($ch, CURLOPT_POSTFIELDS,$postfields);
curl_exec($ch);
curl_close($ch);
}
go($url,$login_array); //login=>'',pass=>''
go($url,$some_request_array);
A:
One very likely possibility is a bug in the server code that means it sends 200 statuses with its error messages. Hence while in the browser it'll be an error message, because that's what is in the body, it'll still be "successful". Put an intercept (Firebug's network tab, Fiddler, etc.) on the browser access and see what the status is.
If this is the case you've two options:
Get the party responsible for the server to fix it.
Parse for the error message yourself, and then treat it as an error condition.
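For the second option, a sketch of how go() could detect the failure itself; curl_getinfo() reads the status code, and the 'error' string below is only a placeholder for whatever marker the server's error pages actually contain:
$body = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);

if ($status >= 400 || stripos($body, 'error') !== false) {
    // treat as a failure even though the transport layer reported success
}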
| {
"pile_set_name": "StackExchange"
} |
Q:
Importing laptop as gift in France, how much duty will I have to pay?
I will fly into France from the U.S. with my personal laptop. I also want to take a new one as a gift for my hostess.
Will I have to pay a tax duty on it?
A:
Yes, you'll be liable to pay 20% VAT on a new laptop brought into France from outside the EU customs area if its value is greater than €430. On the plus side, there's no specific duty to pay (see "Ordinateur portable" on this page).
You could try and just walk through the "nothing to declare" aisle anyway - but you'd then be smuggling and at some risk of facing penalties.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does R session-specific temp directory disappear in long-running R process?
R creates a session-specific temp directory in a subdirectory of TMPDIR using mkdtemp(), and removes that directory tree when R exits. This directory is used by default when calling tempfile(), and when calling pdf().
I have found that for some very long-running (e.g. 2 day) Rscript jobs, the session-specific temp directory appears to be gone. I get a "No such file or directory" error produced by plot.new(), or similar error writing to a path created with tempfile().
I've used the same code on several data sets, but I only see this problem on the largest, longest-running ones. The same process that eventually gets the error is able to create other PDFs earlier in the process, so presumably the temp directory existed at some point in the life of the process. Note also that in the PDF case the problem eventually results in a segfault, which ends the R process before the on-exit cleanup of the session-specific temp directory. I pulled the name of the session-specific temp directory from the core file and confirmed that it does not exist.
Any idea what is deleting the session-specific temp directory?
The segfault that shows that R crashes before it gets to the temp directory cleanup code:
#0 0x00000037ff467934 in fwrite () from /lib64/libc.so.6
#1 0x00002ae33b8c10e2 in PDF_endpage () at devPS.c:6509
#2 0x00002ae33b8c284b in PDF_Close () at devPS.c:7257
#3 0x00002ae338dbf05e in removeDevice.part.0 ()
from /broad/software/free/Linux/redhat_6_x86_64/pkgs/r_3.3.0/lib64/R/lib/libR.so
#4 0x00002ae338dbf4c9 in Rf_KillAllDevices ()
from /broad/software/free/Linux/redhat_6_x86_64/pkgs/r_3.3.0/lib64/R/lib/libR.so
#5 0x00002ae338ee42e4 in Rstd_CleanUp ()
from /broad/software/free/Linux/redhat_6_x86_64/pkgs/r_3.3.0/lib64/R/lib/libR.so
#6 0x00002ae338e21947 in run_Rmainloop ()
from /broad/software/free/Linux/redhat_6_x86_64/pkgs/r_3.3.0/lib64/R/lib/libR.so
#7 0x00000000004007bb in main () at Rmain.c:29
A:
Per this answer by @kba on Serverfault, to the question When does /tmp get cleared?
That depends on your distribution. On some systems it's deleted only at boot; others have cron jobs running that delete items older than n hours.
On Debian-like systems: on boot (the rules are defined in /etc/default/rcS).
On RedHat-like systems: by age (RHEL6 it was /etc/cron.daily/tmpwatch ; RHEL7 and RedHat-like with systemd it's configured in /usr/lib/tmpfiles.d/tmp.conf, called by systemd-tmpfiles-clean.service).
On Gentoo /etc/conf.d/bootmisc.
You can get around this by changing your R working directory. In your ~/.Renviron file, put
TEMP=~/Rworkdir
or something similar. (You may have to use TMPDIR or TMP instead of TEMP.) This will tell R to create its working directory under your home directory. Substitute any path that works for you.
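If you cannot change the cleanup policy or the temp location, you can also guard against the directory disappearing mid-run. The check argument exists from R 3.5.0 onward; the manual fallback works on older versions such as the 3.3.0 in your backtrace:
td <- tempdir(check = TRUE) # R >= 3.5.0: re-creates the session temp dir if it has been removed

if (!dir.exists(tempdir())) dir.create(tempdir(), recursive = TRUE) # older R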
| {
"pile_set_name": "StackExchange"
} |
Q:
Make a function return something with different properties
I'll get straight to the point. To be honest, I think there is an answer to this, but I don't know how to word it.
int playerXCoord = Player.getPosition().x;
//this is just an example
How do you define something in C++?
I am familiar with classes and instances but I would like to know how to do this (in C/C++).
A:
Something like this perhaps:
class Player
{
public:
struct Coordinates
{
int x;
int y;
};
Coordinates const& getPosition() const
{
return position_;
}
private:
Coordinates position_;
};
Now you can do e.g.
Player player;
int x = player.getPosition().x;
Note that you don't have to return a reference, the getPosition function could just as easily have been defined as
Coordinates getPosition() const { ... }
The "trick" is to have a function which returns the correct structure. That structure can of course be a complete class with its own member functions, which means you can easily "chain" many member function calls as long as you return an object of some kind.
| {
"pile_set_name": "StackExchange"
} |
Q:
Save opened images and reopen them after reboot
My linux scripting skills are poor so I'm hoping someone can help.
I have 50+ images open via FEH from many different locations, I do not want to individually search for them after reboot.
ps aux | grep feh
shows many
[...] 0:00 feh --start-at /media/[...]/x.jpg
along with
[...] 0:00 grep --color=auto feh
I need a script that would allow me to save them into a file and then be able to restore them from file somehow.
A:
Extract the list of start commands and save the output in a file:
ps aux|grep -oP "[f]eh.*" >~/feh_commands
Make it a proper script by adding a shebang, quoting the file names and adding & to each line to run them simultaneously:
sed -i 's_/_"/_;s/$/" \&/;1i#!/bin/bash' ~/feh_commands
Now you just need to run the script with
bash ~/feh_commands
or make it executable and run it.
| {
"pile_set_name": "StackExchange"
} |
Q:
Windows C API: what's the difference between wincrypt and sspi?
I'm looking at different solutions for Windows cryptography and stumbled across these two libraries. Their header files are Wincrypt.h and Sspi.h. They both seem to provide encryption and decryption routines (CryptEncryptMessage and EncryptMessage), they both provide encryption context handles, and they are really similar. So what do I use each of them for?
P.S. Also there is CNG, but that is, as I understood, just a successor to wincrypt, which will soon become deprecated.
A:
According to MSDN, the EncryptMessage function encrypts a message to provide privacy. EncryptMessage allows the application to choose among cryptographic algorithms supported by the chosen mechanism.
This function is available as a SASL mechanism only.
For example, if you want to use Microsoft's Security Support Provider Interface (SSPI) in a Windows domain environment to send encrypted and signed messages between two entities (in C++), then you can use EncryptMessage.
But CryptEncryptMessage is the CAPI2 PKI encryption API.
Very generically, in the absence of context, CryptEncryptMessage is
meant to encrypt data for some entity for which you have a cert (it only
uses crypto) and works offline, while EncryptMessage can only be used
between a client and a server after they have established a security
context using InitializeSecurityContext/AcceptSecurityContext.
If you want to learn more, please refer to: Difference between CryptEncryptMessage EncryptMessage(Negotiate)
| {
"pile_set_name": "StackExchange"
} |
Q:
General expression of the redshift: explanation?
In some papers, authors put the following formula for the cosmological redshift $z$:
$1+z=\frac{\left(g_{\mu\nu}k^{\mu}u^{\nu}\right)_{S}}{\left(g_{\mu\nu}k^{\mu}u^{\nu}\right)_{O}}$
where :
$S$ is the source and $O$ the observer
$g_{\mu\nu}$ is the metric
$k^{\mu}=\frac{dx^{\mu}}{d\lambda}$ is the coordinate derivative regarding the affine parameter $\lambda$
$u^{\nu}$ is the 4-velocity of the cosmic fluid
My first question is: according to the Einstein summation convention, is it a fraction of sums $\left(\frac{\sum_{\mu\nu}X}{\sum_{\mu\nu}Y}\right)$ or a sum of fractions $\left(\sum_{\mu\nu}\frac{X}{Y}\right)$?
My second question (and the more important one) is: where does this formula come from? Where can I find a "demonstration"/"derivation"/"explanation" of this?
A:
For the first question, it is the same quantity, the Minkowski dot product of the four vectors $k$ and $u$ that you may call $A_{O}$ and $A_{S}$, in general
$$g_{\mu\nu}k^{\mu}u^{\nu}=k_{\nu}u^{\nu}\equiv A$$
computed for the source and computed for the observer. So you have
$$1+z=\frac{A_{S}}{A_{O}} $$
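As a sanity check (a standard special case, not something specific to those papers): in flat FLRW with $ds^{2}=-dt^{2}+a(t)^{2}d\vec{x}^{2}$ and comoving observers $u^{\mu}=(1,0,0,0)$, a photon's frequency is $\omega=-g_{\mu\nu}k^{\mu}u^{\nu}=k^{0}\propto 1/a$ along the null geodesic, so
$$1+z=\frac{A_{S}}{A_{O}}=\frac{\omega_{S}}{\omega_{O}}=\frac{a(t_{O})}{a(t_{S})},$$
which recovers the familiar cosmological redshift.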
| {
"pile_set_name": "StackExchange"
} |
Q:
Change color multiple marker in google maps API
I would like to ask about how to change the color of a marker in Google Maps. The situation is: I have a program that creates multiple markers in Google Maps. But how can I give a specific color to each marker?
this is my code for now,
var markers = [];
var map;
var labels = 'ABCD';
var labelIndex = 0;
function addMarker(location) {
var marker = new google.maps.Marker({
position: location,
label: labels[labelIndex++ % labels.length],
icon: 'http://maps.google.com/mapfiles/ms/icons/green-dot.png',
map: map
});
markers.push(marker);
}
A:
One way would be to pass the color into the addMarker function:
function addMarker(location, color) {
var marker = new google.maps.Marker({
position: location,
label: labels[labelIndex++ % labels.length],
icon: 'http://maps.google.com/mapfiles/ms/icons/'+color+'.png',
map: map
});
markers.push(marker);
}
proof of concept fiddle
code snippet:
var markers = [];
var map;
var labels = 'ABCD';
var labelIndex = 0;
function initialize() {
map = new google.maps.Map(
document.getElementById("map_canvas"), {
center: new google.maps.LatLng(40.7127837, -74.0059413),
zoom: 11,
mapTypeId: google.maps.MapTypeId.ROADMAP
});
// New York, NY, USA (40.7127837, -74.0059413)
// Newark, NJ, USA (40.735657, -74.1723667)
// Jersey City, NJ, USA (40.72815749999999, -74.07764170000002)
// Bayonne, NJ, USA (40.6687141, -74.11430910000001)
addMarker({
lat: 40.7127837,
lng: -74.0059413
}, "red");
addMarker({
lat: 40.735657,
lng: -74.1723667
}, "green");
addMarker({
lat: 40.7281575,
lng: -74.0776417
}, "yellow");
addMarker({
lat: 40.6687141,
lng: -74.1143091
}, "orange");
}
google.maps.event.addDomListener(window, "load", initialize);
function addMarker(location, color) {
var marker = new google.maps.Marker({
position: location,
label: labels[labelIndex++ % labels.length],
icon: {
url: 'http://maps.google.com/mapfiles/ms/icons/' + color + '.png',
labelOrigin: new google.maps.Point(15, 10)
},
map: map
});
markers.push(marker);
}
html,
body,
#map_canvas {
height: 100%;
width: 100%;
margin: 0px;
padding: 0px
}
<script src="https://maps.googleapis.com/maps/api/js"></script>
<div id="map_canvas"></div>
| {
"pile_set_name": "StackExchange"
} |
Q:
PHPMailer: Failed to connect to server: php_network_getaddresses: getaddrinfo failed
I'm trying to send an email using PHPMailer and the code below, but I get this error:
2017-10-10 17:39:33 SMTP ERROR: Failed to connect to server:
php_network_getaddresses: getaddrinfo failed: Name or service not
known (0) 2017-10-10 17:39:33 SMTP ERROR: Failed to connect to server:
php_network_getaddresses: getaddrinfo failed: Name or service not
known (0) SMTP connect() failed.
https://github.com/PHPMailer/PHPMailer/wiki/Troubleshooting Message
could not be sent.Mailer Error: SMTP connect() failed.
https://github.com/PHPMailer/PHPMailer/wiki/Troubleshooting
use PHPMailer\PHPMailer\PHPMailer;
use PHPMailer\PHPMailer\Exception;
require 'vendor/autoload.php';
$mail = new PHPMailer(true); // Passing `true` enables exceptions
try {
//Server settings
$mail->SMTPDebug = 2; // Enable verbose debug output
$mail->isSMTP(); // Set mailer to use SMTP
$mail->Host = 'smtp1.example.com;smtp2.example.com'; // Specify main and backup SMTP servers
$mail->SMTPAuth = true; // Enable SMTP authentication
$mail->Username = '[email protected]'; // SMTP username
$mail->Password = '*********'; // SMTP password
$mail->SMTPSecure = 'tls'; // Enable TLS encryption, `ssl` also accepted
$mail->Port = 587; // TCP port to connect to
//Recipients
$mail->setFrom('[email protected]', 'Mailer');
$mail->addAddress('[email protected]', 'Joe User'); // Add a recipient
$mail->addAddress('[email protected]'); // Name is optional
$mail->addReplyTo('[email protected]', 'Information');
$mail->addCC('[email protected]');
$mail->addBCC('[email protected]');
//Attachments
//Content
$mail->isHTML(true); // Set email format to HTML
$mail->Subject = 'Here is the subject';
$mail->Body = 'This is the HTML message body <b>in bold!</b>';
$mail->AltBody = 'This is the body in plain text for non-HTML mail clients';
$mail->send();
echo 'Message has been sent';
} catch (Exception $e) {
echo 'Message could not be sent.';
echo 'Mailer Error: ' . $mail->ErrorInfo;
}
A:
You realise this is an example using placeholder data? example.* domains are guaranteed not to exist, specifically so they can be used safely in example code and documentation. You need to substitute your own domains for all those example addresses, and then DNS lookups will work (which is what is failing for you at the moment).
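For illustration only (the host and credentials below are placeholders for your own provider's values, not a recommendation):
$mail->Host     = 'smtp.gmail.com';          // your provider's real SMTP host
$mail->Username = '[email protected]';  // your real account
$mail->Password = 'your-app-password';
$mail->setFrom('[email protected]', 'Mailer');
$mail->addAddress('[email protected]', 'Recipient');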
| {
"pile_set_name": "StackExchange"
} |
Q:
Recursive self-relational query via join column
Having this specific simple many to many self referential structure
An item owns other items through the joins table;
trying to retrieve the whole tree structure as json while querying a given ite.
For example, while querying the item with id 1 :
SELECT it.*, fulltree from items where id = 1;
Outputting the following :
{
id: 1,
title: "PARENT",
children: [
{
id: 2,
title: "LEVEL 2",
children: [
{
id: 3,
title: "LEVEL 3.1",
children: [
{
id: 4,
title: "LEVEL 4.1"
},
{
id: 5,
title: "LEVEL 4.2"
}
]
},
{
id: 6,
title: "LEVEL 3.2"
}
]
}
]
}
I've dug into the JSON capabilities Postgres offers and managed such output by repeating nested queries (simple but ugly, and limited by the number of repeats :/).
I've found out about recursive queries. The examples found here and there not being that simple, I'm having a hard time finding an entry point to understand them correctly and adapting them to my needs.
Example Data
CREATE TABLE items (
id serial primary key,
title VARCHAR(255)
);
insert into items (title) values ('PARENT');
insert into items (title) values ('LEVEL 2');
insert into items (title) values ('LEVEL 3.1');
insert into items (title) values ('LEVEL 4.1');
insert into items (title) values ('LEVEL 4.2');
insert into items (title) values ('LEVEL 3.2');
CREATE TABLE joins (
id serial primary key,
item_id INT,
child_id INT
);
insert into joins (item_id, child_id) values (1,2);
insert into joins (item_id, child_id) values (2,3);
insert into joins (item_id, child_id) values (3,4);
insert into joins (item_id, child_id) values (3,5);
insert into joins (item_id, child_id) values (2,6);
I hope the example here will be simple enough to get help from experienced users; thanks a lot in advance.
A:
Here is an example query,
WITH RECURSIVE t(item_id, json) AS (
SELECT item_id, to_jsonb(items)
FROM items
WHERE NOT EXISTS (
SELECT 1
FROM joins
WHERE items.item_id = joins.item_id
)
UNION ALL
SELECT parent.item_id, to_jsonb(parent) || jsonb_build_object( 'children', t.json )
FROM t
JOIN joins AS j
ON t.item_id = j.child_id
JOIN items AS parent
ON j.item_id = parent.item_id
)
SELECT item_id, jsonb_pretty(json)
FROM t
WHERE item_id = 1;
item_id | jsonb_pretty
---------+---------------------------------------
1 | { +
| "title": "PARENT", +
| "item_id": 1, +
| "children": { +
| "title": "LEVEL 2", +
| "item_id": 2, +
| "children": { +
| "title": "LEVEL 3.2", +
| "item_id": 6 +
| } +
| } +
| }
1 | { +
| "title": "PARENT", +
| "item_id": 1, +
| "children": { +
| "title": "LEVEL 2", +
| "item_id": 2, +
| "children": { +
| "title": "LEVEL 3.1", +
| "item_id": 3, +
| "children": { +
| "title": "LEVEL 4.1",+
| "item_id": 4 +
| } +
| } +
| } +
| }
1 | { +
| "title": "PARENT", +
| "item_id": 1, +
| "children": { +
| "title": "LEVEL 2", +
| "item_id": 2, +
| "children": { +
| "title": "LEVEL 3.1", +
| "item_id": 3, +
| "children": { +
| "title": "LEVEL 4.2",+
| "item_id": 5 +
| } +
| } +
| } +
| }
(3 rows)
Note, we're not actually merging the paths to form a completed tree. You either have to build the tree from the root node down, or the leaf nodes to the top. In this case, you'll have to merge the discrete paths. Look for a deep json merge in Javascript and tie it together with plv8
Test data
I modified your example schema a bit,
DROP TABLE joins;
CREATE TABLE items (
item_id serial PRIMARY KEY,
title text
);
CREATE TABLE joins (
id serial PRIMARY KEY,
item_id int,
child_id int
);
INSERT INTO items (item_id,title) VALUES
(1,'PARENT'),
(2,'LEVEL 2'),
(3,'LEVEL 3.1'),
(4,'LEVEL 4.1'),
(5,'LEVEL 4.2'),
(6,'LEVEL 3.2');
INSERT INTO joins (item_id, child_id) VALUES
(1,2),
(2,3),
(3,4),
(3,5),
(2,6);
| {
"pile_set_name": "StackExchange"
} |
Q:
Is our policy adequate to our quality standards?
I witnessed a common phenomenon where an objectively poor quality answer gets widely upvoted because people believe it's funny or because they approve of people being salty toward the OP.
I identify poor quality as generally short answers, opinion- and advice-based, possibly including salt and disregard for the OP's situation. For example, advising the OP to quit their job.
I wonder to an extent whether self-moderation in The Workplace functions well enough for quality content to rise. It seems to have been discussed before to add a "back it up" policy to the FAQ to ensure answers are at least based on experience rather than pure opinion, but as far as I can see this has not been successfully implemented. I wonder if other kinds of policy could help identify and moderate poor content.
In Interpersonal Stack Exchange, a stack I'm active in too, this wouldn't happen so often, because the number of posts per moderator is low and the policies for both questions and answers are much stricter, so the requests to meet quality standards are always there. For example, a question that could belong to The Workplace has been closed because it lacked detail and an expected outcome and would likely generate opinion-based answers, but I'm fairly sure it would still be open if posted here initially. This left me wondering if we give enough incentive for quality content.
As I would imagine, there should be general guidelines and examples as to what makes a good or bad answer, but the help center and the FAQ are very relaxed about posting advice as answers.
So there are two components to this question:
Do you believe the posting policies are adequate to meet our quality standards?
If not, is there any specific policy that could help reduce salty and opinion-based answers?
I originally asked another question and edited for clarity.
How can we further improve the quality, especially getting rid of salty and opinion-based answers?
A:
I witnessed a common phenomenon where an objectively poor quality answer gets widely upvoted because people believe it's funny or because they approve of people being salty toward the OP.
Except it's not objectively poor - you're saying you find it poor, so to you it's subjectively poor. Kilisi's writing style may be a tad brusque at times but "salty", really? I see no evidence that Kilisi had any emotion towards the OP, let alone the negativity you're ascribing to him. For full disclosure I was one of the 150 people who upvoted that answer - not because it was "funny" or "salty", but because I thought it was correct. 31 users disagreed - and downvoted the answer, this is how the system works.
If you see a post that crosses the line into rude/abusive territory then flag it - our able mods will then do what they do best.
In Interpersonal Stack Exchange, a stack I'm active in too, this wouldn't happen so often, because the number of posts per moderator is low and the policies for both questions and answers are much stricter, so the requests to meet quality standards are always there. For example, a question that could belong to The Workplace has been closed because it lacked detail and an expected outcome and would likely generate opinion-based answers, but I'm fairly sure it would still be open if posted here initially. This left me wondering if we give enough incentive for quality content.
This isn't IPS - and I'd be strongly resistant to the idea that we should implement their policy. I mean no disrespect to the good folks of IPS but that policy is staggeringly flawed IMO, as Magisch mentions in a comment (emphasis mine):
All it does is force people to add anecdotes from their experience which are usually tangential and in case of someone who just wants to stir the pot, can be trivially made up.
Aside from that I come to TWP to try and help others - often that means that answers will come in large part from my experience, often from accumulated knowledge rather than specific incidents. It's simply impractical for me to explain fully every time what leads me to believe what I'm saying is the appropriate answer. Of course, you don't know me from Adam - I'm just words on a screen. I could just be making up any old rubbish and posting it as answers. But we have the voting mechanic precisely to ensure that poor quality content like that doesn't rise to the top.
When it comes to regulatory matters - then that's different, these sorts of things can be objectively backed up (usually with freely available information) and in those cases they absolutely should be; many questions are going to generate answers that can be viewed subjectively. As evidenced here - you hated Kilisi's answer and I didn't, and that's okay - you're entitled to your opinion that a particular answer is poor or not useful, that's why SE provides the downvote button. You downvote it, I upvote it, and if more people agree with you than me the score's going to go negative and vice-versa.
When we're looking to avoid "opinion-based" what we really mean by that is avoiding things where diametrically opposed answers are equally valid. If someone asks two people what their favorite color is and one answers "black" and the other answers "white" nobody got it "wrong" despite their answers being literal opposites of each other. But if someone asks "what color paper should my resume be printed on" then the two answers aren't equally "correct", even though there's no technical standard or regulation telling you what color paper should be using.
A:
As I would imagine, there should be general guidelines and examples as to what makes a good or bad answer, but the help center and the FAQ are very relaxed about posting advice as answers.
I think that cuts to the heart of the matter, and I think there's a good reason why the help and FAQ are structured that way. Ultimately, you posted a two part question:
So there are two components to this question:
Do you believe the posting policies are adequate to meet our quality standards?
If not, is there any specific policy that could help reduce salty and opinion-based answers?
I think you left off the zero-th component, which is: do we agree on what the quality standards are?
Our site tour gives the follow description of the purpose of our site:
The Workplace Stack Exchange is a question and answer site about the workplace and other career-related topics. It is for members of the workforce to get answers on topics such as the job hunting process, interviewing, salary negotiation, and professionalism within the Workplace.
The challenge here is, many workplace "questions" on these topics are inherently subjective - or, at least, they don't inherently have a single, knowable, provable answer. So, we need to be careful when talking about quality standards, to ensure that we don't choke the site out of existence in the name of enforcing strict standards.
If I go on a technical SE site and ask about database indexes or some coding bug, there isn't really much room for subjectivity. It's easy to have objective quality standards, and it's often easy to trivially prove an answer is correct (by trying it out). A workplace question is rarely that black and white. If someone asks about including something on their resume, you can't go run off to a test environment, put it on your resume, run a unit test, and see if the answer was right or not.
So in the absence of strict "quality" standards, or strict guidelines on objective answers, we are in a position where we have a gray area around interpreting answers. Luckily, as DarkCygnus's answer pointed out, we also have a variety of tools for handling the gray area.
Going back to your opening paragraph, you said:
I witnessed a common phenomenon where an objectively poor quality answer gets widely upvoted because people believe it's funny or because they approve of people being salty toward the OP.
Maybe you can ask yourself: why are you upset about this answer getting widely upvoted? What do those votes mean to you? What can you do about it?
If you disagree with the quality of the answer, you can comment to suggest improvement and/or downvote (or use other tools as appropriate). But others are ultimately free to express their own thoughts about the answer (i.e. upvote it).
If you feel that the answer is literally bad advice and you are concerned about the question's asker being left empty-handed, you can always post your own answer containing what you think the right solution is, and providing whatever backup or explanation you think is appropriate.
Ultimately, that's the beauty of this format when applied to workplace questions. Everyone is free to provide input, through a variety of channels. Even on questions with highly upvoted answers, I will often write my own answer if I feel something important is missing, or if I disagree with the existing answers. The person who asked the question (or anyone who later finds it in search or via a dupe tag) gets the benefit of a wide range of possibilities to consider. Rather than focusing on the ultimate answer, the community is able to provide and self-moderate a range of answer responses.
| {
"pile_set_name": "StackExchange"
} |
Q:
Resources for understanding meta-regression?
I'd like to learn more about the statistical techniques that one should use for a meta-regression. I'm interested in both the general theory and in examining methodologies in R.
A:
Echoing Subhash's suggestion, if you intend to meta-analyze regression weights, and eventually examine continuous moderators of those weights via meta-regression, you need to be sure the effect sizes (i.e., the regression weights) came from identical models. That is to say, the models for each effect size contained the exact same variables. As this kind of model consistency is rare--at least it is in my field--it seems much more common for people to meta-analyze zero-order correlation coefficients.
As for resources about techniques for carrying out meta-regression, most meta-analysis texts will provide good introductory coverage; Borenstein et al.'s (2009) book is a good choice, and I have also heard nice things about Schmidt & Hunter's (2014) if you're going to be meta-analyzing correlation coefficients in particular. Alternatively, Cheung's (2014) paper describes an SEM approach to meta-analysis/meta-regression that has unique benefits.
In terms of R packages, Cheung (2015) mentions some of those available, including meta, rmeta, mvmeta, metaLik, and metafor, while introducing his own metasem package. metafor is a great comprehensive meta-analysis package; you'll easily be able to fit fixed- random- and mixed-effect models (i.e., conducting meta-regression), test for publication bias, and create useful meta-analytic visualizations (e.g., forest and funnel plots). If, however, you want to meta-analyze dependent effect sizes (e.g., one sample may yield multiple effect sizes that you wish to include in your meta-analysis), then the metasem package is what I would recommend. It makes it easy to conduct meta-analysis and meta-regression--you will be able to specify moderators at both level 2 (e.g., varying within a sample) and at level 3 (e.g., varying between samples).
References
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. West Sussex, UK: Wiley.
Cheung, M. W. L. (2014). Modeling dependent effect sizes with three-level meta-analysis: A structural equation modeling approach. Psychological Methods, 19, 211-229.
Cheung, M. W. L. (2015). metaSEM: An R package for meta-analysis using structural equation modeling. Frontiers in Psychology, 5, 1521.
Schmidt, F. L., & Hunter, J. E. (2014). Methods of meta-analysis: Correcting error and bias in research findings (3rd Edition). London, UK: Sage.
| {
"pile_set_name": "StackExchange"
} |
Q:
Disproving a function exists
Prove/Disprove: There exists a function $f: \Bbb R \to\Bbb R$ such that
$$
\arctan(f(x)) = 2x/(\cos^2(x) + 3)
$$
for every $x \in \Bbb R$.
I know that it's not true because arctan is bounded: $-\frac{\pi}{2} < \arctan(x) < \frac{\pi}{2}$ for all $x$. But how can you fully prove it without just giving an example, seeing that it is an "exists" statement?
A:
I don't understand your request. Take $x>\pi$. Then $$\frac{2x}{\cos^2(x)+3}\geqslant\frac{2x}4=\frac x2>\frac\pi2.$$ Therefore, for such an $x$ you cannot possibly have $$\arctan\bigl(f(x)\bigr)=\frac{2x}{\cos^2(x)+3},$$ since $$(\forall y\in\mathbb{R}):\arctan(y)<\frac\pi2.$$ What can possibly be wrong with this proof?
| {
"pile_set_name": "StackExchange"
} |
Q:
How to resize webview according to screen resolution in iphone app?
I am using storyboards. I am unable to resize the webview according to the screen size in my storyboard.
I want the header to be the same size for both resolutions but change the height of the webview according to the screen resolution.
I can't use Auto Layout because I want to deploy my app for iOS 5 and above.
Please help.
A:
You can set your webview frame according to UIScreen like below, where you alloc your webview:
CGRect webFrame = [[UIScreen mainScreen] applicationFrame];
webFrame.origin.y -= 20.0; // statusbar 20 px cut from y of frame
webView = [[UIWebView alloc] initWithFrame:webFrame];
webView.scalesPageToFit = YES;
webView.autoresizesSubviews = YES;
webView.autoresizingMask=(UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth);
[webView setBackgroundColor:[UIColor clearColor]];
| {
"pile_set_name": "StackExchange"
} |
Q:
Careers button needs a space
The "Extend" and "User Permissions" buttons need an between them.
A:
| {
"pile_set_name": "StackExchange"
} |
Q:
Reduce space between columns when using \usepackage{multicol} & \SetEnumitemKey
The MWE code below executes perfectly. But I need to reduce by 50% the horizontal space between the 2 cols.
I have tried pasting the string \setlength\columnsep{10pt} at different places in the preamble and after \begin{document}.
Nothing works. Your help is sincerely appreciated. If possible, please explain why \setlength\columnsep{10pt} is not working.
Also (and this is me being really OCD), is it possible to correct the horizontal alignment between the 2 cols? I suspect the use of both fraction and non-fraction answer choices on the same line is messing up the alignment.
screenshot of horizontal misalignment
MWE
\documentclass[addpoints]{exam}
\usepackage{amsmath}
\usepackage{enumitem}
\usepackage{multicol}
\newlist{options}{enumerate}{1}
\setlist[options]{label*=(\Alph*)}
\newcommand{\option}{\item}
\SetEnumitemKey{twocol}{
before=\raggedcolumns\begin{multicols}{2},
after=\end{multicols}}
\begin{document}
%This code creates the text before the first question
%-------------------------------------------------------------------
\begin{center}
\fbox{\fbox{\parbox{5.5in}{\centering
Pre-Algebra Covid: Week 4 Item 2}}}
\end{center}
\vspace{5mm}
%Here, the questions begin
\begin{questions}
% Q 1
\question $\displaystyle\frac{4x^4y^3}{2x^3}$
\bigskip
\begin{options}[twocol]
\option $2xy^3$\\
\option $\displaystyle\frac{4x^2y^3}{2}$\\
\option $\displaystyle\frac{24x^7}{y^2}$\\
\option $\displaystyle\frac{4x}{9y^2}$\\
\end{options}
\end{questions}
\end{document}
A:
I'm not really sure what you are trying to do.
Given the design you made you have
text in full width .............
................................
a multicol       at half of
splitting the    the \linewidth
So your 4 answers come exactly where they should, and a bigger \columnsep would not change that; it only makes the columns smaller but does not alter their starting point. Using a negative \columnsep is different, but it effectively means your second column overwrites the first (it works in your case as your columns are basically empty, but it wouldn't if (A) or (B) had more material).
So I'm not sure why you don't simply use {multicols}{4} and have all answers on a single line given that they are so short.
But if you are really determined to have 2 columns and a lot of white space on the right, try something like this:
\SetEnumitemKey{twocol}{
before=\setlength\linewidth{.5\linewidth}%
\raggedcolumns\begin{multicols}{2},
after=\end{multicols}}
This way multicols sees a text line width of only half of what it really is, and accordingly the two columns it forms are based on that width.
As to the vertical "mis"alignment: multicol doesn't align individual lines in different columns, it just breaks the galley into columns, and if (A) is normal-sized but (C) is extra high, then they do not line up.
If you want everything line up, you better use a table or an array but if you want auto breaking via multicol then either you have to accept this or manually make things the same height, e.g.,
\option $2xy^3 \phantom{\displaystyle\frac{4x}{9y^2}}$\\
which is not that pretty (but could be done nicer).
Anyway, with the above changes you get
The first line in the example was generated using
\SetEnumitemKey{fourcol}{
before=\raggedcolumns\begin{multicols}{4},
after=\end{multicols}}
To reduce the width even there, one can apply the same trick, eg
\SetEnumitemKey{fourcol}{
before=\setlength\linewidth{.6\linewidth}% % -- 60 percent of the normal width
\raggedcolumns\begin{multicols}{4},
after=\end{multicols}}
or some other value, or one could generally redue the width of the galley by specifing
\setlength\textwidth{4in} % or some other value
in the document preamble.
| {
"pile_set_name": "StackExchange"
} |
Q:
SQL Query - Performance Optimization
I'm not so good at SQL, so I asked you guys for help on writing a query.
SQL Query - Table Joining Problems
I got an answer and it works! It's just noticeably slow. I hate to do it, but I'm really hoping someone out there wants to recommend some ways to optimize the query. I have not even attempted this on my own because I don't know enough about SQL to even begin googling.
A:
What might help is to create indexes on the columns you're joining with. For example:
CREATE INDEX name_for_index ON Main (StatusID);
It generates a look-up table for this column that the algorithm that performs the query will use.
Edit: If you're not allowed to change the database, you may be out of luck. I have seen cases where relaxing the JOIN statements improved the performance; that would look like this:
...
FROM
Main m, Status st, Secondary s
WHERE
st.ID = m.StatusID
AND s.MainID = m.ID
AND
( s.MainID IS NULL AND m.WhenDate = <YourDate>
OR
s.MainID IS NOT NULL AND s.WhenDate = <YourDate> )
AND TypeId = <TypeFilter>
AND ... other filters, if you need any ...
And then handling your other case, where the INNER JOIN is needed a bit more verbose.
| {
"pile_set_name": "StackExchange"
} |
Q:
RelativeSource data trigger binding not working
I am trying to set the background color of a DataGridTextColumn to another color if it is read-only. I am doing so with the following code:
<DataGridTextColumn Header="Test" IsReadOnly="True">
<DataGridTextColumn.ElementStyle>
<Style TargetType="{x:Type TextBlock}">
<Style.Triggers>
<DataTrigger Binding="{Binding IsReadOnly, RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type DataGridTextColumn}}}" Value="True">
<Setter Property="Background" Value="LightGreen"/>
</DataTrigger>
</Style.Triggers>
</Style>
</DataGridTextColumn.ElementStyle>
</DataGridTextColumn>
I am having no luck, however removing the triggers results in the background always being light green. Is something wrong with the data trigger binding? I am relatively new to WPF but this is what I could find online. Ideally this would be in App.XAML so it would work across all columns such as this, so would there then be a way to translate this to a style? Thanks.
Edit---------
If I bind by ElementName it works:
<DataTrigger Binding="{Binding IsReadOnly, ElementName=stupid}" Value="True">
<Setter Property="Foreground" Value="Red" />
</DataTrigger>
However I would like this to be more generic if possible. Thanks again.
A:
Edit: Didn't check for a background property on DataGridTextColumn at first.
This answered your original question -
<DataGridTextColumn Header="Test" IsReadOnly="True" Binding="{Binding name}" x:Name="MyColumn">
<DataGridTextColumn.ElementStyle>
<Style TargetType="{x:Type TextBlock}">
<Style.Triggers>
<DataTrigger Binding="{Binding IsReadOnly, ElementName=MyColumn}" Value="True">
<Setter Property="Background" Value="Orange" />
</DataTrigger>
</Style.Triggers>
</Style>
</DataGridTextColumn.ElementStyle>
</DataGridTextColumn>
To answer your second question, the DataTrigger binding you are looking for is:
<DataTrigger Binding="{Binding IsReadOnly, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type DataGridCell}}}" Value="True">
In Summary, look for the parent DataGridCell instead of DataGridTextColumn. The reason for this is the TextBlock you are trying to style is not actually a child of DataGridTextColumn, but a child of the DataGridTextColumn's peer.
| {
"pile_set_name": "StackExchange"
} |
Q:
disabled bootstrap button can be accessed from tabbing
So by doing the below the button is supposed to be disabled, but you can actually access the button using tabbing:
<button type="submit" class="btn btn-primary disabled">Submit</button>
So to prevent that from happening, do I have to add tabindex="-1" to all of the elements I want to be disabled? I thought that by using the disabled class this would be taken care of, but it seems not.
Is there any other way to do this?
A:
Yes, that is annoying but also rather logical. Tabbing and focus are a different browser task; CSS is (with a few exceptions) mostly about visual behaviour. Adding tabindex="-1" would become hellish, since you most likely also want the button to be focusable through tabbing once it is no longer disabled.
I would suggest a handler that moves focus to either the previous or next element if the button is receiving focus while it is disabled:
$('button').focus(function(e) {
if ($(this).hasClass('disabled')) {
e.currentTarget.nextElementSibling.focus()
//or e.currentTarget.previousElementSibling.focus()
}
})
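Alternatively, note that the native disabled attribute (unlike the .disabled class, which on <button> elements is purely cosmetic) already removes the button from the tab order:
<button type="submit" class="btn btn-primary" disabled>Submit</button>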
| {
"pile_set_name": "StackExchange"
} |
Q:
Call private method from public method?
I've defined a class like this:
function Class1(){
this.Func1 = function(){
/* Methods and vars */
};
function Func2(){
/* Methods and vars */
};
};
I want to find out a way to call the public method (or get the value of a public variable) from the private one (Func2()). Any suggestions?
PS: Sorry if the terminology I used is strongly object-oriented, because I am a C++ programmer, and I'm kind of a newbie in JavaScript programming.
A:
From Func1, you can call Func2 directly:
this.Func1 = function() {
Func2();
};
However, you cannot do the same to call Func1 from Func2 because Func2 will (probably) have a different scope and different definition of this when it is called; this.Func1 will be undefined. As alx suggested below, you can save the scope using another variable that will retain its value when used from the inside function. You can also save a reference to Func1 in local scope as follows:
var Func1 = this.Func1 = function() {
// fun stuff
};
function Func2() {
Func1();
}
This works because it does not rely on the changing reference this.
A:
use closure:
function Class1(){
this.Func1 = function(){
/* Methods and vars */
};
var me = this;
function Func2(){
me.Func1();
};
};
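Usage sketch for either variant:
var obj = new Class1();
obj.Func1(); // public, callable from outside
// Func2 stays private: only code inside Class1 can reach it,
// and from there it calls Func1 through the saved reference.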
| {
"pile_set_name": "StackExchange"
} |
Q:
Grammaticalization of third person singular -s
Is there any evidence that the third person singular -s can be traced back to a lexical item before it became an inflection? I am trying to see if the theory of grammaticalization applies to its diachronic process. Any information would be most helpful. Thanks so much.
A:
Almost certainly not. The usual 3rd person singular inflection in Old English was -th or -eth and it looks as if its replacement by -s came about by a process of sound change.
| {
"pile_set_name": "StackExchange"
} |
Q:
Nmap showing ISPs router's DNS port in addition to target's ports
Whenever I do nmap scans, it seems that the information related to port 53 is altered by my ISP router as follows:
$ nmap -T4 -A -v stackoverflow.com
PORT STATE SERVICE VERSION
53/tcp open domain MikroTik RouterOS named or OpenDNS Updater
.... // Ports related to the actual stackoverflow.com scan: 21, 22, 25, 80...
This happens for every target IP or hostname: scanme.nmap.org, google.org, etc
However, if I do a scan using an online scanner such as https://hackertarget.com/nmap-online-port-scanner/, it shows the exact details and doesn't show this 53 service.
Starting Nmap 6.46 ( http://nmap.org ) at 2015-12-26 22:11 CST
Nmap scan report for stackoverflow.com (104.16.37.249)
Host is up (0.00093s latency).
Other addresses for stackoverflow.com (not scanned): 104.16.35.249 104.16.34.249 104.16.33.249 104.16.36.249
PORT STATE SERVICE VERSION
21/tcp filtered ftp
22/tcp filtered ssh
25/tcp filtered smtp
80/tcp open http cloudflare-nginx
443/tcp open ssl/https cloudflare-nginx
3389/tcp filtered ms-wbt-server
How do I avoid this?
Important: Off topic, but my network may be under a MITM attack. Could a MITM cause this?
A:
It appears that your router's firewall has hijacked your DNS requests. I found these instructions on your router's wiki page which describe exactly how to do that:
Force users to use specified DNS server
This is just simple firewall rule which will force all Your users
behind RB to use DNS server which You will define.
In /ip firewall nat
add chain=dstnat action=dst-nat to-addresses=192.168.88.1 to-ports=53 protocol=tcp dst-port=53
add chain=dstnat action=dst-nat to-addresses=192.168.88.1 to-ports=53 protocol=udp dst-port=53
This rule will force all users with a custom defined DNS server to use 192.168.88.1 as their DNS server; it simply redirects all requests sent to ANY-IP:53 to 192.168.88.1:53.
If you can log into your router, you may be able to undo the damage yourself. If not, your ISP would have to make the change. Your other options will be complex, like setting up a VPN.
Please note that your country may have laws regarding DNS that may have forced your ISP to provide only government-approved DNS name resolutions. For example, Turkey required the removal of Twitter's servers from DNS due to all the public tweets criticizing their corrupt cabinet. I would not advise you to attempt to work around this restriction if it means going to jail.
| {
"pile_set_name": "StackExchange"
} |
Q:
Grouping/Windowing in hive
In the below image the first is the table(script provided) in question and 2nd is the expected output.
In column C we have different items like T1, T2, T3, and the records come in groups: T1 records and then T2 or T3. There should not be any gap between those; T1 will start and finish, and only then can a T2 item appear. But if T1 reappears after other items, I want to consider it differently. What are the options to achieve this result in Hive/Spark?
I tried rank in one column and the next value in other columns and ran some comparisons, but that did not help.
Any pointers please
CREATE TABLE TEST_A (A STRING, B STRING, C STRING);
INSERT INTO TEST_A (A, B, C) VALUES ('a','1-Jan','T1'), ('a','2-Jan','T1'),('a','3-Jan','T2'),('a','4-Jan','T3') ,('a','5-Jan','T1'),('a','6-Jan','T1')
A:
This is a gap-and-islands problem. I am going to propose putting each "island" of adjacent rows into a separate row.
One approach -- that works in this case -- is to use the difference of row numbers:
select a, c, min(b), max(b)
from (select t.*,
row_number() over (partition by a order by b) as seqnum,
row_number() over (partition by a, c order by b) as seqnum_2
from t
) t
group by a, c, (seqnum - seqnum_2);
You can pivot this into multiple columns if you really want. However, I think that just confuses the problem, because you may not know how many groups there are for a given a/c combination.
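Hand-computing the two row numbers on your sample data shows how the difference identifies each island (note that T2's difference also happens to be 2, but it lands in a different group because c is part of the group by):
A  B      C   seqnum  seqnum_2  (seqnum - seqnum_2)
a  1-Jan  T1  1       1         0
a  2-Jan  T1  2       2         0
a  3-Jan  T2  3       1         2
a  4-Jan  T3  4       1         3
a  5-Jan  T1  5       3         2
a  6-Jan  T1  6       4         2
So the groups come out as (T1, 0) for 1-Jan..2-Jan, (T2, 2) for 3-Jan, (T3, 3) for 4-Jan, and the reappearing (T1, 2) for 5-Jan..6-Jan, which is exactly the separation you asked for.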
| {
"pile_set_name": "StackExchange"
} |
Q:
How to override renderers for different Entries throughout the application?
I'm setting up different designs for two custom Entry subclasses, LoginEntry and CommonEntry, and I want to be able to override the renderer for these two scenarios with different designs throughout the application.
I tried the following code, but it produces the error 'LoginEntry' is a type, which is not valid in the given context.
protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
{
base.OnElementChanged(e);
if (e.OldElement != null) return;
if (e.NewElement == LoginEntry)
{
UpdateEntryStyle();
}
}
A:
The == operator in C# is mostly used to compare values (for primitive types like int and char) or references (for objects). It cannot be used to compare an object to a type, which is what you're trying to do in your example.
When trying to compare types you should do a type check; the available methods are explained here.
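A minimal sketch of what that could look like in your renderer (assuming LoginEntry and CommonEntry are your two Entry subclasses):
protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
{
    base.OnElementChanged(e);
    if (e.NewElement == null) return;

    // 'is' tests the runtime type of the instance, unlike ==
    if (e.NewElement is LoginEntry)
    {
        UpdateEntryStyle();      // login-specific styling
    }
    else if (e.NewElement is CommonEntry)
    {
        // common styling here
    }
}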
| {
"pile_set_name": "StackExchange"
} |
Q:
Stack error when jumping from bootloader to application
I have been trying to jump from my bootloader to an application for a while now, but I cannot figure out what is going wrong. I am hoping someone here might be able to help me.
I am using Texas Instrument's CC2652 and Texas Instrument's Code Composer Studio to develop and flash the bootloader. The code below is the code I use to jump to the application.
void startApp(uint32_t prgEntry) {
static uint32_t temp;
temp = prgEntry;
// Reset the stack pointer,
temp +=4;
asm(" LDR SP, [R0, #0x0] ");
((void (*)(void))(*((uint32_t*)temp)))();
}
When I run this code I will end up in the FaultISR. Here I can use the debugger to look at the registers. In the CFSR (Configurable Fault Status Register) I can see that the STKERR and the IBUSERR bits are set.
prgEntry Is 0x2E000 in this case. I defined this address as the start of the application in the linker command files of both the bootloader and the application as well. I copied the command files further below.
The application I am trying to jump to is first uploaded to the device in intel hex format and looks like this https://pastebin.com/DAerFkXr. Texas Instruments has a Flash Programmer application for Windows with which I can read a range of addresses from the internal flash of the device. I went through the intel hex file by hand and compared it to what I saw in the Flash Programmer application and I believe everything is put in the correct location. I posted a screenshot of the output of the Flash Programmer below the command files.
When I flash the application to the device using Code Composer Studio without my bootloader I can see that the application runs normally so I know it is not a problem with the application.
Application command file:
--stack_size=1024 /* C stack is also used for ISR stack */
--heap_size=256
/* Retain interrupt vector table variable */
--retain=g_pfnVectors
/* Override default entry point. */
--entry_point resetISR
/* Allow main() to take args */
--args 0x8
/* Suppress warnings and errors: */
/* - 10063: Warning about entry point not being _c_int00 */
/* - 16011, 16012: 8-byte alignment errors. Observed when linking in object */
/* files compiled using Keil (ARM compiler) */
--diag_suppress=10063,16011,16012
#define BOOT_BASE 0x0
#define BOOT_SIZE 0x8000
#define FLASH_BASE 0x2E000
#define FLASH_SIZE 0x2A000
#define RAM_BASE 0x20000000
#define RAM_SIZE 0x14000
#define GPRAM_BASE 0x11000000
#define GPRAM_SIZE 0x2000
/* System memory map */
MEMORY
{
/* Application stored in and executes from internal flash */
FLASH (RX) : origin = FLASH_BASE, length = FLASH_SIZE
/* Application uses internal RAM for data */
SRAM (RWX) : origin = RAM_BASE, length = RAM_SIZE
/* Application can use GPRAM region as RAM if cache is disabled in the CCFG
(DEFAULT_CCFG_SIZE_AND_DIS_FLAGS.SET_CCFG_SIZE_AND_DIS_FLAGS_DIS_GPRAM = 0) */
GPRAM (RWX): origin = GPRAM_BASE, length = GPRAM_SIZE
}
/* Section allocation in memory */
SECTIONS
{
.intvecs : > FLASH_BASE
.text : > FLASH
.TI.ramfunc : {} load=FLASH, run=SRAM, table(BINIT)
.const : > FLASH
.constdata : > FLASH
.rodata : > FLASH
.binit : > FLASH
.cinit : > FLASH
.pinit : > FLASH
.init_array : > FLASH
.emb_text : > FLASH
.ccfg : > FLASH (HIGH)
.vtable : > SRAM
.vtable_ram : > SRAM
vtable_ram : > SRAM
.data : > SRAM
.bss : > SRAM
.sysmem : > SRAM
.stack : > SRAM (HIGH)
.nonretenvar : > SRAM
.gpram : > GPRAM
}
Bootloader command file:
--stack_size=1024 /* C stack is also used for ISR stack */
--heap_size=256
/* Retain interrupt vector table variable */
--retain=g_pfnVectors
/* Override default entry point. */
--entry_point resetISR
/* Allow main() to take args */
--args 0x8
/* Suppress warnings and errors: */
/* - 10063: Warning about entry point not being _c_int00 */
/* - 16011, 16012: 8-byte alignment errors. Observed when linking in object */
/* files compiled using Keil (ARM compiler) */
--diag_suppress=10063,16011,16012
#define BOOT_BASE 0x0
#define BOOT_SIZE 0x8000
#define FLASH_BASE 0x2E000
#define FLASH_SIZE 0x2A000
#define RAM_BASE 0x20000000
#define RAM_SIZE 0x14000
#define GPRAM_BASE 0x11000000
#define GPRAM_SIZE 0x2000
/* System memory map */
MEMORY
{
/* The bootloader will be stored and executed from this location in internal flash */
BOOT (RX) : origin = BOOT_BASE, length = BOOT_SIZE
/* Application stored in and executes from internal flash */
FLASH (RX) : origin = FLASH_BASE, length = FLASH_SIZE
/* Application uses internal RAM for data */
SRAM (RWX) : origin = RAM_BASE, length = RAM_SIZE
/* Application can use GPRAM region as RAM if cache is disabled in the CCFG
(DEFAULT_CCFG_SIZE_AND_DIS_FLAGS.SET_CCFG_SIZE_AND_DIS_FLAGS_DIS_GPRAM = 0) */
GPRAM (RWX): origin = GPRAM_BASE, length = GPRAM_SIZE
}
/* Section allocation in memory */
SECTIONS
{
.intvecs : > BOOT_BASE
.text : > BOOT
.TI.ramfunc : {} load=BOOT, run=SRAM, table(BINIT)
.const : > BOOT
.constdata : > BOOT
.rodata : > BOOT
.binit : > BOOT
.cinit : > BOOT
.pinit : > BOOT
.init_array : > BOOT
.emb_text : > BOOT
.ccfg : > BOOT (HIGH)
.vtable : > SRAM
.vtable_ram : > SRAM
vtable_ram : > SRAM
.data : > SRAM
.bss : > SRAM
.sysmem : > SRAM
.stack : > SRAM (HIGH)
.nonretenvar : > SRAM
.gpram : > GPRAM
}
Screenshot from Flash Programmer:
Description of STKERR:
Stacking from exception has caused one or more bus faults. The SP is still adjusted and the values in the context area on the stack might be incorrect. BFAR is not written.
Description of IBUSERR:
Instruction bus error flag. This flag is set by a prefetch error. The fault stops on the instruction, so if the error occurs under a branch shadow, no fault occurs. BFAR is not written.
Edit:
This is the disassembly of the startApp function:
void startApp(uint32_t prgEntry) {
startApp():
push {r3, r14}
str r0, [r13]
temp = prgEntry;
ldr r1, [pc, #0x18]
ldr r0, [r13]
str r0, [r1]
temp +=4;
ldr r1, [pc, #0x14]
ldr r0, [r1]
adds r0, r0, #4
str r0, [r1]
asm(" LDR SP, [R0, #0x0] ");
ldr.w r13, [r0]
((void (*)(void))(*((uint32_t*)temp)))();
ldr r0, [pc, #8]
ldr r0, [r0]
ldr r0, [r0]
blx r0
}
A:
I think you should single-step through this code to see what is in R0 just before the LDR SP, [R0]. Remember that the value in R0 is interpreted as an address, and the instruction fetches whatever is at that address and shoves it into the stack pointer.
It appears to me that your code is taking the value 0x0002E004 and using it as an address. Whatever value is stored at 0x0002E004 is then stored in the stack pointer. Is that what you intended?
Depending on endianness, it looks like the SP gets 0x1F1F1F00 or 0x001F1F1F.
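For comparison, a common shape for this jump is sketched below. It assumes a standard Cortex-M vector table at prgEntry (initial stack pointer in the word at offset 0, reset handler in the word at offset 4) and GCC-style inline assembly; it has not been tested on the CC2652. Note that, unlike the original code, it loads SP from offset 0 rather than offset 4:
void startApp(uint32_t prgEntry)
{
    uint32_t appStack = *(volatile uint32_t *)prgEntry;        /* word 0: initial SP value */
    uint32_t appEntry = *(volatile uint32_t *)(prgEntry + 4);  /* word 1: reset handler (Thumb bit set) */

    /* You may also want to disable interrupts and update SCB->VTOR first. */
    __asm volatile ("MSR MSP, %0" : : "r" (appStack));         /* load the main stack pointer */
    ((void (*)(void))appEntry)();                              /* jump to the application; never returns */
}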
| {
"pile_set_name": "StackExchange"
} |
Q:
Riemann Sphere Mapping
this is my first post so sorry if my question is too vague. I didn't see a related question posted, hence why I'm asking.
I can't find any resources on it, but there's supposed to be a bijective mapping from the complex plane to the Riemann sphere, correct? The only mapping I've seen is constructed by drawing a line from a point in the complex plane to the top of the sphere; the intersection point with the sphere is the function value. How can this be expressed, and how is it injective?
On an intuitive note, how can the complex plane be isomorphic to a sphere? I'd think you could 'unfold' the sphere, which would create a finite plane, hence not injective.
A:
Given a complex number $z=x+iy$ we think of it as a vector in $\mathbb{R}^3$ as $(x,y,0)$. There is a unique line which passes through this point and through the north pole $(0,0,1)$. It is given by the equation $f(t)=(1-t)(0,0,1)+t(x,y,0)=(tx,ty,1-t)$. We want to find where this line intersects the sphere. So let's see when $||f(t)||^2$ equals $1$.
$||f(t)||^2=t^2x^2+t^2y^2+(1-t)^2=t^2x^2+t^2y^2+1-2t+t^2=t^2(x^2+y^2+1)+1-2t$
So let's compare this to $1$. We get $t^2(x^2+y^2+1)+1-2t=1$ and from here $t^2(x^2+y^2+1)=2t$. One possible solution is obviously $t=0$ and then the point of intersection with the sphere we will get is the north pole. But we are interested in the other point of intersection. So let's suppose $t\ne 0$ and divide by $t$. Then we get $t=\frac{2}{x^2+y^2+1}$, so the point of intersection is:
$(tx,ty,1-t)=(\frac{2x}{x^2+y^2+1},\frac{2y}{x^2+y^2+1},\frac{x^2+y^2-1}{x^2+y^2+1})=(\frac{2Re(z)}{|z|^2+1},\frac{2Im(z)}{|z|^2+1},\frac{|z|^2-1}{|z|^2+1})$
Alright, so we finally got the required formula. And it has an inverse map from $S^2\setminus\{(0,0,1)\}$ to $\mathbb{C}$ given by $(\xi,\eta,\lambda)\to \frac{\xi+i\eta}{1-\lambda}$ as you can check. So it is a bijection between the complex plane and the sphere without the north pole. Note that it is very important we exclude the north pole.
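As a quick sanity check that this is indeed the inverse: for $z=x+iy$ the projection gives $1-\lambda = 1-\frac{|z|^2-1}{|z|^2+1} = \frac{2}{|z|^2+1}$, so $\frac{\xi+i\eta}{1-\lambda} = \frac{\frac{2x}{|z|^2+1}+i\frac{2y}{|z|^2+1}}{\frac{2}{|z|^2+1}} = x+iy = z$.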
| {
"pile_set_name": "StackExchange"
} |
Q:
Force landscape mode at the beginning of app, but allow later changes in orientation
I am planning to allow the user to rotate the device; however, during launch I want the app to start in landscape mode. How can I do that?
Here is my code now for the orientation
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
} else {
return YES;
}
}
A:
You need to check the orientation of the status bar.
Please check the following links:
Force iOS app to launch in landscape mode
Modal View Controller force Landscape orientation in iOS 6
Force Landscape Orientation on iOS 6 in Objective-C
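Beyond those links, one common pattern (a rough sketch, not a tested drop-in: it assumes an initialLaunchDone flag that you set once your first view has appeared) is to gate the allowed orientations in the app delegate:
- (NSUInteger)application:(UIApplication *)application supportedInterfaceOrientationsForWindow:(UIWindow *)window
{
    // Landscape only until the first screen is up, then allow everything
    return self.initialLaunchDone ? UIInterfaceOrientationMaskAll
                                  : UIInterfaceOrientationMaskLandscape;
}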
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I do a step-based design for my application?
I want to create an application that has some optional steps, but I could not decide how to design the step plan. It looks like a workflow.
I have a Work entity. This work includes specific business steps.
Step-1 : Demand of work. ( DemanderName, DemandDate, DemandFiles )
Step-2 : ...
Step-3 : ...
Step-4 : Work investigation starts. ( InvestigatorName, StartDate, EndDate, Files )
Step-5 : After investigation, a decision is made: the work is accepted or cancelled. ( Result, Date )
After a work item is created, the steps are done in order, but some steps can be skipped. For example: I may work on a work item that does not include step-2 and step-3.
Do I need to create a database table for every step? (And I need to mark each step of a work item as completed or not completed.)
I need to show the work completion percentage (Work-1 20%, Work-2 60%). If I use tables, how can I compute the percentages?
I could not decide how I can design this.
A:
Do I need to create a database table for every step? (And I need to mark each step of a work item as completed or not completed.)
Do you need to store all of the results of the work in a database? I mean, you could do this, but it's not clear if it's necessary.
If each step has distinct attributes, and you want to store results from many executions of your workflow, using a database and having a separate table for each step might be a good approach. But if you want to be able to add/remove steps to your workflow easily, or regularly change the attributes of each step, database tables might not be flexible enough for you. Of course, you could come up with a data model that keeps all of the data in one table (probably with some supporting reference-data tables), though that might be more complex. It's really hard to recommend anything concrete without knowing more detail about the problem you're having.
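For example, the one-table approach might look roughly like this (a sketch; the table and column names are purely illustrative):
CREATE TABLE WorkStep (
    WorkId      INT,
    StepNumber  INT,           -- 1..5
    IsSkipped   BIT,           -- step cancelled for this work item
    IsCompleted BIT,
    Attributes  NVARCHAR(MAX)  -- step-specific data, e.g. stored as JSON
);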
I need to show the work completion percentage (Work-1 20%, Work-2 60%). If I use tables, how can I compute the percentages?
How you do this would probably depend on how you model the data. In general, you would probably query all of the "step" tables for a given workflow execution. If only the "step 1" table is populated for "Work-1", then you have 1/5 tables populated, so that would be 20% complete.
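If you went with the single-table sketch above instead, the percentage becomes a single aggregate query (again illustrative):
SELECT WorkId,
       100.0 * SUM(CASE WHEN IsCompleted = 1 THEN 1 ELSE 0 END) / COUNT(*) AS PercentComplete
FROM   WorkStep
WHERE  IsSkipped = 0
GROUP  BY WorkId;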
| {
"pile_set_name": "StackExchange"
} |
Q:
Angular 6: Calling service observer.next from Http Interceptor causes infinite request loop
I'm working on a website with authentication using JWT. I've created an HTTP interceptor class that adds the token to all request headers and is used for catching 401 errors.
import {Injectable} from '@angular/core';
import {HttpEvent, HttpHandler, HttpInterceptor, HttpRequest} from '@angular/common/http';
import {Observable, of} from 'rxjs';
import {JwtService} from '../service/jwt.service';
import {catchError} from 'rxjs/operators';
import {AlertService} from '../../shared/service/alert.service';
import {Router} from '@angular/router';
import {AlertType} from '../../shared/model/alert.model';
@Injectable()
export class HttpTokenInterceptor implements HttpInterceptor {
constructor(private jwtService: JwtService, private alertService: AlertService, private router: Router) {
}
/**
* Intercept HTTP requests and return a cloned version with added headers
*
* @param req incoming request
* @param next observable next request
*/
intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
// Add headers to all requests
const headersConfig = {
'Accept': 'application/json'
};
// Add token bearer to header when it's available
const token = this.jwtService.getToken();
if (token) {
headersConfig['Authorization'] = `Bearer ${token}`;
headersConfig['Content-Type'] = 'application/json';
}
const request = req.clone({setHeaders: headersConfig});
// Return adjusted http request with added headers
return next.handle(request).pipe(
catchError((error: any) => {
// Unauthorized response
if (error.status === 401) {
this.handleError();
return of(error);
}
throw error;
})
);
}
/**
* Handle http errors
*/
private handleError() {
// Destroy the token
this.jwtService.destroyToken();
// Redirect to login page
this.router.navigate(['/login']);
// This is causing infinite loops in HTTP requests
this.alertService.showAlert({
message: 'Your token is invalid, please login again.',
type: AlertType.Warning
});
}
}
The class uses my JwtService to remove the token from local storage and redirect the user to the login page using the Angular Router. The showAlert method from the AlertService is causing the HTTP request to be repeated infinitely.
I think it's being caused by the Observer implementation in the alert service, but I've tried so many different implementations that I really have no idea what is going wrong.
import {Injectable} from '@angular/core';
import {Alert} from '../model/alert.model';
import {Subject} from 'rxjs';
/**
* Alert Service: Used for showing alerts all over the website
* Callable from all components
*/
@Injectable()
export class AlertService {
public alertEvent: Subject<Alert>;
/**
* AlertService constructor
*/
constructor() {
this.alertEvent = new Subject<Alert>();
}
/**
* Emit event containing an Alert object
*
* @param alert
*/
public showAlert(alert: Alert) {
this.alertEvent.next(alert);
}
}
The alertService class is being used by an alert component that displays all the alert messages. This component is used in two main components: Dashboard & login.
import {Component} from '@angular/core';
import {AlertService} from '../../shared/service/alert.service';
import {Alert} from '../../shared/model/alert.model';
@Component({
selector: '*brand*-alerts',
templateUrl: './alerts.component.html',
})
export class AlertsComponent {
// Keep list in global component
public alerts: Array<Alert> = [];
constructor(private alertService: AlertService) {
// Hook to alertEvents and add to class list
alertService.alertEvent.asObservable().subscribe(alerts => {
// console.log(alerts);
this.alerts.push(alerts);
});
}
}
The issue is clearly visible in the following video:
video of loop
Kind regards.
Edit: solved
On the page that made the request, a subscription was initialised on the alert service, and that caused the HTTP request to fire again. I simply made the alert component the only subscriber to the AlertService and created a new service for the refresh. The answer from @incNick is indeed a correct implementation. Thanks!
A:
Sorry, I'm busy with my job, but maybe my source will be helpful.
import { Observable, throwError } from 'rxjs';
import { tap, catchError } from 'rxjs/operators';
...
return httpHandler.handle(request).pipe(
tap((event: HttpEvent<any>) => {
if (event instanceof HttpResponse) {
//this.loadingService.endLoading();
}
},
(err: any) => {
//this.loadingService.endLoading();
}),
catchError((err: any) => {
if (err.status === 401) {
/*
this.modalController.create({
component: LoginComponent,
componentProps: {returnUrl: this.router.url},
showBackdrop: true
}).then(modal => modal.present());
*/
} else {
//this.messageService.showToast(`Some error happen, please try again. (Error-${err.status})`, 'error');
}
return throwError(err);
})
);
I return throwError(err) at the end.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I query rows with unique values on a joined column?
I'm trying to have my popular_query subquery remove duplicate Place.id values, but it doesn't. The code is below. I tried using distinct, but it does not respect the order_by rule.
SimilarPost = aliased(Post)
SimilarPostOption = aliased(PostOption)
popular_query = (db.session.query(Post, func.count(SimilarPost.id)).
join(Place, Place.id == Post.place_id).
join(PostOption, PostOption.post_id == Post.id).
outerjoin(SimilarPostOption, PostOption.val == SimilarPostOption.val).
join(SimilarPost,SimilarPost.id == SimilarPostOption.post_id).
filter(Place.id == Post.place_id).
filter(self.radius_cond()).
group_by(Post.id).
group_by(Place.id).
order_by(desc(func.count(SimilarPost.id))).
order_by(desc(Post.timestamp))
).subquery().select()
all_posts = db.session.query(Post).select_from(filter.pick()).all()
I did a test printout with
print [x.place.name for x in all_posts]
[u'placeB', u'placeB', u'placeB', u'placeC', u'placeC', u'placeA']
How can I fix this?
Thanks!
A:
This should get you what you want:
SimilarPost = aliased(Post)
SimilarPostOption = aliased(PostOption)
post_popularity = (db.session.query(func.count(SimilarPost.id))
.select_from(PostOption)
.filter(PostOption.post_id == Post.id)
.correlate(Post)
.outerjoin(SimilarPostOption, PostOption.val == SimilarPostOption.val)
.join(SimilarPost, sql.and_(
SimilarPost.id == SimilarPostOption.post_id,
SimilarPost.place_id == Post.place_id)
)
.as_scalar())
popular_post_id = (db.session.query(Post.id)
.filter(Post.place_id == Place.id)
.correlate(Place)
.order_by(post_popularity.desc())
.limit(1)
.as_scalar())
deduped_posts = (db.session.query(Post, post_popularity)
.join(Place)
.filter(Post.id == popular_post_id)
.order_by(post_popularity.desc(), Post.timestamp.desc())
.all())
I can't speak to the runtime performance with large data sets, and there may be a better solution, but that's what I managed to synthesize from quite a few sources (MySQL JOIN with LIMIT 1 on joined table, SQLAlchemy - subquery in a WHERE clause, SQLAlchemy Query documentation). The biggest complicating factor is that you apparently need to use as_scalar to nest the subqueries in the right places, and therefore can't return both the Post id and the count from the same subquery.
FWIW, this is kind of a behemoth and I concur with user1675804 that SQLAlchemy code this deep is hard to grok and not very maintainable. You should take a hard look at any more low-tech solutions available like adding columns to the db or doing more of the work in python code.
| {
"pile_set_name": "StackExchange"
} |
Q:
Getting error in controller while unit testing using karma jasmine
I'm getting this error while running a unit test using Karma-Jasmine:
ReferenceError: myModule is not defined
My sample test case is as follows:
describe("Unit Testing", function() {
beforeEach(angular.mock.module('myModule.common'));
var scope, ngTableParams, filter ,testTableParam;
it('should have a commonController controller', function () {
expect(myModule .common.controller('commonController ', function (commonController ) {
$scope:scope;
ngTableParams:ngTableParams;
$filter: filter;
tableParams: testTableParam
}
)).toBeDefined();
});});
I have injected the module name as myModule.common.
Can you please suggest a solution?
A:
Try the following code snippet; it might help:
describe('testing myModule.common', function() {
var $rootScope, $scope, $filter, $controller, ngTableParams, testTableParam;
beforeEach(module('myModule.common'));
beforeEach(function() {
inject(function($injector) {
$rootScope = $injector.get('$rootScope');
$scope = $rootScope.$new();
$filter = $injector.get('$filter');
testTableParam = $injector.get('testTableParam');
ngTableParams = $injector.get('ngTableParams');
$controller = $injector.get('$controller')('commonController ', {
$scope: $scope
});
});
});
it('testing commonController ', function() {
expect('commonController ').toBeDefined();
});
});
It will solve your problem
| {
"pile_set_name": "StackExchange"
} |
Q:
Encrypting/Decrypting an SMTP Password in a Client/Server App
I have a client app that needs to save a username/password for an SMTP Server. This data will be going into SQL Server 2005, and consumed by my server app. The server app will use the System.Net.Mail namespace to send e-mail messages using the supplied credentials (and from that user's e-mail address). How can I encrypt/decrypt the password easily/securely so that I don't have plain-text passwords flying across the wire? Note that the client and server apps are NOT guaranteed to be on the same computer.
A:
There is a whole encryption namespace in .NET - System.Security.Cryptography (example) - so you can encrypt/decrypt the data on the client.
Now, how to store the key for the cipher: it can be stored in app.config, encrypted as described here. Note, though, that if the user has admin access to the machine, they can decrypt the keys stored in your app.config.
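For illustration, a minimal symmetric-encryption sketch with that namespace might look like the following. It assumes you already have a key and IV from somewhere safe; as noted above, key management is the real problem:
using System.IO;
using System.Security.Cryptography;

static byte[] Encrypt(string plainText, byte[] key, byte[] iv)
{
    // AES; the server needs the same key/IV pair to decrypt the password
    using (var aes = Aes.Create())
    using (var ms = new MemoryStream())
    {
        aes.Key = key;
        aes.IV = iv;
        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
        using (var sw = new StreamWriter(cs))
        {
            sw.Write(plainText);
        }
        return ms.ToArray();   // safe: MemoryStream.ToArray works after the stream is closed
    }
}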
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I detect when my View has moved?
I have a View in my Activity. I want to detect when it has moved around the screen due to any actions (user scrolls, relative views resize shifting contents, etc.). This is meant to be in a library, so I can query the hierarchy, but I don't control it, nor can I modify it other than adding event listeners.
Is there any way to get this as an event, rather than polling?
A:
There is an event for this: View.getViewTreeObserver().addOnScrollChangedListener()
final ViewTreeObserver.OnScrollChangedListener mScrollChangedListener = ...;
@Override
protected void onAttachedToWindow() {
super.onAttachedToWindow();
getViewTreeObserver().addOnScrollChangedListener(mScrollChangedListener);
}
@Override
protected void onDetachedFromWindow() {
super.onDetachedFromWindow();
getViewTreeObserver().removeOnScrollChangedListener(mScrollChangedListener);
}
Source: Android Source Code (4.0), android.view.SurfaceView.java:205
| {
"pile_set_name": "StackExchange"
} |
Q:
Equilibrium - Pressure Vs. Concentration
Does increasing the pressure increase the concentration of reactants as well?
If:
$$\ce{A(g) + B(g)<=>C(g) + D(g)}$$
Assuming all reactants and products are gases.
According to Le Chatelier's principle, increasing the pressure will cause the system to shift toward the side with the smaller number of gaseous molecules. However, in this case the molar ratios are 1:1; thus, increasing the pressure will not cause a shift. That being said, when the pressure is increased, do the concentrations of reactants and products both go up? Or do they stay the same?
Thanks!
A:
when the pressure is increased, do the concentrations of reactants and products both go up? Or do they stay the same?
Le Chatelier's principle in its most general form makes statements about what happens to a reaction that used to be at equilibrium when changes are made to concentrations, temperature or pressure.
To keep things simple, let's say the temperature stays constant, but we are changing the overall pressure of the reaction mix by decreasing the volume. As a result, all concentrations (or partial pressures) will increase by the same factor. If the sum of the stoichiometric factors for reactants in the gas phase is equal to that of the products, the reaction quotient Q will not change (all factors cancel out) and the system stays at equilibrium. If this is not the case, the reaction will shift to re-establish equilibrium.
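To see this concretely for the 1:1 reaction above: with $Q=\frac{[C][D]}{[A][B]}$, compressing the volume scales every concentration by the same factor $k$, giving $Q'=\frac{(k[C])(k[D])}{(k[A])(k[B])}=Q$, so the system remains at equilibrium even though all four concentrations have gone up.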
On the other hand, if you change the pressure at constant volume by changing the temperature, the concentrations (partial pressures) will stay the same, but you are out of equilibrium anyway because the equilibrium constant is temperature dependent.
In both cases, you can use Le Chatelier's rules to predict in which direction the reaction has to shift to re-establish equilibrium.
| {
"pile_set_name": "StackExchange"
} |
Q:
Mongoose: Linked Model Schema "Hasn't Been Registered for Model"
Currently using Mongoose with MongoDB for an Express server, however I attempted to link up several Mongoose models together and I am getting
MissingSchemaError: Schema hasn't been registered for model "Semester".
Use mongoose.model(name, schema)
with my execution.
The current project structure is as follows
app.js
www.js
models
|-- member.js
|-- semester.js
routes
|-- members.js
I've reviewed about two dozen other Stack Overflow questions regarding the same error, all of which pointed at requiring the model(s) before require('express').
However, I am currently following that practice but still getting an unrecognized-schema error (which leads me to believe that how I linked the models together is incorrect).
app.js
// DEPENDENCIES
var bodyParser = require('body-parser');
var mongoose = require('mongoose'); // NOTE: Mongoose needs to be required before express
// MODELS
require('./models/member');
require('./models/semester');
// ROUTES
var members = require('./routes/members'); // NOTE: Routes need to be required before express as well
var express = require('express');
var app = express();
// ... More Stuff ...
module.exports = app;
My ./routes/members.js
var Member = require('../app/models/member');
require('../app/models/semester');
var express = require('express');
var router = express.Router();
// ... Logic is here ...
module.exports = router;
Finally the models themselves
member.js
var mongoose = require('mongoose');
var Schema = mongoose.Schema;
var Semester = require('mongoose').model('Semester').schema;
// Would var Semester = mongoose.model('Semester').schema also work?
var MemberSchema = new Schema({
name: {
first: {type: String, default: ''},
last: {type: String, default: ''}
},
studentID: {type: Number, default: 0},
email: {type: String, default: ''},
social: {type: Social},
semesters: [Semester],
currenPosition: {type: String, default: ''}
});
module.exports = mongoose.model('Member', MemberSchema);
semester.js
var mongoose = require('mongoose');
var Schema = mongoose.Schema;
var Event = require('mongoose').model('Event').schema; // These are valid models, but I haven't included them in the scope of this question
var Project = require('mongoose').model('Project').schema;
// Object which describes a singular semester
var SemesterSchema = new Schema({
term: {type: String, default: ''},
year: {type: Number, default: ''},
position: {type: String, default: ''}
events: [Event],
projects: [Project]
});
module.exports = mongoose.model('Semester', SemesterSchema);
I am rather certain at this point that it's a non-mongoose related mistake I made when requiring and linking different models together.
A:
This error could be happening because you're requiring the Member model before Semester in app.js; the Semester model and schema do not exist yet when './models/member' is required.
Try to change the order in app.js to:
// MODELS
require('./models/semester');
require('./models/member');
To avoid this situation you could require the model directly from its script file ('./models/semester') instead of obtaining it from mongoose via mongoose.model() inside the file where the Member model is declared ('./models/member'):
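A minimal sketch of that, inside member.js (assuming semester.js exports the model as shown in the question):
var Semester = require('./semester').schema; // instead of require('mongoose').model('Semester').schema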
| {
"pile_set_name": "StackExchange"
} |
Q:
Numerical iterative method, estimating error
Given the iterative method $x_{n+1}=0.7\sin x_n +5 = \phi(x_n)$ for finding the solution of $x=0.7\sin x +5$, I want to estimate $|e_6|=|x_6-r|$ as well as possible, with $x_0=5$, where $r$ is the exact solution. This method obviously converges, because $\phi$ is a contraction, so it has a fixed point $r=\phi(r)$. So, by the mean value theorem:
$|e_{n+1}|=|x_{n+1}-r|=|\phi(x_n)-\phi(r)|\le \max_{c\in\mathbb{R}}|\phi'(c)|\cdot |x_n-r|$
and we have:
$|e_n|\le 0.7^n \cdot |e_0|$
But how can I estimate $|e_0|$ without a computer? I suppose there is some simple way to finish, and that a clever observation gives $|e_0|\le 0.7$. Can anybody help?
A:
You have already shown that the iteration converges to a fixed point $r$ with
$r = 0.7 \sin r + 5$, i.e. you know $r - 5 = 0.7 \sin r$. Using $|\sin r| \le 1$ here is the required simple estimate for $e_0$:
$$|e_0| = |x_0 - r|= |5-r|= |0.7 \sin r|= 0.7|\sin r| \le 0.7$$
Note: Because $r \approx 4.3463686514876$, the true value is
$|e_0| \approx 0.6536313485123$, thus the estimate 0.7 is not so bad.
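Combining this with the contraction estimate from the question gives the final bound: $|e_6| \le 0.7^6\,|e_0| \le 0.7^7 \approx 0.0824$.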
| {
"pile_set_name": "StackExchange"
} |
Q:
SQLite JDBC PRAGMA setting
I am trying to set PRAGMA foreign_keys = ON; in an SQLite database. I'm writing some software in Java using the JDBC driver for SQLite, this one: http://www.xerial.org/trac/Xerial/wiki/SQLiteJDBC.
I am also using connection pooling to speed up queries to the DB. I am using this library:
http://commons.apache.org/dbcp/.
Up to this point, all is good. Now I need to set a PRAGMA, concretely PRAGMA foreign_keys = ON;, before creating tables, because I need to be sure about consistency between some columns in the DB.
When I create the DB, it is automatically set to OFF, so I have to turn it on in order to use it.
But I do not know how to do that; the way I am preparing the poolable data source is like this:
public static DataSource getDataSource(String connectURI) {
GenericObjectPool connectionPool = new GenericObjectPool(null);
ConnectionFactory connectionFactory =
new DriverManagerConnectionFactory(connectURI, null);
PoolableConnectionFactory poolableConnectionFactory =
new PoolableConnectionFactory(connectionFactory, connectionPool, null, null, false, true);
DataSource dataSource = new PoolingDataSource(connectionPool);
return dataSource;
}
But I do not know how to set that pragma properly. I found that this is possible:
SQLiteConfig config = new SQLiteConfig();
config.enforceForeignKeys(true);
But I do not know how to combine it with those tricky pooling settings ...
Any ideas?
A:
Unfortunately, this is more of a DBCP question than a SQLite one. I agree, there has to be a place somewhere in DBCP to set/update the config for a given data source. I would expect to see it in the PoolingDataSource class, but of course it's not there.
One option to consider is to use a jdbc pooling library that leverages the ConnectionPoolDataSource interface. If so, you can use the SQLiteConnectionPoolDataSource to set up a connection pool like this:
//Set config
org.sqlite.SQLiteConfig config = new org.sqlite.SQLiteConfig();
config.enforceForeignKeys(true);
//Create JDBC Datasource
SQLiteConnectionPoolDataSource dataSource = new SQLiteConnectionPoolDataSource();
dataSource.setUrl("jdbc:sqlite:" + db.toString().replace("\\", "/"));
dataSource.setConfig(config);
Note that there is an extension to DBCP called DriverAdapterCPDS. It is an implementation of the ConnectionPoolDataSource interface, so in theory you should be able to use the SQLiteConnectionPoolDataSource with DBCP.
| {
"pile_set_name": "StackExchange"
} |
Q:
Angular UI-Router sending root url to 404
I am having an infuriating issue regarding ui-router. Everything works as I want, where all bad URLs are sent to the 404 state. However, even though my default state is correctly rendered when the URL is /#/, the URL / is redirected to /404/. How can I serve the default state for both / and /#/?
app.js
MyApp.config( function ($stateProvider, $urlRouterProvider) {
// For any unmatched url, send to 404
$urlRouterProvider.otherwise("/404/");
$stateProvider
// Home (default)
.state('default', {
url: '/',
resolve: {
...
},
}
});
A:
I think this will accomplish your needs -
MyApp.config(function($stateProvider, $urlRouterProvider) {
// the known route
$urlRouterProvider.when('', '/');
// For any unmatched url, send to 404
$urlRouterProvider.otherwise('/404');
$stateProvider
// Home (default)
.state('default', {
url: '/',
resolve: {
// ...
}
// ....
});
});
I referred to this post. You may need to use a regular expression to handle routes with data.
Instead of the # in your URL you can use a query string too and grab it via $stateParams in a state.
Here is how you can do that -
// ....
$stateProvider
.state('default', {
url: '/?data',
templateUrl: 'templates/index.html',
controller: 'defaultCtrl'
})
And then you can use this below to go to home with data -
var toStateData = {
key1: 'value1',
key2: 'value2'
}
$state.go('default', {data: JSON.stringify(toStateData)});
You don't have to stringify the data in $stateParams but this will allow more options with one parameter. In the defaultCtrl controller you can get the passed query string like this -
var stateData = JSON.parse($stateParams.data);
var key1 = stateData.key1;
// ....
| {
"pile_set_name": "StackExchange"
} |
Q:
draw() on finish callback
I'm trying to find any callback/event that can notify me about the finish of layer draw() method.
I would something like this:
var layer = new Konva.Layer();
layer.draw(() => console.log('Draw finished'))
Or:
var layer = new Konva.Layer();
layer.on('redraw', () => console.log('Draw finished'))
layer.draw()
A:
layer.draw() is a synchronous function. So you don't need to use a callback for it.
You can just do this:
layer.draw();
console.log('Draw finished')
If you need an event you can use this:
layer.on('draw', () => {
console.log('Draw finished')
})
| {
"pile_set_name": "StackExchange"
} |