Q:
How Is A SAN Storage Device Managed?
Apologies if this is a stupid question but my only previous experience with SAN technology is using a SAN virtualisation tool (StarWind iSCSI SAN Free). This tool comes with a management interface that allows for iSCSI targets to be configured on the simulated SAN that can then be accessed.
My question is basically: how is a physical SAN device managed?
Is an operating system required to be installed on the SAN device and then managed via software? Or can it be managed remotely via an application with no OS on the device?
Thanks for any help.
A:
Edit: As @Pauska points out in his answer, you're referring to managing shared storage, not necessarily a SAN. Many times, "SAN" is used in the context you use it, so I didn't point it out originally, but it's important to understand this as a beginner.
Different manufacturers handle it differently. Most have a management application that runs on a remote computer. Some have a web interface, some don't. Some have rudimentary options over a console connection, some don't. Almost all of them have a CLI of some sort and offer SSH access or similar.
Basically, each manufacturer can choose to do whatever they want, there's no universal standard.
A:
I think you have the concept of a SAN a bit wrong.
SAN stands for Storage Area Network (like LAN = Local Area Network).
It's basically just the back end of your storage. Storage arrays do the actual data storage, while storage networks transport the data.
A computer connected to a dedicated storage network with an iSCSI disk shared (like StarWind) is in a SAN. It's not a great one, but it's there.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why do almost all OO languages compile to bytecode?
Of the object-oriented languages I know, pretty much all but C++ and Objective-C compile to bytecode running on some sort of virtual machine. Why have so many different languages settled on compiling to bytecode, as opposed to machine code? Is it possible in principle to have a high-level memory-managed OOP language that compiles to machine code?
Edit: I'm aware that multiplatform support is often advanced as an advantage of this approach. However, it's quite possible to compile natively on multiple platforms, without making a new compiler per platform. One can, for example, emit C code and then compile that with GCC.
A:
This is done to allow a VM or JIT compiler the chance to compile the code on demand optimally for the architecture on which the code is executed. Also, it allows for cross-platform bytecode to be created once and then executed on multiple hardware architectures. This allows for hardware specific optimizations to be placed into the compiled code.
Since bytecode is not tied to one microarchitecture, it can also be smaller than machine code: a single bytecode instruction can represent an operation that would take several of the much more primitive instructions available in modern-day CPUs, since the constraints in designing CPU instructions are very different from the constraints in designing a bytecode architecture.
Then there's the issue of security. The bytecode can be verified and analyzed prior to execution (e.g., no buffer overflows, no variables of one type being accessed as something they are not), and so on.
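This compactness is easy to see concretely in Python: the `dis` module disassembles a function into its stack-based bytecode, where each high-level operation is just a few bytes (a sketch; the exact opcodes vary by Python version):

```python
import dis

def add(a, b):
    return a + b

# Disassemble to show the compact bytecode: a handful of instructions
# (loads, one binary add, a return), regardless of the target CPU.
dis.dis(add)
print(len(add.__code__.co_code), "bytes of bytecode")
```

The same function compiled to native machine code would be specific to one instruction set; here the few bytes of bytecode run anywhere the interpreter does.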
A:
In fact there's no inherent reason; it's something of a coincidence. OOP is currently the leading concept in "big" programming, and so are virtual machines.
Also note, that there are 2 distinct parts of traditional virtual machines - garbage collector and bytecode interpreter/JIT-compiler, and these parts can exist separately. For example, Common Lisp implementation called SBCL compiles program to a native code, but at runtime heavily uses garbage collection.
A:
Java uses bytecode because two of its initial design goals were portability and compactness. Those both came from the initial vision of a language for embedded devices, where fragments of code could be downloaded on the fly.
Python, Ruby, Smalltalk, JavaScript, awk and so on use bytecode because writing a native compiler is a lot of work, but a textual interpreter is too slow. Bytecode hits a sweet spot: fairly easy to write, but also satisfactorily quick to run.
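As a toy illustration of that sweet spot (not modeled on any particular VM's design), a minimal stack-machine bytecode interpreter fits in a couple dozen lines of Python:

```python
# A tiny stack-machine interpreter. Each instruction is an (opcode, arg)
# pair; the dispatch loop is trivial compared to a native code generator.
def run(code):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack

# Bytecode for (2 + 3) * 4
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # → [20]
```

Dispatching opcodes like this is far less work than emitting and scheduling real machine instructions, yet far faster than re-parsing source text on every execution.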
I have no idea why the Microsoft languages use bytecode, since for them, neither portability nor compactness is a big deal. A lot of the thinking behind the CLR came out of computer scientists in Cambridge, so I imagine considerations like ease of program analysis and verification were involved.
Note that as well as C++ and Objective C, Eiffel, Ada 9X, Vala and Go are OO languages (of varying vintage) that are compiled straight to native code.
All in all, I'd say that OO and bytecode do not go hand in hand. Rather, we have a coincidental convergence of several streams of development: the traditional bytecoded interpreters of scripting languages like Python and Ruby, the mad Gosling masterplan of Java, and whatever Microsoft's motives are.
Q:
Trouble calling instance variables in Python
I'm just starting to learn my way around classes now and I came across something I don't understand. Let's say I have a class...
class Area(object):
def __init__(self, name, items):
self.name = name
self.items = items
Now if I initiate an instance of Area this way:
impala = Area("Impala", ["shotgun", "salt"])
and then call on a variable, say:
print impala.items
it works just fine. However, if I try to initiate it this way:
class impala(Area):
def __init__(self):
self.name = "Impala"
self.items = ["shotgun", "salt"]
and then try to do the same thing it gives me an error: "type object 'impala' has no attribute 'items'"
Could someone please tell me what I'm doing wrong in the second example and why it's happening?
A:
You are using the same name, impala, for both your class and your variable, so I would presume that Python is accessing the class, not the instance.
To avoid this, I recommend following PEP-8 for your variable names, giving classes names in CapWords and local variables lowercase_with_underscores. Naturally, you should also be careful when you name your items not to have two things with the same name in the same namespace.
>>> class Area(object):
... def __init__(self, name, items):
... self.name = name
... self.items = items
...
>>> class Impala(Area):
... def __init__(self):
... self.name = "Impala"
... self.items = ["shotgun", "salt"]
...
>>> impala = Impala()
>>> impala.items
['shotgun', 'salt']
Q:
Is it possible to open another web browser from the current browser
Is it possible to open another web browser from the current web browser using JavaScript?
For example: if I am using Mozilla Firefox to view a web site, then when I click a particular link (like an online payment mechanism), it must switch to Internet Explorer even if it has not been set as the default browser.
Thanks in advance
A:
No, it's not. There is nothing special about a browser; it's just an executable like any other program. If this were possible, it would be possible to execute an arbitrary program from a website, and thus you could trick the user into running a malicious program.
Q:
"PLS-00526: A MAP or ORDER function is required" when comparing objects for equality
On Oracle 11gR2, I created a simple PL/SQL object type. When trying to compare two instances for equality/inequality, I get a PLS-00526: A MAP or ORDER function is required for comparing objects in PL/SQL error, even though the Oracle documentation clearly states that
If neither a MAP nor an ORDER method is specified, then only comparisons for equality or inequality can be performed.
Here is the PL/SQL code example I used to reproduce the error:
create or replace type point_t is object (x number, y number);
/
declare
p1 point_t := point_t(1, 2);
p2 point_t := point_t(1, 2);
p3 point_t := point_t(2, 1);
begin
dbms_output.put_line('p1 = p1 ' || case when p1 = p1 then 'OK' else 'FAIL' end);
dbms_output.put_line('p2 = p1 ' || case when p2 = p1 then 'OK' else 'FAIL' end);
dbms_output.put_line('p3 <> p1 ' || case when p3 <> p1 then 'OK' else 'FAIL' end);
end;
/
A:
Yes, if neither a MAP nor an ORDER method is specified, you can compare objects for equality or inequality, but ONLY in SQL statements, NOT directly in a PL/SQL block.
Quote from Database Object-Relational Developer's Guide
if you do not declare one of these methods, you can only compare objects in SQL statements, and only for equality or inequality.
create or replace type point_t is object (x number, y number);
/
select case
when point_t(1,1) = point_t(1,1) then 'OK'
else 'FAIL'
end as cmp_res
from dual;
set serveroutput on;
declare
l_p1 point_t := point_t(1,2);
l_p2 point_t := point_t(1,2);
l_res varchar2(7) := 'OK';
begin
select 'FAIL'
into l_res
from dual
where l_p1 != l_p2; -- put it in the where clause just for the sake
-- of demonstration. Can do comparison in the
-- select list as well.
dbms_output.put_line('p1 = p2 : ' || l_res);
end;
Result:
Type created.
CMP_RES
-------
OK
1 row selected.
p1 = p2 : FAIL
PL/SQL procedure successfully completed.
But if there is a need to compare objects directly in a PL/SQL block, you need to define the object comparison rules (when one object is equal to, greater than, or less than another, which matters especially when an object has many attributes) by implementing either a MAP or an ORDER method.
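For example, a MAP method could look like the following sketch. The choice of key here (distance from the origin) is a hypothetical one: the key you return defines both ordering and equality, so two points with the same distance would compare as equal under this MAP method; pick a key that distinguishes the attributes you care about.

```sql
create or replace type point_t is object (
  x number,
  y number,
  map member function sort_key return number
);
/
create or replace type body point_t is
  -- Hypothetical key: distance from the origin. PL/SQL uses this
  -- scalar value for =, <>, <, > comparisons between point_t objects.
  map member function sort_key return number is
  begin
    return sqrt(x*x + y*y);
  end;
end;
/
```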
Q:
Coreml: Model class has not been generated yet
I've converted a model from Keras with coremltools, added it to the project and added it to the targets. And then when I click on the model in the navigator, the Model Class section shows "Model class has not been generated yet." What does it mean?
A:
Cited from the Apple dev forum:
Xcode has stopped automatically adding the coreml model to the build
settings of your project. To solve, go to your target, go to build
phases, find compile sources and add your coreml model. After this,
the model class is generated.
Q:
How to change the data returned by an API by changing parameters in the URL? RESTful API in JavaScript
I set up a slash (root) endpoint.
I want to make it so when the user enters:
http/website.com/date=2019-08-01&station=41027&daysForward=5
be able to change three parameters to obtain different data.
date="first parameter"
station="second parameter"
daysForward="third parameter"
var express = require("express")
var app = express()
var mysql = require('mysql')
// Server port
var HTTP_PORT = 8000;
// Start server
app.listen(HTTP_PORT, () => {
console.log("Server running on port %PORT%".replace("%PORT%",HTTP_PORT))
});
var con = mysql.createConnection({
host: "192.168.0.1",
port: "3333",
user: "username",
password: "pass"
});
con.connect(function(err) {
if (err) throw err;
});
let selectCustomers = function (query, cb) {
let aladinModel = '';
if (query.date && query.station && query.daysForward) {
con.query(`CALL aladin_surfex.Get_mod_cell_values_meteogram(${query.date},${query.station},${query.daysForward})`, function (err, result, fields) {
if (err) throw err;
console.log(result);
aladinModel = result;
return cb(aladinModel);
});
}
else return cb(aladinModel);
};
// Root endpoint
app.get("/", (req, res, next) => {
selectCustomers(req.query, function (aladinModel) {
res.json({aladinModel})
const date = req.query.date;
const station = req.query.station;
const daysForward = req.query.daysForward;
const query = `CALL aladin_surfex.Get_mod_cell_values_meteogram(${date}, ${station}, ${daysForward})`;
con.query(query, function (err, result, fields) {
if (err) throw err;
aladinModel = result;
});
res.json({aladinModel})
});
app.use(function(req, res){
res.status(404);
});
I got this error:
ReferenceError: aladinModel is not defined
at app.get (C:\Users\Admin\node-express\server.js:54:13)
at Layer.handle [as handle_request] (C:\Users\Admin\node-express\node_modules\express\lib\router\layer.js:95:5)
at next (C:\Users\Admin\node-express\node_modules\express\lib\router\route.js:137:13)
at Route.dispatch (C:\Users\Admin\node-express\node_modules\express\lib\router\route.js:112:3)
at Layer.handle [as handle_request] (C:\Users\Admin\node-express\node_modules\express\lib\router\layer.js:95:5)
at C:\Users\Admin\node-express\node_modules\express\lib\router\index.js:281:22
at Function.process_params (C:\Users\Admin\node-express\node_modules\express\lib\router\index.js:335:12)
at next (C:\Users\Admin\node-express\node_modules\express\lib\router\index.js:275:10)
at expressInit (C:\Users\Admin\node-express\node_modules\express\lib\middleware\init.js:40:5)
at Layer.handle [as handle_request] (C:\Users\Admin\node-express\node_modules\express\lib\router\layer.js:95:5)
A:
Use the following code:
// Create express app
var express = require("express")
var app = express()
var mysql = require('mysql')
var date = '';
var station = '';
var daysForward = '';
// Server port
var HTTP_PORT = 8000;
// Start server
app.listen(HTTP_PORT, () => {
console.log("Server running on port %PORT%".replace("%PORT%",HTTP_PORT))
});
var con = mysql.createConnection({
host: "192.168.0.1",
port: "3306",
user: "user",
password: "password"
});
con.connect(function(err) {
if (err) throw err;
});
let selectCustomers = function (query, cb) {
let aladinModel = '';
if (query.date && query.station && query.daysForward) {
con.query(`CALL aladin_surfex.Get_mod_cell_values_meteogram(${query.date},${query.station},${query.daysForward})`, function (err, result, fields) {
if (err) throw err;
console.log(result);
aladinModel = result;
return cb(aladinModel);
});
}
else return cb(aladinModel);
};
// Root endpoint
app.get("/", (req, res, next) => {
selectCustomers(req.query, function (aladinModel) {
res.json({aladinModel})
});
});
app.use(function(req, res){
res.status(404);
});
Then use query parameters in your browser to pass date, station and daysForward, like:
http://www.website.com?date=2019-08-01&station=41027&daysForward=5
Q:
iOS EventKit - Event is not being deleted from calendar
I'm deleting event using the following code
[store requestAccessToEntityType:EKEntityTypeEvent completion: ^(BOOL granted, NSError *error) {
if (granted) {
EKEvent *event = [store eventWithIdentifier:eventIdentifier];
NSError *eventDeleteError = nil;
if (event) {
[store removeEvent:event span:EKSpanThisEvent error:&eventDeleteError];
}
if (eventDeleteError) {
NSLog(@"Event Deletion Error: %@", eventDeleteError);
}
}];
I got no error in eventDeleteError, but the following message appears in the console log:
CADObjectGetInlineStringProperty failed fetching UUID for EKPersistentAttendee with error Error Domain=EKCADErrorDomain Code=1010 "The operation couldn’t be completed. (EKCADErrorDomain error 1010.)"
A:
I was getting similar error on removing a calendar:
CADObjectGetIntProperty failed with error Error Domain=EKCADErrorDomain Code=1010 "The operation couldn’t be completed. (EKCADErrorDomain error 1010.)"
CADObjectGetRelation failed with error Error Domain=EKCADErrorDomain Code=1010 "The operation couldn’t be completed. (EKCADErrorDomain error 1010.)"
As it is not exactly the same message, I will just explain what helped me.
The issue came from performing the "remove" operation on a new EventStore object. Make sure you hold a reference to the EventStore and that both the adding and removing operations are called on the same object.
Q:
Python BeautifulSoup: Insert attribute to tags
I'm trying to insert a new attribute into all the nested tables in an HTML doc. I'm trying with the code below, but it's not inserting the attribute into all the table tags. I would really appreciate any help.
Input html code:
<html>
<head>
<title>Test</title>
</head>
<body>
<div>
<table>
<tr>t<td><table></table></td></tr>
<tr>t<td><table></table></td></tr>
<tr>t<td><table></table></td></tr>
</table>
</div>
</body>
</html>
Code:
from bs4 import BeautifulSoup
import urllib2
html = urllib2.urlopen("file://xxxxx.html").read()
soup = BeautifulSoup(html)
for tag in soup.find_all(True):
if (tag.name == "table"):
tag['attr'] = 'new'
print(tag)
else:
print(tag.contents)
Output html code:
<html>
<head>
<title>Test</title>
</head>
<body>
<div>
<table attr="new">
<tr>t<td><table attr="new"></table></td></tr>
<tr>t<td><table attr="new"></table></td></tr>
<tr>t<td><table attr="new"></table></td></tr>
</table>
</div>
</body>
</html>
A:
Your tag['attr'] = 'new' works correctly. The problem is that print(tag.contents) prints parts of the document recursively, before the descendants have been modified.
The simple fix is to make one pass to modify the document first, then make just one print(soup) call at the end.
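Applied to the code above, that could look like the following sketch (reading from an inline string here instead of urllib2, to keep it self-contained):

```python
from bs4 import BeautifulSoup

html = """<div><table>
<tr>t<td><table></table></td></tr>
<tr>t<td><table></table></td></tr>
</table></div>"""

soup = BeautifulSoup(html, "html.parser")

# Pass 1: modify every table, including the nested ones.
for tag in soup.find_all("table"):
    tag["attr"] = "new"

# Pass 2: print the whole document once, after all edits are done.
print(soup)
```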
Q:
Javascript creating
I have this code that should create an image inside a div after clicking on a button, but it's not working... here is the HTML with the script in it:
<img id="boxm" style="position: fixed;left: 90%;padding-top:2%"
src="images/box1.png" id="image1" onclick="diffImage(this);myFunction();" />
<script>
function myFunction()
{
document.getElementById("surprise").createElement="<img
style="width:304px;height:228px"
src='images/mese.png' />";
}
</script>
<div id="surprise">
</div>
A:
<img src='images/box1.png' onclick="showSurpriseImage()" />
<div id="surprise">
Surprise Image will be displayed here
</div>
<script>
function showSurpriseImage() {
var x = document.createElement("IMG");
x.setAttribute("src", "images/mese.png");
x.setAttribute("width", "304");
x.setAttribute("height", "228");
x.setAttribute("alt", "surprise image");
document.getElementById("surprise").appendChild(x);
}
</script>
For best practices on setting attributes, check out here.
Hope this helps
Q:
Cloudera VM, compression codecs
I am preparing for the Cloudera certification, and sometimes it's very difficult to remember the compression codecs used in the Sqoop import process.
For example: org.apache.hadoop.io.compress.SnappyCodec.
I will not be allowed to use Google during the exam.
During the exam, is there any way to retrieve this information?
Currently, I use the Cloudera QuickStart VM, and I did not find this information in mapred-site.xml.
Where can I find it?
A:
I'm fairly certain you have access to Cloudera documentation
https://www.cloudera.com/documentation/enterprise/5-14-x/topics/introduction_compression.html
As well as Hadoop JavaDoc
Just find the org.apache.hadoop.io.compress package
Q:
Grouping DataFrame by start of decade using pandas Grouper
I have a dataframe of daily observations from 01-01-1973 to 12-31-2014.
Have been using Pandas Grouper and everything has worked fine for each frequency until now: I want to group them by decade 70s, 80s, 90s, etc.
I tried to do it as
import pandas as pd
df.groupby(pd.Grouper(freq = '10Y')).mean()
However, this groups them in 73-83, 83-93, etc.
A:
pd.cut also works; it lets you group on a regular frequency with a specified start year.
import pandas as pd
df
date val
0 1970-01-01 00:01:18 1
1 1979-12-31 18:01:01 12
2 1980-01-01 00:00:00 2
3 1989-01-01 00:00:00 3
4 2014-05-06 00:00:00 4
df.groupby(pd.cut(df.date, pd.date_range('1970', '2020', freq='10YS'), right=False)).mean()
# val
#date
#[1970-01-01, 1980-01-01) 6.5
#[1980-01-01, 1990-01-01) 2.5
#[1990-01-01, 2000-01-01) NaN
#[2000-01-01, 2010-01-01) NaN
#[2010-01-01, 2020-01-01) 4.0
A:
You can do a little arithmetic on the year to floor it to the nearest decade:
df.groupby(df.index.year // 10 * 10).mean()
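The floor arithmetic itself is plain integer math, which you can sanity-check without pandas:

```python
# Integer-divide by 10, then multiply back: floors each year to its decade.
years = [1973, 1979, 1980, 1989, 2014]
decades = [y // 10 * 10 for y in years]
print(decades)  # → [1970, 1970, 1980, 1980, 2010]
```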
A:
@cᴏʟᴅsᴘᴇᴇᴅ's method is cleaner than this, but keeping your pd.Grouper method, one way to do this is to merge your data with a new date range that starts at the beginning of a decade and ends at the end of a decade, then use your Grouper on that. For example, given an initial df:
date data
0 1973-01-01 -1.097895
1 1973-01-02 0.834253
2 1973-01-03 0.134698
3 1973-01-04 -1.211177
4 1973-01-05 0.366136
...
15335 2014-12-27 -0.566134
15336 2014-12-28 -1.100476
15337 2014-12-29 0.115735
15338 2014-12-30 1.635638
15339 2014-12-31 1.930645
Merge that with a date_range dataframe ranging from 1970 to 2019:
new_df = pd.DataFrame({'date':pd.date_range(start='01-01-1970', end='12-31-2019', freq='D')})
df = new_df.merge(df, on ='date', how='left')
And use your Grouper:
df.groupby(pd.Grouper(key='date', freq = '10AS')).mean()
Which gives you:
data
date
1970-01-01 -0.005455
1980-01-01 0.028066
1990-01-01 0.011122
2000-01-01 0.011213
2010-01-01 0.029592
The same, but in one go, could look like this:
(df.merge(pd.DataFrame(
{'date':pd.date_range(start='01-01-1970',
end='12-31-2019',
freq='D')}),
how='right')
.groupby(pd.Grouper(key='date', freq = '10AS'))
.mean())
Q:
How to run the schedule:run cron on Windows
I created a cron job that should check every minute whether the end-date value is earlier than the current date... if so, it should update the status field of that table.
class SetStatus extends Command
{
protected $signature = 'SetStatus:cron';
public function handle()
{
//the code is a mess, just for testing...
$sql = 'select id from promotions where dt_end < NOW()';
$dados = \DB::select($sql);
foreach($dados as $i)
{
$updateStatus = 'update promotions set status = "inativo" where id IN ('.$i->id.')';
$return = \DB::update($updateStatus);
}
var_dump($return);
}
}
and I added it to the kernel:
protected $commands = [
Commands\SetStatus::class,
];
protected function schedule(Schedule $schedule)
{
$schedule->command('SetStatus:cron')->everyMinute();
}
My problem is that whenever I run the commands below, the cron executes successfully, but it stops there. The next minute it does not run again.
php artisan SetStatus:cron
php c:/projetos/marcelo/painel/artisan schedule:run
After running the commands above, shouldn't it run every minute, as I defined in the kernel?
I found something interesting in this link, Agendar carregamento de página através do CRON (scheduling a page load through cron), on the line
# crontab -e 00 * * * * /usr/local/bin/php /home/pedrodelfino/meu-script.php
But I couldn't get it to work... I also saw something about creating a .bat, but I didn't find it practical.
I'm in a development environment on Windows; could anyone shed some light?
A:
On Linux, the recommended approach is to add a cron entry, as suggested by the Laravel documentation.
* * * * * php /path-to-your-project/artisan schedule:run >> /dev/null 2>&1
This cron entry will run the schedule:run command every minute.
On Windows you can use the system's Task Scheduler to do the same.
Open the Task Scheduler
Go to create a new task
On the General tab, give the task a name and set it to run whether the user is logged on or not (this makes it run in the background)
On the next tab, create a trigger that runs every minute, indefinitely.
On the Actions tab, create a new action specifying the PHP executable, the artisan schedule:run command, and your project's directory.
After that, just click OK, enter your user's credentials, and the schedule will be executed like a cron.
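For reference, the same task can also be created from an elevated command prompt with schtasks (a sketch: the php.exe path below is an assumption, so adjust it to your installation; the project path is taken from the question):

```
schtasks /Create /SC MINUTE /MO 1 /TN "LaravelSchedule" /TR "C:\php\php.exe C:\projetos\marcelo\painel\artisan schedule:run"
```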
Q:
Does MS Visio support sequence diagram?
Does MS Visio support sequence diagram?
A:
File -> New -> Software and Database -> UML Model Diagram
After that you should see the UML groups (on the left-hand sidebar), in which the Sequence diagram should be present.
HTH someone else.
It seems only Visio 2013 and Visio 2016 are supported, according to the Microsoft docs here: MS Office Article
A:
Also take a look at http://softwarestencils.com/uml/index.html. I find that these stencils are much better than the provided ones and they support the newer UML constructs to boot.
A:
I found the Software and Database templates at http://www.microsoft.com/en-us/download/details.aspx?id=25110
Q:
Evaluate Integral Using Change of Variables
Let $A = \{(x, y)\in \mathbb{R}^2 | x^2-xy+2y^2<1 \}$; define $f: \mathbb{R}^2 \to \mathbb{R}$ by $f(x, y) = xy$.
Express the integral
$$\int_A f$$
as an integral over the unit ball $B = \{(x, y)\in \mathbb{R}^2 | x^2+y^2<1 \}$.
Someone gave a solution here but it seems to be wrong.
My attempt:
Observe that $x^2-xy+2y^2=(x-\frac{1}{2}y)^2+(\frac{\sqrt{7}}{2}y)^2$.
Let $B^\ast = \{(r, \theta) | 0< r < 1, 0 < \theta < 2\pi\}$.
Let $A^\ast = \{(x, y)\in \mathbb{R}^2 | x^2-xy+2y^2<1$, and $x<0$ if $y=0\}$.
Then the function
$$g(r, \theta) = (r\cos\theta+\frac{1}{\sqrt{7}}r\sin\theta, \frac{2}{\sqrt{7}}r\sin\theta)$$
carries $B^\ast$ in a one-to-one fashion onto $A^\ast$.
Now we obtain that
\begin{align*}
\text{det} Dg & =
\begin{vmatrix}\cos\theta+\frac{1}{\sqrt{7}}\sin\theta & -r\sin\theta+\frac{1}{\sqrt{7}}r\cos\theta \\ \frac{2}{\sqrt{7}}\sin\theta & \frac{2}{\sqrt{7}}r\cos\theta \\ \end{vmatrix} \\
& = \frac{2}{\sqrt{7}}r \\
& > 0
\end{align*}
if $(r, \theta)\in B^\ast$.
Since the non-negative $x$-axis has measure zero, using the formula of change of variables, we have that
\begin{align*}
\int_A f
& = \int_{A^\ast} f \\
& = \int_{B^\ast} f \circ g \cdot |\text{det} Dg| \\
& = \int_0^{2\pi} \int_0^1 \Big((r\cos\theta+\frac{1}{\sqrt{7}}r\sin\theta)\frac{2}{\sqrt{7}}r\sin\theta\frac{2}{\sqrt{7}}r \Big) drd\theta \\
& = \frac{2}{7}\int_0^{2\pi} \int_0^1 (2\cos\theta\sin\theta+\frac{2}{\sqrt{7}}\sin^2 \theta)r^3 drd\theta \\
& = \frac{2}{7}\int_0^{2\pi} \int_0^1 \Big( \sin2\theta+\frac{1}{\sqrt{7}}(1-\cos2\theta) \Big)r^3 drd\theta \\
& = \frac{2}{7}\int_0^{2\pi} \int_0^1 \Big( \sin2\theta-\frac{1}{\sqrt{7}}\cos2\theta+\frac{1}{\sqrt{7}} \Big)r^3 drd\theta \\
& = \frac{2}{7}\int_0^{2\pi} \Big[ \frac{1}{4}\Big( \sin2\theta-\frac{1}{\sqrt{7}}\cos2\theta+\frac{1}{\sqrt{7}} \Big)r^4 \Big]_{r=0}^{r=1} d\theta \\
& = \frac{2}{7}\int_0^{2\pi} \frac{1}{4}\Big( \sin2\theta-\frac{1}{\sqrt{7}}\cos2\theta+\frac{1}{\sqrt{7}} \Big) d\theta \\
& = \frac{1}{14} \Big[ -\frac{1}{2}\cos2\theta-\frac{1}{2\sqrt{7}}\sin2\theta+\frac{1}{\sqrt{7}}\theta \Big]_{\theta=0}^{\theta=2\pi} \\
& = \frac{\pi}{7\sqrt{7}}
\end{align*}
Am I making any mistake? Any help or suggestion is appreciated.
A:
Looks reasonable to me. As a check on your computations, here’s a solution using an approach similar to Ron Gordon’s from the original question.
Examine the matrix $Q$ associated with the quadratic form $x^2-xy+2y^2$: $$
Q = \pmatrix{1&-\frac12\\-\frac12&2}.
$$ $\det Q>0$ and $\operatorname{tr}Q>0$, so the eigenvalues are positive, which means that we have an ellipse centered on the origin and rotated through some angle $\phi$. Define the pullback $\beta:(\rho,\theta)\mapsto(x,y)$ by $$\begin{align}
\beta^*x &= \rho\,(a\cos\theta\cos\phi-b\sin\theta\sin\phi) \\
\beta^*y &= \rho\,(a\cos\theta\sin\phi+b\sin\theta\cos\phi),
\end{align}$$ where $a$ and $b$ are the half-axis lengths of the ellipse. (This is the polar equation of the ellipse multiplied by $\rho$, which can easily be derived by scaling and rotating the unit circle.) For this pullback, we also have $$\det{\partial(\beta^*x,\beta^*y)\over\partial(\rho,\theta)}=ab\rho.$$ So, if $C$ is the unit disk, $$\begin{align}
\int_Af\,dA &= \int_C\beta^*(f\,dA) \\
&=ab\int_0^{2\pi}\int_0^1\rho^3(a\cos\theta\cos\phi-b\sin\theta\sin\phi)(a\cos\theta\sin\phi+b\sin\theta\cos\phi)\,d\rho\,d\theta \\
&=\frac{ab}4\int_0^{2\pi}(a\cos\theta\cos\phi-b\sin\theta\sin\phi)(a\cos\theta\sin\phi+b\sin\theta\cos\phi)\,d\theta \\
&=\frac{ab}4\int_0^{2\pi}a^2\cos^2\theta\cos\phi\sin\phi - b^2\sin^2\theta\cos\phi\sin\phi + \cdots\,d\theta \\
&=\frac{\pi ab}4(a^2-b^2)\cos\phi\sin\phi.\tag{*}
\end{align}$$ Note that we can ignore terms that involve odd powers of $\cos\theta$ and $\sin\theta$ since we’re integrating from $0$ to $2\pi$.
To determine $a$, $b$ and $\phi$, go back to the matrix $Q$. If its eigenvalues are $\lambda_1 < \lambda_2$, then $a^2 = 1/\lambda_1$ and $b^2 = 1/\lambda_2$. It’s a bit more difficult to find an exact value for $\phi$ in general, but we don’t really need to since we can get its sine and cosine directly from the normalized eigenvectors (which is why I didn’t combine $\cos\phi\sin\phi$ into $\frac12\sin{2\phi}$). In this case, we have $$
a^2 = {2\over3-\sqrt2}, b^2 = {2\over3+\sqrt2} \\
\cos\phi = {\sqrt{2+\sqrt2}\over2}, \sin\phi = {\sqrt{2-\sqrt2}\over2}.
$$ Plugging these values into (*) and simplifying gives ${\pi\over7\sqrt7}$, which agrees with the value computed in the question above.
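As an independent sanity check of both derivations, a brute-force midpoint Riemann sum over a bounding box containing the ellipse reproduces the value $\frac{\pi}{7\sqrt7}\approx 0.16963$ (this is purely numerical; the grid size is an arbitrary choice):

```python
import math

# Midpoint Riemann sum of the integral of x*y over A = {x^2 - x*y + 2y^2 < 1}.
N = 1200                     # grid resolution per axis
lo, hi = -1.2, 1.2           # box containing the ellipse (semi-axis a ≈ 1.12)
h = (hi - lo) / N
total = 0.0
for i in range(N):
    x = lo + (i + 0.5) * h
    for j in range(N):
        y = lo + (j + 0.5) * h
        if x*x - x*y + 2*y*y < 1:
            total += x * y
total *= h * h
exact = math.pi / (7 * math.sqrt(7))
print(total, exact)          # both ≈ 0.17
```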
Q:
VBA Active-X buttons getting bigger with every action
I have an Excel sheet containing multiple Active-X buttons. Whenever I click on a button/perform an action, the buttons will keep getting bigger with every action.
Initially this didn't happen at my desk (laptop in dock with two big screens), but when I moved and used the program just on my laptop, it suddenly started happening. The only fix I found is to hard-wire the positions right in the code. I feel that there has to be a solution.
Below is a sample of my code.
Private Sub SpinButton1_SpinDown()
Dim myCell As Range
Dim myRange As Range
Set myRange = Selection
SpinButton1.Height = 45
SpinButton1.Width = 39
SpinButton1.Left = 283.5
SpinButton1.Top = 328.5
For Each myCell In myRange
myCell.Value = myCell.Value - 1
Next myCell
End Sub
A:
It is a Microsoft bug in some versions of Office. Not 100% sure if your version is affected, but you can check here: http://support.microsoft.com/kb/2598259
The fix is also available for download from there.
Also, it is not advisable to use ActiveX buttons unless you want to make colorful buttons with fancy decorations. Even then you can replicate the same effect using
Images which look like the buttons you want (with shadows and all for the 3D effect)
Setting MouseOver tooltips for the correct look and field for buttons
Assigning Macros for Click-behavior, etc.
Q:
Timer1 arduino makes Serial not work
Running the code below, when I send any character over Serial, the Arduino is not printing "a" back. I think something is wrong with the Timer1 code, but it should work because this code was given by my teacher in C class.
void setup() {
Serial.begin(115200);
//http://www.instructables.com/id/Arduino-Timer-Interrupts/?ALLSTEPS
noInterrupts();
TCCR1A = 0;// set entire TCCR1A register to 0
TCCR1B = 0;// same for TCCR1B
TCNT1 = 0;//initialize counter value to 0
// set compare match register for 1000000hz increments with 8 bits prescaler
OCR1A = 1;// = (16*10^6) / (1000000*8) - 1 (must be <65536)
// turn on CTC mode
TCCR1B |= (1 << WGM12);
// Set CS11 bit for 8 prescaler. Each timer has a different bit code to each prescaler
TCCR1B |= (1 << CS11);
// enable timer compare interrupt
TIMSK1 |= (1 << OCIE1A);
interrupts();
}
void loop() {
if (Serial.available()) {
Serial.println("a");
}
}
A:
The way you set TCCR1A and B is all correct.
See the 660-pg ATmega328 datasheet pgs. 132~135 for more help & info if you want to know where to look from now on for low-level help.
However, you have 2 major problems, 1 minor problem, and 1 recommendation.
Here are the 2 major problems that are completely breaking your code:
Since you are enabling the Timer Compare Match 1A interrupt ("TIMSK1 |= (1 << OCIE1A);"), you MUST also define the Interrupt Service Routine (ISR) that will be called when the interrupt fires, or else you will have run-time (but not compile-time) problems. Namely, if you do not define the ISR for Output Compare Match A, then once the Output Compare A interrupt occurs, the processor will get stuck in an infinite, empty, dummy ISR created for you by the compiler, and your main loop will not progress (see the code below for proof of this).
Add this to the bottom of your code:
ISR(TIMER1_COMPA_vect)
{
//insert your code here that you want to run every time the counter reaches OCR1A
}
It takes a couple of microseconds to step into an ISR and a couple of microseconds to step out of it, plus whatever time is required to run the code IN the ISR. So you need to use an OCR1A value that is large enough that the ISR even has time to execute; otherwise it is called again so quickly that you never exit it, which essentially locks your code into an infinite loop (this is what is happening in your case as well).
I recommend you call an ISR no more often than every 10us. Since you are using CTC mode (Clear Timer on Compare match), with a prescaler of 8, I recommend setting OCR1A to nothing less than 20 or so. OCR1A = 20 would call the ISR every 10us. (A prescaler of 8 means that each Timer1 tick takes 0.5us, and so OCR1A = 20 would call the ISR every 20*0.5 = 10us).
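As a sanity check on those numbers, here is a small stand-alone sketch (plain C++, not Arduino code) of the datasheet's CTC timing formula. Note that strictly speaking the counter runs through OCR1A + 1 ticks per interrupt, so OCR1A = 20 gives 10.5us rather than exactly 10us; the 10us figure above is a close approximation:

```cpp
// CTC interrupt period, per the ATmega328 datasheet formula
// f_OCnA = f_clk / (N * (1 + OCRnA)), where N is the prescaler.
double ctc_period_us(double f_clk_hz, unsigned prescaler, unsigned ocr)
{
    return 1e6 * prescaler * (ocr + 1) / f_clk_hz;
}
// ctc_period_us(16e6, 8, 20) -> 10.5 us
// ctc_period_us(16e6, 8, 1)  -> 1.0 us (far too fast to service an ISR)
```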
If you set OCR1A = 20, and add the ISR code as described above, your code will run just fine.
1 Minor problem:
It is good practice to set OCR1A after you configure the rest of the timer, or else under some situations the timer may not start counting (see "Thorsten's" comment here: http://www.righto.com/2009/07/secrets-of-arduino-pwm.html)
So, move OCR1A = 20; to after your last TCCR1B line and before your TIMSK1 line.
1 recommendation:
Get rid of "noInterrupts" and "interrupts". They are not needed here.
Now, here is a code I wrote which will better demonstrate what you're trying to do, and what I'm talking about:
/*
timer1-arduino-makes-serial-not-work.ino
-a demo to help out this person here: http://stackoverflow.com/questions/28880226/timer1-arduino-makes-serial-not-work
By Gabriel Staples
http://electricrcaircraftguy.blogspot.com/
5 March 2015
-using Arduino 1.6.0
*/
//Note: ISR stands for Interrupt Service Routine
//Global variables
volatile unsigned long numISRcalls = 0; //number of times the ISR is called
void setup()
{
Serial.begin(115200);
//http://www.instructables.com/id/Arduino-Timer-Interrupts/?ALLSTEPS
// noInterrupts(); //Not necessary
TCCR1A = 0;// set entire TCCR1A register to 0
TCCR1B = 0;// same for TCCR1B
TCNT1 = 0;//initialize counter value to 0
// set compare match register for 1000000hz increments with 8 bits prescaler
OCR1A = 20;// = (16*10^6) / (1000000*8) - 1 (must be <65536) //better to put this line AFTER configuring TCCR1A and B, but in Arduino 1.6.0 it appears to be ok here (may crash code in older versions, see comment by "Thorsten" here: http://www.righto.com/2009/07/secrets-of-arduino-pwm.html
// turn on CTC mode [Clear Timer on Compare match---to make timer restart at OCR1A; see datasheet pg. 133]
TCCR1B |= (1 << WGM12);
// Set CS11 bit for 8 prescaler [0.5us ticks, datasheet pg. 135]. Each timer has a different bit code to each prescaler
TCCR1B |= (1 << CS11);
// enable timer compare match 1A interrupt; NOW YOU *MUST* SET UP THE CORRESPONDING ISR OR THIS LINE BREAKS THE CODE
TIMSK1 |= (1 << OCIE1A);
// OCR1A = 20;// = (16*10^6) / (1000000*8) - 1 (must be <65536) //SETTING OCR1A TO 1 OR 2 FOR SURE BREAKS THE CODE, as it calls the interrupt too often
// interrupts();
Serial.println("setup done, input a character");
}
void loop()
{
if (Serial.available())
{
Serial.read(); //read and throw away the first byte in the incoming serial buffer (or else the next line will get called every loop once you send the Arduino a char)
Serial.println("a");
//also print out how many times OCR1A has been reached by Timer 1's counter
noInterrupts(); //turn off interrupts while reading non-atomic (>1 byte) volatile variables that could be modified by an ISR at any time--incl while reading the variable itself.
unsigned long numISRcalls_copy = numISRcalls;
interrupts();
Serial.print("numISRcalls = "); Serial.println(numISRcalls_copy);
}
// Serial.println("test");
// delay(1000);
}
//SINCE YOU ARE ENABLING THE COMPARE MATCH 1A INTERRUPT ABOVE, YOU *MUST* INCLUDE THE CORRESPONDING INTERRUPT SERVICE ROUTINE CODE
ISR(TIMER1_COMPA_vect)
{
//insert your code here that you want to run every time the counter reaches OCR1A
numISRcalls++;
}
Run it and see what you think.
Proof that "Major Problem 1" above is real (at least as far as I understand it--and based on tests on an Arduino Nano, using IDE 1.6.0):
This code below compiles, but will not continue to print the "a" (it may print it once, however). Note that for simplicity's sake I commented out the portion waiting for serial data, and simply told it to print an "a" every half second:
void setup() {
Serial.begin(115200);
//http://www.instructables.com/id/Arduino-Timer-Interrupts/?ALLSTEPS
TCCR1A = 0;// set entire TCCR1A register to 0
TCCR1B = 0;// same for TCCR1B
TCNT1 = 0;//initialize counter value to 0
// set compare match register for 1000000hz increments with 8 bits prescaler
OCR1A = 20;// = (16*10^6) / (1000000*8) - 1 (must be <65536)
// turn on CTC mode
TCCR1B |= (1 << WGM12);
// Set CS11 bit for 8 prescaler. Each timer has a different bit code to each prescaler
TCCR1B |= (1 << CS11);
// enable timer compare interrupt
TIMSK1 |= (1 << OCIE1A);
}
void loop() {
//if (Serial.available()) {
// Serial.println("a");
//}
Serial.println("a");
delay(500);
}
//ISR(TIMER1_COMPA_vect)
//{
// //insert your code here that you want to run every time the counter reaches OCR1A
//}
This code below, on the other hand, works, and the "a" will continue to print out. The only difference between this one and the one just above is that this one has the ISR declaration uncommented at the bottom:
void setup() {
Serial.begin(115200);
//http://www.instructables.com/id/Arduino-Timer-Interrupts/?ALLSTEPS
TCCR1A = 0;// set entire TCCR1A register to 0
TCCR1B = 0;// same for TCCR1B
TCNT1 = 0;//initialize counter value to 0
// set compare match register for 1000000hz increments with 8 bits prescaler
OCR1A = 20;// = (16*10^6) / (1000000*8) - 1 (must be <65536)
// turn on CTC mode
TCCR1B |= (1 << WGM12);
// Set CS11 bit for 8 prescaler. Each timer has a different bit code to each prescaler
TCCR1B |= (1 << CS11);
// enable timer compare interrupt
TIMSK1 |= (1 << OCIE1A);
}
void loop() {
//if (Serial.available()) {
// Serial.println("a");
//}
Serial.println("a");
delay(500);
}
ISR(TIMER1_COMPA_vect)
{
//insert your code here that you want to run every time the counter reaches OCR1A
}
Extra Resources:
I keep a running list of the most helpful Arduino resources I come across at the bottom of an article I wrote here: http://electricrcaircraftguy.blogspot.com/2014/01/the-power-of-arduino.html. Check them out.
Especially look at the first links, under the "Advanced" section, by Ken Shirriff and Nick Gammon. They are excellent!
Please vote this answer up if it solves your problem, and accept it as the correct answer; thanks!
Sincerely,
Gabriel Staples
http://www.ElectricRCAircraftGuy.com/
| {
"pile_set_name": "StackExchange"
} |
Q:
Scala type inference fails on overloaded methods despite non-conflicting signature
% scala
Welcome to Scala 2.12.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_111).
Type in expressions for evaluation. Or try :help.
scala> trait Op[-Y, -Z, +A, +B] {
| def apply(other: (Y, Z)): (A, B)
| }
defined trait Op
scala> implicit class RichTuple2[+A, +B](t: (A, B)) {
| def ~~~(other: Int): (A, B) = ???
| def ~~~[RA, RB](other: Op[A, B, RA, RB]): (RA, RB) = other.apply(t)
| }
defined class RichTuple2
scala> def swap[A, B] = new Op[A, B, B, A] {
| override def apply(other: (A, B)) = (other._2, other._1)
| }
swap: [A, B]=> Op[A,B,B,A]
scala> (1, "foo") ~~~ swap
<console>:14: error: overloaded method value ~~~ with alternatives:
[RA, RB](other: Op[Int,String,RA,RB])(RA, RB) <and>
(other: Int)(Int, String)
cannot be applied to (Op[Nothing,Nothing,Nothing,Nothing])
(1, "foo") ~~~ swap
If I remove the first ~~~(other: Int) method, then it works:
scala> trait Op[-Y, -Z, +A, +B] {
| def apply(other: (Y, Z)): (A, B)
| }
defined trait Op
scala> implicit class RichTuple2[+A, +B](t: (A, B)) {
| def ~~~[RA, RB](other: Op[A, B, RA, RB]): (RA, RB) = other.apply(t)
| }
defined class RichTuple2
scala> def swap[A, B] = new Op[A, B, B, A] {
| override def apply(other: (A, B)) = (other._2, other._1)
| }
swap: [A, B]=> Op[A,B,B,A]
scala> (1, "foo") ~~~ swap
res0: (String, Int) = (foo,1)
The question is: why do type inference and method selection fail in this case? The method ~~~(other: Int) takes a parameter that isn't at all related to the type of swap (which is an Op type). And is anyone aware of a workaround?
A:
scalac sometimes has trouble finding the right implicits or inferring the right types when one mixes implicits with overloading.
There are several JIRA tickets on this topic, and this particular one, SI-9523, appears to be the same problem as the one in your question.
In your case scalac is unable to infer the type arguments for swap when ~~~ is overloaded, so annotating it with swap[Int, String] should work.
Overloading is generally discouraged in Scala (see Why "avoid method overloading"? and http://www.wartremover.org/doc/warts.html), so the best solution is to avoid it.
| {
"pile_set_name": "StackExchange"
} |
Q:
ASP.NET AJAX 4.0 client side databinding
I read some articles in MSDN magazine about new features in ASP.NET AJAX 4.0 - primarily client side data binding. I feel MSDN magazine sometimes contains a lot of "marketing", so I'm interested in the opinions of real developers. Is it worth it? Do you plan to use it?
Edit:
Here are links to the articles if anybody is interested. But at the moment it looks like a dying framework for enthusiasts only.
Data binding in ASP.NET AJAX 4.0
Conditional rendering in ASP.NET AJAX 4.0
Live databinding in ASP.NET AJAX 4.0
Master-detail view with ASP.NET AJAX Library
A:
Well, Microsoft themselves started to favor jQuery over Microsoft Ajax something like half a year ago. Though the new stuff looks great, and I've played with it, I prefer to use jQuery in combination with jqGrid instead. Besides that, there are a lot of plugins available for jQuery as well, and I consider that an added bonus.
What I do like, and still use, is the ASP.NET Ajax control toolkit. Especially the calendar control in that suite looks cool and most of my clients like me to build it in.
| {
"pile_set_name": "StackExchange"
} |
Q:
Integrals of pullbacks and the Inverse function theorem(s?)
The usual story goes like this:
Smooth picture (?):
For a smooth bijection $\phi: M \to N$ between $n$-manifolds the following
is true:
$\phi^{-1}$ is a local diffeomorphism a.e.
Given an open set $U \subset N$ and a form $\omega \in \Omega^k(U)$ we have the equality: $\int_U \omega= \int_{\phi^{-1}(U)} \phi^*\omega$.
Satisfied with this simple and very intuitive picture I slowly came to believe that this is the most general change of variables theorem there could be. That is, until I met the following theorem in a measure theoretic context.
Theorem 1:
Let $U\subset\mathbb{R}^n$ be a measurable subset and $\phi :U\to \mathbb{R}^n$ be injective and differentiable.
$\implies$ $\int_{\phi(U)} f = \int_U f\circ\phi |\det D\phi|$ for all real valued functions $f$.
Then upon searching the internet I came by a weaker version of the inverse function theorem for everywhere differentiable functions:
Theorem 2:
Let $U \subset \mathbb{R}^n$ and $f:U \to \mathbb{R}^n$ be differentiable s.t. $Df_{x_0}$ has full rank for all $x_0 \in U$
$\implies$ $f$ is a local differentiable homeomorphism.
Which leads to the following generalization:
Differentiable picture (conjecture):
For a differentiable bijection $\phi: M \to N$ between $n$-manifolds the following
is true:
$\phi^{-1}$ is a local differentiable homeomorphism a.e.
Given an open set $U \subset N$ and a form $\omega \in \Omega^k(U)$ we have the equality: $\int_U \omega= \int_{\phi^{-1}(U)} \phi^*\omega$.
In search of a unifying picture I listed the properties a function $\phi$ must have so that the pullback won't change the value of the integral.
I. $\phi$ must be locally absolutely continuous. (otherwise it could send a null set to a positive measure set). This also establishes the almost everywhere differentiability of $\phi$.
II. $D \phi$ must have full rank almost everywhere.
If we can preform change of variables with $\phi$. What more properties must it have?
Following the connection with Lipschitz functions, I arrived at the following unifying conjecture:
The great conjecture:
For a locally Lipschitz bijection $\phi: M \to N$ between $n$-manifolds the following is true:
$\phi^{-1}$ is locally bi-Lipschitz (a.e.?)
Given an open set $U \subset N$ and a form $\omega \in \Omega^k(U)$ we have the equality: $\int_U \omega= \int_{\phi^{-1}(U)} \phi^*\omega$.
Is this true?
A:
If you consider continuous injections (resp. homeomorphisms onto their range) instead of locally Lipschitz bijections (resp. locally bi-Lipschitz), then the modified conjecture is true because of Brouwer's theorem on invariance of domain, with the proviso that in (2) you should consider $n$-forms instead of $k$-forms because the domain $U$ of integration is assumed open and hence is $n$-dimensional. Formula (2) in this case is simply the change of variables formula for integrals with respect to a measure $\mu$ and its pushforward $\phi_*\mu$, once you write $n$-forms in terms of the volume measure associated to a Riemannian metric on $M$ (recall that a continuous map is always Borel measurable). Here I'm assuming $M$ and $N$ oriented for simplicity.
Otherwise (assuming back your original hypotheses), you should consider locally $k$-rectifiable subsets $U$ of the target manifold $N$ in (2) instead of $U$ open ("locally" equals "countably" if your manifolds are second countable). All that remains is to show that the inverse of $\phi$ is locally Lipschitz. This, however, needs an additional hypothesis as follows. By Rademacher's theorem, any locally Lipschitz map $\phi:M\rightarrow N$ is differentiable almost everywhere; more precisely, the (Clarke sub)differential $D\phi$ of $\phi$ is a locally bounded set-valued map which is single-valued almost everywhere. That being said, the additional hypothesis you need in your conjecture to settle part (1) is that $D\phi$ takes only non-singular values, for in this case you can invoke Clarke's inverse function theorem for Lipschitz maps (Theorem 1 of F.H. Clarke, "On The Inverse Function Theorem". Pac. J. Math. 64 (1976) 97-102). Part (2) follows immediately by using the $k$-dimensional Hausdorff measure with respect to some Riemannian metric on $M$.
Some of the comments addressed the possibility of doing away with the hypothesis of $D\phi$ being non-singular by invoking Sard's theorem. However, this does not work with so little regularity assumed from $\phi$ - a necessary hypothesis for Sard's theorem to hold true, as pointed in Sard's original paper ("The Measure of Critical Values of Differentiable Maps", Bull. Amer. Math. Soc. 48 (1942) 883-890) is that $\phi$ should be at least $\mathscr{C}^1$. $\phi$ being locally Lipschitz is not good enough - as shown by J. Borwein and X. Wang ("Lipschitz functions with maximal subdifferentials are generic", Proc. Amer. Math. Soc. 128 (2000) 3221–3229), the set of 1-Lipschitz functions $f$ such that all points in its domain are critical (in the sense that $Df$ contains zero everywhere) is generic with respect to the uniform topology. In particular, generically one cannot expect the inverse of a locally Lipschitz bijection, albeit certainly continuous, to be locally Lipschitz (even if only up to a subset of measure zero).
| {
"pile_set_name": "StackExchange"
} |
Q:
integral of cdf function
Could anyone please help me how has this integral of cdf been solved?
(M is median)
A:
This is an application of integration by parts using the Fundamental Theorem of Calculus $F'(x)=f(x)$ and the substitution $\alpha=x$. Consider the formula for integration by parts
$$\int u~dv=uv-\int v~du$$
by setting $u=F(\alpha)$ and $dv=1\cdot d\alpha$ therefore $du=f(\alpha)d\alpha$ and $v=\alpha$ followed by $\alpha=x$ within your given integral leads to
$$\int_m^a F(\alpha)d\alpha=[\alpha F(\alpha)]_m^a-\int_m^a \alpha f(\alpha)d\alpha\stackrel{\alpha=x}{=}aF(a)-mF(m)-\int_m^a xf(x)dx$$
| {
"pile_set_name": "StackExchange"
} |
Q:
Recursive Call in controller Lightning
I want to make a recursive call in the JS controller on my Lightning component. Following is my JS controller, but I am getting an error:
'callback' must be a valid Function
check : function(component, event, helper) {
var str = component.get('v.message');
if(str == null){
str = '';
}
$A.getCallback(helper.check(component,str),1200);
}
ControllerHelper Code :
check : function(cmp,a) {
var msg = 'Welcome to Dairy Startup';
if(a.length < msg.length){
a = msg.substring(0,a.length + 1);
cmp.set('v.message',a);
}
}
How can I make a recursive call to the helper method?
A:
You're actually calling the function, but instead what you want to do is bind it to get a function. That looks like this:
$A.getCallback(helper.check.bind(helper,component,str));
Where the first parameter is the "this" variable (the helper), and the parameters thereafter are function parameters.
You can read more about binding over on MDN.
You also are presumably trying to create a delay, so you need to use setTimeout:
setTimeout($A.getCallback(helper.check.bind(helper,component,str)), 1200);
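For reference, here is a plain-JavaScript illustration of what bind does (the object and method names below are made up for the example, not part of the Lightning API): it returns a new function with this and the leading arguments fixed, which is exactly the kind of "valid Function" that $A.getCallback and setTimeout expect:

```javascript
// A stand-in for the helper: bind fixes `this` plus leading arguments.
const helper = {
    prefix: 'msg: ',
    check: function (component, str) { return this.prefix + str; }
};

// Without bind, helper.check(component, str) CALLS the function immediately
// and passes its return value along, which caused the original error.
const bound = helper.check.bind(helper, null, 'hello');

console.log(bound()); // "msg: hello" -- a real function, safe to pass as a callback
```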
| {
"pile_set_name": "StackExchange"
} |
Q:
Jenkins workspace settings issues in Windows
I have the latest stable release of Jenkins 2.121.1 installed on my Windows10Pro(x64)-build machine.
Problem no 1:
I can't find the system-wide workspace settings as shown for instance in this thread:
How to change workspace and build record Root Directory on Jenkins?
Has this been removed? I only have the workspace settings available for the specific jobs, but I would like to change it on a system-wide-level.
Problem no 2:
When I configure the custom work space for a specific job like so:
It is supposed to use the name for the item that I've created. Instead it LITERALLY creates a folder with this name, like so:
Even though the jenkins documentation says I should use "${ITEM_FULL_NAME}", I've tried different variants (ITEM_FULLNAME) etc.
A:
It looks like a bug
They mention it under the Advanced tab, but I don't find it anywhere either.
Workaround
Modify the jenkins.xml directly
from
<env name="JENKINS_HOME" value="%BASE%"/>
to
<env name="JENKINS_HOME" value="newPath\Jenkins"/>
Considering the 2nd issue you can modify the config.xml
Or
You can set the env variable when you start the jenkins.war
SET JENKINS_HOME=new\Path\directory
SET ITEM_ROOTDIR=new\root\directory
java -jar jenkins.war
You will have to do this each time you start and stop the service
| {
"pile_set_name": "StackExchange"
} |
Q:
Data Attribute in OpenMP's task
I have written a code related to quick sort with OpenMP as follows:
#include <iostream>
#include <ctime>
#include <algorithm>
#include <functional>
#include <cmath>
using namespace std;
#include <omp.h>
void ParallelQuickSort(int *begin, int *end)
{
if (begin+1 < end)
{
--end;
int *middle = partition(begin, end, bind2nd(less<int>(), *end));
swap(*end, *middle);
#pragma omp task shared(begin) firstprivate(middle)
ParallelQuickSort(begin, middle);
#pragma omp task shared(end) firstprivate(middle)
ParallelQuickSort(++middle, ++end);
}
}
int main()
{
int n = 200000000;
int* a = new int[n];
for (int i=0; i<n; ++i)
{
a[i] = i;
}
random_shuffle(a, a+n);
cout<<"Sorting "<<n<<" integers."<<endl;
double startTime = omp_get_wtime();
#pragma omp parallel
{
#pragma omp single
ParallelQuickSort(a, a+n);
}
cout<<omp_get_wtime() - startTime<<" seconds."<<endl;
for (int i=0; i<n; ++i)
{
if (a[i] != i)
{
cout<<"Sort failed at location i="<<i<<endl;
}
}
delete[] a;
return 0;
}
The problem I have in the code is the data attribute in the task construct within the ParallelQuickSort function. The variable middle should be firstprivate instead of shared as it may be changed by threads executing the two tasks. However, if I set the variables begin and end as shared as shown in the code, the program will fail. I wonder why they (begin and end) should be firstprivate instead of shared. In my view, as the threads executing the two tasks keep the variables begin and end respectively, they will not affect each other. On the other hand, the ParallelQuickSort function is recursive, and thus there is a race on the variable begin or end (for example, between the parent function and the child function). I am not sure about this suspicion as the variables are in different functions (parent and child function).
A:
First, variables that are determined to be private in a given region are automatically firstprivate in explicit tasks, so you don't need to declare them explicitly as firstprivate. Second, your code contains ++end; and --end; which modify the value of end, affecting other tasks if end is shared. firstprivate is the correct data sharing class here - each task simply retains the values of begin, end and middle that they used to have at the time the task was created.
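A minimal stand-alone sketch (separate from the sort itself) of the data-sharing rule described above: a variable captured by an explicit task defaults to firstprivate, so each task keeps a snapshot of the value the variable had when the task was created. If this is compiled without OpenMP support the pragmas are simply ignored and the result is the same:

```c
/* Each of the four tasks captures its own snapshot of i at creation time,
 * because i is implicitly firstprivate in the explicit task.  The implicit
 * barrier at the end of the single/parallel region guarantees all tasks
 * have finished before run_tasks returns. */
void run_tasks(int out[4])
{
    #pragma omp parallel num_threads(2)
    #pragma omp single
    for (int i = 0; i < 4; ++i) {
        #pragma omp task            /* i is implicitly firstprivate here */
        out[i] = i;                 /* writes the snapshot, not a shared i */
    }
}
```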
Your ParallelQuickSort should be as simple as this:
void ParallelQuickSort(int *begin, int *end)
{
if (begin+1 < end)
{
--end;
int *middle = partition(begin, end, bind2nd(less<int>(), *end));
swap(*end, *middle);
#pragma omp task
ParallelQuickSort(begin, middle);
#pragma omp task
ParallelQuickSort(++middle, ++end);
}
}
Note that although this code works, it is way slower than the single-threaded version: 88.2 seconds with 2 threads on a large Xeon X7350 (Tigerton) box versus 50.1 seconds with a single thread. The reason is that tasks creation continues up to the very simple task of swapping two array elements. The overhead of using tasks is huge and you should put a sane upper threshold below which tasking should be disabled, let's say when the subarray size has reached 1024 elements. The exact number depends on the OpenMP run-time implementation, your CPU type and memory speed, so the value of 1024 is more or less randomly chosen. Still the optimal value should not create two tasks that process elements that would fall in the same cache line, so the number of elements should be a multiple of 16 (64 bytes per cache line / 4 bytes per integer):
void ParallelQuickSort(int *begin, int *end)
{
if (begin+1 < end)
{
--end;
int *middle = partition(begin, end, bind2nd(less<int>(), *end));
swap(*end, *middle);
#pragma omp task if((end - begin) > 1024)
ParallelQuickSort(begin, middle);
#pragma omp task if((end - begin) > 1024)
ParallelQuickSort(++middle, ++end);
}
}
With this modification the code runs for 34.2 seconds with two threads on the same box.
| {
"pile_set_name": "StackExchange"
} |
Q:
how to make a css "if opera,not..."
I want to make a CSS rule which affects all browsers but Opera; all the other browsers get the rule
#content{left:1px;} (Opera goes without this rule). The code below did not work:
<!--[if !OPERA]>
<style type="text/css">
#content{left:1px;}
</style>
<![endif]-->
A:
Conditional comments are recognized by IE only. If you need Opera-specific CSS, you will need JavaScript:
if (window.opera) {
document.getElementById('foo').style.height = '100px';
}
A:
You can use the property you want for a selector, like #content{left:1px;}, then add a CSS hack for Opera providing the default value (or the value you want). The CSS hack has the following syntax: @media all and (min-width:0px) {head~body .selector {property:value;}}. An example of the previous syntax applied to your case could be: @media all and (min-width:0px) {head~body #content {left:0px;}}
| {
"pile_set_name": "StackExchange"
} |
Q:
Avoid duplication of steps across multiple pools in azure pipelines
I have a netstandard library that I want to build and test across multiple platforms (Windows & Linux).
Currently I have to do this
jobs:
- job: Linux
pool:
vmImage: ubuntu-16.04
steps:
# A number of steps here
- job: Windows
pool:
vmImage: vs2017-win2016
steps:
# The exact same steps as the linux job
Is there any way to avoid having to duplicate the steps between the jobs?
A:
I'm afraid you have to set the steps separately for each pool when running across multiple pools in Azure Pipelines.
However, you can try using a template: a set of steps can be defined in one file and used in multiple places in another file.
Please see YAML schema reference - Step template for details.
# File: steps/build.yml
steps:
- script: npm install
- script: npm test
Across multiple pools:
# File: azure-pipelines.yml
jobs:
- job: macOS
pool:
vmImage: 'macOS-10.13'
steps:
- template: steps/build.yml # Template reference
- job: Linux
pool:
vmImage: 'Ubuntu-16.04'
steps:
- template: steps/build.yml # Template reference
- job: Windows
pool:
vmImage: 'VS2017-Win2016'
steps:
- template: steps/build.yml # Template reference
- script: sign # Extra step on Windows only
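If the Windows-only extra step bothers you, templates can also take parameters, so even that difference can move into the shared file. A minimal sketch (the sign parameter and step name are illustrative assumptions, not from the docs above):

```yaml
# File: steps/build.yml
parameters:
  sign: false

steps:
- script: npm install
- script: npm test
- ${{ if eq(parameters.sign, true) }}:
  - script: sign

# File: azure-pipelines.yml (Windows job only)
# - template: steps/build.yml
#   parameters:
#     sign: true
```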
| {
"pile_set_name": "StackExchange"
} |
Q:
ssh delay when connecting
When connecting to one specific server (running Debian Lenny) it always takes about 5 seconds before it prompts me to enter a password. After login there is no noticable delay anymore. There is also no delay at any other server in this network (although they are not running Lenny).
Any idea what could be causing this and how to fix it?
A:
It's most often a DNS problem. Try setting 'UseDNS no' in sshd_config.
A:
It could possibly be a reverse-DNS lookup delay. If your connecting host doesn't have a DNS entry, try adding an entry for your source system in /etc/hosts on the server you're connecting to.
| {
"pile_set_name": "StackExchange"
} |
Q:
Add column to existing table using a query in Access
I have a table that contains an ID and the income of an individual in Access. I now would like to add a column to this table that contains the income quintile (5%) the individual belongs to i.e. 1, 2, 3, ... depending on the individuals' income.
I have calculated the upper limits of the quintiles from a different dataset and have stored them in another table.
Originally I planned to do this with a WHEN function in "calculate field" in the table view. However, since I would like to use 5% quintiles I need 20 WHEN conditions, which Access can't handle ("The function you have entered is too complex.").
Is there any way to do this with a query?
A:
Do this with a join. Assuming that the other table has these columns:
quantile
lowerbound
upperbound
Then do:
select t.*,
(select quantile
from othertable as ot
where t.value >= ot.lowerbound and t.value < ot.upperbound
) as quantile
from mytable as t;
The exact logic depends on the nature of your data (are the bounds inclusive or exclusive, for instance). Your question provides no specifics, so this is guidance.
If you have a quantile column in the first table, you can also use a similar query to update the value.
| {
"pile_set_name": "StackExchange"
} |
Q:
linux (and OSX) shell command to execute every time file is saved
I am trying to get jsdoc to automatically generate when I save my javascript file. I have got a working script that stores the last update time of a file (currently hardcoded) and compares with the current timestamp of that file. I run this in a while loop that runs until CTRL-C is pressed, and insert a 0.1 second sleep to stop the processor being consumed.
This is the working script:
while :
do
if [ $(( lastTime )) -ne `stat -f %m -t %s javascript.js` ]
then
lastTime=`stat -f %m -t %s javascript.js`
# custom jsdoc generating script
jsdoc javascript.js
echo +++ Run: `date` +++
fi
# stops while loop from consuming a lot of resources
# and making my fan whirr like he wants the computer to take off
sleep .1
done
I know there is a better way - just not what that way is. Any help appreciated.
Edit: Update: for Linux machines with inotify-tools installed, this should work:
#!/bin/bash
# with inotify-tools installed...
# only watches first parameter for modification
while inotifywait -e modify $1; do
echo
echo +++ Building JSDocs +++
jsdoc $@
echo +++ Last run: `date` +++
done
However, I would like this to work on both Linux and OSX shell, so I can use in both environments
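One portability wrinkle in the polling script above: the stat -f %m -t %s invocation is BSD/macOS syntax, while GNU/Linux stat wants stat -c %Y. A small wrapper (a sketch, assuming one of the two stat variants is present) lets the same loop run on both:

```shell
# Return a file's modification time in seconds since the epoch, trying the
# GNU/Linux flags first and falling back to the BSD/macOS ones.
mtime() {
    stat -c %Y "$1" 2>/dev/null || stat -f %m "$1"
}

# Usage in the loop above:
#   if [ "$lastTime" -ne "$(mtime javascript.js)" ]; then ...
```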
A:
There is a Linux kernel feature called inotify that watches the file system for any changes. It is exposed as a number of system APIs.
For scripting, there is a package called inotify-tools that gives scripting access to the notification system.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to visualise the direction of torque?
Torque is the analog of force in rotational motion and it has one of two directions, i.e. clockwise and counterclockwise. How can I visualise these directions? For example, if r and F are in the same plane (the x and y axes), then according to the right-hand rule the torque will be on the z axis, but I find it difficult to visualise.
A:
Torque, being the cross product of the position (lever-arm) vector with the applied force, is an axial vector (or rotational vector):
$$\vec{T} = \vec{r} \times \vec{F}$$
One way to represent such a vector (following the definition above) is by a vector which is perpendicular to the plane generated by both $\vec{F}$ and $\vec{r}$ and whose magnitude equals the rotational torque.
Whether the torque vector $\vec{T}$ points "upward" from this plane or "downward" is related to whether the rotation is clock-wise or counter-clockwise (any convention will do as long as it is consistent).
See diagram below for visual:
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get wall clock time from xcode instruments time profiler?
I am trying to profile a scenario in my iOS app, being able to see the CPU cycles spent per function. However I would like to check the wall clock time spent by the functions, as I am expecting some wait time because of resource contention.
A:
Set "Record Waiting Threads" in the recording options (File > Recording Options). Then, when examining the call tree, you can use the Separate by State option in the Call Tree configuration pop-up.
| {
"pile_set_name": "StackExchange"
} |
Q:
decide / conclude / come to the conclusion
Here's the context.
The reason I quit my last job wasn't because it was bad or anything. In fact, what I do now is more or less the same. It's actually even harder (more difficult) in some ways, and there are plenty of things that I miss about my old job. It's just that I had certain goals in mind and I decided that working there just wasn't in line with what I wanted to do going forward.
.
I decided that working there just wasn't in line with what I wanted to do going forward.
I concluded that working there just wasn't in line with what I wanted to do going forward.
I came to the conclusion that working there just wasn't in line with what I wanted to do going forward.
I feel they are different but I'm not sure what exactly the difference is.
Could you let me know the difference? Thanks in advance.
A:
There is little difference.
"Conclude" means at the end. So it implies "At the end of a thinking or after a logic process". On the other hand you could "decide" quickly and based on your feelings or judgement. "Came to a conclusion" is mostly a long way of saying "concluded" but suggests an even longer process.
Three brothers each chose a sword. The first decided to choose the one that looked nicest. The second investigated the balance of each sword and concluded that a Japanese katana was best. The third discussed with his family and friends and came to the conclusion that fighting was wrong, so left to become a monk.
In the example you gave, you could use "decided" or "concluded", or even "came to a conclusion"
| {
"pile_set_name": "StackExchange"
} |
Q:
Hibernate Search multiple fields with @ClassBridge
First of all, Happy New Year !
I'd like to index entity label in multiple languages.
I have 2 entities :
MyEntity
labelCode
Translation
code
languageCode
label
The MyEntity.labelCode must match Translation.code, so I have multiple labels for multiple languages per MyEntity instance.
I wrote a ClassBridge on MyEntity to add multiple fields to document :
class I18NTranslationClassBridge implements FieldBridge {
Analyzer analyzer
@Override
void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
if (value && value instanceof I18NDictionaryCategory) {
I18NDictionaryCategory entity = value as I18NDictionaryCategory
String labelCode = entity.getLabelCode()
def translations = TranslationData.findAllByCode(labelCode)
if (!analyzer) analyzer = Search.getFullTextSession(Holders.getApplicationContext().sessionFactory.currentSession).getSearchFactory().getAnalyzer('wildcardAnalyzer')
translations?.each { translation ->
document.add(getStringField("labelCode_${translation.languageCode}", translation.label, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.NO, 1f, analyzer))
document.add(getStringField("labelCode__${translation.languageCode}_full", translation.label, Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS, Field.TermVector.NO, 1f, null))
}
}
}
private static Field getStringField(String fieldName, String fieldValue, Field.Store store, Field.Index index, Field.TermVector termVector, float boost, Analyzer analyzer) {
Field field = new Field(fieldName, fieldValue, store, index, termVector);
field.setBoost(boost);
// manually apply token stream from analyzer, as hibernate search does not
// apply the specified analyzer properly
if (analyzer) {
try {
field.setTokenStream(analyzer.reusableTokenStream(fieldName, new StringReader(fieldValue)));
}
catch (IOException e) {
e.printStackTrace();
}
}
return field
}
}
I'd like to index 2 fields per language : 1 with no analyzer and no tokenizer (for sorting results) and an other with tokenizer (for full-text search).
My problem is that all fields without analyzer are well indexed but fields with analyzer are not. Only 1 language is correctly indexed.
I try to do it with ClassBridge or FieldBridge without success.
Any suggestions?
Best regards,
Léo
A:
You should not use an Analyzer within class/field bridge. Analyzers are applied at a later stage. Hibernate Search collects all required analyzers in a so called ScopedAnalyzer which gets used when the Lucene Document gets added to the index. To support your use case you can make use of the dynamic analyzer selection feature. See also http://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#d0e4119.
The basic approach is to define the language-specific analyzers via @AnalyzerDef. This makes them globally available by name. Then you need to implement org.hibernate.search.analyzer.Discriminator. You basically return the right analyzer name depending on the field name (assuming that the field names contain the language code in some form). Last but not least you need to annotate MyEntity with @AnalyzerDiscriminator(impl = MyDiscriminator.class).
| {
"pile_set_name": "StackExchange"
} |
Q:
Neo-6M GPS returning no values on Arduino Nano
I have recently bought a GPS module for my Arduino Nano. The GPS is not picking up any satellites. I have checked my code and wiring and cannot see anything that could be affecting it. Sometimes when the code is first run, it sends a jumbled NMEA code, but stops. My code is below.
#include <Arduino.h>
#include <Adafruit_BMP085.h>
#include <Wire.h>
#include <SoftwareSerial.h>
#include <TinyGPS++.h>
#include "AltSoftSerial.h"
Adafruit_BMP085 bmp;
AltSoftSerial ss;
TinyGPSPlus gps;
float lat = 10;
float lon = 10;
void setup() {
Serial.begin(9600);
bmp.begin();
ss.begin(9600);
}
void loop() {
Serial.print("Pressure:");
Serial.print(bmp.readPressure());
Serial.println(" ");
Serial.print("Temp:");
Serial.print(bmp.readTemperature());
Serial.println("C* , ");
gps.encode(ss.read());
if (ss.available() > 0){
Serial.print("Latitude= ");
Serial.print(gps.location.lat());
Serial.print(" Longitude= ");
Serial.println(gps.location.lng());
Serial.print("GPS Height:");
Serial.println(gps.altitude.meters());
Serial.print("Number of Sattilites:");
Serial.println(gps.satellites.value());
Serial.print("Date:");
Serial.println(gps.date.day() + "/" + gps.date.month());
}
delay(3000);
}
Here is a snip of what is shown on the Serial Monitor
Pressure:100397
Temp:30.10C* ,
Latitude= 0.00 Longitude= 0.00
GPS Height:0.00
Number of Sattilites:0
Date:/
Pressure:100396
Temp:30.10C* ,
Latitude= 0.00 Longitude= 0.00
GPS Height:0.00
Number of Sattilites:0
Date:/
Thanks
A:
Thanks, that seems to have fixed it. All the tutorials I looked at used delay(), so I didn't think it would affect it
| {
"pile_set_name": "StackExchange"
} |
Q:
Choose (or create) ads to display in AdMob
In my Android app I want to display specific ads that already exist or I would create.
Banner ads should be for restaurants / bars in my area, arranged with the owners of the premises. So I need to display existing ads or create them as "House Ads".
Is this possible with AdMob? Obviously I could just display images or something else within the app, but I want the safety and traceability of AdMob.
A:
Yes. Simply add the AdMob SDK like you normally would, then go to the AdMob website console and set a really high eCPM floor for the AdMob network so that no "regular" ads are shown (or set it to "only show ads from Greece in Norwegian" to effectively disable AdMob ads). Then, set up your house ads in the "Campaigns" tab in AdMob.
| {
"pile_set_name": "StackExchange"
} |
Q:
C# Opening a program from a local directory
I have little to no experience on C#, but I am very much willing to learn. I am trying to create an application with a button that launches an executable. The application gets ran from an USB Flash drive. Lets say the flash drive has driveletter (e:) on my computer. I want to run a program called rkill.exe from the bin directory.
private void opschonen_RKill_Click(object sender, EventArgs e)
{
var process_RKill = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = "/bin/rkill.exe"
}
};
process_RKill.Start();
process_RKill.WaitForExit();
}
However, this does not work. If I launch the application from the root, it does work. I can't hard-code a drive letter, because not every computer assigns the letter E: to the drive.
What am I doing wrong? I suppose it's something simple, because I am just a beginner.
A:
const string relativePath = "bin/rkill.exe";
//Check for idle, removable drives
var drives = DriveInfo.GetDrives()
.Where(drive => drive.IsReady
&& drive.DriveType == DriveType.Removable);
foreach (var drive in drives)
{
//Get the full filename for the application
var rootDir = drive.RootDirectory.FullName;
var fileName = Path.Combine(rootDir, relativePath);
//If it does not exist, skip this drive
if (!File.Exists(fileName)) continue;
//Execute the application and wait for it to exit
var process = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = fileName
}
};
process.Start();
process.WaitForExit();
}
| {
"pile_set_name": "StackExchange"
} |
Q:
kendo mvc grid with different column settings
I have a kendo mvc grid which is basically like this:
@(Html.Kendo().Grid<baseballViewModel>()
.Name("baseball")
.DataSource(dataSource => dataSource
.Ajax()
.Read(read => read.Action("Index", "baseball"))
.ServerOperation(false)
)
.HtmlAttributes(new { style = "height:175px;" })
.Columns(col => {
col.Bound(player=> player.Date).Width(85);
col.Bound(player=> player.Year).Width(55);
col.Bound(player=> player.Name).Width(150);
col.Bound(player=> player.team).Width(120);
}).Sortable()
.Scrollable()
.Pageable()
)
Now, I'm trying to insert a new column with a button (on each row). Each button, when clicked, fires an event which passes the player name to a controller. I have tried using col.Template() after the fourth column, but no luck with that. Is there any way to do this?
A:
Try to use a custom command:
http://demos.kendoui.com/web/grid/custom-command.html
| {
"pile_set_name": "StackExchange"
} |
Q:
Trim, crop, clip not working as expected in XeTeX
Possible Duplicate:
Clipping support in XeTeX
Why does the crop feature not work in this example?
Secondary question: Is there a good and thorough example somewhere floating around the internet explaining how to work with the bounding box and viewport etc.?
As you can see when you look at it with draft enabled and then remove the ", draft" option, the image is actually not being cropped. It is just moved upwards by the trim feature.
What I'm trying to achieve is to crop off the very first line of characters.
MWE to be run in XeLaTeX:
\documentclass[pagesize=pdftex, 9pt]{scrbook}
\usepackage{graphicx}
\usepackage{wrapfig}
% the ratio of the embedded PDF to the text on the page
\newlength\ratiowidth
\setlength\ratiowidth{.61\textwidth}
\begin{document}
\begin{wrapfigure}{r}{\ratiowidth}
\begin{center}
\includegraphics[width=\ratiowidth, page=1, trim=0 0 0 100, clip=true, draft]{GenesisTest.png}
\end{center}
\end{wrapfigure}
\end{document}
A:
There is currently no clipping support in XeTeX. A usable driver was suggested by Joseph Wright and me a little while ago, but it will take a good while until it is in a stable release. See Clipping support in XeTeX (which I just found after my initial answer; it is actually a general duplicate).
However, you can use my adjustbox package which is inspired by graphicx but allows to apply the modification keys like clipping to arbitrary content. The current version now also has special macros to apply them directly to images. This is particularly useful because then the adjustbox clipping driver for XeTeX is used, which is part of the package.
You can simply load the adjustbox package instead or in addition to graphicx and then replace \includegraphics with \adjincludegraphics (which is short for \adjustbox{<key=value options>}{\includegraphics{<filename>}). There is also a \adjustimage macro which takes the key=value options as a mandatory argument instead of an optional one.
\documentclass[pagesize=pdftex, 9pt]{scrbook}
\usepackage{graphicx}
\usepackage{adjustbox}
\usepackage{wrapfig}
% the ratio of the embedded PDF to the text on the page
\newlength\ratiowidth
\setlength\ratiowidth{.61\textwidth}
\begin{document}
\begin{wrapfigure}{r}{\ratiowidth}
\begin{center}
\adjincludegraphics[width=\ratiowidth, page=1, trim=0 0 0 100, clip=true]{GenesisTest.png}
\end{center}
\end{wrapfigure}
\end{document}
Note that trim (with or without clip) is always applied to the original image, then the result is scaled to the given width! If you want to change the order, i.e. first resize and then clip, you need to use adjustbox's Clip=0 0 0 100 key instead. This one can actually be used multiple times. See the package manual for all details.
About the bounding box and viewport: You don't need to use the bounding box keys any longer if you use EPS, PNG or JPG images. They were only needed in older times when the bounding box couldn't be read from the files directly.
The viewport key is similar to trim but states the lower left and upper right corner relative to the reference point, which is the lower left corner. Instead, trim states the amount taken away from the left, bottom, right and top edges.
I made some diagrams for the new version of my adjustbox bundle (I'm planning to put the trim and clip box macros into a sub-package). See the unfinished trimclip package manual starting from page 7.
| {
"pile_set_name": "StackExchange"
} |
Q:
PowerApps - Unable to set SharePoint Person/People field to Current User
I am unable to set a person field to the current user in a PowerApp. I have followed the tutorial below but the dropdown control is not set to the current user. This approach seems to be the general consensus on how to do this. I must be missing a step.
http://www.codeovereasy.com/2017/07/powerapps-set-sharepoint-person-field-to-current-user/
OnVisible property of my screen:
//Here I am setting the person field record to a variable 'myself' with current user values
Collect(Collection1, {Pressed: Button1.Pressed});
UpdateContext({
myself: {
'@odata.type': "#Microsoft.Azure.Connectors.SharePoint.SPListExpandedUser",
Claims:"i:0#.f|membership|" & Lower(User().Email),
Department:"",
DisplayName:User().FullName,
Email:User().Email,
JobTitle:".",
Picture:"."
},
manager: Office365Users.ManagerV2(Office365Users.MyProfile().Id).mail,
varAlwaysTrueForTest: true})
Default property of person field dropdown control:
//This should show the current user in the dropdown control after the screen becomes visible
If(varAlwaysTrueForTest, myself, Parent.Default)
Update property of person field DataCard:
//DataCardValue6 is my person dropdown control
If(varAlwaysTrueForTest, myself, DataCardValue6.Selected)
Result - should be populated with current user
A:
I discovered the answer in the post below. The root issue is the property that needs to be set on the dropdown control. It should be the DefaultSelectedItems. Also, the record being passed to the control only requires the DisplayName and Claims properties.
https://powerusers.microsoft.com/t5/General-Discussion/How-to-set-a-defaul-value-for-a-person-fields-in-a-SharePoint/m-p/188290#M61575
OnVisible event of screen
UpdateContext({ myself:
{
Claims:"i:0#.f|membership|" & Lower(User().Email),
DisplayName:User().FullName
}
})
DefaultSelectedItems
If(varAlwaysTrueForTest, myself, Parent.Default)
| {
"pile_set_name": "StackExchange"
} |
Q:
How are ranks calculated in tie cases?
I am considering the following question: when there are ties, how are ranks calculated?
In some references, they first rank them without repeating the ranks, and then average the ranks of those ties and assign the average rank to each one in the ties.
I was wondering if that way is unanimously used in statistics for determining the ranks?
Are there other ways to determine the ranks in tie cases? If yes, when to use which way?
Thanks and regards!
A:
R lists 5 ways to calculate ranks. The first ("average") is by far the most commonly used: it has the advantage that the ranks computed this way are scale/permutation invariant
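For concreteness, the "average" method can be sketched in plain Python (the function name and example values here are illustrative; R's rank() and, for example, scipy.stats.rankdata use this scheme by default):

```python
def average_ranks(values):
    """Assign ranks 1..n; tied values each get the average of the ranks they span."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Extend j to cover the whole run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        # Positions i..j would get ranks i+1..j+1; share their average instead.
        shared = (i + 1 + j + 1) / 2
        for k in range(i, j + 1):
            ranks[order[k]] = shared
        i = j + 1
    return ranks

print(average_ranks([10, 20, 20, 30]))  # [1.0, 2.5, 2.5, 4.0]
```

The two tied 20s occupy ranks 2 and 3, so each receives (2 + 3) / 2 = 2.5.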
| {
"pile_set_name": "StackExchange"
} |
Q:
die game where we roll until we get a 5 or a 6
We roll a die until we get a $5$ and a $6$ for the first time, not necessarily consecutively and not necessarily in that order. We need to pay $x$ dollars before each die throw, and once both a $5$ and a $6$ have appeared for the first time, the game stops and we receive $100$ dollars. The problem is to determine $x$ such that this is a fair game.
My first thought has been to determine the expected number of rolls to get a $5$ and a $6$, which turns out to be $9$ (we need $3$ rolls on average to get a $5$ or a $6$ and then an additional $6$ rolls to get both of them for the first time). However, I am having trouble linking this with a kind of recursive formula for the expected value of the game (obviously as a function of $x$), which would help us determine where $x$ should be set to make the game fair. Any ideas would be greatly appreciated.
A:
First you are rolling until you hit a $5$ or $6$, which has probability $2/6$, so the average number of rolls is the inverse $6/2$.
Then you are rolling until you hit the other one, which has probability $1/6$, so the average number of rolls is the inverse $6/1$.
Hence, the total number of rolls on average is $9$. So in order for the game to be fair, you pay $100/9$ dollars per roll, since then on average people will pay $100$ dollars until the game ends, and their winnings will be zeroed out.
For a related problem, check out the coupon collector's problem.
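The expectation is easy to check numerically; here is a quick Monte Carlo sketch (Python, fixed seed, function name illustrative):

```python
import random

def rolls_until_5_and_6(rng):
    """Roll a fair die until both a 5 and a 6 have appeared; return the roll count."""
    seen = set()
    rolls = 0
    while not {5, 6} <= seen:
        rolls += 1
        face = rng.randint(1, 6)
        if face in (5, 6):
            seen.add(face)
    return rolls

rng = random.Random(0)  # fixed seed so the estimate is reproducible
trials = 100_000
avg = sum(rolls_until_5_and_6(rng) for _ in range(trials)) / trials
print(avg)       # hovers around the theoretical 9
print(100 / 9)   # fair price per roll: about 11.11 dollars
```

With 100,000 trials the simulated mean lands within a few hundredths of 9, matching the $6/2 + 6/1$ argument above.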
| {
"pile_set_name": "StackExchange"
} |
Q:
Connection to MongoDb failed
I was trying to connect my mongodb with node server using command prompt.
I started mongodb with mongod --dbpath E:\node start\node\data
Then I installed mongodb dependencies using npm install mongodb
I added some code into my app.js which is described below :
app.js
var mongodb = require('mongodb'); //acquiring mongodb native drivers
var mongoClient = mongodb.MongoClient;
var url = 'mongodb://localhost:7000/myDatabase'; //connection url
mongoClient.connect(url, function(err,db){
if(err){
console.log('Unable to connect to mongodb server. Error :' , err);
}
else{
console.log('Connection established to', url);
db.close();
}
});
when I ran app.js in command prompt, following error occured :
Unable to connect to mongodb server. Error :{[ MongoError : connect ECONNREFUSED] name : 'MongoError' , message: 'connect ECONNREFUSED' }
I cannot understand what the problem is and what should I do next.
A:
MongoDB usually runs on port 27017, but you're trying to connect to port 7000. Try changing your url variable.
var url = 'mongodb://localhost:27017/myDatabase';
| {
"pile_set_name": "StackExchange"
} |
Q:
no applicable method for 'xpathApply' applied to an object of class "NULL"
I have the following code and I don't know why I receive this error:
rm(list=ls())
require("XML")
# <a href="/music/The+Beatles/Sgt.+Pepper%27s+Lonely+Hearts+Club+Band"
beatles = "http://www.last.fm/music/The+Beatles/"
beatles.albums.page = paste(sep="", beatles, "+albums")
lines = readLines(beatles.albums.page)
album.lines = grep(pattern="href.*link-reference", lines, value=TRUE)
album.names = sub(pattern=".*<h3>(.*)</h3>.*", replacement="\\1", x=album.lines)
album.names = gsub(pattern=" ", replacement="+", x=album.names)
album.names = gsub(pattern="'", replacement="%27", x=album.names)
for (album in album.names[1]) {
print(album)
album.link = paste(sep="", beatles, album)
print(album.link)
tables = readHTMLTable(album.link)
}
Any idea?
A:
The line
readHTMLTable(album.link)
is causing the error. Try changing it to
tables = readHTMLTable(album.link, header = FALSE)
But it still gives you the warning:
Warning message:
In readLines(beatles.albums.page) :
incomplete final line found on 'http://www.last.fm/music/The+Beatles/+albums'
Which you can get rid with
readLines(beatles.albums.page, warn = FALSE)
Also note that you're not 'saving' the tables; the variable is overwritten at every loop iteration, but maybe that's what you want.
| {
"pile_set_name": "StackExchange"
} |
Q:
rsync and bash command substitution
Trying to get a script to properly make use of variables. (The below examples are on the command line, but I see the exact same behavior inside a #!/bin/bash script.)
$ FLAGS='--rsh="ssh -2"'
$ rsync $FLAGS foo bar
rsync: -2": unknown option
rsync error: syntax or usage error (code 1) at main.c(1084)
So then I add quotes.
$ rsync "$FLAGS" foo bar
And now it works properly. Okay. Now let's add some more flags to the $FLAGS variable. (I tried it with just "-r" and just "-p", the same thing happens, so I don't think it's related to the particular single-hyphen flags I'm passing.)
$ FLAGS='-rptvlCR --rsh="ssh -2"'
$ rsync $FLAGS foo bar
rsync: -2": unknown option
rsync error: syntax or usage error (code 1) at main.c(1084)
$ rsync "$FLAGS" foo bar
rsync: -rptvlCR --rsh="ssh -2": unknown option
rsync error: syntax or usage error (code 1) at main.c(1084)
$
Notice in the second case it's treating the entire argument as a single option to rsync.
The basic command (typed out by hand without using the $FLAGS variable) works properly.
Any ideas? I've read through all the bash scripting docs I can find, and I can't figure out why rsync is ignoring the double quotes some of the time and treating -2" as a single token.
A:
Throw them into an array:
FLAGS=( '-rptvlCR' '--rsh="ssh -2"' )
rsync "${FLAGS[@]}"
See this Bash FAQ for more detail
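One gotcha worth noting: quotes inside an array element are passed to rsync literally, so the element is usually better written as '--rsh=ssh -2' (the whole thing quoted once, no inner quotes). A quick way to see how the elements expand:

```shell
# Each array element becomes exactly one argument to the command;
# the space inside '--rsh=ssh -2' survives without embedded quotes.
flags=( '-rptvlCR' '--rsh=ssh -2' )
printf '<%s>\n' "${flags[@]}"
# <-rptvlCR>
# <--rsh=ssh -2>
```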
| {
"pile_set_name": "StackExchange"
} |
Q:
How do nebulae beacons in normal sectors work?
It is said that they slow down the rebel fleet, but how does that work exactly?
Is the fleet slowed down when they encounter a nebula beacon? Or is it when the player explore a nebula beacon? Or do you just need to have nebulae beacons in a system to make the fleet slower?
A:
When the player "explores"/visits a nebula beacon, the rebel fleet advance for that turn is cut by half. i.e. if you visited 2 nebula beacons continuously, the rebel fleet would advance as much as it would have if you had visited 1 normal/non-nebula beacon.
In Uncharted Nebula sectors the enemy fleet will be prepared for the nebula. This means that all the nebula beacons in the sector will only slow them down to 3/4 of their original speed. So even though the sector is full of nebula beacons, you would not gain double the amount of turns compared to a normal sector.
| {
"pile_set_name": "StackExchange"
} |
Q:
Binding a ASP.Net Identity object to a Func using ninject
EDIT: I Found a solution but if anyone knows a better way I am all ears.
I am working on a ASP.Net MVC 5 application with some of ASP.Net Identity.
I have the following code (which is ran in Startup.cs on app start):
public class Startup
{
#region Properties/Delegates
public static Func<UserManager<AppUserModel>> UserManagerFactory { get; private set; }
#endregion
public void Configuration(IAppBuilder app)
{
// Enable the application to use a cookie to store information for the signed in user
// and to use a cookie to temporarily store information about a user logging in with a third party login provider
// Configure the sign in cookie
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
LoginPath = new PathString("/auth/login"),
Provider = new CookieAuthenticationProvider
{
// Enables the application to validate the security stamp when the user logs in.
// This is a security feature which is used when you change a password or add an external login to your account.
OnValidateIdentity = SecurityStampValidator.OnValidateIdentity<UserManager<AppUserModel>, AppUserModel>(
validateInterval: TimeSpan.FromMinutes(30),
regenerateIdentity: (manager, user) => user.GenerateUserIdentityAsync(manager))
}
});
app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);
app.UseTwoFactorSignInCookie(DefaultAuthenticationTypes.TwoFactorCookie, TimeSpan.FromMinutes(5));
app.UseTwoFactorRememberBrowserCookie(DefaultAuthenticationTypes.TwoFactorRememberBrowserCookie);
// Configure the user manager
// We use a delegate here so we can access the IBuilder
// Then we bind this delegate to UserManager<AppUserModel> in Ninject
UserManagerFactory = () =>
{
var usermanager = new UserManager<AppUserModel>(
new UserStore<AppUserModel>(new AppDbContext()));
usermanager.PasswordHasher = new SQLPasswordHasher();
// allow alphanumeric characters in username
usermanager.UserValidator = new UserValidator<AppUserModel>(usermanager)
{
AllowOnlyAlphanumericUserNames = false
};
usermanager.PasswordValidator = new PasswordValidator
{
RequiredLength = 6,
RequireNonLetterOrDigit = true,
RequireDigit = false,
RequireLowercase = false,
RequireUppercase = false
};
usermanager.UserLockoutEnabledByDefault = true;
usermanager.DefaultAccountLockoutTimeSpan = TimeSpan.FromMinutes(5);
usermanager.MaxFailedAccessAttemptsBeforeLockout = 5;
// Register two factor authentication providers. This application uses Phone and Emails as a step of receiving a code for verifying the user
// You can write your own provider and plug it in here.
usermanager.RegisterTwoFactorProvider("Phone Code", new PhoneNumberTokenProvider<AppUserModel>
{
MessageFormat = "Your security code is {0}"
});
usermanager.RegisterTwoFactorProvider("Email Code", new EmailTokenProvider<AppUserModel>
{
Subject = "Security Code",
BodyFormat = "Your security code is {0}"
});
usermanager.EmailService = new EmailService();
usermanager.SmsService = new SmsService();
IDataProtectionProvider provider = app.GetDataProtectionProvider();
if (provider != null)
{
IDataProtector dataProtector = provider.Create("ASP.NET Identity");
usermanager.UserTokenProvider = new DataProtectorTokenProvider<AppUserModel>(dataProtector);
}
// use out custom claims provider
//usermanager.ClaimsIdentityFactory = new AppUserClaimsIdentityFactory();
return usermanager;
};
}
I would like to inject the UserManagerFactory above in the place of UserManager.
I can not seem to get the binding to work.
What I have tried:
kernel.Bind<UserManager<AppUserModel>>().To<Startup.UserManagerFactory>();
What actually worked:
kernel.Bind<UserManager<AppUserModel>>().ToMethod(context => Startup.UserManagerFactory());
The UserManager is a object that Microsoft Identity owns.
I want to inject the Delegate into things like this:
private readonly UserManager<AppUserModel> _userManager;
public AuthController(UserManager<AppUserModel> userManager)
{
this._userManager = userManager;
}
This is based off of http://benfoster.io/blog/aspnet-identity-stripped-bare-mvc-part-2
Under Configuring UserManager and Authentication. He invoked it into the constructor I chose ninject instead.
A:
Solved
kernel.Bind<UserManager<AppUserModel>>().ToMethod(context => Startup.UserManagerFactory());
Let me know if there is a better way!
| {
"pile_set_name": "StackExchange"
} |
Q:
Add an existing sql to django project
I have a sql file which I want to add to my Django project; I don't know:
where to put the file
which commands should I run for adding it
many thanks for your help.
A:
Your question isn't quite clear; by "adding" I think you mean connecting your project to a database. If you are using MySQL then you need to install the MySQL client. This link will help you.
A:
You need to set the database details in settings.py. If you have an sql file you need to load it into a database, then update DATABASES in settings.py
Since you're starting from an SQL file, you might want to run the python manage.py inspectdb command, which will return the model definitions for the tables present in the database.
Copy the results into the models.py file in one of your apps and you are good to go.
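For reference, the DATABASES entry in settings.py looks something like this (engine shown for MySQL; the database name, user, and password are placeholders):

```python
# settings.py (fragment) -- all credential values below are placeholders.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydatabase',     # the database your .sql file was loaded into
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}
```

With this in place, python manage.py inspectdb can read the existing tables.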
| {
"pile_set_name": "StackExchange"
} |
Q:
Using Shoulda redirect_to to test a controller's create action
I'm using RSpec + Shoulda to test my RESTful controller in Rails 3. I'm having trouble figuring out how to test the create action's redirect. The standard RESTful controller should redirect to the show action for the new post. For example, if I have a ProjectsController for a Project model, then upon successful create, that action should:
redirect_to project_url(@project)
Shoulda provides a handy redirects_to macro for handling this. Here is what I have tried:
describe ProjectsController, '#create' do
context "Anonymous user" do
before :each do
@attrs = Factory.attributes_for(:project_with_image)
post :create, :project => @attrs
end
it { should assign_to(:project) }
it { should respond_with(:redirect) }
it { should redirect_to(@project) }
end
end
(Yes, I'm using FactoryGirl, but since I'm only using it for attributes in this case, it shouldn't matter. I think.)
How do I specify the last test there? It should redirect_to(...) what? I've tried @project, project_url(@project).. But I can't figure it out.
Looking at the Shoulda matcher code, I noticed that the redirect_to matcher can accept a block. But I'm not sure how to access the newly created @project object in that block...
Any thoughts?
A:
Haven't tried it, but the problem probably is that @project is not available in your spec. How about it {should redirect_to(Project.last) } or it {should redirect_to(assigns(:project)) }?
| {
"pile_set_name": "StackExchange"
} |
Q:
phonegap :: navigator.notification.activityStart() and loadingStart() not working
I try to call loadingStart() and activityStart() on phonegap1.0 (inside onDeviceReady) but it is not working. is there a known reason? should it work well?
thnx!
A:
As mmigdol stated, these have been deprecated in 1.0.0
Look for them here: https://github.com/phonegap/phonegap-plugins
| {
"pile_set_name": "StackExchange"
} |
Q:
jq with curl fails if select filter is used
I'm piping curl output to jq: https://stedolan.github.io/jq/ and everything works great until I try to use a select filter.
This very filter works fine in their online tool: https://jqplay.org/ and in my command line experiments after having downloaded the file.
This issue occurs only when I try to directly pipe the curl output into jq.
This fails:
i71178@SLCITS-L2222:~/next-gen/mongodb$ curl 'http://fhirtest.uhn.ca/baseDstu3/Patient?_format=json&_count=50&_pretty=false&_summary=data' | jq-linux64 --unbuffered -r -c '.link[] | select(.relation == next) | .url' | head -3
jq: error: next/0 is not defined at <top-level>, line 1:
.link[] | select(.relation == next) | .url
jq: 1 compile error
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2421 0 2421 0 0 2413 0 --:--:-- 0:00:01 --:--:-- 2413
curl: (23) Failed writing body (1675 != 2736)
i71178@SLCITS-L2222:~/next-gen/mongodb$
This works fine:
i71178@SLCITS-L2222:~/next-gen/mongodb$ curl 'http://fhirtest.uhn.ca/baseDstu3/Patient?_format=json&_count=50&_pretty=false&_summary=data' | jq-linux64 --unbuffered -r -c '.link[] | .url' | head -3 % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 46801 0 46801 0 0 66256 0 --:--:-- --:--:-- --:--:-- 66290
http://fhirtest.uhn.ca/baseDstu3/Patient?_count=50&_format=json&_pretty=false&_summary=data
http://fhirtest.uhn.ca/baseDstu3?_getpages=e73ba3b4-cc7e-4028-8679-b5da1f9cbdd1&_getpagesoffset=50&_count=50&_format=json&_bundletype=searchset
i71178@SLCITS-L2222:~/next-gen/mongodb$
For context, this is the what is being pipes into the select filter:
i71178@SLCITS-L2222:~/next-gen/mongodb$ curl 'http://fhirtest.uhn.ca/baseDstu3/Patient?_format=json&_count=50&_pretty=false&_summary=data' | jq-linux64 --unbuffered -r -c '.link[]' | head -3
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 46801 0 46801 0 0 64411 0 --:--:-- --:--:-- --:--:-- 64375
{"relation":"self","url":"http://fhirtest.uhn.ca/baseDstu3/Patient?_count=50&_format=json&_pretty=false&_summary=data"}
{"relation":"next","url":"http://fhirtest.uhn.ca/baseDstu3?_getpages=00952912-c9ab-47ca-826c-200bddffe617&_getpagesoffset=50&_count=50&_format=json&_bundletype=searchset"}
i71178@SLCITS-L2222:~/next-gen/mongodb$
I would really appreciate any help here.
Thanks!
A:
The problem is evidently your select filter:
select(.relation == next)
jq parses the bare word next as a call to a zero-argument filter, which is exactly what the error "next/0 is not defined" is complaining about; string literals must be quoted. I think you meant:
select(.relation == "next")
Safer would be:
select(.relation? == "next")
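For what it's worth, the corrected filter's behavior is easy to mirror with Python's json module (the document below is a trimmed, hypothetical version of the .link array from the question):

```python
import json

doc = json.loads("""
{"link": [
  {"relation": "self", "url": "http://example.invalid/self"},
  {"relation": "next", "url": "http://example.invalid/next"}
]}
""")

# Equivalent of: .link[] | select(.relation == "next") | .url
urls = [entry["url"] for entry in doc["link"] if entry.get("relation") == "next"]
print(urls)  # ['http://example.invalid/next']
```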
| {
"pile_set_name": "StackExchange"
} |
Q:
pandas select multiple columns conditionally
Suppose I have a dataframe:
C1 V1 C2 V2 Cond
1 2 3 4 X
5 6 7 8 Y
9 10 11 12 X
The statements should return: if Cond == X, pick C1 and C2, else pick C2 and V2.
The output dataframe is something like:
C V
1 2
7 8
9 10
** EDIT: To add one more requirement: the number of columns can change but follow some naming pattern. In this case select all columns with "1" in it, else with "2". I think the hard-coded solution might not work.
A:
Another option with DataFrame.where():
df[['C1', 'V1']].where(df.Cond == "X", df[['C2', 'V2']].values)
# C1 V1
#0 1 2
#1 7 8
#2 9 10
A:
I try to create a more general solution with filter and numpy.where; for the new column names I use extract:
#if necessary sort columns
df = df.sort_index(axis=1)
#filter df by 1 and 2
df1 = df.filter(like='1')
df2 = df.filter(like='2')
print (df1)
C1 V1
0 1 2
1 5 6
2 9 10
print (df2)
C2 V2
0 3 4
1 7 8
2 11 12
#np.where need same shape of mask as df1 and df2
mask = pd.concat([df.Cond == 'X']*len(df1.columns), axis=1)
print (mask)
Cond Cond
0 True True
1 False False
2 True True
cols = df1.columns.str.extract('([A-Za-z])', expand=False)
print (cols)
Index(['C', 'V'], dtype='object')
print (np.where(mask, df1,df2))
Index(['C', 'V'], dtype='object')
[[ 1 2]
[ 7 8]
[ 9 10]]
print (pd.DataFrame(np.where(mask, df1, df2), index=df.index, columns=cols))
C V
0 1 2
1 7 8
2 9 10
| {
"pile_set_name": "StackExchange"
} |
Q:
creating job with ssis step using tsql
I would like to create sql server job using stored procedure and I can't seem to get it right.
Integration Services Catalogs -> SSISDB -> Cat1 -> Projects -> 999 -> Packages -> 999.dtsx
In the step 1 properties of the script below, on the Package tab, "Server:" and "Package:" are empty. I need to populate these as well as set 32-bit runtime to true.
Below is what I got, thanks in advance
EXECUTE msdb..sp_add_job @job_name = 'Job 1', @owner_login_name = SUSER_NAME(), @job_id = @JobId OUTPUT
EXECUTE msdb..sp_add_jobserver @job_id = @JobId, @server_name = @@SERVERNAME
EXECUTE msdb..sp_add_jobstep @job_id = @JobId, @step_name = 'Step1',@database_name = DB_NAME(), @on_success_action = 3 ,@subsystem = N'ssis'
, @command = N' "\SSISDB\Cat1\999\999.dtsx" @SERVER=N"@ServerName"'
EXECUTE msdb..sp_add_jobstep @job_id = @JobId, @step_name = 'Step2', @command = 'execute msdb..sp_delete_job @job_name="Job 1"'
EXECUTE msdb..sp_start_job @job_id = @JobId
A:
If anyone else comes across a similar situation: the easiest way to figure out how to create a job programmatically is to create it using the UI (SQL Server Agent -> New Job). Create everything you want to see, save it, then right-click the job and choose Script Job As -> CREATE To -> New Query, and SQL Server will export the job as a query so you can see what you need to do.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to scan charcoal and graphite drawings?
I was trying to scan some of my sketches in order to post them on some community sites, but the results are quite bad. An example can be seen here:
Basically, the contrast is terribly off. All soft shading done with a hard pencil is gone, the white is too bright and too "aggressive" and bleeding all over. All the shading had been "flattened down" in the process of scanning.
So, my question is: are there some techniques or procedures I should follow, or some typical scanner settings I should use, in order to get good-quality scans without the brightness/contrast distortion?
A:
(upgraded comment to answer)
Scott, brendan and tim human all provide good advice regarding scanners.
I actually do a lot of work with paintings and drawings, and photographing them is almost always a better option than scanning. I have a low-end professional flatbed scanner on my desk, but I almost never use it anymore because I get better results in less time using a DSLR (even for the rare times I need to scan office documents). Then again, I have a dedicated station, with alignment marks on the floor and table, so the photo equipment is pretty much set up and ready to go.
I use 2 photo lights, polarizing filters to reduce glare, and a Digital SLR using RAW format. I use a low ISO setting to minimize noise, which usually requires longer exposures and therefore a tripod. Additionally, I set the aperture small to increase the depth of field as a way to compensate for the auto-focus and ensure that larger items are in focus across the entire surface (it is not always easy to get an old painting perfectly parallel). Usually this means 15-25 second exposures.
The polarizers are really only needed for items with glass or high reflectance--drawings are probably not going to exhibit glare. They do alter the color and/or saturation and they can mess with auto focus settings in some cameras.
If you only have a consumer pocket camera, consider a tripod at least. The key is decent consistent lighting across the composition, and as "straight-on" a shot as you can get. You want the plane of the camera's CCD to be parallel to the plane of the drawing to avoid having to distort or fix the perspective. Not so bad for a one-off, but if you need to fix 30 at a time, you are better off taking the time working on your set up first.
For consumer cameras, and non-RAW format, if you don't like the results, check to see if there is a "custom white balance" setting in the camera menus: you take a shot of a white piece of paper in the light setting you are using, and then set the custom white balance to that photo. This will help reduce the color cast of any lights you are using (regular lights usually cast yellow-red, fluorescents usually cast blue)
A:
If you look at the levels of your scan, you can see that it is scanned too bright. Ideally the range of color information goes from almost black to almost white.
Try changing the brightness settings on your scanner and avoid "helpers" like sharpening or presets for text scans.
A:
Check your scanner's settings. One of the purposes of scanning is to get text documents in, and for those to look good when scanned some scanners will tend to up the contrast and do some kind of sharpening to help get that sharp black-and-white look.
If you are able to get into the preferences, try to find things like Exposure or Sharpening and just play with the settings.
@Scott's comment covers the rest. Some scanners are low quality, and using a camera might be better (if you have a good camera to work with).
| {
"pile_set_name": "StackExchange"
} |
Q:
Rails query boolean MySQL error
Doing any one of these:
notifications.where("read = 'true'")
notifications.where("read = '1'")
notifications.where("read = true")
notifications.where("read = 1")
results in the error:
Mysql2::Error: You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to
use near 'read = 'true')
However, this works just fine:
notifications.where(:read => true)
Any idea why this could be?
A:
read is a MySQL reserved keyword, so you need to put backticks around the column name:
notifications.where(" `read` = true")
I'm not familiar with Ruby, but you can refer to this answer on enclosing the column in backticks:
Mysql Reserved Words
| {
"pile_set_name": "StackExchange"
} |
Q:
Replacing in data.frames in R
I have 2 dataframes
df1 #This one is in fact the mapping file (300 rows)
Sample_title Sample_geo_accession
EC2003090503AA GSM118720
EC2003090502AA GSM118721
EC2003090504AA GSM118722
df2 #(300 rows)
cmap_name concentration (M) perturbation_scan_id vehicle_scan_id3
metformin 0.00001 EC2003090503AA EC2003090502AA
metformin 0.00001 EC2003090504AA EC2003090502AA
metformin 0.0000001 EC2003090503AA EC2003090502AA
I want to read every line in df2 and replace the perturbation_scan_id and vehicle_scan_id3 by the !Sample_geo_accession in df1.
The final output will be:
df3
cmap_name concentration_M perturbation_scan_id vehicle_scan_id3
metformin 0.00001 GSM118720 GSM118721
metformin 0.00001 GSM118722 GSM118721
metformin 0.0000001 GSM118720 GSM118721
A:
One possible solution (if you have many of these columns that you want to replace) is to first create a row index and then "melt" the data by cmap_name, concentration_M and the new index. Then perform a single merge against the new value column (which will have all the values of these columns in a single long column), and finally "dcast" the data back into a wide format. Here's a possible data.table implementation. Please make sure you have the latest version from CRAN. I also made your column names more "R friendly" so they will be easier to work with.
library(data.table) # V 1.9.6+
temp <- melt(setDT(df2)[, indx := .I], id = c(1:2, 5))[df1, on = c(value = "Sample_title")]
dcast(temp, cmap_name + concentration_M + indx ~ variable, value.var = "Sample_geo_accession")
# cmap_name concentration_M indx perturbation_scan_id vehicle_scan_id3
# 1: metformin 1e-07 3 GSM118720 GSM118721
# 2: metformin 1e-05 1 GSM118720 GSM118721
# 3: metformin 1e-05 2 GSM118722 GSM118721
A similar dplyr/tidyr implementation
library(dplyr)
library(tidyr)
df2 %>%
mutate(indx = row_number()) %>%
gather(variable, value, -c(1:2, 5)) %>%
left_join(., df1, by = c("value" = "Sample_title")) %>%
select(-value) %>%
spread(variable, Sample_geo_accession)
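For comparison, the same lookup-and-replace idea sketched in Python/pandas (the small frames below are reconstructed from the question, so treat them as illustrative):

```python
import pandas as pd

# Mapping file (df1): Sample_title -> Sample_geo_accession
df1 = pd.DataFrame({
    'Sample_title': ['EC2003090503AA', 'EC2003090502AA', 'EC2003090504AA'],
    'Sample_geo_accession': ['GSM118720', 'GSM118721', 'GSM118722']})
mapping = dict(zip(df1.Sample_title, df1.Sample_geo_accession))

df2 = pd.DataFrame({
    'cmap_name': ['metformin'] * 3,
    'perturbation_scan_id': ['EC2003090503AA', 'EC2003090504AA', 'EC2003090503AA'],
    'vehicle_scan_id3': ['EC2003090502AA'] * 3})

# Replace the ids in both columns via the mapping
for col in ['perturbation_scan_id', 'vehicle_scan_id3']:
    df2[col] = df2[col].map(mapping)
```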
| {
"pile_set_name": "StackExchange"
} |
Q:
lifting orthogonal idempotents (induction step)
I'm trying to prove (by induction on $n$) that whenever $I$ is a nilpotent ideal of a ring $R$, and $r_1+I,\ldots,r_n+I$ form a complete set of orthogonal idempotents in the quotient $R/I$, there exists a complete set of orthogonal idempotents $e_1,\ldots, e_n\in R$ such that for each $1\leq i\leq n$, $e_i+I=r_i+I$.
I'm fine with the base case, but am struggling with the inductive step.
This result appears as Corollary 7.5 on page 107 of Etingof's representation theory book, available here: http://www-math.mit.edu/~etingof/replect.pdf. In the (rather terse) proof, they suggest applying the inductive assumption to a particular subring, but I don't understand exactly what they mean. I'm wondering if anyone could explain Etingof's argument, or perhaps suggest a different approach.
Thanks very much.
A:
The picture you should always keep in mind is a vector space $V$, $A$ is the algebra of matrices on $V$ and $\pi_1,...,\pi_n$ are projections onto subspaces $V_1,...,V_n$ with $V=V_1\oplus \cdots \oplus V_n$. Then $\pi_1,...,\pi_n$ are idempotents with $\pi_1+\cdots+\pi_n=1$, $V_i=\pi_iV$, and the algebra of matrices on $V_i$ is isomorphic to $\pi_iA\pi_i$. Note that $\pi_2+\cdots+\pi_n=1-\pi_1$ is an idempotent too - it's the projection onto $V_2\oplus \cdots \oplus V_n$ - and the algebra of matrices on $V_2\oplus \cdots \ \oplus V_n$ is thus $(1-\pi_1)A(1-\pi_1)$.
So now you have orthogonal idempotents $\overline{e}_1,...,\overline{e}_n$ in $A/I$ with $\sum \overline{e}_i=1$, and a lift $e_1$ of $\overline{e}_1$. Note that $\overline{e}_2,...,\overline{e}_n$ are orthogonal idempotents in
$$(1-\overline{e}_1)A/I(1-\overline{e}_1) \ = \ (1-e_1)A(1-e_1)/I$$ with $\sum_{i\geq 2} \overline{e}_i \ = \ 1-\overline{e}_1$ the identity of this algebra. Thus by induction there are idempotent lifts $e_2,...,e_n$ to $(1-e_1)A(1-e_1)$, which are orthogonal, and $e_2+\cdots+e_n=1-e_1$ is the identity in $(1-e_1)A(1-e_1)\subseteq A$. Note
$$A\ = \ e_1Ae_1 \ \oplus \ e_1A(1-e_1) \ \oplus \ (1-e_1)Ae_1 \ \oplus \ (1-e_1)A(1-e_1)$$
(the Peirce decomposition), and the identities in $A,e_1Ae_1,(1-e_1)A(1-e_1)$ are $1,e_1,(1-e_1)$ respectively. Thus
$$e_1\ + \ e_2 \ + \ \cdots \ + \ e_n\ = \ 1.$$
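To spell out the orthogonality between $e_1$ and the later lifts (a check the terse proof leaves implicit): each $e_j$ with $j\geq 2$ lies in $(1-e_1)A(1-e_1)$, so writing $e_j=(1-e_1)x(1-e_1)$ we get
$$e_1e_j \ = \ e_1(1-e_1)x(1-e_1) \ = \ (e_1-e_1^2)x(1-e_1) \ = \ 0,$$
and similarly $e_je_1=0$. Orthogonality among $e_2,\ldots,e_n$ themselves holds by the inductive hypothesis inside the corner algebra.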
| {
"pile_set_name": "StackExchange"
} |
Q:
Getting newly inserted contacts in Android
In my application, I'm extracting all the contacts and uploading them to the server. But there's a problem. I know how to extract all the contacts, but is there any way to get only the newest contacts?
For example: The first time I open the app, I upload all the contacts but the other times I don't want to upload all of them, just the newest.
Do you know any way to do it?
Thanks!
A:
If your contacts have some order (by ID, by Name), you can save the identifier of the last contact you saved and the next time just check to see if there is a contact with identifier bigger than the last one you saved.
For example:
You save a list of contacts, and the last saved contact ID was 100 (assuming they are ordered; if not, save the max).
Next run, you check to see if you have any contact with ID > 100 and you save these, and again you save the last inserted ID.
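A minimal sketch of that high-water-mark idea (the contact list and the upload step are placeholders, not a real Android API):

```python
# Sketch of the high-water-mark idea; fetching/uploading are placeholders,
# not a real Android API.

def sync_new_contacts(contacts, last_synced_id):
    """Upload contacts with an id greater than the last synced one
    and return the new high-water mark to persist for the next run."""
    new = [c for c in contacts if c['id'] > last_synced_id]
    for c in new:
        pass  # upload(c) would go here
    return max((c['id'] for c in new), default=last_synced_id)

contacts = [{'id': 98, 'name': 'Ann'}, {'id': 100, 'name': 'Bob'},
            {'id': 101, 'name': 'Cid'}, {'id': 103, 'name': 'Dee'}]

last_id = sync_new_contacts(contacts, 100)  # only ids 101 and 103 are new
```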
| {
"pile_set_name": "StackExchange"
} |
Q:
Calling up html text box using javascript and converting it to an integer
So I just want to understand how I could do this, very simple example:
My html is
<div id = "pie"> eee </div>
<input type="text" name="item1" size="5">
and my javascript is
var q = parseFloat("10");
document.getElementById("pie").innerHTML = q ;
and basically what I want to do is take whatever text is in the text box (named item1), convert it to a number, then display that number. I'm sure there's an easy way to do it but I can't figure it out.
A:
Try this:
+(document.getElementsByName('item1')[0].value);
That is the shortest way to do that now to display it, for an example in the element #pie
document.getElementById('pie').innerHTML = +(document.getElementsByName('item1')[0].value)
Putting + before something will convert it to a number. If the text box's value isn't a number, you can detect that with:
var value = document.getElementsByName('item1')[0].value;
document.getElementById('pie').innerHTML = !isNaN(value)?+value:"Not a number";
Demo
| {
"pile_set_name": "StackExchange"
} |
Q:
I only want to see items related to my favorite tags in the review queues
I am now able to see review queues, but it's showing me triage and first posts for tags that I have no expertise or knowledge in.
Is there a way to filter the review queue to only those tags that are my favourite tags? i.e. I only want to see the review queue for items related to the one language that I use.
I get that the idea is not to review the domain-knowledge but rather to review the quality of the questions, but unless it's related to my favorite tags I'm not going to have the motivation to review them!
This is related to Show review queue counts according to current filters of the user, but it was more concerned with Close Vote counts. Do the same arguments apply here?
Edit: I can only see this when I go to the "review" area - I don't see the filter option
A:
You can apply filters within each review queue by clicking the "not so obvious" link at the top. Just access any of the queues and you should see the filter option shown below:
| {
"pile_set_name": "StackExchange"
} |
Q:
Advantage of using "IF Amplifier" over traditional OP amp?
I am trying to receive a 1 MHz ultrasonic signal with a transducer.
So, I will need to design some kind of filter that passes only the 1 MHz signal and filters out other noise.
Searching online for op-amp configurations, I found something called an "IF amplifier".
Below is the link to datasheet.
[MC1350] - IF amplifier(Datasheet)
What is the advantage of using such a specific IC over a generic op amp with nice parameters (high input impedance and GBP, low offset voltage and output impedance)? In terms of noise performance, signal attenuation, jitter, etc.
A:
If one is implementing a notch filter or a narrow band-pass filter, then for noise performance it is better to use it in differential mode with a dual power supply.
If the filtered output is not driving any load, we don't need high current-sourcing capability.
Maybe something like the LT1028 would suffice. (With its low THD and noise levels, it may be useful to the end application. http://www.linear.com/product/LT1007)
| {
"pile_set_name": "StackExchange"
} |
Q:
Password protect files on IIS without using NTFS file permission
We host online reports that a client will log into and view. The reports are ASP pages pulling the necessary numbers from a SQL Server database. The client's access details are managed by a table in the SQL Server too.
In the past we've had one or more PDF or other files that the client might also want to access from the online report. These were simply uploaded into a subfolder within the ASP files folder and linked to from the relevant ASP page.
If I wanted to protect those files to make sure that the user attempting to access them has logged in - is there a way to do this? I'm thinking that the file has to be stored in the database. How do others manage this?
A:
Putting the PDFs in the db is unnecessary. Instead, store the PDFs in a location outside of the web folder, well away from public access. Create records in a DB for the locations of each PDF with an id number. Assuming you already have an authentication system for users in place, create another table which links the userid to the recordid of the pdf they have access to. From there it's a simple matter of creating a page which checks credentials against this access table, opens the file location provided by the db, and Response.BinaryWrite()s its contents. You can find several examples of doing this scattered about the web. However, ASP classic has this annoying habit of storing the entire file in RAM while transferring it, which EATS up resources like you wouldn't believe. I'd recommend using an ASP.Net script if at all possible. The code is much easier too:
Server.TransferFile()
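The gatekeeper pattern described above, sketched in Python for brevity (the table contents and the PDF_DIR path are made up; the real page would be classic ASP or ASP.NET):

```python
# Gatekeeper sketch; table contents and PDF_DIR are hypothetical.

PDF_DIR = '/srv/private/pdfs'          # outside the web root
PDF_TABLE = {1: 'report_q1.pdf', 2: 'report_q2.pdf'}
ACCESS_TABLE = {('alice', 1), ('alice', 2), ('bob', 2)}

def resolve_pdf(user, pdf_id):
    """Return the on-disk path if this user may fetch this pdf, else None."""
    if (user, pdf_id) not in ACCESS_TABLE:
        return None                    # unauthorized: no path is exposed
    return PDF_DIR + '/' + PDF_TABLE[pdf_id]

# The handler then opens the resolved path and streams its bytes
# (Response.BinaryWrite in classic ASP).
```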
| {
"pile_set_name": "StackExchange"
} |
Q:
Do I have to widgetize my pages?
I have three pages. On the front page I want to show some custom text about page1, a link to page1, custom text about page2, a link to page2, custom text about page3, and a link to page3.
And then I want another page that includes the texts from all three pages in one page template.
Is the correct way then to make widgets of these three pages?
A:
Use page templates and template parts. On your Front Page, you include the Texts as Template Parts:
Front Page
+---------------+---------------+---------------+
| Page A | Page B | Page C |
+---------------+---------------+---------------+
| Link Page 1 | Link Page 2 | Link Page 3 |
| Text 1 (TP1*) | Text 2 (TP2*) | Text 3 (TP3*) |
+---------------+---------------+---------------+
Then you reuse them in your other (page) template:
Other Page
+-----------------+-----------------+-----------------+
| Content |
+-----------------+-----------------+-----------------+
| Template Part 1 | Template Part 2 | Template Part 3 |
+-----------------+-----------------+-----------------+
That's maybe easier than creating widget areas and fetching static text from the DB.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to run petsc in MPI mode correct?
I'm using PETSc as a solver for my project. However, in parallel mode the solver creates many more processes than I expect.
The code uses Python and petsc4py. The machine has 4 cores.
(a). If I run it directly, PETSc uses only 1 process to assemble the matrix, and creates 4 processes to solve the equations,
(b). If I use the command 'mpirun -n 4', PETSc uses 4 processes to assemble the matrix, but creates 16 processes to solve the equations,
I have checked my own Python code; the main component associated with matrix creation is as follows:
m = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
m.setSizes(((None, n_vnode[0]*3), (None, n_fnode[0]*3)))
m.setType('dense')
m.setFromOptions()
m.setUp()
m_start, m_end = m.getOwnershipRange()
for i0 in range(m_start, m_end):
    delta_xi = fnodes - vnodes[i0//3]
    temp1 = delta_xi ** 2
    delta_2 = np.square(delta)              # delta_2 = e^2
    delta_r2 = temp1.sum(axis=1) + delta_2  # delta_r2 = r^2+e^2
    delta_r3 = delta_r2 * np.sqrt(delta_r2) # delta_r3 = (r^2+e^2)^1.5
    temp2 = (delta_r2 + delta_2) / delta_r3 # temp2 = (r^2+2*e^2)/(r^2+e^2)^1.5
    if i0 % 3 == 0:    # x axis
        m[i0, 0::3] = ( temp2 + np.square(delta_xi[:, 0]) / delta_r3 ) / (8 * np.pi)  # Mxx
        m[i0, 1::3] = delta_xi[:, 0] * delta_xi[:, 1] / delta_r3 / (8 * np.pi)        # Mxy
        m[i0, 2::3] = delta_xi[:, 0] * delta_xi[:, 2] / delta_r3 / (8 * np.pi)        # Mxz
    elif i0 % 3 == 1:  # y axis
        m[i0, 0::3] = delta_xi[:, 0] * delta_xi[:, 1] / delta_r3 / (8 * np.pi)        # Mxy
        m[i0, 1::3] = ( temp2 + np.square(delta_xi[:, 1]) / delta_r3 ) / (8 * np.pi)  # Myy
        m[i0, 2::3] = delta_xi[:, 1] * delta_xi[:, 2] / delta_r3 / (8 * np.pi)        # Myz
    else:              # z axis
        m[i0, 0::3] = delta_xi[:, 0] * delta_xi[:, 2] / delta_r3 / (8 * np.pi)        # Mxz
        m[i0, 1::3] = delta_xi[:, 1] * delta_xi[:, 2] / delta_r3 / (8 * np.pi)        # Myz
        m[i0, 2::3] = ( temp2 + np.square(delta_xi[:, 2]) / delta_r3 ) / (8 * np.pi)  # Mzz
m.assemble()
The main component associated with the PETSc solver is as follows:
ksp = PETSc.KSP()
ksp.create(comm=PETSc.COMM_WORLD)
ksp.setType(solve_method)
ksp.getPC().setType(precondition_method)
ksp.setOperators(self._M_petsc)
ksp.setFromOptions()
ksp.solve(velocity_petsc, force_petsc)
Could anyone give me some suggestions? Thanks.
A:
Set the environment variable OMP_NUM_THREADS=1. The extra "processes" are almost certainly OpenMP threads spawned by the threaded BLAS/LAPACK that numpy and PETSc link against: on a 4-core machine each of the 4 MPI ranks starts its own team of 4 threads, which shows up as 16 workers.
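For instance, from Python you could pin the thread counts before the heavy imports (a sketch; MKL_NUM_THREADS only matters if your numpy links against MKL):

```python
# Pin the math libraries to one thread per MPI rank. This must run before
# numpy/petsc4py (and the BLAS they link) are first imported.
import os
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['MKL_NUM_THREADS'] = '1'   # only relevant if numpy links MKL

# ...then import and run as usual:
# import numpy as np
# from petsc4py import PETSc
```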
| {
"pile_set_name": "StackExchange"
} |
Q:
How is Mercury tidally locked if the ratio is not 1:1?
The ratio for a planet to be tidally locked has to be 1:1, but the ratio for Mercury is 3:2. How is Mercury tidally locked if the ratio is not 1:1?
A:
The simple answer to your question is that Mercury is not tidally locked. You may have seen old books (before 1965) that said it was tidally locked, because it was once assumed to be so. Alternatively, as zephyr said, your source may have been referring to the 3:2 resonance, but that is also not really the same thing.
A:
It's not tidally locked the way the Moon is, because it is in a 3:2 resonance with the Sun: it rotates three times for every two orbits it makes. It isn't considered a tidal lock because that usually means a 1:1 resonance. I think you were referring to Wikipedia, where it said Mercury was in a tidal lock with the Sun. A 3:2 resonance would not be considered regular tidal locking, but elliptical tidal locking, meaning a stable resonance that is not 1:1. So Mercury wouldn't be the best example of a tidal lock, but it would be a good example of elliptical tidal locking.
| {
"pile_set_name": "StackExchange"
} |
Q:
CKEditor with Bootstrap Skin precompile issue in Production for Rails 4
I am trying to use the bootstrap skin for ckeditor in rails 4. All works fine in development but I get a precompile error in production.
I added ckeditor gem (https://github.com/galetahub/ckeditor). Then downloaded the bootstrap skin from (https://github.com/Kunstmaan/BootstrapCK4-Skin/tree/master/skins/bootstrapck)
I added the skin content in vendor/assests/javascript/ckeditor/skins
and added ckeditor to precompile array. Rails.application.config.assets.precompile += %w( ckeditor/* )
In precompiling assets I get Sass::SyntaxError: Undefined variable error.
There are some scss files in the skin folder like vendor/assets/javascripts/ckeditor/skins/bootstrapck/scss/components/_colorpanel.scss
that have variables; for example, the above-mentioned file uses color variables like $gray (from Bootstrap, I suppose).
I think the SCSS files in the vendor folder can't access the variables and other SCSS from app/assets during the precompile task.
Any idea how I can fix this? Or is this actually the correct way of adding the extra skin in the vendor folder?
rails 4.1.4
sass-rails 4.0.4
ckeditor 4.1.1
appreciate any help.
A:
You can add
@import "../config/config";
at the top of most partials in
vendor/assets/javascripts/ckeditor/skins/bootstrapck/scss/components/
and
@import "colors";
at the top of
vendor/assets/javascripts/ckeditor/skins/bootstrapck/scss/config/_defaults.scss
to make it work.
It's not very DRY, but at least the skin assets will precompile.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to remove .jsp extension and replace with forward Slash in htaccess?
I want to remove the .jsp extension from my URL and replace it with a forward slash eg. example.com/xyz.jsp to example.com/xyz/ and example.com/xyz.jsp?ab=12 to example.com/xyz/?ab=12.
I used:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^([^/]+)/$ $1.php
RewriteRule ^([^/]+)/([^/]+)/$ /$1/$2.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]{1,5}|/)$
RewriteRule (.*)$ /$1/ [R=301,L]
And
RewriteEngine On
RewriteBase /
# external redirect from /example.html to /example
RewriteCond %{THE_REQUEST} ^[A-Z]{3,}\s/+([^.]+)\.html [NC]
RewriteRule ^ /%1/ [R=301,L]
# internal forward from /example/ to //example.html
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-l
RewriteCond %{DOCUMENT_ROOT}/$1.html -f
RewriteRule ^(.+?)/?$ /$1.html [L]
In my .htaccess, but neither is working correctly.
A:
You will need to ensure that MultiViews is disabled before this will work correctly, as this will tend to conflict with your mod_rewrite directives. Add this in your .htaccess file:
Options +FollowSymLinks -MultiViews
(FollowSymLinks needs to be enabled for mod_rewrite to work, so just to be sure.)
Then, something like what you already have looks reasonable:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{DOCUMENT_ROOT}/$1.jsp -f
RewriteRule (.+)/$ $1.jsp [L]
I've made the trailing slash mandatory on the URL (otherwise you potentially have two URLs accessing the same content - duplicate content).
UPDATE: To redirect any requests to the .jsp URL to the canonical URL (ie. without the extension and with a trailing slash) then something like the following (similar to what you had in your question) would need to go before the directives above:
RewriteCond %{THE_REQUEST} \.jsp\s
RewriteRule (.+)\.jsp$ /$1/ [R=301,L]
This is only strictly necessary if the .jsp URLs had been indexed or externally linked to. If this is a new site then this step is optional.
It is more efficient to match what you can with the RewriteRule pattern (ie. (.+)\.jsp$), rather than have a catch-all regex here. The THE_REQUEST condition ensures that this only applies to initial requests and not rewritten requests - thus preventing a redirect loop.
So, in summary:
# Disable MultiViews
Options +FollowSymLinks -MultiViews
RewriteEngine On
# Remove file extension from URLs (external redirect)
RewriteCond %{THE_REQUEST} \.jsp\s
RewriteRule (.+)\.jsp$ /$1/ [R=301,L]
# Internally rewrite extensionless URLs back to ".jsp"
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{DOCUMENT_ROOT}/$1.jsp -f
RewriteRule (.+)/$ $1.jsp [L]
DEBUGGING: To help with debugging the above, add the following directive below the RewriteEngine On directive and check the environment variables (MOD_REWRITE_THE_REQUEST and MOD_REWRITE_URL_PATH) in your server-side code:
RewriteCond %{THE_REQUEST} (.*)
RewriteRule (.*) - [E=MOD_REWRITE_THE_REQUEST:%1,E=MOD_REWRITE_URL_PATH:$1]
What do these environment variables contain when you access a .jsp URL?
| {
"pile_set_name": "StackExchange"
} |
Q:
What are "view frustum", "focal length" and near plane distance?
I've been reading through "Mathematics for 3D Game Programming and Computer Graphics" lately. Specifically, in the definition of the view frustum and how the APIs generally process it, the author introduces a length e, which he classifies as the focal length. And it makes absolutely no sense to me: isn't the whole notion of focal length incompatible with the (infinitesimally small) pinhole camera concept at the heart of 3D APIs such as Direct3D and OpenGL? I thought all I needed was the aspect ratio, the distance to the near plane, and the FoV, which can be established from the distance to the near plane.
And then he simply drops it and speaks of a near plane distance n, which I thought was the actual value of e. What gives? Isn't the distance to the near plane the same as the "focal length", since we're talking about a pinhole camera? How do they differ?
A:
The focal length controls the field of view. That is, they're mathematical transforms of each other: changing one changes the other. The longer the focal length, the smaller your field of view.
So your FOV parameter is merely a re-stated form of the focal length.
The "common projection matrix" doesn't really use a focal length. The actual focal length is always the same. These matrices implement FOV as a scale applied to the X and Y camera-space coordinates. This is just to make the math easier; being able to keep the eye point at a fixed location makes the math simpler. But in terms of the results, either produces the same results.
Isn't the distance from the near plane the same as the "focal length" since we're talking about a pinhole camera?
No. The near plane is irrelevant; the near/far planes exist solely for near/far culling of rasterized triangles. They're important mathematically for computing the depth value (which ties into depth buffers), but they have no effect on the visuals of the scene (outside of depth buffer artifacts from having a close near plane).
| {
"pile_set_name": "StackExchange"
} |
Q:
Call touchesEnded after touchesMoved and lifted finger
I am trying to play different animations and sounds depending on which button the user presses. The buttons are shaped variously and I need to play a sound when the user is holding the button down and stop them while he lifts up his finger. I thought it would be easy just doing with touchesBegan and touchesMoved.
However, if the user moves his finger while touching the button (even a 1 pixel movement), then there is touchesMoved method called. So, I tried some options and I am able to stop the sound once the finger moves (by calling touchesEnded by myself), however it is not the perfect solution, because the user moves the finger even without him noticing (like 1 pixel or so) and then it is very hard to play the sound continuously while he is touching the button.
So I thought I could create two Integers, which to one I will set value to in touchesBegan, then in touchesMoved setting the another and lastly comparing them, checking if the move is in the same view (button) - if it is not then it calls the touchesEnded. However it has one problem, and that is if the user holds his finger on the button, then moves it (still on the same button) and then he lifts up, the touchesEnded is not called, because he started and moved in the same view.
What should I do to call the touchesEnded method after user lifts up his finger after moving it?
Here is my code (ignore those alpha settings, playing sounds etc.):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
if ([touch view] == leftArmBtn) {
leftArmBtn.alpha = 0;
leftLegBtn.alpha = 0;
mainImageView.image = [UIImage imageNamed:@"leftArmPush.jpg"];
[[SoundManagerOAL sharedSoundManagerOAL] playSoundWithKey:@"LEFTARM"];
touchParent = 1;
} else if ([touch view] == mouthBtn) {
mouthBtn.alpha = 0;
mainImageView.image = [UIImage imageNamed:@"mouthPush.jpg"];
[[SoundManagerOAL sharedSoundManagerOAL] playSoundWithKey:@"MOUTH"];
touchParent = 2;
} else if ([touch view] == rightArmBtn) {
rightArmBtn.alpha = 0;
righLegBtn.alpha = 0;
mainImageView.image = [UIImage imageNamed:@"rightArmPush.jpg"];
[[SoundManagerOAL sharedSoundManagerOAL] playSoundWithKey:@"RIGHTARM"];
touchParent = 3;
} else if ([touch view] == leftLegBtn) {
leftLegBtn.alpha = 0;
mainImageView.image = [UIImage imageNamed:@"leftLegPush.jpg"];
[[SoundManagerOAL sharedSoundManagerOAL] playSoundWithKey:@"LEFTLEG"];
touchParent = 4;
} else if ([touch view] == righLegBtn) {
righLegBtn.alpha = 0;
mainImageView.image = [UIImage imageNamed:@"rightLegPush.jpg"];
[[SoundManagerOAL sharedSoundManagerOAL] playSoundWithKey:@"RIGHTLEG"];
touchParent = 5;
} else if ([touch view] == vakBtn) {
vakBtn.alpha = 0;
mainImageView.image = [UIImage imageNamed:@"vakPush.jpg"];
[[SoundManagerOAL sharedSoundManagerOAL] playSoundWithKey:@"VAK"];
touchParent = 6;
} else {
touchParent = 0;
}
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
if ([touch view] == leftArmBtn) {
leftLegBtn.alpha = 1;
leftArmBtn.alpha = 1;
mainImageView.image = defaultImage;
[[SoundManagerOAL sharedSoundManagerOAL] stopSoundWithKey:@"LEFTARM"];
} else if ([touch view] == mouthBtn) {
mouthBtn.alpha = 1;
mainImageView.image = defaultImage;
} else if ([touch view] == rightArmBtn) {
rightArmBtn.alpha = 1;
righLegBtn.alpha = 1;
mainImageView.image = defaultImage;
[[SoundManagerOAL sharedSoundManagerOAL] stopSoundWithKey:@"RIGHTARM"];
} else if ([touch view] == leftLegBtn) {
leftLegBtn.alpha = 1;
mainImageView.image = defaultImage;
[[SoundManagerOAL sharedSoundManagerOAL] stopSoundWithKey:@"LEFTLEG"];
} else if ([touch view] == righLegBtn) {
righLegBtn.alpha = 1;
mainImageView.image = defaultImage;
[[SoundManagerOAL sharedSoundManagerOAL] stopSoundWithKey:@"RIGHTLEG"];
} else if ([touch view] == vakBtn) {
vakBtn.alpha = 1;
mainImageView.image = defaultImage;
} else {
}
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
if ([touch view] == leftArmBtn) {
movingTouch = 1;
} else if ([touch view] == mouthBtn) {
movingTouch = 2;
} else if ([touch view] == rightArmBtn) {
movingTouch = 3;
} else if ([touch view] == leftLegBtn) {
movingTouch = 4;
} else if ([touch view] == righLegBtn) {
movingTouch = 5;
} else if ([touch view] == vakBtn) {
movingTouch = 6;
} else {
movingTouch = 10;
}
if (touchParent != movingTouch) {
[self touchesEnded:touches withEvent:event];
}
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
[self touchesEnded:touches withEvent:event];
}
A:
OK, so I found out that adding touchesBegan, touchesMoved and touchesEnded to a UIButton subclass doesn't work as expected, so I have switched to adding targets for each of the UIControl actions.
I used the code from here How do you get the touchesBegan coordinates when a UIButton is triggered?
Used it for each button and modified it a little - like this:
UIButton *aButton = [UIButton ......
[aButton addTarget:self action:@selector(aButtonTouch) forControlEvents:UIControlEventTouchDown];
[aButton addTarget:self action:@selector(aButtonTouchEnd) forControlEvents:UIControlEventTouchUpInside | UIControlEventTouchUpOutside];
and then
-(void)aButtonTouch{
//do something
}
-(void)aButtonTouchEnd {
//do something
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Consolidating Windows and Linux servers
I'm looking forward to getting your thoughts on consolidating/virtualizing 3 Windows 2008 Servers and 2 Linux Debian Servers into 1 (powerful) machine.
What is the most cost-effective virtualization software available to accomplish this? VMware looks awfully expensive!
A:
VMWare ESXi is free and is a true hypervisor in that it doesn't require a host OS to run on; it's also the most mature and arguably the most widely-supported VM platform out there.
However, KVM is a big up and comer; Xen is also popular, but since KVM is built into the Linux kernel (as opposed to a separate micro-kernel for Xen), it's quickly catching up with IBM using it for their cloud initiative and all the major distros of Linux now supporting it.
If you're comfortable with Windows, there's also Hyper-V; several flavors are inexpensive or free depending on your current licensing (Enterprise gets you 4 guest VMs using the same "parent" license).
| {
"pile_set_name": "StackExchange"
} |
Q:
Create Google Marketplace App only to grant API access to service_account (GSuite)
I have an node.js application with a working server2server GMail API communication via an service_account.
Everything works fine.
To be able to communicate with a users account, the G Suite Admin has to grant API Acess to the Client ID of my service_account manually.
As described here:
Impersonating list of users with Google Service Account
with a Marketplace App it would be possible to grant access only to specific organizational units (OUs), and it would be nicer to use (enabling a marketplace app is more user friendly than configuring API access for the Client ID and scope manually as described above).
Now my question: Is it possible to provide a Marketplace App only for the purpose of granting API access to my application automatically? Will it get through the review when it has no other purpose? Any other hints on this?
A:
Yes, it is a working approach to create a marketplace application so that the API access is granted automatically when the G Suite admin installs this application.
The only restriction is that your actual application has to support Google SSO to make it through the review process. So the user must be able to log into your web application by clicking on the icon in his G Suite account. If the user has no account in your web app, an account has to be created automatically (a trial account is sufficient).
| {
"pile_set_name": "StackExchange"
} |
Q:
Can Hela be killed?
Throughout the whole film Thor: Ragnarok, Hela was shown to be remarkably powerful. Decimating Asgard's army, Loki, and Thor, she has proven herself to be one of the best villains in the MCU. With that being said, there were several occasions on which she should have been seriously injured, but to no avail. She was run through by one of the Einherjar, run through by Valkyrie, and also hit by a very powerful lightning bolt from Thor. All of these seemed to have no effect on her.
So this leaves the question to be asked. Can Hela be killed?
A:
One very important plot point in Thor:Ragnarok is that Hela draws her power from Asgard itself. Odin's life-force kept her imprisoned temporarily, but he never actually killed her. Once Odin died, she was released, and became progressively more powerful the longer she stayed on Asgard.
As you could see from the first scene where Loki and Thor meet her, her costume appeared torn, but once she arrived in Asgard, the tears sealed up and she presumably healed from any wound she had received. So the main thing about her that made her so difficult to kill was that the longer she stayed on Asgard, the more powerful she became, so when Thor hit her with 'the biggest lightning bolt in the history of lightning', she was relatively unscathed.
However, that was why Thor and Loki realised they had to free Surtur to destroy Asgard in the end, and also why Hela said "No!" when she saw Surtur emerge. She then battled him for a bit before he completely destroyed Asgard. It stands to reason that once her source of power was destroyed, Hela would lose much of her power.
Of course, it would still be difficult to kill her. My guess is that with her main source of power gone, she would only be comparable to Loki, who emerged generally ok after getting beat up by the Hulk. But there's no reason that she can't be killed. Especially since Asgard is now gone.
In fact, I wouldn't be surprised if she is dead. Blown up along with Asgard.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to change a docker image
I pulled an image I found on Docker Hub, made some changes, and it is working locally.
But now I need to push it to Docker Hub. I used the following guide https://docs.docker.com/docker-cloud/builds/push-images/ and I was able to push the image to my repository.
The problem is that the changes I made are not in the image I just pushed. I think the reason is that I didn't build (after making the changes) from the Dockerfile. But given that the image I modified is not mine, I don't know where I can find (if I can?) the Dockerfile, build the image and then push it.
Thank you
A:
You have to commit the changes that you made to the Docker image using the docker commit command. Otherwise you will lose them once the container in which you made the changes is destroyed.
Also you can push the new image to Docker hub and later on pull the new image and find your changes in place.
Command: docker commit CONTAINER_ID IMAGE_NAME
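For example, the full round trip might look like this (the container ID and repository name below are placeholders for your own values):

```shell
# Save the container's current state as a new image tag
docker commit 3f2a1b9c mydockerhubuser/myimage:modified

# Push the new tag to Docker Hub
docker login
docker push mydockerhubuser/myimage:modified
```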
| {
"pile_set_name": "StackExchange"
} |
Q:
How to configure ssh tunneling with Putty between two RHEL servers?
I'm trying to allow ssh access from a remote RHEL server to a local RHEL box via a Win/Putty gateway. Basically, I'd like to be able to do 'ssh localhost -p 512' on the remote server so that it would connect to the RHEL server in the local network. The local network is behind a firewall, so I can connect from my Win PC to the remote server with Putty/ssh but not vice versa.
LclSrv----WinXP/Putty-----||-----RmtSrv
So, I've added the following tunneling settings to the current RmtSrv session in Putty (actually I use Kitty but doesn't matter):
R512 LclSrv:22
I expect that this would create a listener on the remote server on port 512 that forwards connections to LclSrv port 22 on the local network.
After pressing the start button, Putty opens a regular ssh terminal session successfully but nothing happens (the options show active port forwarding). I've checked with netstat -l that port 512 is not listening on RmtSrv. ssh on this port returns 'connection refused'. What am I doing wrong? Maybe there is something in sshd_config that needs to be changed in order to allow the tunneling? Could it be user privileges on RmtSrv that prevent me from creating tunnels? I have sudo btw.
Cheers, Vlad.
A:
Local port forwarding scenario (rmtsrv has access to WinXP):
What you want to do in ssh terms is forward a local port to another machine and allow other hosts (rmtsrv) to connect to it.
So you set up local WinXP:512 to forward to lclsrv:22.
So in Putty's Tunnel settings be sure to check Local ports accept connections from other hosts and add source port 512 with the destination lclsrv:22 to the forwarded ports.
Edit to accommodate comment:
Remote port forwarding scenario (WinXP has access to both srvs):
The configuration you suggest should work.
r512 LclSrv:22
Is correct.
I'd guess the issue is with the sshd security settings on rmtsrv. Check if this is enabled:
AllowTcpForwarding yes
If you want to enable access to the forwarded port for others on rmtsrvs network:
GatewayPorts yes
The config usually resides in /etc/ssh/sshd_config.
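For reference, the same remote forwarding that the PuTTY entry R512 LclSrv:22 sets up can be requested from a command-line OpenSSH client (host and user names here just follow the diagram above):

```shell
# Listen on RmtSrv:512 and forward incoming connections back
# through this machine to LclSrv:22
ssh -R 512:LclSrv:22 user@RmtSrv
```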
| {
"pile_set_name": "StackExchange"
} |
Q:
Interoperability problems when using JavaFx combobox within SWT Dialog
JavaFx is supposed to be easily integrated in an SWT application (see here: http://docs.oracle.com/javafx/2/swt_interoperability/jfxpub-swt_interoperability.htm) and both toolkits use the same threading model.
However, things get strange when I open a dialog containing an FXCanvas which contains a JavaFX ComboBox. If I open the combo box popup menu and then close the dialog, the popup menu stays open. If I now move the mouse onto the popup, a null pointer exception is thrown within JavaFX. When doing this within a larger application, all JavaFX GUIs remain broken until the application is restarted.
Any ways to work around this?
Example code below: Close the dialog with 'Ok' or the window close button. Exit the application with 'Cancel'
package test;
import javafx.embed.swt.FXCanvas;
import javafx.geometry.Insets;
import javafx.scene.Parent;
import javafx.scene.Scene;
import javafx.scene.control.ComboBox;
import javafx.scene.layout.StackPane;
import org.eclipse.jface.dialogs.Dialog;
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Control;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;
public class TestFx {
static class MyDialog extends Dialog {
Parent w;
public MyDialog(Shell parent,Parent n) {
super(parent);
this.w = n;
setShellStyle(SWT.RESIZE| SWT.BORDER | SWT.TITLE |SWT.CLOSE );
}
@Override
public void cancelPressed() {
System.exit(0);
}
@Override
protected Control createDialogArea(Composite parent) {
Composite container = (Composite) super.createDialogArea(parent);
container.setLayout(new FillLayout());
FXCanvas fxCanvas = new FXCanvas(container, SWT.NONE);
Scene scene = new Scene(w);
fxCanvas.setScene(scene);
return container;
}
}
private static Parent createScene() {
StackPane pane = new StackPane();
pane.setPadding(new Insets(10));
ComboBox<String> c = new ComboBox<String>();
c.getItems().addAll("Test1","Test2");
pane.getChildren().add(c);
return pane;
}
public static void main(String[] args) {
Display display = new Display();
Shell shell = new Shell(display);
while (true) {
MyDialog d = new MyDialog(shell,createScene());
d.open();
}
}
}
Exception:
java.lang.NullPointerException
at com.sun.javafx.tk.quantum.GlassScene.sceneChanged(GlassScene.java:290)
at com.sun.javafx.tk.quantum.ViewScene.sceneChanged(ViewScene.java:156)
at com.sun.javafx.tk.quantum.PopupScene.sceneChanged(PopupScene.java:30)
at com.sun.javafx.tk.quantum.GlassScene.markDirty(GlassScene.java:157)
at javafx.scene.Scene$ScenePulseListener.pulse(Scene.java:2214)
at com.sun.javafx.tk.Toolkit.firePulse(Toolkit.java:363)
at com.sun.javafx.tk.quantum.QuantumToolkit.pulse(QuantumToolkit.java:460)
at com.sun.javafx.tk.quantum.QuantumToolkit$9.run(QuantumToolkit.java:329)
at org.eclipse.swt.internal.win32.OS.DispatchMessageW(Native Method)
at org.eclipse.swt.internal.win32.OS.DispatchMessage(OS.java:2546)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3756)
at org.eclipse.jface.window.Window.runEventLoop(Window.java:825)
at org.eclipse.jface.window.Window.open(Window.java:801)
at test.TestFx.main(TestFx.java:55)
A:
At work we're developing some applications using JavaFX, on top of an old Swing platform, and we have also run into this issue.
Apparently it is caused by some issues on JFXPanel which is not correctly propagating some window events (focus, iconifying, etc) to the FX framework. The issue affects not only the ComboBox component, but every component that uses a PopupWindow (Menu, Tooltip, etc), specially when using Swing's JInternalFrame.
So, when a Popup is displaying and the window is minimized or closed, the Popup does not hide, causing the FX thread to crash if you try subsequently to interact with it.
The workaround mentioned above works, but only for ComboBox, as Menu and Tooltip do not inherit from the Node class, so it didn't work for us :(
I developed another workaround which resolved the problem for all components that display popups, which basically forces all popups to close whenever a JFXPanel loses focus:
private static void initFX(final JFXPanel jfxPanel) {
final TestFxPanel parent = new TestFxPanel();
final Scene scene = new Scene(parent);
jfxPanel.setScene(scene);
jfxPanel.addFocusListener(new FocusAdapter() {
@Override
public void focusLost(final FocusEvent e) {
System.out.println(jfxPanel.getName() + ": FocusLost");
runFocusPatch(scene);
}
});
}
static void runFocusPatch(final Scene scene) {
Platform.runLater(new Runnable() {
@Override
public void run() {
System.out.println("Running patch");
final Iterator<Window> winIter = scene.getWindow().impl_getWindows();
while (winIter.hasNext()) {
final Window t = winIter.next();
if (t instanceof PopupWindow) {
System.out.println("Got a popup");
Platform.runLater(new Runnable() {
@Override
public void run() {
((PopupWindow) t).hide();
}
});
}
}
}
});
}
I confirm that the issue is NOT present in 8.0. Sadly we are not allowed to use Java 8 in production software as it's still in beta stage.
best regards.
| {
"pile_set_name": "StackExchange"
} |
Q:
PS1 variable inheritance between scripts and programs using bash in AIX
How can I ensure that the PS1 variable in AIX bash is inherited across calls between scripts and programs?
Suppose a program gives the user a shell instance, such as vi's shell command. This can be used in two ways; in one of them it is launched by a script (see the 2nd case below):
ksh prompt -> program -> "user asks for shell" -> ksh
script -> program -> "user asks for shell" -> ksh
That works well with ksh. But when the bash is used (in AIX), we noticed in the 2nd case that the PS1 variable isn't inherited, so it has the default value.
You can test it just using vi: create a script like runvi.sh:
# blablabla
vi $1
When we run the script and ask vi for the shell, the prompt is: sh-4.3$
Of course, when you run vi directly, and it asks for the shell, the prompt is your previously defined PS1.
The only difference between the test above and the real program is that in the program bash shows the PS1 with the value bash-4.3$, so the issue with PS1 inheritance seems the same. This C program shows it:
#include <stdlib.h>
int main() { system("$SHELL"); return 0; }
BASH VERSION:
bash-4.3$ bash -version
GNU bash, version 4.3.30(1)-release (powerpc-ibm-aix5.1.0.0)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
AIX VERSION
The same behavior between AIX 5.3 until 7.
Note: On Ubuntu this doesn't happen.
A:
A workaround:
// C code snippet
#define CMDPATTERN_PS1_PRESET "export PS1='%s';$SHELL"
#define CMDPATTERN_PS1_NOSET "$SHELL"
char* cmdpattern = (userPS1 != NULL && *userPS1 != '\0') ? CMDPATTERN_PS1_PRESET : CMDPATTERN_PS1_NOSET;
sprintf(shellcmd, cmdpattern, userPS1);
system(shellcmd);
So, if the user wants a custom PS1 and it isn't inherited, it should be configured in the software beforehand. This keeps working in all shells and OSes.
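The selection logic itself is tiny; here is a sketch in Python just to make it explicit (the names mirror the C snippet above):

```python
CMDPATTERN_PS1_PRESET = "export PS1='%s';$SHELL"
CMDPATTERN_PS1_NOSET = "$SHELL"

def build_shell_cmd(user_ps1):
    # Preset the prompt only when a non-empty custom PS1 is available
    if user_ps1:
        return CMDPATTERN_PS1_PRESET % user_ps1
    return CMDPATTERN_PS1_NOSET

print(build_shell_cmd("myhost$ "))  # export PS1='myhost$ ';$SHELL
```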
| {
"pile_set_name": "StackExchange"
} |
Q:
Control text update in event throws illegal cross thread exception
I am working on a Youtube search by keyword programmatically and using Youtube API to do this. I want to fire an event when a search progress is completed and return the result in YoutubeSearchCompletedEventArgs sent by YoutubeSearchCompleted.
But the code in YoutubeSearchCompleted in Form.cs throws a cross-thread illegal operation exception. Normally, using the AsyncOperation.Post method, it should not throw InvalidOperationException, because I used the same method in a download manager project before and it worked well. So I can't understand why this happens.
Youtube search class
class YouTubeManager
{
public delegate void YoutubeSearchCompletedEventHandler(object sender, YoutubeSearchCompletedEventArgs e);
public event YoutubeSearchCompletedEventHandler YoutubeSearchCompleted;
AsyncOperation aop = AsyncOperationManager.CreateOperation(null);
List<YoutubeVideo> SearchByKeyword(string keyword)
{
List<YoutubeVideo> videos = new List<YoutubeVideo>();
//.......
//...Youtube data api search codes....
//.......
return videos;
}
public void Search(string keyword)
{
Task.Run(() =>
{
List<YoutubeVideo> list = SearchByKeyword(keyword);
aop.Post(new System.Threading.SendOrPostCallback(delegate
{
if (YoutubeSearchCompleted != null)
YoutubeSearchCompleted(this,
                        new YoutubeSearchCompletedEventArgs(keyword, list));
}), null);
});
}
}
Form.cs
public partial class Form1 : Form
{
YouTubeManager yam = new YouTubeManager();
public Form1()
{
InitializeComponent();
this.Load += Form1_Load;
}
void Form1_Load(object sender, EventArgs e)
{
yam.YoutubeSearchCompleted += yam_YoutubeSearchCompleted;
yam.Search("Blues");
}
void yam_YoutubeSearchCompleted(object sender, YoutubeSearchCompletedEventArgs e)
{
if (e.Videos.Count < 1) return;
textBox1.Text = e.Videos[0].Title();
}
}
In this code the textBox1.Text = e.Videos[0].Title(); line throws InvalidOperationException. How can I fix this problem?
Note: I don't want Invoke method, just AsyncOperation.
A:
Most likely the issue is caused by the AsyncOperation being created too early. You can check that with the following:
if (!(aop.SynchronizationContext is WindowsFormsSynchronizationContext))
{
// Oops - we have an issue
}
Why is that? The AsyncOperation stores the SynchronizationContext.Current at the construction time, and normally all Control derived classes (including the Form) install WindowsFormsSynchronizationContext from inside the Control class constructor.
But imagine Form1 is your startup form (e.g. the typical Application.Run(new Form1()); call from Main). Since any instance variable initializers in the derived class are executed before the base class constructor, at the time the aop variable is initialized (through the yam field initializer), the Control class constructor has not run yet, hence the WindowsFormsSynchronizationContext is not installed, so the AsyncOperation is initialized with the default SynchronizationContext, which implements Post by simply executing the callback on a separate thread.
The fix is simple - don't use initializer, just define the field
YouTubeManager yam;
and move the initialization
yam = new YouTubeManager();
inside the form constructor or load event.
| {
"pile_set_name": "StackExchange"
} |
Q:
Migrate OS Install to VMWare Virtual Machine
I have Windows Server 2003 installed on my server. I'd really like to be running it via VMWare EXSi as a virtual machine, but I don't what to have to reconfigure the whole deal.
Is there a relatively painless way to move it to a virtual machine? It will be staying on the same box with the exact same hardware... nothing changes.
Thoughts?
A:
VMware Converter. Free and easy.
You might install it on a different machine than the win2k3 server and then point it at that one to convert. It definitely will help if the server and the ESXi box are together on a fast (gigabit) network.
| {
"pile_set_name": "StackExchange"
} |
Q:
Extract the minor matrix from a 3x3 based on input i,j
For a given 3x3 matrix, for example:
A = [3 1 -4 ; 2 5 6 ; 1 4 8]
If I need the minor matrix for entry (1,2)
Minor = [2 6 ; 1 8]
I already wrote a program to read in the matrix from a text file, and I am supposed to write a subroutine to extract the minor matrix from the main matrix A based on the user inputs for i,j. I am very new to Fortran and have no clue how to do that. I made some very desperate attempts but I am sure there is a cleaner way to do that.
I got so desperate that I wrote 9 if statements, one for each possible combination of i and j, but that clearly is not a smart way of doing this. Any help is appreciated!
A:
One way to do this is, as @HighPerformanceMark said in the comment, with vector subscripts. You can declare an array with the rows you want to keep, and the same for columns, and pass them as indices to your matrix. Like this:
function minor(matrix, i, j)
integer, intent(in) :: matrix(:,:), i, j
integer :: minor(size(matrix, 1) - 1, size(matrix, 2) - 1)
integer :: rows(size(matrix, 1) - 1), cols(size(matrix, 2) - 1), k
rows = [(k, k = 1, i - 1), (k, k = i + 1, size(matrix, 1))]
cols = [(k, k = 1, j - 1), (k, k = j + 1, size(matrix, 2))]
minor = matrix(rows, cols)
end
(I didn't test it yet, so tell me if there is any error)
Another option would be constructing a new matrix from 4 assignments, one for each quadrant of the result (limited by the excluded row/column).
I like the first option more because it is more scalable. You could easily extend the function to remove multiple rows/columns by passing arrays as arguments, or adapt it to work on higher dimensions.
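For comparison, the same vector-subscript idea can be sketched in plain Python (with 1-based i and j, as in the Fortran version):

```python
def minor(matrix, i, j):
    """Return the minor of `matrix` obtained by deleting row i and column j.

    `matrix` is a list of rows; i and j are 1-based, as in the Fortran code.
    """
    rows = [r for r in range(len(matrix)) if r != i - 1]
    cols = [c for c in range(len(matrix[0])) if c != j - 1]
    return [[matrix[r][c] for c in cols] for r in rows]

A = [[3, 1, -4], [2, 5, 6], [1, 4, 8]]
print(minor(A, 1, 2))  # [[2, 6], [1, 8]]
```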
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I connect a non-AMP page to an AMP page?
I want to implement Google AMP on my blog. I've already created the AMP version of the blog articles. For example:
# Regular version
http://myblog.com/a-cool-blog-post
# AMP version
http://myblog.com/amp/a-cool-blog-post
How do I connect them together so that Google can index and serve the AMP version when users search for my posts on mobile?
A:
In your non-AMP blog post, add the amphtml link tag and set the href to the AMP version:
<link rel="amphtml" href="http://myblog.com/amp/a-cool-blog-post">
Next, in your AMP blog post, add the canonical link tag and set the href to the non-AMP version:
<link rel="canonical" href="http://myblog.com/a-cool-blog-post">
See Politico's Warren moves to outflank Trump on trade (regular version | AMP version) for a live example.
Source: Make your pages discoverable
| {
"pile_set_name": "StackExchange"
} |
Q:
getting an embedded resource in a single dll made up of multiple class libraries
My solution has multiple projects, and in one of them I have the code to get an embedded resource (an XML file) from another project. All this works fine when all the projects are separate. However, when all the class libraries are embedded into a single dll, the code to get the resource file does not work, i.e. it cannot get the embedded resource.
I was wondering if the references to the embedded resource get mixed up when they are combined together in a single dll?
I use the method Assembly.GetCallingAssembly().GetManifestResourceStream("namespace..filename");
A:
I would not use Assembly.GetCallingAssembly(). I would use typeof(SomeClassNextToXmlFile).Assembly; that way, if you are calling the dll with the embedded resource from an exe file, it won't go looking in the exe for the resource. Also, you may want to use Reflector and make sure the resource you are looking for is where you think it is.
| {
"pile_set_name": "StackExchange"
} |
Q:
In Minecraft how do I make a video with my second account moving around by itself?
Here is a video if you don't understand the question. How do I make my POV like that with my second Minecraft account?
A:
These kind of videos are almost always done by use of a simple Video Camera mod like:
http://www.minecraftforum.net/topic/938825-164-camera-studio-v2164-standalone-modloader-forge-video-recorder/
There are a few decent ones out there, so download one or two and see what you think! You'll need 2 accounts and 2 computers to do this.
| {
"pile_set_name": "StackExchange"
} |
Q:
Pivot on Oracle 10g
I am using oracle 10g.
I have a temp table TEMP.
TEMP has following structure:-
USER COUNT TYPE
---- ----- ----
1 10 T1
2 21 T2
3 45 T1
1 7 T1
2 1 T3
I need a query which will show all types as column names, and types can have any value like T1, T2, ... Tn. The result will be like:
USER T1 T2 T3
---- -- -- --
1 17 0 0
2 0 21 1
3 45 0 0
and the USER column will show all the users, while the T1, T2, ... columns will show the total count of each type.
A:
In Oracle 10g, there was no PIVOT function but you can replicate it using an aggregate with a CASE:
select usr,
sum(case when tp ='T1' then cnt else 0 end) T1,
sum(case when tp ='T2' then cnt else 0 end) T2,
sum(case when tp ='T3' then cnt else 0 end) T3
from temp
group by usr;
See SQL Fiddle with Demo
If you have Oracle 11g+ then you can use the PIVOT function:
select *
from temp
pivot
(
sum(cnt)
for tp in ('T1', 'T2', 'T3')
) piv
See SQL Fiddle with Demo
If you have an unknown number of values to transform, then you can create a procedure to generate a dynamic version of this:
CREATE OR REPLACE procedure dynamic_pivot(p_cursor in out sys_refcursor)
as
sql_query varchar2(1000) := 'select usr ';
begin
for x in (select distinct tp from temp order by 1)
loop
sql_query := sql_query ||
' , sum(case when tp = '''||x.tp||''' then cnt else 0 end) as '||x.tp;
dbms_output.put_line(sql_query);
end loop;
sql_query := sql_query || ' from temp group by usr';
open p_cursor for sql_query;
end;
/
then to execute the code:
variable x refcursor
exec dynamic_pivot(:x)
print x
The result for all versions is the same:
| USR | T1 | T2 | T3 |
----------------------
| 1 | 17 | 0 | 0 |
| 2 | 0 | 21 | 1 |
| 3 | 45 | 0 | 0 |
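The sum-of-CASE logic is easy to verify outside the database; a small Python sketch over the sample TEMP rows:

```python
# Each row is (user, count, type), as in the TEMP table above.
rows = [(1, 10, 'T1'), (2, 21, 'T2'), (3, 45, 'T1'), (1, 7, 'T1'), (2, 1, 'T3')]
types = ['T1', 'T2', 'T3']

pivot = {}
for usr, cnt, tp in rows:
    # "sum(case when tp = 'Tn' then cnt else 0 end)" per user
    bucket = pivot.setdefault(usr, {t: 0 for t in types})
    bucket[tp] += cnt

for usr in sorted(pivot):
    print(usr, [pivot[usr][t] for t in types])
```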
Edit: Based on your comment if you want a Total field, the easiest way is to place the query inside of another SELECT similar to this:
select usr,
T1 + T2 + T3 as Total,
T1,
T2,
T3
from
(
select usr,
sum(case when tp ='T1' then cnt else 0 end) T1,
sum(case when tp ='T2' then cnt else 0 end) T2,
sum(case when tp ='T3' then cnt else 0 end) T3
from temp
group by usr
) src;
See SQL Fiddle with Demo
| {
"pile_set_name": "StackExchange"
} |
Q:
Does Amazon's RDS Multi-AZ deployment cost twice as much as the normal cost of a single instance?
It's not totally clear based on Amazon's provided documentation. Intuitively, it feels like it'd cost twice as much as a normal RDS instance for a two-zone deployment; does anyone here have experience with this, and could confirm or disconfirm that?
A:
Amazon's RDS pricing page has two sections. One for Single-AZ Deployment and one for Multi-AZ Deployment.
Yes, Multi-AZ is about twice the price of Single-AZ. For example I see:
db.t2.micro single - $0.017/hour
db.t2.micro multi - $0.034/hour
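The doubling is easy to sanity-check from those sample rates (a rough sketch; the actual monthly cost depends on the hours in the billing month):

```python
single_az = 0.017  # $/hour, db.t2.micro Single-AZ
multi_az = 0.034   # $/hour, db.t2.micro Multi-AZ

print(multi_az / single_az)      # 2.0
print(round(multi_az * 730, 2))  # approximate monthly cost at ~730 h/month
```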
| {
"pile_set_name": "StackExchange"
} |
Q:
How to fetch user ID in google script?
I am trying to create a Google script linked to a spreadsheet containing multiple email IDs and some data related to them. The script will send an email containing an HTML button to every ID in the spreadsheet. The idea is that when a user clicks that button in their mail, it will fetch the data from the spreadsheet corresponding to that email ID.
What I am unable to figure out is how to fetch the user ID of the particular user when they click the button (which the script sent in the mail) and send it with the POST response, so that the script knows which user is asking for information and fetches data from the spreadsheet only for that particular user.
A:
I followed Casper's suggestion: I used the email address of the active user and then split it to get the ID part, which I can then verify.
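The split step looks roughly like this (a sketch; it assumes the address has already been obtained, e.g. via Session.getActiveUser().getEmail() in Apps Script):

```javascript
// Take the part of the address before the '@' as the user id
function extractUserId(email) {
  return email.split('@')[0];
}

console.log(extractUserId('someone@example.com')); // someone
```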
| {
"pile_set_name": "StackExchange"
} |
Q:
A sequence of synchronous Retrofit2 requests on Android
What is the best approach if you need to perform several synchronous requests in a row (i.e. send the next request only after receiving the server's response to the previous one, and so on)? Moreover, this chain of requests has to start on some event, for example a button press or the arrival of an Intent.
What is better to use: a Service, the Thread-Looper-Handler combination, or something else?
A:
The two simplest ways to implement this:
1) RxJava + Retrofit
2) LiveData + Transformations (Android Architecture Components) + a LiveDataCallAdapter for Retrofit
| {
"pile_set_name": "StackExchange"
} |
Q:
Directory.members().insert is not working using the Java client library
I'm using the Java client library for the Directory API from here: https://developers.google.com/api-client-library/java/apis/admin/directory_v1
I have insert user and insert group working fine, but for some reason when I try to insert a member, it doesn't work. There is no exception thrown. Here is the code:
Member member = new Member();
member.setEmail("[email protected]");
member.setRole("MEMBER");
//member.setKind("admin#directory#member"); not sure if I need this. tried with and without
member.setType("USER"); // docs say "MEMBER" but doesn't seem true. Tried both
client.members().insert(myGroupId, member);
A:
You don't need to set kind nor type.
After client.members().insert(myGroupId, member), do you call execute()? The insert(...) call only builds the request object; nothing is sent to the API until you invoke execute() on it.
| {
"pile_set_name": "StackExchange"
} |
Q:
Attempt to invoke interface method on a null object reference while implement interface
public class forgetPassword extends AppCompatDialogFragment {
public ResetDialogListener listener;
EditText emailToReset;
final String tag="finalProject.bhaa";
String email =" ";
@NonNull
@Override
public Dialog onCreateDialog(@Nullable Bundle savedInstanceState) {
AlertDialog.Builder builder = new AlertDialog.Builder(getActivity());
LayoutInflater inflater = getActivity().getLayoutInflater();
View view = inflater.inflate(R.layout.forget_password, null);
builder.setView(view)
.setTitle("Reset Password")
.setNegativeButton("cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialogInterface, int i) {
}
})
.setPositiveButton("Reset", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialogInterface, int i) {
Log.i(tag,"---"+email);
email=emailToReset.getText().toString().trim();
Log.i(tag,"---"+email);
try {
listener.applyUpdate(email);
}catch (Exception e){
e.printStackTrace();
Log.i(tag,e.getMessage());
}
}
});
emailToReset= view.findViewById(R.id.emailToReset);
return builder.create();
}
public interface ResetDialogListener {
void applyUpdate(String Email);
}
}
Here is the implemntation of applyUpdate:
@Override
public void applyUpdate(String Email) {
Log.i(tag,"yyyyyyy");
firebaseAuth.sendPasswordResetEmail(Email).addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> task) {
if (task.isSuccessful()){
Toast.makeText(MainActivity.this,"Check your email .. ",Toast.LENGTH_SHORT).show();
}else {
String err=task.getException().getMessage();
Toast.makeText(MainActivity.this,"Error Occured: "+err,Toast.LENGTH_SHORT).show();
}
}
});
}
This line throws an exception:
listener.applyUpdate(email);
Attempt to invoke interface method 'void com.bhaa.finalproject.forgetPassword$ResetDialogListener.applyUpdate(java.lang.String)' on a null object reference
and I need help with this problem.
A:
Make sure to initialize the listener when the AppCompatDialogFragment is attached:
@Override
public void onAttach(Context context) {
super.onAttach(context);
        listener = (ResetDialogListener) context;
}
| {
"pile_set_name": "StackExchange"
} |
Q:
if statement for MFMailComposer dosn`t work
I have a simple problem:
If i click on my mail button and there is no E-Mail address deposited i want the alertView to show up and i don`t want it to go to the Mail composer but it does.
The AlertView works but why does it goe to the MailComposer anyway?
Using X-Code, Objective C.
Here is the Code:
-(IBAction)sendMail {
if ([mailIdentity isEqualToString:@""] == FALSE); {
MFMailComposeViewController *mailController0 = [[MFMailComposeViewController alloc]init];
[mailController0 setMailComposeDelegate:self];
NSString *email0 = mailIdentity;
NSArray *emailArray0 = [[NSArray alloc] initWithObjects:email0, nil];
[mailController0 setToRecipients:emailArray0];
[mailController0 setSubject:@"From my IPhone"];
[self presentViewController : mailController0 animated:YES completion:nil];
}
if ([mailIdentity isEqualToString:@""] == TRUE) {
UIAlertView *noMail = [[UIAlertView alloc] initWithTitle: @"noMail"
message: @"No E-Mail address"
delegate: nil
cancelButtonTitle:@"OK"
otherButtonTitles:nil];
[noMail show];
}
}
Thanks for the help.
A:
Notice the semicolon ; after ..g:@""] == FALSE);. It should be ..g:@""] == FALSE) {...}. BTW, you could also write if (![mailIdentity isEqualToString:@""]) {...} and if ([mailIdentity isEqualToString:@""]) {...}
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I use dotnet test to run tests from multiple libraries with a single pass/fail summary
How do I run unit tests with dotnet test if I have multiple test libraries in a code base?
I can run dotnet test, and it will find and run all tests even across multiple libraries, but it runs and reports each test library run independently:
$ dotnet test
Test run for C:\Users\mark\Documents\Redacted.Test\bin\Debug\netcoreapp2.1\Redacted.Test.dll(.NETCoreApp,Version=v2.1)
Test run for C:\Users\mark\Documents\Redacted\Redacted.SqlAccess.Test\bin\Debug\netcoreapp2.1\Redacted.SqlAccess.Test.dll(.NETCoreApp,Version=v2.1)
Microsoft (R) Test Execution Command Line Tool Version 16.2.0-preview-20190606-02
Copyright (c) Microsoft Corporation. All rights reserved.
Microsoft (R) Test Execution Command Line Tool Version 16.2.0-preview-20190606-02
Copyright (c) Microsoft Corporation. All rights reserved.
Starting test execution, please wait...
Starting test execution, please wait...
Test Run Successful.
Total tests: 59
Passed: 59
Total time: 3.1779 Seconds
Test run for C:\Users\mark\Documents\Redacted\Redacted.RestApi.Tests\bin\Debug\netcoreapp2.1\Redacted.RestApi.Tests.dll(.NETCoreApp,Version=v2.1)
Microsoft (R) Test Execution Command Line Tool Version 16.2.0-preview-20190606-02
Copyright (c) Microsoft Corporation. All rights reserved.
Starting test execution, please wait...
Test Run Successful.
Total tests: 99
Passed: 99
Total time: 9.8155 Seconds
Test Run Successful.
Total tests: 25
Passed: 25
Total time: 21.2894 Seconds
In this example, there's two test libraries, so I get two test result outputs.
This may work OK if the code has already been compiled, but in a clean build, there's going to be a lot of output from the compiler. This can easily cause one of the test run summaries to scroll past the visible part of the screen.
That's a problem if that test run fails.
How can I collapse all the unit tests to a single pass/fail summary?
On .NET 4.x, I could, for instance, use xUnit.net's console runner to run all test libraries as a single suite:
$ ./packages/xunit.runner.console.2.4.0/tools/net461/xunit.console BookingApi.UnitTests/bin/Debug/Ploeh.Samples.Booking
Api.UnitTests.dll BookingApi.SqlTests/bin/Debug/Ploeh.Samples.BookingApi.SqlTests.dll
xUnit.net Console Runner v2.4.0 (64-bit Desktop .NET 4.6.1, runtime: 4.0.30319.42000)
Discovering: Ploeh.Samples.BookingApi.UnitTests
Discovered: Ploeh.Samples.BookingApi.UnitTests
Starting: Ploeh.Samples.BookingApi.UnitTests
Finished: Ploeh.Samples.BookingApi.UnitTests
Discovering: Ploeh.Samples.BookingApi.SqlTests
Discovered: Ploeh.Samples.BookingApi.SqlTests
Starting: Ploeh.Samples.BookingApi.SqlTests
Finished: Ploeh.Samples.BookingApi.SqlTests
=== TEST EXECUTION SUMMARY ===
Ploeh.Samples.BookingApi.SqlTests Total: 3, Errors: 0, Failed: 0, Skipped: 0, Time: 3.816s
Ploeh.Samples.BookingApi.UnitTests Total: 7, Errors: 0, Failed: 0, Skipped: 0, Time: 0.295s
-- - - - ------
GRAND TOTAL: 10 0 0 0 4.111s (5.565s)
Notice how this produces a single summary at the bottom of the screen, so that I can immediately see if my tests passed or failed.
A:
Use dotnet vstest to run multiple assemblies.
PS> dotnet vstest --help
Microsoft (R) Test Execution Command Line Tool Version 15.9.0
Copyright (c) Microsoft Corporation. All rights reserved.
Usage: vstest.console.exe [Arguments] [Options] [[--] <RunSettings arguments>...]]
Description: Runs tests from the specified files.
Arguments:
[TestFileNames]
Run tests from the specified files. Separate multiple test file names
by spaces.
Examples: mytestproject.dll
mytestproject.dll myothertestproject.exe
...
Note that this method requires that you point at compiled assemblies (as opposed to dotnet test, which wants you to point at project files, and will optionally build things first for you).
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I make a query with mongoose from a function using a parameter?
I'm trying to make a mongoose query using a function like this:
/*
* @param {Model} Model - Mongoose model
* @param {String} searchText - Text used to build the case-insensitive search RegExp
* @param {String} key - Key of the Model to search on
* @param {Object} res - Response object of node.js / express
*/
function _partialSearch (Model, searchText, key, res) {
var search = new RegExp(searchText, "i");
Model.find({ key : { $regex : search } })
.exec(function (err, docs) {
if(err) log(err);
else {
res.json(docs);
}
})
}
My problem is that the query takes the parameter key literally instead of using its value. Given this call:
_partialSearch(Products, 'banana', 'fruts', res)
I expect this:
Products.find({ 'fruts' : 'banana' })
But I get this:
Products.find({ key : 'banana' })
A:
Use the bracket notation to create the query object dynamically, so you could restructure your function as follows:
function _partialSearch (Model, searchText, key, res) {
var search = new RegExp(searchText, "i"),
query = {};
query[key] = { $regex : search };
Model.find(query)
.exec(function (err, docs) {
if(err) log(err);
else {
res.json(docs);
}
});
}
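As a side note, on an ES2015+ runtime the same dynamic key can be built inline with a computed property name instead of the two-step bracket assignment. This is just a sketch with a hypothetical buildQuery helper and plain objects rather than a real mongoose Model:

```javascript
// Sketch (assumes ES2015+): a computed property name evaluates the key
// variable instead of taking it literally.
function buildQuery(key, searchText) {
  var search = new RegExp(searchText, "i");
  return { [key]: { $regex: search } };
}

var query = buildQuery("fruts", "banana");
console.log(Object.keys(query)); // → [ 'fruts' ]
```

The resulting object can be passed straight to Model.find(query) exactly as in the answer above.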
| {
"pile_set_name": "StackExchange"
} |
Q:
CSS media queries to hide or show content
Ok so I cant see whats up with this, I have two divs
html
<div id="desktop-content">
desktop
</div>
<div id="mobile-content">
mobile
</div>
One should show "mobile" and hide the desktop div on a small screen; the other should show "desktop" on larger screens but be hidden on mobile.
Here is my queries
@media screen and (min-width: 0px) and (max-width: 200px) {
#mobile-content { display: block; } /* show it on small screens */
}
@media screen and (min-width: 201px) and (max-width: 1024px) {
#mobile-content { display: none; } /* hide it elsewhere */
}
@media screen and (min-width: 0px) and (max-width: 200px) {
#desktop-content { display: none; } /* hide it on small screens */
}
@media screen and (min-width: 201px) and (max-width: 1024px) {
#desktop-content { display: block; } /* show it elsewhere */
}
Seems simple enough, except the desktop div is showing when the mobile one should, and on a desktop everything is showing.
I'm new to media queries; if someone could point out the error of my ways I would appreciate it.
A:
It was a simple mistake: I was viewing the page on a widescreen monitor wider than the 1024px upper bound of my CSS, so none of the queries applied. Dropping the max-width from the larger-screen queries is a simple fix.
@media screen and (min-width: 0px) and (max-width: 600px) {
#mobile-content { display: block; } /* show it on small screens */
}
@media screen and (min-width: 601px) {
#mobile-content { display: none; } /* hide it elsewhere */
}
@media screen and (min-width: 0px) and (max-width: 600px) {
#desktop-content { display: none; } /* hide it on small screens */
}
@media screen and (min-width: 601px) {
#desktop-content { display: block; } /* show it elsewhere */
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Curl doesn't get external websites
I have some troubles with cUrl configuration on my local machine (my own computer).
I can only download sites hosted on localhost. Trying any other host causes a failure (an empty string is returned). I'm sure the code is ok - it works on my production server.
Also, curl_errno doesn't return any error.
I can't find the problem, please help.
Edit: Here's the code.
<?php
$url = "http://stackoverflow.com";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch); // It's empty
A:
Is PHP cURL installed on your machine? If not, install it first and then try again.
For installation you may use either of these methods:
http://php.net/manual/en/curl.installation.php
sudo apt-get install php5-curl (and then restart Apache).
It should work.
| {
"pile_set_name": "StackExchange"
} |
Q:
Validate forms using external services
I'm trying to validate an Angular form using an external service but I'm getting a cannot read property of undefined error.
I've created a simple form in my component:
this.myForm = this.fb.group({
username: ['', [this.validator.username]],
});
From there, I'm calling my username method:
@Injectable()
export class ValidatorService {
constructor(private auth: AuthService) {}
username(input: FormControl): {[key: string]: any} {
return { userInvalid: this.auth.validate(input.value) };
}
}
My ValidatorService, then, calls a method that checks the server if that username exists:
@Injectable()
export class AuthService {
validate(username: string): boolean {
return username !== 'demo';
}
}
I'm getting the following error: Cannot read property 'auth' of undefined. Any ideas, please?
Live demo
A:
The username method is executed as a plain function, not as a method of ValidatorService, so you're losing the this context.
Function.prototype.bind method should help you:
username: ['', [this.validator.username.bind(this.validator)]],
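A minimal plain-JavaScript sketch (no Angular; the auth stub passed to the constructor is hypothetical) of why this is lost when a method is passed around as a bare function reference, and how Function.prototype.bind fixes it:

```javascript
// Extracting a method detaches it from its object, so inside it `this`
// becomes undefined (class bodies run in strict mode).
class ValidatorService {
  constructor(auth) { this.auth = auth; }
  username(value) { return { userInvalid: this.auth.validate(value) }; }
}

const service = new ValidatorService({ validate: v => v !== "demo" });

const detached = service.username;            // `this` will be undefined
const bound = service.username.bind(service); // `this` stays the service

console.log(bound("alice").userInvalid); // → true
```

Calling detached("alice") would throw a TypeError, which is exactly the "Cannot read property 'auth' of undefined" error from the question.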
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does my C++ object lose its VPTr
While debugging one of the program's core dumps I came across a scenario where a contained object, which is polymorphic, loses its VPTr and I can see it pointing to NULL.
What could be the scenario when an object loses its VPTr.
Thanks in advance,
Brijesh
A:
1. The memory has been trashed, i.e. something overwrote the memory.
2. You destroyed it by calling delete or by invoking the destructor directly. This typically does not NULL out the vptr, it will just end up having it point to the vtable of the base class, but that depends on your implementation.
Most likely, case 1. If you have a debugger that has memory breakpoints and if you can reproduce the problem reliably, set a memory breakpoint on the vptr and see what's modifying it.
A:
Likely something overwrote the whole object. Something like this:
memset( object, 0, sizeof( *object ) );
which is fine until it is used on an object with vptr.
| {
"pile_set_name": "StackExchange"
} |
Q:
Viewing x_range attributes on a Bokeh figure object?
I'm trying to fix an older example for Bokeh where this no longer works:
callback = CustomJS(code="console.log('stuff')")
fig.x_range.callback = callback
Now this seems to be the solution:
callback = CustomJS(code="console.log('stuff')")
plot.x_range.js_on_change('start', callback)
How do I check what other attributes are there on the x_range object, other than start?
A:
from bokeh.models import DataRange1d
print(DataRange1d.properties())
| {
"pile_set_name": "StackExchange"
} |
Q:
Show and format array elements in jQueryUI dialog
In my app there is an option to select documents by checking the checkboxes. Then if user clicks on submit button he gets a confirmation box which shows all the documents which he has selected for deletion. I store all the selected docs in an array. Now I want to represent the document list in a well formatted manner. Like
Warning: Below mentioned documents will be deleted, review them and click OK to proceed.
1. Document 1
2. Document 2
3. Document 3
n. Document n
So my confirmation box should look like above. Since this can't be done using default confirm box so I used jQuery UI dialog but there also I'm unable to format it. Can someone help me with the formating? Is there any other option available to show a list in confirm box?
What I've tried.
A:
var str = "";
for (var i = 0; i < arrayis.length; i++) {
    str += (i + 1) + ")" + arrayis[i] + "<br/>";
}
ConfirmDialog("Below mentioned documents will be deleted, review them and click OK to proceed?"+"<br/>"+str);
http://jsfiddle.net/nM3Zc/1003/
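A small variation (plain JavaScript, hypothetical document names) that builds the numbered list with map/join instead of a loop and matches the "1. Document 1" formatting shown in the question:

```javascript
// Build "1. Document 1<br/>2. Document 2<br/>..." from the selected docs.
var docs = ["Document 1", "Document 2", "Document 3"];
var list = docs.map(function (name, i) {
  return (i + 1) + ". " + name;
}).join("<br/>");

var message = "Below mentioned documents will be deleted, review them and click OK to proceed.<br/>" + list;
console.log(list); // → 1. Document 1<br/>2. Document 2<br/>3. Document 3
```

The message string can then be passed to the same ConfirmDialog helper as above.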
| {
"pile_set_name": "StackExchange"
} |
Q:
Window with UIWindowLevelStatusBar + 1 hides status bar on iOS 8
I have been adding a console window on top of the status bar:
This has been working great by setting its windowLevel to UIWindowLevelStatusBar + 1 up to iOS 7.x (screenshot).
On iOS 8 the same code makes the status bar disappear and offsets navigation bars up. I tried several different window levels with no luck.
I use the library on many projects and noticed that the status bar does show up when a "PopUpWindow" of level UIWindowLevelAlert is also shown.
So one possible solution would be to add a mock window there but that would be plain dirty.
A:
Try implementing the -prefersStatusBarHidden method on the root view controller of your UIWindow. Worked for me.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to .filter and .reduce javascript array object
WebDev student here. I am having an issue with chaining the .filter and .reduce array methods to return the total cost of both items. The Codeburst.io article on the subject is super helpful, but I need direction on how to apply it to this specific exercise, as I am targeting a property: value pair in the array.
This is an exercise in reducing the amount of code, so rather than using a for loop to iterate through an array, I need to apply .filter to select items and .reduce to sum the price property of each item in the array.
Using the shopCart variable, create a function that takes the shopCart variable and returns the total cost of both items as the total variable. Add Code after "// code below".
var shopCart = [
{
id: 0,
name: 'Womens Shirt',
price: 30,
size: 'Small'
},
{
id: 1,
name: 'childs shirt',
price: 12,
size: 'Large'
}
]
function getCost(items){
let total = 0;
// code below
// code above
return total;
}
getCost(shopCart) <--- Add
let cost = getCost(shopCart); <--- OMIT
console.log(cost); <--- OMIT
Please re-review; the code has been amended.
A:
Here’s a more explicit approach.
function sum(nums) {
return nums.reduce((a, b) => a + b)
}
const prices = items.map(item => item.price)
const total = sum(prices)
| {
"pile_set_name": "StackExchange"
} |
Q:
Heyacrazy: Careening
This is a Heyacrazy puzzle, designed at the request of testsolvers of previous puzzles who wanted to see how longer diagonals would work.
Rules of Heyacrazy:
Shade some cells of the grid.
Shaded cells cannot be orthogonally adjacent; unshaded cells must be orthogonally connected.
When the puzzle is solved, you must not be able to draw a line segment that passes through two borders, and does not pass through any shaded cells or grid vertices.
For an example puzzle and its solution, see this question.
A:
Solution:
There are lots of quick and easy deductions at the bottom, starting with the two cells that on their own contain two edges and leading to this:
One of the two cells on the top edge where two lines meet must be shaded. If we try letting it be the right-hand one, we quickly get to this impossible situation
and so instead it must be the left-hand one:
One of the two currently-blank cells in column 3 must be shaded. Can it be the lower one? If so, then we have this
which is impossible because now lines emanating SE-ish from the third cell on the first row require two adjacent cells to be shaded. So in fact it was the upper of the two cells in column 3 that's shaded:
requiring the cell one left and one down from the top right corner to be shaded, which immediately requires all the other cells near it to be unshaded for connectivity
leading us rapidly to
and then
and we're done.
A:
Labelling the rows 1 (top) to 7 (bottom) and the columns A (left) to F (right), our first step is that
A5 and F6 are clearly shaded, which means A4, A6, B5, F5, F7, E6 are unshaded. Then B6 is also shaded, which means C6 and B7 are unshaded. For connectivity of unshaded squares, now A7, C7, and E7 must be unshaded.
Next target to be shaded is
E5, with unshaded around it, and then D6, with unshaded around it. For connectivity, now C5 must be unshaded. Then D4 must be shaded, with unshaded around it.
Now we have a few sets of squares of which at least one must be shaded:
D1 and E1, C1 and C2 and C3, D1 and E2. That means if E1 is shaded, then D1 is not shaded and so E2 is shaded, contradiction. So D1 is shaded, which means C1, E1, and D2 are unshaded, which means exactly one of C2 and C3 is shaded.
Trying a similar deduction technique again:
At least one of C2 and E3 must be shaded. So either C2 is shaded, or C3 and E3 both are. In the latter case, we get the following which is impossible (draw a line from A1 to B6):
So in fact
C2 is shaded, which means B2 and C3 are unshaded, and B1 for connectivity. Now E2 must be shaded (I think), and then everything else in the top right corner must be unshaded.
Finally,
B3 must be shaded, so B4 and A3 are unshaded, and A2 for connectivity. Then A1 must be shaded and we're done:
| {
"pile_set_name": "StackExchange"
} |
Q:
Microsoft Graph The token contains no permissions, or permissions cannot be understood
I am working with Microsoft Graph and have created an app that reads mail from a specific user.
However, after getting an access token and trying to read the mail folders, I receive a 401 Unauthorized response. The detail message is:
The token contains no permissions, or permissions cannot be understood.
This seems a pretty clear message, but unfortunately I am unable to find a solution.
This is what I have done so far:
- Registered the app on https://apps.dev.microsoft.com
- Given it the application permissions Mail.Read and Mail.ReadWrite (https://docs.microsoft.com/en-us/graph/api/user-list-mailfolders?view=graph-rest-1.0)
- Gotten administrator consent. The permissions are:
- Written the code below to acquire an access token:
// client_secret retrieved from secure storage (e.g. Key Vault)
string tenant_id = "xxxx.onmicrosoft.com";
ConfidentialClientApplication client = new ConfidentialClientApplication(
"..",
$"https://login.microsoftonline.com/{tenant_id}/",
"https://dummy.example.com", // Not used, can be any valid URI
new ClientCredential(".."),
null, // Not used for pure client credentials
new TokenCache());
string[] scopes = new string[] { "https://graph.microsoft.com/.default" };
AuthenticationResult result = client.AcquireTokenForClientAsync(scopes).Result;
string token = result.AccessToken;
So far so good. I do get a token.
Now I want to read the mail folders:
url = "https://graph.microsoft.com/v1.0/users/{username}/mailFolders";
handler = (HttpWebRequest)WebRequest.Create(url);
handler.Method = "GET";
handler.ContentType = "application/json";
handler.Headers.Add("Authorization", "Bearer " + token);
response = (HttpWebResponse)handler.GetResponse();
using (StreamReader sr = new StreamReader(response.GetResponseStream()))
{
returnValue = sr.ReadToEnd();
}
This time I receive a 401 message, with the details:
The token contains no permissions, or permissions cannot be understood.
I have searched the internet, but can’t find an answer to why my token has no permissions.
Thanks for your time!
update 1
If I use Graph Explorer to read the mail folders, it works fine. Furthermore: if I grab the access token from my browser and use it in my second piece of code, I get a result as well. So the problem is really the token I receive from the first step.
A:
To ensure this works like you expect, you should explicitly state for which tenant you wish to obtain the access token. (In this tenant, the application should, of course, have already obtained admin consent.)
Instead of the "common" token endpoint, use a tenant-specific endpoint:
string url = "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token";
(Where {tenant-id} is either the tenant ID of the tenant (a Guid), or any verified domain name.)
I would also strongly recommend against building the token request on your own, as you show in your question. This may be useful for educational purposes, but will tend to be insecure and error-prone in the long run.
There are various libraries you can use for this instead. Below, an example using the Microsoft Authentication Library (MSAL) for .NET:
// client_secret retrieved from secure storage (e.g. Key Vault)
string tenant_id = "contoso.onmicrosoft.com";
ConfidentialClientApplication client = new ConfidentialClientApplication(
client_id,
$"https://login.microsoftonline.com/{tenant_id}/",
"https://dummy.example.com", // Not used, can be any valid URI
new ClientCredential(client_secret),
null, // Not used for pure client credentials
new TokenCache());
string[] scopes = new string[] { "https://graph.microsoft.com/.default" };
AuthenticationResult result = client.AcquireTokenForClientAsync(scopes).Result;
string token = result.AccessToken;
// ... use token
| {
"pile_set_name": "StackExchange"
} |