Q:
Does MSVC have something like __builtin_va_arg_pack?
Does Visual C++ have something similar to __builtin_va_arg_pack?
This built-in function represents all anonymous arguments of an inline
function. It can be used only in inline functions which will be always
inlined, never compiled as a separate function, such as those using
__attribute__ ((always_inline)) or __attribute__ ((gnu_inline)) extern inline functions. It must be only passed as
last argument to some other function with variable arguments. This is
useful for writing small wrapper inlines for variable argument
functions, when using preprocessor macros is undesirable. For example:
extern int myprintf (FILE *f, const char *format, ...);
extern inline __attribute__ ((__gnu_inline__)) int
myprintf (FILE *f, const char *format, ...)
{
int r = fprintf (f, "myprintf: ");
if (r < 0)
return r;
int s = fprintf (f, format, __builtin_va_arg_pack ());
if (s < 0)
return s;
return r + s;
}
A:
Not that I know of. But there's no need to use a gcc extension here, use vfprintf instead:
#include <stdio.h>
#include <stdarg.h>

int myprintf (FILE *f, const char *format, ...)
{
    int r = fprintf (f, "myprintf: ");
    if (r < 0)
        return r;

    va_list ap;
    va_start (ap, format);
    int s = vfprintf (f, format, ap);
    va_end (ap);
    if (s < 0)
        return s;
    return r + s;
}
Q:
How can I load a geojson file in express.js
I am working on reading a geojson file in Node.js/express.js. I am reading "output.geojson".
I don't want to use JSON.parse, but I want to load it using express.js (or at least render it as JSON inside this function):
var o = require('output.geojson');
app.get('/points.geojson', function(req, res) {
res.json(o);
console.log(res)
});
But I am getting this error :
Users/macbook/leaflet-geojson-stream/output.geojson:1
(function (exports, require, module, __filename, __dirname) { "type":"FeatureC
^
SyntaxError: Unexpected token :
at Module._compile (module.js:439:25)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/Users/macbook/leaflet-geojson-stream/example/server.js:15:9)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
The geojson file look like .
{
"type": "FeatureCollection",
"features": [{
"geometry": {
"type": "MultiPolygon",
"coordinates": [
[
[
[-73.8283219965448, 40.8446061654002],
[-73.828397789942, 40.844583182304],
[-73.8285477331865, 40.8448132168025],
[-73.8284744943625, 40.8448401137412],
[-73.8283219965448, 40.8446061654002]
]
]
]
},
"type": "Feature",
, {
"geometry": {
"type": "MultiPolygon",
"coordinates": [
[
[
[-73.832361912256, 40.8488019205992],
[-73.832369554769, 40.8487286684528],
[-73.8327312374341, 40.8487518102579],
[-73.8327304815978, 40.8487590590352],
[-73.8327235953166, 40.8488250624279],
[-73.832361912256, 40.8488019205992]
]
]
]
},
"type": "Feature"
}
....
....
}
How can I load geojson file ?
A:
You are currently loading the whole JSON file into memory by 'requiring' it.
Instead you want to stream the file because it is big and so use the fs.createReadStream function:
var fs = require('fs');
app.get('/points.geojson', function(req, res) {
res.setHeader('Content-Type', 'application/json');
fs.createReadStream(__dirname + '/output.geojson').pipe(res);
});
Also make sure that the contents of /output.geojson is actually valid JSON. You can use JSONLint to check - the file should start with '{' or '[' and NOT have JavaScript functions inside.
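If you would rather have the server itself confirm that once at startup, here is a minimal sketch (this is only a one-off validity check; the streaming route above stays unchanged):
var fs = require('fs');
try {
    // Parse once at startup purely to surface syntax errors; the route still streams the raw file.
    JSON.parse(fs.readFileSync(__dirname + '/output.geojson', 'utf8'));
    console.log('output.geojson is valid JSON');
} catch (e) {
    console.error('output.geojson is NOT valid JSON: ' + e.message);
}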
Q:
ActionBar is null after migration to Gradle
This is a really weird error as far as I can understand.
I just migrated a project from ADT to Gradle. And the same exact code is now crashing on startup.
It seems the problem is that the ActionBar that previously was perfectly valid now is null.
The code is within a class that extends Activity and is called from within onCreate
setContentView(R.layout.activity_main);
ActionBar actionBar = getActionBar();
if (actionBar==null) Log.d(TAG,"AB null.");
Not sure what more code to post, since I'm really puzzled why this worked just 30 minutes ago and not at all now.
The project is not using any support package, and only targets 4.0 and later.
A:
Theme.Light does not have an action bar. That is the old Android 1.x/2.x theme, with the old title bar (thin grey strip with the app's name).
Theme.Holo.Light, and a targetSdkVersion of 11+, will give you an action bar.
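For reference, a minimal sketch of the manifest entries involved (the version numbers here are illustrative, picking API 14 for the 4.0 target mentioned in the question; adjust them to your project):
<uses-sdk android:minSdkVersion="14" android:targetSdkVersion="19" />
<application android:theme="@android:style/Theme.Holo.Light">
    <!-- activities as before -->
</application>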
Q:
Add objects to NSMutable array with grouping
I want my NSArray sampleData to receive actual data from the parse.com database, structured like this:
self.sampleData = @[ @{ @"date": @"12/5/2014",
@"group": @[ @{ @"text": @"post1", @"location": @"x,y" },
@{ @"text": @"post2", @"location": @"x,y" },
@{ @"text": @"post3", @"location": @"x,y" },
@{ @"text": @"post4", @"location": @"x,y" },
@{ @"text": @"post5", @"location": @"x,y" }
]
},
@{ @"date": @"12/3/2014",
@"group": @[ @{ @"text": @"post6", @"location": @"x,y" },
@{ @"text": @"post7", @"location": @"x,y" },
@{ @"text": @"post8", @"location": @"x,y" },
@{ @"text": @"post9", @"location": @"x,y" },
@{ @"text": @"post10", @"location": @"x,y" }
]
}
];
As you can see, I want to group text and location by date, so that I can display them in a view with date as header and text/location as content.
Here below is what I'm capable doing so far:
PFQuery *postQuery = [PFQuery queryWithClassName:kPAWParsePostsClassKey];
[postQuery whereKey:kPAWParseUserKey equalTo:[PFUser currentUser]];
postQuery.cachePolicy = kPFCachePolicyNetworkElseCache;
postQuery.limit = 20;
[postQuery findObjectsInBackgroundWithBlock:^(NSArray *myPosts, NSError *error)
{
if( !error )
{
NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setDateFormat:@"MM/dd/yyyy"];
NSMutableArray *objectArray = [NSMutableArray new];
for (PFObject *object in myPosts) {
[objectArray addObject:@{@"createdAt": [formatter stringFromDate:object.createdAt], @"text": [object objectForKey:@"text"], @"location": [object objectForKey:@"location"]}];
}
self.sampleData = objectArray;
NSLog(@"My sampleData --> %@", self.sampleData);
}
}
];
The above code obviously does no grouping whatsoever, so I really need help here.
A:
Okay, so you have an array of items, and you want to group them into sections based on a particular key.
NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setDateFormat:@"MM/dd/yyyy"];
// Sparse dictionary, containing keys for "days with posts"
NSMutableDictionary *daysWithPosts = [NSMutableDictionary dictionary];
[myPosts enumerateObjectsUsingBlock:^(PFObject *object, NSUInteger idx, BOOL *stop) {
NSString *dateString = [formatter stringFromDate:[object createdAt]];
// Check to see if we have a day already.
NSMutableArray *posts = [daysWithPosts objectForKey: dateString];
// If not, create it
if (posts == nil || (id)posts == [NSNull null])
{
posts = [NSMutableArray arrayWithCapacity:1];
[daysWithPosts setObject:posts forKey: dateString];
}
// add post to day
[posts addObject:object];
}];
// Sort Dictionary Keys by Date
NSArray *unsortedSectionTitles = [daysWithPosts allKeys];
NSArray *sortedSectionTitles = [unsortedSectionTitles sortedArrayUsingComparator:^NSComparisonResult(id obj1, id obj2) {
NSDate *date1 = [formatter dateFromString:obj1];
NSDate *date2 = [formatter dateFromString:obj2];
return [date2 compare:date1];
}];
NSMutableArray *sortedData = [NSMutableArray arrayWithCapacity:sortedSectionTitles.count];
// Put Data into correct format:
[sortedSectionTitles enumerateObjectsUsingBlock:^(NSString *dateString, NSUInteger idx, BOOL *stop) {
NSArray *group = daysWithPosts[dateString];
NSDictionary *dictionary = @{ @"date":dateString,
@"group":group };
[sortedData addObject:dictionary];
}];
self.sampleData = sortedData;
This code will not generate exactly what you asked for. It will generate something that looks like this:
sampleData = @[ @{ @"date": @"12/5/2014",
@"group": @@[ PFObject*,
PFObject*,
PFObject*,
PFObject*,
PFObject*,
]
},
@{ @"date": @"12/3/2014",
@"group": @[ PFObject*,
PFObject*,
PFObject*,
PFObject*,
PFObject*
]
}
];
There's no need to convert your PFObject* in the myPosts array into @{ @"text": @"post5", @"location": @"x,y" } since you'll lose access to other pieces of information. Here is how you would use this sampleData array.
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView; {
return self.sampleData.count;
}
- (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section; {
return self.sampleData[section][@"date"];
}
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section; {
return [self.sampleData[section][@"group"] count];
}
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath; {
PFObject *post = self.sampleData[indexPath.section][@"group"][indexPath.row];
UITableViewCell *cell = // dequeue A reusable tableviewcell here
// configure the cell here
return cell;
}
Q:
Let $x,y$ be nonnegative integers satisfying $(xy-7)^2 = x^2+y^2$. Prove that $x+y = 7$
Let $x,y$ be nonnegative integers satisfying $(xy-7)^2 = x^2+y^2$. Prove that $x+y = 7$.
We can rewrite the given equation as $$(x+y)^2-(xy-6)^2 = (x+y-xy+6)(x+y+xy-6) = 13.$$ How do we continue?
A:
Note that since $13$ is prime and the product is positive, the two factors must have the same sign, so either $$x+y-xy+6 = 1,\qquad x+y+xy-6 = 13 \tag{1}$$
or $$x+y-xy+6 = 13,\qquad x+y+xy-6 = 1 \tag{2}$$
or one of these two pairs with both signs reversed.
Adding the two equations in each case gives $$2(x+y)=\pm 14,\qquad\text{i.e.}\qquad x+y=\pm 7.$$ From this, using the condition that $x$ and $y$ are nonnegative integers, we see that $x+y=7$.
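For concreteness, the factor pairs also pin down the solutions: adding the factors gives $x+y=7$, while subtracting them gives $xy-6=\pm 6$, so $xy=12$ or $xy=0$. Hence $$(x,y)\in\{(3,4),(4,3),(7,0),(0,7)\},$$ and indeed, for example, $(3\cdot 4-7)^2=25=3^2+4^2$.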
Q:
Why is my Access Max() subquery not working?
I have a Microsoft Access table in which I want to return a set of records, but only for the most recent date.
My data has 24 records with 12 of these have RECORD_DATE=#23/04/2020#, the other 12 have RECORD_DATE=#24/04/2020# (ie. 12 with 23-April, 12 with 24-April); the field is set as Date/Time.
I have written a subquery to get the Max date for each record.
The problem I have is that my query executes, but returns all the records in the table (well, it applies the "WHERE Q.SITE_ID=?" correctly), not just those with the MAX(RECORD_DATE).
Here is the SQL:
SELECT
Q.CONSUMABLE_RECORD_ID,
Q.SITE_ID,
Q.CONSUMABLE_STORAGE_ID,
Q.CONSUMABLE_PRODUCT_ID,
Q.CONSUMABLE_MEASUREMENT_UNIT_ID,
Q.RECORD_DATE,
Q.RECORD_VALUE,
Q.LAST_DATE,
Q.LAST_VALUE
FROM
CONSUMABLE_RECORD AS Q
WHERE
Q.SITE_ID =?
AND
Q.RECORD_DATE=
(
SELECT
MAX(S.RECORD_DATE)
FROM
CONSUMABLE_RECORD AS S
WHERE
Q.CONSUMABLE_RECORD_ID=S.CONSUMABLE_RECORD_ID
)
If it makes any difference, CONSUMABLE_RECORD_ID is the Primary Key, and I am executing the query using an OleDbCommand via C#; the data provider I am using is Microsoft.ACE.OLEDB.16.0. I've also tried using MAX(CONSUMABLE_RECORD_ID) in the subquery, but that didn't work - I'd prefer to keep it as MAX(RECORD_DATE), as theoretically records could be entered out-of-sequence on date.
What do I need to do to get this working? I tried 'TOP 1' in the subquery, and still get the 24 rows back.
I would prefer to keep this as a subquery rather than an Inner Join etc.
Edit:
This is different from This answer. In that one, they are looking for the top 3 results in a group - I just want the maximum record. I tried adding TOP 1 to the start of the query, and an ORDER BY (which necessitates a GROUP BY), but I'm still getting all the records back.
Edit 2:
Sample Data:
The desired result is just all the records with RECORD_DATE=#24/4/20#.
A:
It would seem that you don't want a correlated subquery. This seems like a concise way of writing what you want:
SELECT cr.*
FROM Consumable_Record as cr
WHERE cr.Site_ID = ? AND
cr.Record_Date = (SELECT MAX(cr2.Record_Date)
FROM Consumable_Record as cr2
);
Note the use of meaningful table aliases. Q and S don't mean anything with respect to your data. CR is an abbreviation for the table name.
The above will not return records if the site doesn't have any on that day. If you want the most recent records per site, then use that in the correlation clause:
SELECT cr.*
FROM Consumable_Record as cr
WHERE cr.Site_ID = ? AND
cr.Record_Date = (SELECT MAX(cr2.Record_Date)
FROM Consumable_Record as cr2
WHERE cr2.Site_Id = cr.Site_Id
);
I suspect this is what you really want.
Q:
Who is a virtuous agent in virtue ethics?
Who exactly is a virtuous agent? I know a virtuous agent, at least in Aristotle's view, is one that acts in accordance with reason. But is this agent myself, or someone I know, or someone I look up to, or a random person, or anyone?
A:
It's anyone with the capacity to use reason to decide what to do, who decides well, and then develops good habits by repeatedly doing the right thing. ("Agent" just means someone/something who is capable of doing things. It comes from the Latin word for "doing.") Such people can use reason to act in one way or another, and thereby acquire the habit of doing things that cause them to be happy and flourishing (virtues), or the things that cause them not to be, which are vices. So to actually be virtuous such a person needs to use his or her reason and will to choose the right things to do, and thereby develop the habit of doing those things.
Q:
Embedding a YouTube channel
Recently I did a small project for a community. Now their requirement is to include a block that contains a YouTube video frame where they have to display their YouTube channel. So I have installed the emfield module, but I don't know how to implement that. Are there any suggestions on which module is suitable for this purpose, and how to use emfield?
A:
As far as I know, every YouTube channel has its own embed code. Try embedding this code in your block and make sure "Full HTML" is set as the text format.
Look here as well: Embed You-Tube channel
Q:
LOAD DATA INFILE ... SET column = NULLIF(column, 'NULL') gets incorrect value error
On MariaDB-10.2.7, with a table schema:
CREATE TABLE items (
id BIGINT(20) NOT NULL,
deleted_at TIMESTAMP NULL
) ENGINE=innodb ;
The query:
LOAD DATA INFILE '/items.csv'
INTO TABLE items
SET deleted_at = NULLIF(deleted_at, 'NULL') ;
items.csv (tab separated):
1 NULL
2 2019-07-24
The result:
ERROR 1292 (22007): Incorrect datetime value: 'NULL' for column 'deleted_at' at row 1
In the CSV, some of the deleted_at values are the string NULL (not \N). I'd like to convert them to NULL when running LOAD DATA.
A:
I think you need to do it in 2 steps:
LOAD DATA ...
(col1, col2, @deleted_at, col4)
SET deleted_at = NULLIF(@deleted_at, 'NULL')
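Applied to the table and file from the question (assuming the file really has just the two tab-separated columns shown), that looks like:
LOAD DATA INFILE '/items.csv'
INTO TABLE items
(id, @deleted_at)
SET deleted_at = NULLIF(@deleted_at, 'NULL');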
Q:
What is the usage of adding Bootstrap css file and js files reference in angularjs application?
It is recommended to add Bootstrap references like bootstrap.css, bootstrap.min.css, bootstrap-responsive.css and the associated js files in an angularjs app. What is the effect of adding these to the application?
A:
Bootstrap is the most popular HTML, CSS, and JS framework for developing responsive, mobile first projects on the web.
So, using Bootstrap will definitely reduce the time needed for front-end development.
Just check the link.
Q:
the degree of a map from $S^2$ to $S^2$
Is the degree of $\pi\circ p$ $0$?
$\pi\circ p:S^{2}\xrightarrow{p}\mathbb{R}P^2=S^{1}\bigcup_{2}e^{2}\xrightarrow{\pi}S^{1}\bigcup_{2}e^{2}/S^{1}=S^2$, where $e^2$ is a unit disk. If it is true, can anyone give me a brief proof? Thanks!
A:
If $\pi\circ p$ were non-trivial, say of degree $d\neq 0$, then it would induce multiplication by $d$ on $H_2S^2$. But since $H_2\mathbb{R}P^2=0$ we have
$(\pi\circ p)_*=\pi_*\circ p_*:H_2S^2\rightarrow H_2\mathbb{R}P^2=0\rightarrow H_2S^2$
factoring through the trivial group. Therefore $(\pi\circ p)_*$ is multiplication by $0$, so we must have $d=0$ and $\pi\circ p\simeq \ast$.
Q:
Getting array index out of bounds when splitting a string in android
I am working on android. I have a string like below.
String str = "@test1|test2|test3@test4|test5|test6|test7@test8|test9|test10|test11@test12|test13|test14|test15@......"
I am splitting the above string with "@" using String[] str1 = str.split("[@]");
so that the final results are
str1[0] = test1|test2|test3
str1[1] = test4|test5|test6|test7
...
The code snippet I used
for (int x = 1; x < str1.length; x++) {
String[] st1 = str1 [x].trim().split("\\|");
System.out.println("first" + st1[0].trim());
System.out.println("second" + st1[1].trim());
System.out.println("third" + st1[2].trim());
System.out.println("fourth" + st1[3].trim());
List1.add(st1[0].trim());
List2.add(st1[1].trim());
List3.add(st1[2].trim());
List4.add(st1[3].trim());
}
In the above for loop, when the loop starts from x=2 it is working fine. But if I start from x=1 then it is throwing an array
index out of bounds exception at "System.out.println("fourth" + st1[3].trim());". Is it because str1[0] consists of only 3 items whereas the remaining ones consist of 4 items? So now I am unable to get the fourth value after splitting with "|". Please tell me how to get the fourth value. Thanks in advance.
A:
The ArrayIndexOutOfBoundsException occurs because you are accessing the fourth value when a group has fewer than four parts. You can handle it in the following two ways:
String[] str1 = str.split("@");
for (int x = 1; x < str1.length; x++) {
String[] st1 = str1[x].trim().split("\\|");
for (int j = 0; j < st1.length; j++) {
System.out.println(st1[j]);
}
}
or
String[] str1 = str.split("@");
for (int x = 1; x < str1.length; x++) {
String[] st1 = str1[x].trim().split("\\|");
/*for (int j = 0; j < st1.length; j++) {
System.out.println(st1[j]);
}*/
if(st1.length> 0 )
System.out.println("first" + st1[0].trim());
if(st1.length> 1 )
System.out.println("second" + st1[1].trim());
if(st1.length> 2 )
System.out.println("third" + st1[2].trim());
if(st1.length> 3 )
System.out.println("fourth" + st1[3].trim());
}
Q:
Using SQLCMD.exe for an initial check of server access
Hello. After installing MS SQL Server 2005 and Management Studio, I needed to check whether everything works correctly. I have the following code:
C:\Documents and Settings\pc77>"C:\Program Files\Microsoft SQL Server\90\Tools\b
inn\SQLCMD.EXE" -S 192.168.153.130\SQLEXPRESS,1433 -U sa -P 123 -W
1> select name, database_id, source_database_id, owner_sid from sys.databases
2> go name database_idsource_database_idowner_sid
If I understand correctly, this code outputs the names of the databases from sys.databases and their ids? What does line 2 do? And is the server IP the PC's IP?
If I have it all wrong, please tell me the right answer and how to find out the server's IP.
A:
If the server is local, then the IP 127.0.0.1 will do (it won't only in very badly broken, exotic cases). If it is remote, ask the server administrator for its IP.
Beyond that, I have never even tried to master MS SQL to that degree, but I can guess the following:
1) On the first point, you probably understand it correctly.
2) The second line is a continuation of the first: the go command runs the query written above, and name etc. is the output of the query result.
Q:
pass argument to link and update table
I am not that good in PHP so you might find this simple.
This PHP is not updating the database:
<?php error_reporting(E_ALL); ini_set('display_errors', 1);
//credentials...
$id= intval($_GET['id']);
$likes= intval($_GET['likes']);
$con = mysqli_connect($host,$uname,$pwd,$db) or die(mysqli_error());
$sql1="UPDATE OBJECTS SET LIKES='$likes' WHERE ID='$id'";
$result = mysqli_query($con,$sql1);
?>
I tried to run the link http://justedhak.com/old-files/singleactivity.php/id=1&likes=1 by passing the arguments, but nothing happens.
A:
Your url has an error. The GET variables are delimited by a ? after the page address.
http://justedhak.com/old-files/singleactivity.php/id=1&likes=1
http://justedhak.com/old-files/singleactivity.php?id=1&likes=1
Change / to ?
Q:
An unfair marriage lemma
I am looking for a citeable reference to the following generalization of Hall's Marriage Theorem:
Given a bipartite graph of boys and girls. In addition to gender difference, they are divided into 1st and 2nd class citizens. Suppose that Hall's condition is satisfied for 1st class citizens. That is, for every set $M\subset\{\text{1st class boys}\}$ there are at least $|M|$ girls (from both classes) adjacent to $M$. And similarly for every $W\subset\{\text{1st class girls}\}$ there are at least $|W|$ boys adjacent to $W$. Then there exists a matching covering all 1st class citizens.
I need this fact as a lemma in a paper on geometric analysis. The proof is more or less straightforward but it occupies some space when written down. And I suspect that the fact may be well-known to specialists. Is this indeed the case and what are relevant references?
A:
In this answer I sketch an easy proof of your lemma and then give some references.
The easy proof uses Knaster's fixed point theorem:
THEOREM. Let $S$ be any set (finite or infinite) and let $\varphi:\mathcal P(S)\to\mathcal P(S)$ be an order-preserving map, i.e., $X\subseteq Y\implies\varphi(X)\subseteq\varphi(Y).$ Then $\varphi$ has a fixed point.
PROOF. Let $X_0=\bigcup\{X\in\mathcal P(S):X\subseteq\varphi(X)\}.$ It is easy to see that $\varphi(X_0)=X_0.$
(Knaster's fixed point theorem was set as a Putnam Problem in 1957. The generalization to complete lattices is called the Knaster-Tarski Theorem.)
Now let $B_1,B_2,G_1,G_2$ be the set of all first-class boys, second-class boys, first-class girls, and second-class girls, respectively; $B=B_1\cup B_2,\ G=G_1\cup G_2,\ B_1\cap B_2=G_1\cap G_2=\emptyset.$ By Hall's theorem there are matchings $f:B_1\to G$ and $g:G_1\to B.$ Define an order-preserving map $\varphi:\mathcal P(B_1)\to\mathcal P(B_1)$ by setting, for $X\subseteq B_1,$
$$\varphi(X)=B_1\setminus g[G_1\setminus f[X]].$$
By Knaster's fixed point theorem we have $\varphi(X_0)=X_0$ for some $X_0\subseteq B_1$. Now we can match the boys in $X_0$ with the girls in $f[X_0]$ and the girls in $G_1\setminus f[X_0]$ with the boys in $g[G_1\setminus f[X_0]].$
Here are some references related to your lemma:
L. Mirsky, Transversal Theory, Academic Press, New York and London, 1971.
L. Mirsky and Hazel Perfect, Systems of representatives, J. Math. Analysis Appl. 15 (1966), 520-568.
O. Ore, Theory of Graphs, Amer. Math. Soc. Colloquium Publications No. 38, Providence, 1962 [Theorem 7.4.1].
Here is how your lemma is stated on p. 36 of Mirsky's book:
THEOREM 2.3.1. Let $(X,\Delta,Y)$ be a deltoid and let $X',Y'$ be admissible subsets of $X,Y$ respectively. Then there exist linked sets $X_0,Y_0$ such that $X'\subseteq X_0\subseteq X,\ Y'\subseteq Y_0\subseteq Y.$
The jargon is defined on pp. 33-34. Namely, a deltoid $(X,\Delta,Y)$ is a bipartite graph with partite sets $X,Y$ and edge set $\Delta$; a set $X'\subseteq X$ is admissible if there is an injective matching of $X'$ to $Y$; a set $Y'\subseteq Y$ is admissible if there is an injective matching of $Y'$ to $X$; two sets $X_0\subseteq X,Y_0\subseteq Y$ are linked if there is a bijective matching of $X_0$ to $Y_0.$
P.S. Sergei Ivanov has commented that the result, as stated in the question, can be traced back to Dulmage & Mendelsohn, Coverings of bipartite graphs, Canad. J. Math. 10 (1958) 517-534, Theorem 1.
A:
I have not seen it stated anywhere, but I would call it a corollary of Hall. It is in fact very natural, so it would not surprise me if it has been stated before.
Simplest proof I can come up with using Hall:
Hall gives you a matching from the first class boys into the girls, and a matching from the first class girls into the boys, record all these edges with this direction to get a directed graph, which consists of directed paths and even cycles. Delete every other edge in every cycle, and delete the second, fourth, etc. edge of every path. Note that the last vertex in every path is 2nd class, so this yields a matching covering all first class vertices.
Q:
Multiply Dense Rectangular Matrix by Sparse Matrix
I'm using Python, Numpy and Scipy packages to do matrix computations. I am attempting to perform the calculation X.transpose() * W * X where X is a 2x3 dense matrix and W is a sparse diagonal matrix. (Very simplified example below)
import numpy
import scipy.sparse as sp
X = numpy.array([[1, 1, 1],[2, 2, 2]])
W = sp.spdiags([1, 2], [0], 2, 2).tocsr()
I need to find the product of the Dense Matrix X.transpose and sparse matrix W.
The one method that I know of within scipy does not accept a sparse matrix on the right hand side.
>>> sp.csr_matrix.dot(X.transpose(), W)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method dot() must be called with csr_matrix instance as first argument (got ndarray instance instead)
Is there a way to multiply a sparse and dense matrix where the sparse matrix is the term on the right within scipy? If not, what is the best way to do this without turning my W into a dense matrix?
A:
Matrix multiplication is associative, so you can always first compute W * X:
>>> X.T.dot(W.dot(X))
array([[9, 9, 9],
[9, 9, 9],
[9, 9, 9]])
If you really have to compute X.T * W, the first dense, the second sparse, you can let the sparse matrix __mul__ method take care of it for you:
>>> X.T * W
array([[1, 4],
[1, 4],
[1, 4]])
Actually, for your use case, if you use np.matrix instead of np.array, your particular operation becomes surprisingly neat to code:
>>> Y = np.matrix(X)
>>> Y.T * W * Y
matrix([[9, 9, 9],
[9, 9, 9],
[9, 9, 9]])
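On current NumPy/SciPy versions, where np.matrix is discouraged, the same computation is usually written with the @ operator instead; a sketch using the X and W from the question:
import numpy
import scipy.sparse as sp

X = numpy.array([[1, 1, 1], [2, 2, 2]])
W = sp.spdiags([1, 2], [0], 2, 2).tocsr()

# W @ X is sparse-times-dense and comes back dense, so the outer
# product is ordinary dense matrix multiplication.
result = X.T @ (W @ X)
print(result)  # a 3x3 result with 9 in every entry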
Q:
freebsd does not recognise that php was installed via ports
I have php 5.2.12 installed on FreeBSD 8.0-STABLE. It was installed from ports and I am trying to upgrade it to 5.3.2.
However, for some reason my system is not recognising that php was installed via ports. When I run "pkg_version" the list does not include php; it does, however, include all the extensions that I have installed.
I have even tried to do "make deinstall" on "/usr/ports/lang/php5"; it told me that the port had been deinstalled, but php still appears to be working correctly, i.e. "php -v" works.
Any ideas on how this port has become detached from the ports system, and how I can get the ports system to recognise that it installed php?
EDIT: When I run "make deinstall" over and over again I always get the same answer
Deinstalling for lang/php5
I never get
php52 not installed, skipping
which is what I am expecting after the first time I run "make deinstall"
A:
I am not sure why but the answer was to rebuild all of the ports
portupgrade -a
after running this pkg_version now realises that php is installed.
Fortunately this is not a production machine, so this was not a problem. If I get this happening on a production machine I think I will need a better answer, so if anyone has an explanation as to why portupgrade -a might have fixed my issue, that would be very helpful.
Q:
Can't get only the max date for a number of transactions
I execute this query:
SELECT distinct(accounts.account_id), transactions.trn_code , transactions.trn_date
FROM accounts join transactions ON accounts.account_id = transactions.account_id
WHERE accounts.account_currency = 'USD' AND accounts.account_cards_linked > 1
AND transactions.trn_date >= '2015/03/01' AND transactions.trn_date <= '2015/03/31'
GROUP BY transactions.trn_code , transactions.trn_date,accounts.account_id
HAVING transactions.trn_date >= MAX(transactions.trn_date)
ORDER BY accounts.account_id;
and I get these results:
click image to see the result grid
My problem is that I want each account to appear only with the transaction with the latest date. But right now, if an account has more than one transaction, they all appear. (For example, I want the account 912...129 to appear only once, with the latest day, 2015/03/05 - see the image for the example.)
Any ideas??
A:
You need to do a JOIN on MAX(trn_date) of each account:
SELECT
a.account_id,
t.trn_code,
t.trn_date
FROM accounts a
INNER JOIN transactions t
ON a.account_id = t.account_id
INNER JOIN (
SELECT
account_id, MAX(trn_date) AS trn_date
FROM transactions
WHERE
trn_date >= '2015/03/01'
AND trn_date <= '2015/03/31'
GROUP BY account_id
)td
ON t.account_id = td.account_id
AND t.trn_date = td.trn_date
WHERE
a.account_currency = 'USD'
AND a.account_cards_linked > 1
ORDER BY a.account_id
Note: Use meaningful table aliases to improve readability and maintainability.
Q:
DNS mapping to subfolders
I have a use case where I have a DNS domain with the name www.example.com which points to test.com/abcd.
Now I want to create one more DNS record which should point www.example.com/test2 to
test2.com/abcd.
www.example.com is just a domain name and I don't have any server running on it.
A:
It sounds like you have a CNAME record like so:
www.example.com CNAME test.com
And code on test.com that performs a redirect, probably for all resources at www.example.com:
if (req.headers['Host'] === 'www.example.com') {
res.writeHead(301, { 'Location': 'http://test.com/abcd' + req.url });
res.end();
}
Since www.example.com is served by test.com, then you need to include another redirect on the server, which can either happen when clients land at http://www.example.com/test2 or when they land at http://test2.com/abcd/test2.
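A sketch of that extra redirect in the same style as the handler above (the framework and the exact header lookup are assumptions carried over from that snippet, not something known about your setup):
if (req.headers['Host'] === 'www.example.com' && req.url.indexOf('/test2') === 0) {
    // Send /test2 (and anything beneath it) to test2.com/abcd, preserving the remainder of the path.
    res.writeHead(301, { 'Location': 'http://test2.com/abcd' + req.url.substring('/test2'.length) });
    res.end();
}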
Q:
AEM 6: SlingBindings object is null
I have created sling:OsgiConfig nodes inside config folders like config.author, config.publish and so on. I am trying to fetch properties from these nodes by doing something like this:
public static List fetchTokenLinksFromOsgiConfig(final SlingHttpServletRequest slingRequest) throws IOException {
List<String> tokenlinksList = new ArrayList<String>();
SlingBindings bindings = (SlingBindings) slingRequest.getAttribute(SlingBindings.class.getName());
log.info("=================inside fetchTokenLinksFromOsgiConfig======================"+bindings);
SlingScriptHelper sling = bindings.getSling();
Configuration conf = sling.getService(org.osgi.service.cm.ConfigurationAdmin.class).getConfiguration("com.xxxxx.TokenLinksConfig");
log.info("=================inside fetchTokenLinksFromOsgiConfig:::taking configurations======================");
String TokenId = (String) conf.getProperties().get("TokenId");
String TokenSecret = (String) conf.getProperties().get("TokenSecret");
String OAuthLink = (String) conf.getProperties().get("OAuthLink");
log.info("=================TokenId:::TokenSecret:::OAuthLink======================"+TokenId +" "+TokenSecret+" "+OAuthLink);
if(!StringUtil.isEmpty(TokenId)) {
tokenlinksList.add(TokenId);
}
if(!StringUtil.isEmpty(TokenSecret)) {
tokenlinksList.add(TokenSecret);
}
if(!StringUtil.isEmpty(OAuthLink)) {
tokenlinksList.add(OAuthLink);
}
return tokenlinksList;
}
I am calling this method from a sling servlet like this:
List tokenList = OsgiConfigUtil.fetchTokenLinksFromOsgiConfig(slingRequest);
but the bindings object of type SlingBindings is null. I have no idea how to work this out.
Thanks in advance
A:
A Sling servlet is an OSGi component, so you can inject the ConfigurationAdmin service directly, using the SCR @Reference annotation:
public class MyServlet extends SlingSafeMethodsServlet {
    @Reference
    private ConfigurationAdmin confAdmin;

    @Override
    protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response)
            throws ServletException, IOException {
        confAdmin.getConfiguration("com.myuhc.TokenLinksConfig");
    }
}
No need to use the SlingBindings object, which is meant to provide OSGi services inside the JSP scripts.
Q:
Can XSD define a wildcard complex type?
Say I don't know what an element's name will be, but I do know what its children will be. For example, the names "foo" and "bar" are not prescribed but "A", "B" & "C" are.
<example>
<foo>
<A>A</A>
<B>B</B>
<C>C</C>
</foo>
<bar>
<A>A</A>
<B>B</B>
<C>C</C>
</bar>
</example>
I cannot leave the name attribute out because that's a violation. I would expect to be able to do this instead:
<xs:element name="example">
<xs:complexType>
<xs:sequence maxOccurs="unbounded">
<xs:any>
<xs:complexType>
<xs:sequence>
<xs:element name="A" type="xs:string"/>
<xs:element name="B" type="xs:string"/>
<xs:element name="C" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:any>
</xs:sequence>
</xs:complexType>
</xs:element>
This does not work either, <xs:any> can only contain annotations and refuses a type.
Is there something I can do with namespaces that will work with unknown element names? Should I give up, not attempt to validate the children and just document what the contents must be?
A:
You could try doing this with substitution groups:
<xs:element name="example">
<xs:sequence>
<xs:element ref="ABCSequence" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:element>
<xs:element name="ABCSequence" abstract="true" type="ABCSeq" />
<xs:complexType name="ABCSeq">
<xs:complexContent>
<xs:sequence>
<xs:element name="A" type="xsd:string" />
<xs:element name="B" type="xsd:string" />
<xs:element name="C" type="xsd:string" />
</xs:sequence>
</xs:complexContent>
</xs:complexType>
<xs:element name="foo" substitutionGroup="ABCSequence" type="ABCSeq" />
<xs:element name="bar" substitutionGroup="ABCSequence" type="ABCSeq" />
I'm not sure if that will allow arbitrary external elements to be added in without declaring their types (via xsi:type attributes) but it does at least allow describing the sort of document you're after.
A:
You cannot quite achieve what you want to do there in XML Schema. There is a close solution, but not quite what you want:
<xs:element name="example">
<xs:complexType>
<xs:sequence maxOccurs="unbounded">
<xs:any namespace="##other" processContents="lax"/>
</xs:sequence>
</xs:complexType>
</xs:element>
Now you could provide separate schemas for the elements that can occur in there, e.g. one for foo, under a separate namespace and in a separate schema file:
<xs:element name="foo">
<xs:complexType>
....
</xs:complexType>
</xs:element>
That's about all you can do (it's your "multiple namespaces" solution). You can't avoid listing the elements entirely. If that's not good enough, then <xsd:any> with processContents set to skip is your only solution, followed by external validation (code, Schematron, etc)
Q:
Asp.net mvc identity current user path to razor view return error / The model item passed into the dictionary
I am using ASP.NET Identity 2.0 and trying to pass the current user to a Razor view. Here is my code:
public ActionResult Settings()
{
string currentUserId = User.Identity.GetUserId();
var user = _db.Users.Find(currentUserId);
return View(user);
}
But in Razor I'm getting the following error:
The model item passed into the dictionary is of type 'System.Data.Entity.DynamicProxies.User_D3D98E327FE171A79BDF8C79D31176E467C1EAF139BF185F0608911A37B99ECA',
but this dictionary requires a model item of type 'ExamsTraining.Models.ExternalLoginListViewModel'.
It says that the Razor view needs ExternalLoginListViewModel, but in Razor I have the model:
@model ExamsTraining.Models.User
I'm passing a correct model...
A:
From your exception I assume that you are loading the wrong view. Go to the Views/{Controller name folder}/Settings.cshtml view and make sure that the correct model is declared there.
The problem could be deeper if you are using Html.Partial in the view or layout.
Q:
Proving that the roots of a complex equation lie within a circle
Prove that, for integral values of $n\ge 1$, all the roots of the equation $$nz^n=1+z+z^2+...+z^n$$ lie within the circle $\vert z\vert=\frac{n}{n-1}$
Taking modulus on both sides, $$n\vert z\vert^n=\vert1+z+z^2+...+z^n\vert$$
Using triangle inequality,
$$n\vert z\vert^n\le 1+\vert z\vert+\vert z\vert^2+...+\vert z\vert^n$$
$$\vert z\vert^n(n-1)\le 1+\vert z\vert+\vert z\vert^2+...+\vert z\vert^{n-1}$$
Using sum of GP formula,
$$(n-1)\vert z\vert^n\le\frac{\vert z\vert^n-1}{\vert z\vert -1}$$
$$(n-1)\frac{\vert z\vert^n}{\vert z\vert^n-1}\le\frac{1}{\vert z\vert -1}$$
(I am not sure about the above step because I am multiplying with a number that can be negative.)
How should I proceed?
A:
$z=0$ is never a solution so we can safely rewrite the equation as
$$n=\frac1{z^n}+\cdots+\frac1z+1.$$
If $z$ is a solution with norm strictly greater than $n/(n-1)$ then the first $n$ terms on the right hand side are strictly less (in absolute value) than $(n-1)/n$ so we would have
$$n<n\frac{(n-1)}n+1$$
which is clearly impossible.
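Spelling out the estimate: write $r=\vert z\vert$, so $r>n/(n-1)$ gives $1/r<(n-1)/n<1$, and each of the $n$ terms $1/z,\dots,1/z^n$ has modulus at most $1/r<(n-1)/n$. Hence $$n=\left|1+\frac1z+\cdots+\frac1{z^n}\right|\le 1+\frac1r+\cdots+\frac1{r^n}<1+n\cdot\frac{n-1}{n}=n,$$ a contradiction.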
Q:
Business Catalyst Responsive Catalog Tables
So the unfortunate thing about BC is that it uses tables to contain its product/catalog grids.
I have my catalog page set to display 4 columns of items. The problem with this is it remains 4 on a mobile device. I tried to add a clear to the catalogueItem and it's having none of it.
Does anyone know how I can make it only display 1 column when using a mobile device?
The page in question is here.
A:
You can turn off tables for your products/catalog grids.
This is the code that is found in the Layouts > OnlineShop > page_contact.html
This is an example:
{tag_productlist, perRow, target, perPage, sort, hideEmptyMessage, listType}
set the last option, listType, to true and it renders as an unordered list, then it is simple to do with whatever grid system you are using.
Example: {tag_productlist,3,,50,,true,true}
Q:
What equipment should I use to record a conversation, cheaply?
I do work with endangered languages.
Sometimes we get a chance to have a conversation with one of the few remaining speakers of a language, and we'd like to get the best possible recording of their voice, so we can study it later. We'd also like good video of their upper body, as that's an important part of language.
Here's an example, with Yiddish: http://vimeo.com/20128505
Often the spaces are far from ideal - outdoors with wind or traffic, or kids playing in the same room.
Our budget is tiny. We can't afford much equipment. We usually can't dedicate a person to operating equipment - it should be fire-and-forget.
We can't put an omnidirectional mic in the middle, as we need that space to be open. A lapel mic seems like a good choice, since only the fluent speaker's voice really matters, and it's unobtrusive.
We also do a group practice conversation. Here's an example, with Squamish: http://youtu.be/UQImIstZpTM
Would it be sufficient to use a $20 lapel mic, a $20 tripod, and a basic point-and-shoot camera?
A:
Endangered languages, cool gig. Your question of "can I get away with it" is more philosophical than a gear question.... sure you can get away with it, but if you value your work, then you probably want to take it up a level. Since it sounds like (no pun intended) audio would be a priority for you, I would find a used Zoom digital recorder, like the H1 or even better a H2... if you place one of those just outside of frame you will have editable quality audio. Since picture is secondary, use whatever point and shoot you can get your hands on... Most P and S cameras don't have a mic input so you would be out of luck plugging a mic into it.
Q:
What is state management in angular? and why should I use it?
I am new to Angular and this question might be very broad, but I am interested in learning more about state management. Recently one of our projects implemented state management using the NGXS library. However, I am trying to understand what advantages it brought to the application.
The implementation is very deep. At a high level, there are some actions which carry application data (as set by the user) and listeners to those actions which process the request and dispatch to the next step as required. How is this different, in terms of application usage or performance etc., from a general Angular application? I am at the beginning stage of understanding state management, so I feel like I am writing so much code that may not really be required. For example - just to route to another page, I had to implement a state model to hold the object and declare an action and a listener to implement that action.
I am going over several pieces of documentation and getting details on how state management can be implemented, but not getting the right answer for why state management should be implemented.
Thank you in advance!
A:
First, to answer your question, you should know that State Management is not a term of Angular, and you don't have to use it. State Management is a pattern to implement the CQRS principle, and I quote Wikipedia:
It states that every method should either be a command that performs an action, or a query that returns data to the caller, but not both.
State Management acts as a single source of truth for your application.
You can build an app without State Management. You can use only services, and you're good to go. Adding State Management to your application would add some complexity and boilerplate, but then you'll have the benefits of (quoted from https://stackoverflow.com/a/8820998/1860540):
Large team - You can split development tasks between people easily if you have chosen CQRS architecture. Your top people can work on domain logic leaving regular stuff to less skilled developers.
Difficult business logic - CQRS forces you to avoid mixing domain logic and infrastructural operations.
Scalability matters - With CQRS you can achieve great read and write performance, command handling can be scaled out on multiple nodes and as queries are read-only operations they can be optimized to do fast read operations.
The most popular Angular's State Management libraries are NGRX and NGXS.
I won't elaborate on NGRX, but in short - it has proven itself in real production applications.
However, NGXS is a younger state management library for angular that adopted some ideas of NGRX and "empowered" it by using the tools that Angular provides (such as DI). The main difference between NGRX and NGXS is that your boilerplate is significantly less on NGXS. If you're interested, you can read the main reason Why another state management for Angular.
So to sum it up - If you're planning on building a large scaled application, then you should consider using State Management, although - you don't have to.
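To make the action/listener vocabulary from the question concrete, here is a minimal NGXS-style sketch (the names AddItem and CartState are invented for illustration; the decorators are the standard @ngxs/store API):
import { Injectable } from '@angular/core';
import { State, Action, StateContext, Selector } from '@ngxs/store';

// An action carries the data set by the user.
export class AddItem {
  static readonly type = '[Cart] Add item';
  constructor(public readonly name: string) {}
}

export interface CartStateModel {
  items: string[];
}

// The state class is the single source of truth and reacts to dispatched actions.
@State<CartStateModel>({
  name: 'cart',
  defaults: { items: [] }
})
@Injectable()
export class CartState {
  @Selector()
  static items(state: CartStateModel): string[] {
    return state.items;
  }

  // The "listener": runs whenever AddItem is dispatched and produces the new state.
  @Action(AddItem)
  addItem(ctx: StateContext<CartStateModel>, action: AddItem): void {
    const current = ctx.getState();
    ctx.setState({ items: [...current.items, action.name] });
  }
}

// A component never mutates the data directly; it only dispatches:
//   this.store.dispatch(new AddItem('book'));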
Q:
Collection calculating too much?
Let's say I've abstracted away a data structure such as a list; should this new object perform all the calculations or just provide minimal calculation functions and then allow a new object such as ReportGenerator to perform more large-scale calculations? Example below:
THIS?
class WrappedCollection
{
public double CalculateByYear(int year) {...}
public double CalculateCumulativeByYear(int year) {...}
public double CalculateTotal() {...}
private IList<double> collection = new List<double>();
}
OR THIS?
class WrappedCalculation
{
public double CalculateByYear(int year) {...}
public double CalculateTotal() {...}
private IList<double> collection = new List<double>();
}
class ReportGenerator
{
public ReportGenerator(WrappedCollection collection) {...}
public double TotalByYear(int year) {...}
public double CumulativeTotalByYear(int year) {...}
public double Total() {...}
private WrappedCollection collection;
}
This is a greatly simplified version; however, I'm starting to notice that in my WrappedCollection I am needing more methods and I feel as if I'm violating the SRP. The WrappedCollection class should just manage what I want (e.g. someone's retirement portfolio), and the reporting should left to another class that does all the calculations.
I seem to remember an example from Uncle Bob Martin or Martin Fowler that showed pushing all the data gathering further down the hierarchy into the collections themselves. This seems to generate more than 1 responsibility.
The project is comparing retirement portfolios, so I don't believe someone's 401k should give me all the metrics like cumulative contributions, etc. It should just give me its current total, and maybe a contribution for a given year based on type (e.g. employer vs employee contribution). Another class can then compile a list of the cumulative contributions. Yes?
A:
Yet another developer torpedoed by the SOLID principles.
Responsibility doesn't mean "do only one thing." It means "have only one reason to change," or more specifically, "This is the place to go to make modifications for this area of concern." Having more methods on your class doesn't necessarily mean you're violating SRP.
A repository doesn't have four responsibilities because it has Create, Read, Update and Delete methods. It has only one: data access.
The SOLID principles exist to suggest ways to improve your software's maintainability. It is maintainability you should be striving for, not slavish adherence to arbitrary principles.
A:
Your portfolio class violates the Single Responsibility Principle because it has methods for performing tasks from two largely orthogonal groups - namely, maintaining portfolio's content, and computing reporting metrics based on that content.
Separating out the two the way you did provides an improvement, but you could go even further by giving WrappedCalculation an interface, say, IWrappedCalculation, and coding up ReportGenerator in terms of that interface. This way you would be able to reuse the report generator that you wrote for retirement portfolios to produce reports for portfolios of other kind - say, non-retirement portfolios, or combinations of several portfolios.
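A minimal sketch of that interface extraction (the member bodies are stubs and the names mirror the question's classes; nothing here is from the original code base):
using System.Collections.Generic;

public interface IWrappedCalculation
{
    double CalculateByYear(int year);
    double CalculateTotal();
}

class WrappedCalculation : IWrappedCalculation
{
    private readonly IList<double> collection = new List<double>();

    public double CalculateByYear(int year) { return 0; /* sum the entries for that year */ }
    public double CalculateTotal() { return 0; /* sum everything */ }
}

class ReportGenerator
{
    private readonly IWrappedCalculation calculation;

    public ReportGenerator(IWrappedCalculation calculation) { this.calculation = calculation; }

    // Reporting-only concerns live here, built on top of the calculation interface.
    public double CumulativeTotalByYear(int year) { return calculation.CalculateByYear(year); /* plus earlier years */ }
}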
Q:
Oracle - Validate user and password without opening another connection?
Is there some function/procedure/package that validates a username and password in Oracle (from a user that exists in the db)?
Background: We want to create a web application that will use a pool of connections. All users already exist in the database, because of a legacy 6i application. So we think that the best approach is to validate the user and password against the database, but we don't want to hardcode the URL and open a new connection just to validate this.
I know that another way is to store a password in a users table, but if Oracle provides this option it will be much easier.
A:
I found an unofficial script that verifies the password against Oracle's user$ table.
Q:
Create Value Array From Model into String in Swift
I want to create data like this:
from a model like this:
but the data that I get is like this:
How can I get the data structure like that (top)? Clean - with no model name and no optionals in the data.
Note: I create the values using this method:
for i in 0 ..< self.dataProduct.count {
let id_sell = "\(self.dataProduct[i].seller_id ?? 0)"
let origin = self.dataProduct[i].origin ?? 0
let product = self.dataProduct[i].product ?? []
var dataItem = [DataCheckoutMitras.ProductItemCheckout]()
var itemMitra : DataCheckoutMitras?
var dataCourierSelected : CourierObject?
for x in 0 ..< product.count {
var item : DataCheckoutMitras.ProductItemCheckout?
item = DataCheckoutMitras.ProductItemCheckout(product_id: product[x].product_id ?? 0,
name: product[x].name ?? "",
price: product[x].price ?? 0,
subTotal: product[x].subTotal ?? 0,
quantity: product[x].quantity ?? 0,
weight: product[x].weight ?? 0,
origin_item: origin,
notes: product[x].notes ?? "")
dataItem.append(item!)
}
for x in 0 ..< self.id_seller.count {
if id_sell == self.id_seller[x] {
dataCourierSelected = self.dataKurir[x]
}
}
itemMitra = DataCheckoutMitras(origin: origin, select_price_courier: dataCourierSelected, items: dataItem)
mitras.append(itemMitra!)
}
A:
The issue you are facing is that you are printing the definition of your struct. What you want is the JSON, so you will need to:
Implement the Codable protocol in both of your structs
(this also applies to your CourierObject)
struct DataCheckoutMitras: Codable {
let origin: Int?
let items: [ProductItemCheckout]?
struct ProductItemCheckout: Codable {
let product_id : Int?
let name : String?
}
}
encode the struct to JSON data using JSONEncoder
let encodedJSONData = try! JSONEncoder().encode(mitras)
convert JSON to string
let jsonString = String(data: encodedJSONData, encoding: .utf8)
print(jsonString)
Q:
Simplify Product $\prod_{k=2}^{n} \left(1 - \frac{2}{k (k+1)}\right)$
I am trying to simplify an expression (wolfram link) but I really do not know enough about the topic.
$\prod_{k=2}^{n} \left(1 - \frac{2}{k (k+1)}\right)$
I can see the result but I do not know how to get there. I promised a friend to help with homework but now we are both stuck. Can someone show us how to think about a problem like this?
The result is supposed to be $\frac{(n+2)}{(3 n)}$, so I assume that I can split the product into two products, remove some elements from either product and somehow eliminate the remaining factors against each other. I failed miserably when I tried it, though.
A:
Hint:
$$
1 - \frac 2{k(k+1)}
=\frac {k^2 + k - 2}{k(k+1)}
=\frac {(k-1)(k+2)}{k(k+1)}
$$
Then when you make the product of these, most terms telescope.
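Explicitly, separating the four factors and cancelling:
$$\prod_{k=2}^{n}\frac{(k-1)(k+2)}{k(k+1)}=\frac{\prod_{k=2}^n(k-1)}{\prod_{k=2}^n k}\cdot\frac{\prod_{k=2}^n(k+2)}{\prod_{k=2}^n(k+1)}=\frac{(n-1)!}{n!}\cdot\frac{(n+2)!/3!}{(n+1)!/2!}=\frac1n\cdot\frac{n+2}{3}=\frac{n+2}{3n}.$$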
A:
$$1-\frac2{k(k+1)}=\frac{k^2+k-2}{k(k+1)}=\frac{(k-1)(k+2)}{k(k+1)}=\frac{\frac{k-1}{k+1}}{\frac k{k+2}}=\frac{v_k}{v_{k+1}}$$
where
$$v_k=\frac{k-1}{k+1}$$
so the desired product gives by telescoping
$$\frac{v_2}{v_{n+1}}=\frac{n+2}{3n}$$
Q:
How to copy entire rows based on column A duplicated name to its respective worksheet in VBA?
My current code attempts to copy entire rows, based on duplicated names in column A, to their respective worksheets using VBA, as shown below. But it only works for the 1st duplicated name, not the rest. When I review my code, I realise that my Target (at the For Target = LBound To UBound part) is always 0, so I was wondering why it is always 0 in this case, because it is supposed to range from 0 to 3.
Sub test()
Dim ws As Worksheet: Set ws = ThisWorkbook.Sheets("Sheet1")
Dim cs As Worksheet
Dim mycell As Range, RANG As Range, Mname As String, Rng As Range
Dim r As Range, dict As Object
Set dict = CreateObject("Scripting.Dictionary")
With Sheets(1)
' Build a range (RANG) between cell F2 and the last cell in column F
Set RANG = Range(.Cells(2, "A"), .Cells(.Rows.count, "A").End(xlUp))
End With
' For each cell (mycell) in this range (RANG)
For Each mycell In RANG
Mname = mycell.Value
' If the count of mycell in RANG is greater than 1, then set the value of the cell 1 across to the right of mycell (i.e. column G) as "Duplicate Found"
If Application.WorksheetFunction.CountIf(RANG, mycell.Value) > 1 Then
If dict.count > 0 And dict.Exists(Mname) Then
dict(Mname) = mycell.Row()
Else
dict.Add Mname, mycell.Row()
End If
End If
Next mycell
Dim x As Long, Target As Long, i As Long
Dim CopyMe As Range
'Dim Arr: Arr = Array(Key)
Dim f As Variant
For x = 1 To 4
Set cs = ThisWorkbook.Sheets.Add(After:=Sheets(ThisWorkbook.Sheets.count))
cs.Name = "Names" & x
Next x
'Display result in debug window (Modify to your requirement)
Startrow = 2
For Each Key In dict.Keys
Set Rng = ws.Range("A" & Startrow & ":A" & dict(Key))
'Create 3 Sheets, move them to the end, rename
lr = dict(Key)
v = dict.Keys 'put the keys into an array
'Loop through each name in array
For Target = LBound(v) To UBound(v) - 1 '<-------why is Target always 0 here?
'Loop through each row
For i = Startrow To lr
'Create Union of target rows
If ws.Range("A" & i) = v(Target) Then
If Not CopyMe Is Nothing Then
Set CopyMe = Union(CopyMe, ws.Range("A" & i))
Else
Set CopyMe = ws.Range("A" & i)
End If
End If
Next i
Startrow = dict(Key) + 1
'Copy the Union to Target Sheet
If Not CopyMe Is Nothing And Target = 0 Then
CopyMe.EntireRow.Copy Destination:=ThisWorkbook.Sheets("Names1").Range("A1")
Set CopyMe = Nothing
End If
If Not CopyMe Is Nothing And Target = 1 Then
CopyMe.EntireRow.Copy Destination:=ThisWorkbook.Sheets("Names2").Range("A1")
Set CopyMe = Nothing
End If
If Not CopyMe Is Nothing And Target = 2 Then
CopyMe.EntireRow.Copy Destination:=ThisWorkbook.Sheets("Names3").Range("A1")
Set CopyMe = Nothing
End If
If Not CopyMe Is Nothing And Target = 3 Then
CopyMe.EntireRow.Copy Destination:=ThisWorkbook.Sheets("Names4").Range("A1")
Set CopyMe = Nothing
End If
Next Target
Next
End Sub
Main worksheet
In the case of duplicated John name:
In the case of duplicated Alice name
Updated code:
Sub test()
Dim ws As Worksheet: Set ws = ThisWorkbook.Sheets("Sheet1")
Dim cs As Worksheet
Dim mycell As Range, RANG As Range, Mname As String, Rng As Range
Dim r As Range, dict As Object
Set dict = CreateObject("Scripting.Dictionary")
With Sheets(1)
' Build a range (RANG) between cell F2 and the last cell in column F
Set RANG = Range(.Cells(2, "A"), .Cells(.Rows.Count, "A").End(xlUp))
End With
' For each cell (mycell) in this range (RANG)
For Each mycell In RANG
Mname = mycell.Value
' If the count of mycell in RANG is greater than 1, then set the value of the cell 1 across to the right of mycell (i.e. column G) as "Duplicate Found"
If Application.WorksheetFunction.CountIf(RANG, mycell.Value) > 1 Then
If dict.Count > 0 And dict.Exists(Mname) Then
dict(Mname) = mycell.Row()
Else
dict.Add Mname, mycell.Row()
End If
End If
Next mycell
Dim StartRow As Long
StartRow = 2
Dim Key As Variant
Dim lr As Long, v As Variant
For Each Key In dict.Keys
Set Rng = ws.Range("A" & StartRow & ":A" & dict(Key))
lr = dict(Key)
v = dict.Keys 'put the keys into an array
'Create 3 Sheets, move them to the end, rename
'Loop through each name in array
For Target = LBound(v) To UBound(v) - 1 '<-------why is Target always 0 here?
'Loop through each row
For i = StartRow To lr
'Create Union of target rows
If ws.Range("A" & i) = v(Target) Then
If Not CopyMe Is Nothing Then '<---object required error at If Not copyme...
Set CopyMe = Union(CopyMe, ws.Range("A" & i))
Else
Set CopyMe = ws.Range("A" & i)
End If
End If
Next i
StartRow = dict(Key) + 1
'Copy the Union to Target Sheet
If Not CopyMe Is Nothing Then
Mname = "Name" & CStr(Target + 1)
CopyMe.EntireRow.Copy Destination:=ThisWorkbook.Sheets(Mname).Range("A1")
Set CopyMe = Nothing
End If
Next Target
Next Key
End Sub
A:
Use a dictionary for the start row and another for the end row. It is then straightforward to determine the range of duplicate rows for each name and copy them to a new sheet.
Sub CopyDuplicates()
Dim wb As Workbook, ws As Worksheet
Dim irow As Long, iLastRow As Long
Dim dictFirstRow As Object, dictLastRow As Object, sKey As String
Set dictFirstRow = CreateObject("Scripting.Dictionary") ' first row for name
Set dictLastRow = CreateObject("Scripting.Dictionary") ' last row for name
Set wb = ThisWorkbook
Set ws = wb.Sheets("Sheet1")
iLastRow = ws.Range("A" & Rows.Count).End(xlUp).Row
' build dictionaries
For irow = 1 To iLastRow
sKey = ws.Cells(irow, 1)
If dictFirstRow.exists(sKey) Then
dictLastRow(sKey) = irow
Else
dictFirstRow.Add sKey, irow
dictLastRow.Add sKey, irow
End If
Next
' copy range of duplicates
Dim k, iFirstRow As Long, rng As Range, wsNew As Worksheet
For Each k In dictFirstRow.keys
iFirstRow = dictFirstRow(k)
iLastRow = dictLastRow(k)
' only copy duplicates
If iLastRow > iFirstRow Then
Set wsNew = wb.Worksheets.Add(after:=wb.Sheets(wb.Sheets.Count))
wsNew.Name = k
Set rng = ws.Rows(iFirstRow & ":" & iLastRow).EntireRow
rng.Copy wsNew.Range("A1")
Debug.Print k, iFirstRow, iLastRow, rng.Address
End If
Next
MsgBox "Done"
End Sub
Q:
U.S. universities on-campus residence for International PhD students?
I want to ask generally: do most U.S. universities offer on-campus accommodation for international Ph.D. students who are married?
If the student benefits from a scholarship {free tuition and fees + 2k monthly stipend}, will he/she need to pay extra to use non-shared on-campus housing?
Without any self-funding or other support, is the stipend enough to pay for it while taking care of food, health care, etc. at the same time?
A:
I want to ask generally: do most U.S. universities offer on-campus accommodation for international Ph.D. students who are married?
Probably not. Even when available, it might not be your best option. In some places on-campus housing can be a fair bit more expensive than renting from a private landlord. And if the university offers subsidized housing (whether on or away from the main campus) the waiting period can be quite long. Basically, you'd want to research your alternatives once you know where you might be going.
If the student is benefiting from a scholarship {free tuition and fees + 2k monthly stipend}, will he/she need to pay extra to use non-shared on-campus housing?
Generally, housing wouldn't be free, but possibly subsidized.
Without any self-funding or other support, is the stipend enough to pay for housing while taking care of food, health care, etc. at the same time?
It varies, so it's hard to say anything concretely. If the university pays for health insurance, that can be worth a lot actually. Also, living expenses vary considerably from place to place. I will say that if you have to pay for insurance yourself, and pay rent in a large popular city, well, money could be very tight.
Again, this is something you'll have to research when considering your offers. There are websites that help you compare these expenses between cities, to see if e.g. rent is particularly expensive somewhere.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Struggling to understand Query
The documentation for Query starts with:
data = {
<|"a" -> 1, "b" -> "x", "c" -> {1}|>,
<|"a" -> 2, "b" -> "y", "c" -> {2, 3}|>,
<|"a" -> 3, "b" -> "z", "c" -> {3}|>,
<|"a" -> 4, "b" -> "x", "c" -> {4, 5}|>,
<|"a" -> 5, "b" -> "y", "c" -> {5, 6, 7}|>,
<|"a" -> 6, "b" -> "z", "c" -> {}|>};
They have the following example:
Query[Total, "a"] @ data
21
As I understand it Total is an ascending operator so it does the "a" selection, resulting in a list of the numbers 1 through 6. Then Query returns and does the Total operation. I think that I understand that each operator in turn operates on the next level, either ascending or descending.
How would I get Query to give me the total of the values of "a" in rows 1 through 5? That is, without preconditioning data. I realize that I don't have a good understanding of how Query works.
A:
Short Answer
The query operator for a given level can perform both descending and ascending actions, separated by the /* operator. We can tack the descending filtering operator 1 ;; 5 onto the front of the ascending operator Total:
data // Query[(1 ;; 5) /* Total, "a"]
(* 15 *)
The parentheses are necessary due to the tightly binding precedence of /*.
Long Answer
Fasten your seatbelt...
The query Query[Total, "a"] contains two operators, Total and "a".
Total appears as the first operator of the query, so it is applied to the whole expression (i.e. level zero). It is documented to be an ascending operator. So in the descending phase of the operation, it does nothing and its action is essentially Identity. In the ascending phase, its action is to perform the function of the like-named built-in function Total.
"a" is documented to be a descending operator whose action is to invoke the operator form of the built-in function Key like this: Key["a"]. Since this operator appears as the second query operator, it will be applied to each element at level one in the expression. Notionally, this means that the operator further expands into Map[Key["a"]]. As a descending operator, "a" does nothing in the ascending phase. This is notionally equivalent to Map[Identity].
To summarize, we have the following descending and ascending options for each operator:
Operator Descending Action Ascending Action
TOTAL Identity TOTAL
"a" Map[Key["a"]] Map[Identity]
Let's simulate the execution of the query. We will apply these actions to our data, with descending actions working down into the expression's levels and ascending actions working back up:
data
(* { <|"a" -> 1, "b" -> "x", "c" -> {1}|>
, ...
, <|"a" -> 6, "b" -> "z", "c" -> {}|>
} *)
% // Identity (* level 0 descending action *)
(* { <|"a" -> 1, "b" -> "x", "c" -> {1}|>
, ...
, <|"a" -> 6, "b" -> "z", "c" -> {}|>
} *)
% // Map[Key["a"]] (* level 1 descending action *)
(* {1, 2, 3, 4, 5, 6} *)
% // Map[Identity] (* level 1 ascending action *)
(* {1, 2, 3, 4, 5, 6} *)
% // Total (* level 0 ascending action *)
(* 21 *)
Now, to the question at hand. How do we restrict the range of rows? From the table above, we can see that we need to change the first descending operator from Identity to something that selects the first five rows. The Query documentation tells us that we can use the operator syntax i ;; j.
But... we already have a first operator: Total. How can we perform a descending operation as well? The documentation provides that answer as well:
When one or more descending operators are composed with one or more
ascending operators (e.g. desc/*asc), the descending part will be
applied, then subsequent operators will be applied to deeper levels,
and lastly the ascending part will be applied to the result.
We can tack our descending operation onto the front of Total like this:
(1;;5) /* Total
The parentheses are necessary due to the tight binding of the /* operator.
The operator 1;;5 is notionally interpreted as the function #[[1;;5]]&.
The full query is:
Query[(1;;5) /* Total, "a"]
The operation table for this query looks like this:
Operator Descending Action Ascending Action
(1;;5) /* Total #[[1;;5]]& TOTAL
"a" Map[Key["a"]] Map[Identity]
The execution proceeds exactly as detailed above, except that the new first descending action reduces the input list to its first five elements instead of the full list.
The input rows could be reduced by any descending filtering operator in place of (1;;5). Notably, the Select operator could be used for content-based filtering instead of part selection.
What's up with "Notionally"?
This response has used the word "notionally" several times when speaking of the interpretation of query operators (e.g. 1;;2 is notionally interpreted as #[[1;;2]]&). This qualification is necessary because in practice the query compiler uses a variety of internal functions to optimize or perform special processing during query execution. Furthermore, non-operations like Map[Identity] are optimized out.
The notional interpretations given above are roughly equivalent substitutes used to avoid distracting from the main points of discussion.
To see the actual interpretation of a query, use Normal:
Query[(1 ;; 5) /* Total, "a"] // Normal
(* GeneralUtilities`Slice[1 ;; 5] /* GeneralUtilities`Slice[All, "a"] /* Total *)
GeneralUtilities`Slice is a common query helper function that can perform all manner of slicing and mapping functions, applied simultaneously.
I recommend thinking in such notional terms, with query operators operating upon distinct expression levels with distinct descending and ascending phases. It helps that the documentation is written in such terms, and the optimized execution plans respect those semantics. Make it a habit to read the query:
Query[Total, "a"]
as if it were written as:
Query[ Identity /* Total
, "a" /* Identity ]
... mentally separating out the ascending and descending phases.
Query Operators vs. Normal Operators
Throughout this response, there has been an implicit distinction drawn between query operators and the functions that implement them. This is an important distinction to realize. It is just a coincidence that the Total query operator is implemented by the like-named Total function. This coincidence is not true in general. We already saw exceptions like "a" interpreted as Key["a"] and 1;;5 interpreted as #[[1;;5]]&.
GroupBy is an example of a query operator that behaves subtly differently from its function namesake. GroupBy["a"] is quietly rewritten as GroupBy[Key["a"]]. There are many other operators that have similar small changes in behaviour -- study the documentation for details.
Here is an interesting case: the /* query operator behaves very differently from the /* function. The /* function simply chains operations together, one right after the other. But the left and right operations of the /* query operator will not necessarily be executed one right after the other -- they might be separated by many intervening operators from lower query levels. Don't be fooled by the surface similarity of the syntax. Consider:
Query[Select[OddQ] /* f, Select[EvenQ] /* g] // Normal
(* Select[OddQ] /* Map[Select[EvenQ] /* g] /* f *)
Notice how the operands from Select[OddQ] /* f are not adjacent in the compiled query.
How do I know whether an operator is ascending or descending?
At the moment, the only real way is to read the documentation and memorize the descending operator list. If an operator is not descending, then it is ascending (duly noting the exception of composite operators). Any operator that is not listed in the documentation is ascending, meaning that the vast majority of operators are ascending.
There are undocumented functions that will indicate whether an operator is ascending or descending:
Query; Needs["Dataset`"]
AscendingQ[Total]
(* True *)
DescendingQ[1;;5]
(* True *)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Social sharing plugin phonegap Share value
I want to share text from textarea input value.
This my code :
<textarea type="text" id="source" tabindex="1" name="source" placeholder="Text to be translated" ></textarea>
<textarea id="results_body" onfocus="ok=1" cols="100" rows="6" style="height:10%;" ></textarea>
<button onclick="window.plugins.socialsharing.share('textarea value')>share only</button>
A:
If you only want to share the textarea's value where you currently have 'textarea value', you can replace it with
document.getElementById('source').value
to be like
share(document.getElementById('source').value)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
python find a dictionary in a dictionary
I have a dictionary with dictionaries as values, like this:
myDict = {'custom_field1': {'label': 'CNAMEs', 'data': 'bar1 bar2' },
'custom_field2': {'label': 'LocalForward', 'data': '8080:localhost:80' },
'custom_field3': None,
'custom_field4': {'label': 'ProxyCommand', 'data': 'ssh -q [email protected] nc -q0 %h 22' },
'custom_field5': {'label': 'SomeMoreInfos', 'data': 'AboutSomethingElse' },
'created_by': {'username': 'someperson', 'date': 'recently' },
'name': 'hostname'
}
There are many other key/values in the dict I don't care about. What would be an easy way to get the data for a custom_field where the label is foo, then the data where the label is bar, and then the data where the label is more?
Because currently I do it like this:
customItem = []
for field in range(1, 10):
new_field = myDict.get('custom_field%s' % field)
if new_field is not None:
customItem.append(new_field)
for field in customItem:
if field.get('label') == 'foo' and field.get('data') != '':
for part in field.get('data').split():
"""do something for each"""
for field in customItem:
if field.get('label') == 'bar' and field.get('data') != '':
print (field.get('data'))
My general goal is to create an automated ssh_config file for clients, so with the one dict for a host, it will create several ssh_config entries, the result should look like this:
hostname (from label 'name')
ProxyCommand ssh -q [email protected] nc -q0 %h 22
hostname-fwd (twice, because there was data behind label 'LocalForward')
ProxyCommand ssh -q [email protected] nc -q0 %h 22
LocalForward 8080:localhost:80
bar1 (as found from label 'CNAMEs')
ProxyCommand ssh -q [email protected] nc -q0 %h 22
bar1-fwd (twice, just because there was data behind label 'bar')
ProxyCommand ssh -q [email protected] nc -q0 %h 22
LocalForward 8080:localhost:80
bar2 (same as with bar1)
ProxyCommand ssh -q [email protected] nc -q0 %h 22
bar2-fwd
ProxyCommand ssh -q [email protected] nc -q0 %h 22
LocalForward 8080:localhost:80
EDIT: I tried to be more specific with my task now, as random sample data is not so easy to understand, sorry.
A:
You can index the fields by label, i.e. create a new dict to use for quick lookups by labels. For example:
>>> label2field = {
field_val['label']: field_key
for field_key, field_val in myDict.items()
}
>>> label2field['more']
'custom_field4'
>>> myDict[label2field ['foo']]['data']
'bar1 bar2'
EDIT: To support None values, plain strings, and dicts without a 'label' key (like 'created_by') in myDict, just filter them out when creating the index:
label2field = {
field_val['label']: field_key
for field_key, field_val in myDict.items()
if isinstance(field_val, dict) and 'label' in field_val
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Form POST data not passing through to PHP
I'm stumped as to why my code isn't working. I've done POSTS a million times, but this time it just doesn't seem to work.
My form:
<form method="post" name="form" id="form" enctype="text/plain" action="../posts/">
<fieldset id="inputs" style="text-align: center;">
<input id="password" type="password" name="password" placeholder="enter password" required />Press enter to submit
</fieldset>
</form>
My PHP code for retrieval:
if(isset($_POST['password'])){
echo "success";
}
else{
echo "fail";
}
I get "fail" every time. What am I doing wrong? I just cant see it.
A:
Remove enctype="text/plain" and check like
<form method="post" name="form" id="form" action="../posts/">
<fieldset id="inputs" style="text-align: center;">
<input id="password" type="password" name="password" placeholder="enter password" required />Press enter to submit
</fieldset>
</form>
enctype defines how the data should be formatted before sending. The two formats PHP understands are application/x-www-form-urlencoded (which essentially sends things as key=value&anotherkey=anothervalue, with HTTP headers) and multipart/form-data (which splits each field up into a distinct section of the body). text/plain means no encoding should be done; its behaviour is essentially undefined between browsers, and it was only ever used for automated email forms in the days before spam.
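To make the difference concrete, this is roughly what the request body looks like for this password field under each of the two supported encodings (the multipart boundary string below is made up for illustration; the browser picks its own):
password=secret123
versus
------FormBoundaryAbc123
Content-Disposition: form-data; name="password"

secret123
------FormBoundaryAbc123--
PHP only populates $_POST for these two content types, which is why a text/plain body never shows up in $_POST.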
A:
Remove enctype="text/plain" from the form it will submit.
<form method="post" name="form" id="form" action="../posts/">
<fieldset id="inputs" style="text-align: center;">
<input id="password" type="password" name="password" placeholder="enter password" required />Press enter to submit
</fieldset>
</form>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Does Scala provide an easy way to convert Infix expressions to postfix ones?
I am a beginner in Scala and I am writing a program to convert infix arithmetic expressions to postfix ones, and meanwhile I am wondering whether Scala provides an easier way to handle this kind of conversion.
Can anyone tell me if there is an easier way to do it?
A:
A way to do it is to use Dijkstra's Shunting Yard algorithm. Here is one implementation in Scala.
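Since the linked implementation may no longer be available, here is a minimal sketch of the algorithm (shown in Python rather than Scala, and assuming space-separated tokens with only the four basic left-associative operators):
def infix_to_postfix(expression):
    # Dijkstra's Shunting Yard: operands go straight to the output,
    # operators wait on a stack until higher/equal precedence ones are flushed.
    precedence = {'+': 1, '-': 1, '*': 2, '/': 2}
    output, stack = [], []
    for token in expression.split():
        if token in precedence:
            while stack and stack[-1] in precedence and precedence[stack[-1]] >= precedence[token]:
                output.append(stack.pop())
            stack.append(token)
        elif token == '(':
            stack.append(token)
        elif token == ')':
            while stack and stack[-1] != '(':
                output.append(stack.pop())
            stack.pop()  # drop the '('
        else:
            output.append(token)  # operand
    while stack:
        output.append(stack.pop())
    return ' '.join(output)

print(infix_to_postfix("3 + 4 * ( 2 - 1 )"))  # prints: 3 4 2 1 - * +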
|
{
"pile_set_name": "StackExchange"
}
|
Q:
I want to apply a filter to a canvas image
Using this site as a reference, I am making a button that can edit an image with a filter.
var can = $("#drawarea")[0];
var context = can.getContext("2d");
$("#select").on("change",function(){
var fileList = this.files ;
if( 1 > fileList.length ){
return false ;
}
var file = fileList[0] ;
var fr = new FileReader() ;
//handle the file once it has been read
fr.onload = function(){
//create an Image instance
var image = new Image() ;
//handle the image once it has finished loading
image.onload = function(){
//draw onto the canvas
if ( can.getContext ) {
//get the canvas context
var context = can.getContext( "2d" ) ;
//get the image size
var width = this.width ;
var height = this.height ;
//fix the canvas size
can.width = 600 ;
can.height = 315 ;
//draw the image onto the canvas
context.drawImage( this , 0, 0 , width , height , 0 , 0 , 600 , 315 );
}
}
//load the image
image.src = this.result ;
}
//read the file as a base64-encoded data URL
fr.readAsDataURL( file ) ;
});
Filters = {};
var grayscale = function(){
}
Filters.getPixels = function(img) {
var c = this.getCanvas(img.width, img.height);
var ctx = c.getContext('2d');
ctx.drawImage(img);
return ctx.getImageData(0,0,c.width,c.height);
};
Filters.getCanvas = function(w,h) {
var c = document.createElement('drawarea');
c.width = w;
c.height = h;
return c;
};
Filters.filterImage = function(filter, image, var_args) {
var args = [this.getPixels(image)];
for (var i=2; i<arguments.length; i++) {
args.push(arguments[i]);
}
return filter.apply(null, args);
};
Filters.grayscale = function(pixels, args) {
var d = pixels.data;
for (var i=0; i<d.length; i+=4) {
var r = d[i];
var g = d[i+1];
var b = d[i+2];
// CIE luminance for the RGB
// The human eye is bad at seeing red and blue, so we de-emphasize them.
var v = 0.2126*r + 0.7152*g + 0.0722*b;
d[i] = d[i+1] = d[i+2] = v
}
return pixels;
};
can.addEventListener("click", function(){
grayscale();
}, false);
<canvas id="drawarea" width="500" height="300" style="border:1px solid black;"></canvas>
<input type="file" id="select">
<button onclick = "grayscale()">グレーボタン</button>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
However, when I press the button an error is printed in the console, and I am stuck.
How should I write the action for when the gray button is pressed?
Thank you in advance.
A:
Filters.getCanvas = function(w,h) {
var c = document.createElement('drawarea');
c.width = w;
c.height = h;
return c;
};
drawarea is an id. Since you want to create a canvas element, this should be var c = document.createElement('canvas');.
It looks like Filters.filterImage() just needs to be passed the filter method and a canvas element, so with:
function grayscale(){
var c = $("#drawarea")[0];
var idata = Filters.filterImage(Filters.grayscale, c);
var ctx = c.getContext('2d');
ctx.putImageData(idata, 0, 0);
}
it worked.
The original site has a working sample, so it may help to read through that code.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
"message": "Invalid login", "code": 401 for @feathersjs/authentication-local in windows while not in linux or mac
When I pulled working code of my project which was working fine in Mac, was giving me this error when authenticating in windows
Error
{
"name": "NotAuthenticated",
"message": "Invalid login",
"code": 401,
"className": "not-authenticated",
"errors": {}
}
post request
{
"strategy": "local",
"username": "demo",
"password": "demo"
}
default.json
"authentication": {
"entity": "user",
"service": "users",
"secret": "2wVrC38R9sUb6Cjhuhoir3oate0=",
"authStrategies": [
"jwt",
"local"
],
"jwtOptions": {
"header": {
"type": "access"
},
"audience": "https://yourdomain.com",
"issuer": "feathers",
"algorithm": "HS256",
"expiresIn": "1d"
},
"local": {
"usernameField": "username",
"passwordField": "password"
}
},
So the local strategy was not working at all.
A:
From the Feathers documentation (feathers doc):
"Important: If you want to set the value of usernameField to username in your configuration file under Windows, the value has to be escaped as \\username (otherwise the username environment variable will be used)"
Thus the required code change in default.json is:
"local": {
"usernameField": "\\username",
"passwordField": "password"
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Custom jQuery Selector
Quick question - is it possible to extend jQuery selectors to change the resultset (for example) via traversal instead of just by filtering an existing set?
To clarify - I don't want the selector equivalent of a call to $.filter() - I want something closer to $('foo:nth-child(n)') or $('foo:eq(n)'), where I can specify exactly which elements are returned by the selector. Any thoughts would be appreciated.
Edit: here's an example of what I want to implement:
$.expr[':']['nth-parent'] = function(deTarget, iIndex, aProperties, aStack) {
var iN, $currentElement;
if(!deTarget)
return;
if(!aProperties || aProperties.length < 4 || isNaN(iN = parseInt(aProperties[3])) || iN < 0)
throw('jQuery.nth-parent(N) requires a non-negative integer argument');
$currentElement = $(deTarget);
while(--iN >= 0)
$currentElement = $currentElement.parent();
aStack = $currentElement.length ? [$currentElement.get(0)] : [];
return aStack.length ? true : false;
};
So, ultimately, I'd want the new aStack array returned as the result set, in this particular case.
A:
Yes, you can do this; a good reference for what you're after is how :eq() itself is implemented:
jQuery[":"].eq = function(elem, i, match) {
return match[3] - 0 === i;
};
The signature has one more parameter, stack, like this:
function(elem, index, match, stack)
stack is the set of all elements, if you need to use that in your filtering.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Server 2012 Group Policy Script
What is the best way through Group Policy to make a registry file run on logon for computers in a domain?
I'm testing on a Windows Server 2012, with Windows 8 clients.
I have created a Group Policy applied to the 'Computers' folder that the computers are in, but the .reg file is not running at sign-in.
A:
I think the best way is to use Group Policy Preferences.
There are two places for these settings:
Computer Config\Preferences\Windows Settings\Registry
and
User Config\Preferences\Windows Settings\Registry
You can craft new registry settings for the recipients of this Group Policy in there.
You can create HKLM and HKCU, etc., registry keys in either the User or Computer configuration areas. It's basically just a matter of whether you want the registry settings to apply as the computer logs on (e.g. during bootup) or if you want the registry settings to apply every time a user logs on.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to implement callback on connection drop event using SQLObject?
I'm using a Python script that does certain flow control on our outgoing mail messages, mostly checking whether a user is sending spam.
The script establishes a persistent connection with a database via a SQLObject. Under certain circumstances, the connection is dropped by a third-party (e.g. our firewall closes the connection due to excess idle), and the SQLObject doesn't notice it has been closed and it continues sending queries on a dead TCP handler, resulting in log entries like these:
Feb 06 06:56:07 mailsrv2 flow: ERROR Processing request error: [Failure instance: Traceback: <class 'psycopg2.InterfaceError'>: connection already closed
/usr/lib/python2.7/threading.py:524:__bootstrap
/usr/lib/python2.7/threading.py:551:__bootstrap_inner
/usr/lib/python2.7/threading.py:504:run
--- <exception caught here> ---
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/twisted/python/threadpool.py:191:_worker
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/twisted/python/context.py:118:callWithContext
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/twisted/python/context.py:81:callWithContext
/opt/scripts/flow/server.py:91:check
/opt/scripts/flow/flow.py:252:check
/opt/scripts/flow/flow.py:155:append_to_log
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/main.py:1226:__init__
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/main.py:1274:_create
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/main.py:1298:_SO_finishCreate
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/dbconnection.py:468:queryInsertID
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/dbconnection.py:327:_runWithConnection
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/postgres/pgconnection.py:191:_queryInsertID]
This makes me think that indeed there must be some callback for this kind of situation, otherwise that log entry wouldn't be written. I'd use that callback to establish a new connection to the database. I've been unable to find any piece of documentation about that.
Does anyone know if it's even possible to implement that callback and how to declare it?
Thanks.
A:
We're more regular users of SQLAlchemy than of SQLObject. According to this thread from 2010 (http://sourceforge.net/p/sqlobject/mailman/message/26460439), SQLObject does not support reconnection logic for PostgreSQL. It's an old thread, but there does not appear to be any discussion about solving this from within SQLObject.
I have three suggested solutions.
The first solution is to explore Connection Pools. It might provide a way to open a new connection object when SQLObject detects that psycopg2 has disconnected. I can't guarantee it will, but if it does, this solution would be your best bet as it requires the least amount of changes on your part.
The second solution is to switch your backend from Postgres to MySQL. The SQLObject docs provide information on how to use the reconnection logic of the MySQL driver - http://sourceforge.net/p/mysql-python/feature-requests/9
The third solution is to switch to SQLAlchemy as your ORM and to use their version of Connection Pools. According to the core event documentation, when using pools, if a connection is dropped or closed a new connection will be opened -- http://docs.sqlalchemy.org/en/rel_0_9/core/exceptions.html#sqlalchemy.exc.DisconnectionError
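If you do go the SQLAlchemy route, a minimal sketch of the pooled-engine approach could look like the following (the connection URL and the table/column names are placeholders, and pool_pre_ping needs a reasonably recent SQLAlchemy; on older versions you would attach a checkout-event listener that pings the connection instead):
from sqlalchemy import create_engine, text

# Placeholder URL - adjust driver, credentials, host and database for your setup.
engine = create_engine(
    "postgresql+psycopg2://user:password@dbhost/maildb",
    pool_pre_ping=True,   # test each pooled connection before handing it out
    pool_recycle=1800,    # recycle connections before the firewall's idle timeout
)

def sent_message_count(username):
    # Each call checks a connection out of the pool; stale connections are replaced.
    with engine.connect() as conn:
        result = conn.execute(
            text("SELECT count(*) FROM outgoing_log WHERE username = :u"),
            {"u": username},
        )
        return result.scalar()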
Best of luck
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What will be the regular expression for parsing all words except last one?
I want to match the whole string except the last word. i.e. for
This is my house
the matched string should be This is my. What will be the regular expression for this?
A:
This should do it:
^([\w ]*) [\w]+$
^ is start of line
([\w ]*) is your capture group of any number of word characters and spaces
the literal space plus [\w]+ matches the space before the last word and then the last word itself (one or more word characters)
$ is end of line.
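As a quick sanity check (shown here in Python, but the pattern itself is portable):
import re

# group(1) holds everything before the last word; the greedy [\w ]* backtracks
# just far enough to leave one final space-separated word for [\w]+.
m = re.match(r'^([\w ]*) [\w]+$', 'This is my house')
print(m.group(1))  # prints: This is my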
|
{
"pile_set_name": "StackExchange"
}
|
Q:
When to delete your own question
I asked a question, and for whatever reason (probably my fault for not being clear, or asking something too similar to other questions about MVC frameworks), the answers didn't really go in the direction that I was looking for. I expect that to happen with questions on Stack Overflow sometimes, and I accept that. But now I've got a question that I consider useless. I don't think someone randomly finding the question via google would get any value out of the question's answers, and maybe I'll clarify what I'm looking for and ask the question in a much better way later.
I essentially accepted the answer that was most applicable (though not really what I was looking for) because I enjoy having people actually answer my questions, and wouldn't want to discourage that.
Should I delete the question? Does deleting my own question cost me / answerers reputation, in which case I'll probably just skip it? Is deleting and then reasking a similar question a bad way to go? Should I try to completely edit the question into essentially a new (if somewhat similar) question in order to try to get a more effective answer?
A:
I don't think you should edit the question to drive future answers in a different direction. You will make the current answers either wrong or irrelevant. Users answered the question as posed and you accepted an answer.
If you can see where you went wrong with the question, ask it again in a way that is not a duplicate and will solicit different answers. If it helps, reference your original question and clarify why you should expect different answers this time around.
Also, I don't think you can delete your original question, once it has been answered. But, if you could, you would lose any rep gains/losses from the question after the next reputation re-calc.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ANDROID.How to make voice input to text without google pop up appearing?
Hello android developers,
I'm a beginner in Android development and got stuck with this problem:
I want to implement voice-to-text input, but I don't want the Google popup to appear. I've seen multiple apps that have voice input without it, so maybe someone can help me figure it out?
Here the screenshot of popup
Here is my code of MainActivity
public class MainActivity extends AppCompatActivity {
private TextView resultTV;
/**
* ATTENTION: This was auto-generated to implement the App Indexing API.
* See https://g.co/AppIndexing/AndroidStudio for more information.
*/
private GoogleApiClient client;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// ATTENTION: This was auto-generated to implement the App Indexing API.
// See https://g.co/AppIndexing/AndroidStudio for more information.
client = new GoogleApiClient.Builder(this).addApi(AppIndex.API).build();
resultTV = (TextView) findViewById(R.id.resultTV);
}
public void onButtonClick(View view) {
if (view.getId() == R.id.imageButton) {
promtSpeechInput();
}
}
public void promtSpeechInput() {
Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
i.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
i.putExtra(RecognizerIntent.EXTRA_PROMPT, "SAY SOMETHING");
i.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
try {
startActivityForResult(i, 100);
Toast.makeText(MainActivity.this, "Say something kiddo", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Toast.makeText(MainActivity.this, "Sorry, your device not support speech inputs", Toast.LENGTH_LONG).show();
e.printStackTrace();
}
}
public void onActivityResult(int request_code, int result_code, Intent i){
super.onActivityResult(request_code,result_code,i);
switch (request_code)
{
case 100: if(result_code == RESULT_OK && i != null){
ArrayList<String> result = i.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
resultTV.setText(result.get(0));
}
break;
}
A:
Try using this activity I created: when the toggle button is switched on, recording starts, the fluctuations in the voice level are shown in a progress bar via the onRmsChanged() method, and the recognized text is shown in a TextView.
public class MainActivity extends Activity implements RecognitionListener
{
private TextView returnedText;
private ToggleButton toggleButton;
private ProgressBar progressBar;
private SpeechRecognizer speech = null;
private Intent recognizerIntent;
private String LOG_TAG = "VoiceRecognitionActivity";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
returnedText = (TextView) findViewById(R.id.textView1);
progressBar = (ProgressBar) findViewById(R.id.progressBar1);
toggleButton = (ToggleButton) findViewById(R.id.toggleButton1);
Button recordbtn = (Button) findViewById(R.id.mainButton);
progressBar.setVisibility(View.INVISIBLE);
speech = SpeechRecognizer.createSpeechRecognizer(this);
speech.setRecognitionListener(this);
recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE,
"en");
recognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,
this.getPackageName());
recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);
recognizerIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
recognizerIntent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS, 3000);
toggleButton.setOnCheckedChangeListener(new OnCheckedChangeListener() {
@Override
public void onCheckedChanged(CompoundButton buttonView,
boolean isChecked) {
if (isChecked) {
progressBar.setVisibility(View.VISIBLE);
progressBar.setIndeterminate(true);
speech.startListening(recognizerIntent);
} else {
progressBar.setIndeterminate(false);
progressBar.setVisibility(View.INVISIBLE);
speech.stopListening();
}
}
});
}
@Override
public void onResume() {
super.onResume();
}
@Override
protected void onPause() {
super.onPause();
if (speech != null) {
speech.destroy();
Log.i(LOG_TAG, "destroy");
}
}
@Override
public void onBeginningOfSpeech() {
Log.i(LOG_TAG, "onBeginningOfSpeech");
progressBar.setIndeterminate(false);
progressBar.setMax(10);
}
@Override
public void onBufferReceived(byte[] buffer) {
Log.i(LOG_TAG, "onBufferReceived: " + buffer);
}
@Override
public void onEndOfSpeech() {
Log.i(LOG_TAG, "onEndOfSpeech");
progressBar.setIndeterminate(true);
toggleButton.setChecked(false);
}
@Override
public void onError(int errorCode) {
String errorMessage = getErrorText(errorCode);
Log.d(LOG_TAG, "FAILED " + errorMessage);
returnedText.setText(errorMessage);
toggleButton.setChecked(false);
}
@Override
public void onEvent(int arg0, Bundle arg1) {
Log.i(LOG_TAG, "onEvent");
}
@Override
public void onPartialResults(Bundle arg0) {
Log.i(LOG_TAG, "onPartialResults");
ArrayList<String> matches = arg0
.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
String text = "";
for (String result : matches)
text += result + "\n";
returnedText.setText(text);
}
@Override
public void onReadyForSpeech(Bundle arg0) {
Log.i(LOG_TAG, "onReadyForSpeech");
}
@Override
public void onResults(Bundle results) {
Log.i(LOG_TAG, "onResults");
}
@Override
public void onRmsChanged(float rmsdB) {
Log.i(LOG_TAG, "onRmsChanged: " + rmsdB);
progressBar.setProgress((int) rmsdB);
}
public static String getErrorText(int errorCode) {
String message;
switch (errorCode) {
case SpeechRecognizer.ERROR_AUDIO:
message = "Audio recording error";
break;
case SpeechRecognizer.ERROR_CLIENT:
message = "Client side error";
break;
case SpeechRecognizer.ERROR_INSUFFICIENT_PERMISSIONS:
message = "Insufficient permissions";
break;
case SpeechRecognizer.ERROR_NETWORK:
message = "Network error";
break;
case SpeechRecognizer.ERROR_NETWORK_TIMEOUT:
message = "Network timeout";
break;
case SpeechRecognizer.ERROR_NO_MATCH:
message = "No match";
break;
case SpeechRecognizer.ERROR_RECOGNIZER_BUSY:
message = "RecognitionService busy";
break;
case SpeechRecognizer.ERROR_SERVER:
message = "error from server";
break;
case SpeechRecognizer.ERROR_SPEECH_TIMEOUT:
message = "No speech input";
break;
default:
message = "Didn't understand, please try again.";
break;
}
return message;
}
}
DO NOT FORGET TO ADD THIS PERMISSION TO YOUR MANIFEST:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how do you capture process-specific loopback interface network traffic on OSX?
I would like to monitor traffic between two processes running on OSX El Capitan. The server is listening on 127.0.0.1 so i believe i need to monitor the lo0 loopback interface.
I'm trying to use the tcpdump program supplied by Apple to do this with the following command, as per https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/tcpdump.1.html:
sudo tcpdump -i pktap,lo0 -v ./DumpFile01.pcap
but this fails with:
tcpdump: data link type PKTAP
tcpdump: listening on pktap,lo0, link-type PKTAP (Packet Tap), capture size 262144 bytes
tcpdump: pktap_filter_packet: pcap_add_if_info(lo0, 0) failed: pcap_add_if_info: pcap_compile_nopcap() failed
It appears to be Apple's version of tcpdump:
tcpdump --version
tcpdump version 4.7.3 -- Apple version 66
libpcap version 1.5.3 - Apple version 54
From the tcpdump man page above and https://dreness.com/blog/archives/829 i think i should be able to run the following to see the packets for a given process:
tcpdump -i pktap,lo0 -Q "proc =myserver"
Has anybody had success with this? I would try the latest tcpdump, but i understand from the man page that "-Q" is an Apple extension.
A:
sudo tcpdump -i pktap,lo0 -v ./DumpFile01.pcap
That tcpdump command says "capture on lo0 with pktap, print text output in verbose mode, and use the string "./DumpFile01.pcap" as a capture filter". -v means "print in verbose mode"; did you mean -w, which means "write in binary form to the file whose name comes after the -w flag"?
"./DumpFile01.pcap" is not a valid capture filter; unfortunately, Apple's libpcap is buggy (Apple bug 21698116), and, if you're capturing with pktap, its error message for invalid capture filters is the not-very-informative "pktap_filter_packet: pcap_add_if_info(lo0, 0) failed: pcap_add_if_info: pcap_compile_nopcap() failed". (I told them how to fix it in the bug; hopefully they'll fix it in 10.12 Big Sur or whatever it's called, even if they don't get around to fixing it in 10.11.x.)
If you want to monitor traffic on lo0, and have tcpdump print its interpretation of the traffic on the terminal (rather than saving it to a binary pcap file for later interpretation by tcpdump or Wireshark or whatever; neither tcpdump nor Wireshark can read, as a capture, printed output from tcpdump), then do
sudo tcpdump -i pktap,lo0 -v
If you want the printed interpretation saved to a text file (again, you cannot feed that text file to tcpdump or Wireshark as a capture), do
sudo tcpdump -i pktap,lo0 -v >PrintedCapture.txt
If you want to save the raw packet data to a binary capture file for later interpretation by tcpdump or Wireshark or whatever, do:
sudo tcpdump -i pktap,lo0 -w ./DumpFile01.pcap
(-w, not -v).
And, yes, -Q is an Apple extension. -k is another Apple extension to print packet metadata such as process names if you're capturing with pktap.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Setting property inside method
Is something like the following possible, or do you have to return the list and assign it afterwards? I get 'Object reference not set to an instance of an object'.
public class MyCollection
{
public List<SomeObject> Collection { get; set; }
public List<SomeObject> CreateCollection()
{
// Is there a way to set the Collection property from here???
this.Collection.Add(new SomeObject()
{
// properties
});
}
}
...
MyCollection collection = new MyCollection();
collection.CreateCollection();
A:
Yes, you can use an object initializer:
public List<SomeObject> CreateCollection()
{
// You may want to initialize this.Collection somehere, ie: here
this.Collection = new List<SomeObject>();
this.Collection.Add(new SomeObject
{
// This allows you to initialize the properties
Collection = this.Collection
});
return this.Collection;
}
Note that this will still potentially have an issue - you are never initializing this.Collection in any code you're displaying. You will need to initialize it to a proper collection in your constructor or via some other mechanism.
It is also an odd choice to have a "Create" method that initializes the local variable and returns a List<T>. Typically, you'd do one or the other. A more common approach would be to place this code within the constructor:
public class MyCollection
{
public IList<SomeObject> Collection { get; private set; } // The setter would typically be private, and can be IList<T>!
public MyCollection()
{
this.Collection = new List<SomeObject>();
this.Collection.Add(new SomeObject
{
Collection = this.Collection
});
}
}
You could then use it via:
MyCollection collection = new MyCollection();
var firstItem = collection.Collection.First(); // Get the first element
That being said, in general, there is no real reason to make a custom class for a collection like this in most cases. Just using a List<SomeObject> directly is typically sufficient.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Theorem 4-13 Spivak's Stokes Theorem
This is a question regarding the first part of Spivak's proof of Stokes' theorem. Let $\omega$ be a $(k-1)$-form on $[0,1]^k$. Then $\omega$ is the sum of $(k-1)$-forms of the type
$$
fdx^1\wedge\dots\wedge\hat{dx^i}\wedge\dots\wedge dx^k,
$$
and it suffices to prove the theorem for each of these. Now Spivak claims the following (for the case $j=i$):
$$
\int_{[0,1]^{k-1}}{I^{k}_{(j,\alpha)}}^*(fdx^1\wedge\dots\wedge\hat{dx^i}\wedge\dots\wedge dx^k)
=\int_{[0,1]^k}f(x^1,\dots,\alpha,\dots,x^k)dx^1\dots dx^k.
$$
Now, I know that a couple of posts have already been made on this part of the proof, but those were not helpful to me, so I will proceed to clarify my own confusion.
${I^{k}_{(j,\alpha)}}^*(fdx^1\wedge\dots\wedge\hat{dx^i}\wedge\dots\wedge dx^k)=({I^{k}_{(j,\alpha)}}^*f)({I^{k}_{(j,\alpha)}}^*dx^1)\wedge...\wedge\hat{dx^i}\wedge...\wedge({I^{k}_{(j,\alpha)}}^*dx^k)$
${I^{k}_{(j,\alpha)}}^*f=f\circ{I^{k}_{(j,\alpha)}}=f(x^1,\dots,x^{j-1},\alpha,x^{j+1},\dots,x^k)$
So i can have $$
\int_{[0,1]^{k-1}}{I^{k}_{(j,\alpha)}}^*(fdx^1\wedge\dots\wedge\hat{dx^i}\wedge\dots\wedge dx^k)=\int_{[0,1]^{k-1}}f(x^1,\dots,x^{j-1},\alpha,x^{j+1},\dots,x^k)({I^{k}_{(j,\alpha)}}^*dx^1)\wedge...\wedge\hat{dx^i}\wedge...\wedge({I^{k}_{(j,\alpha)}}^*dx^k)$$
But how should I proceed further to get the equation? how do we go from $[0,1]^{k-1}$ to $[0,1]^k$?
A:
All he's doing is throwing in the extra $dx^i$ so that all the integrals over the faces of the cube are instead $k$-fold integrals over the entire cube. Why can he do this? Because $\int_0^1 dx^i = 1$ and we're integrating $f(x)$ with $x^i$ set equal to $\alpha$ (so that the function we're integrating is independent of $x^i$), the equality you copied follows. For example, $\int_{[0,1]} f(x,1)\,dx = \int_{[0,1]^2} f(x,1)\,dx\,dy$.
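Spelled out, the step is just Fubini's theorem plus the fact that the integrand does not depend on $x^i$:
$$\int_{[0,1]^{k}} f(x^1,\dots,\alpha,\dots,x^k)\,dx^1\cdots dx^k
=\left(\int_0^1 dx^i\right)\int_{[0,1]^{k-1}} f(x^1,\dots,\alpha,\dots,x^k)\,dx^1\cdots\widehat{dx^i}\cdots dx^k
=\int_{[0,1]^{k-1}} f(x^1,\dots,\alpha,\dots,x^k)\,dx^1\cdots\widehat{dx^i}\cdots dx^k,$$
so the two sides of the equality in the question agree.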
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Prove there exists a unique local inverse.
I've been set this problem recently and I'm having a lot of trouble with it. Any help would be much appreciated!
Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a function with continuous derivatives of all orders and suppose that, for some $x\in \mathbb{R}$ the derivative $f'(x)$ is non-zero. Thus there exists an interval D containing x such that $f'(y)\neq 0$ for all $y\in D$. We can also show that f is Lipschitz with Lipschitz constant less than 1.
Show there is a unique function $g:f(D)\rightarrow D$ such that $f\circ g$ and $g\circ f$ are the identity maps on $D$ and $f(D)$ respectively. Also show that g is continuous and differentiable.
A:
$f'$ is continuous and nonzero on the interval $D$, so it has constant sign there; hence $f$ is strictly monotonic on $D$ and thus one-to-one. This means that $f:D\to f[D]$ has an inverse $g:f[D]\to D$, which is continuous by monotonicity. Moreover, $g$ is differentiable with $g'(x)=\frac{1}{f'(g(x))}$.
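For the derivative formula, once differentiability of $g$ is known (or argued directly via difference quotients), it is just the chain rule applied to $f\circ g=\mathrm{id}$ on $f(D)$:
$$f(g(x))=x \;\Longrightarrow\; f'(g(x))\,g'(x)=1 \;\Longrightarrow\; g'(x)=\frac{1}{f'(g(x))},$$
and the denominator is nonzero because $f'\neq 0$ on $D$.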
|
{
"pile_set_name": "StackExchange"
}
|
Q:
wavelength of seismic wave with a gaussian source
I want to know how to get the wavelength/frequency of a seismic wave if I only have a Gaussian source and the velocity (c = 4000 m/s) of the medium.
E.g. for a Ricker wavelet it would be easy to get the central frequency and maximum frequency of the source and compute the wavelength as c/f, but I have some issues with the Gaussian, and unfortunately I only have signals from a Gaussian source.
Let's assume my source time function in time domain looks like this:
source = 1/(2* np.pi * 17.**2) * np.exp( - (t-100)**2 / (2* 17.**2))
So a Gaussian with a variance of 17 squared, shifted to the right. In the frequency domain it will be centered around zero, and I don't really get a maximum frequency either. Is there a way to find these?
A:
The Fourier transform of a Gaussian is a Gaussian.
If your signal is given by
$$g(t)=\frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( -\frac{(t-t_0)^2}{2 \sigma^2}\right)$$
then your frequency spectrum is
$$G(f) = \exp(-2 \pi^2 \sigma^2 f^2),$$
where $f$ is the frequency (the angular frequency is $\omega = 2\pi f$).
I'll let you do the substitution with your particular numbers. Enjoy!
https://warwick.ac.uk/fac/sci/mathsys/courses/msc/ma934/resources/notes8.pdf
http://www.cse.yorku.ca/~kosta/CompVis_Notes/fourier_transform_Gaussian.pdf
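For completeness, here is a small numerical sanity check of that formula with numpy, using the numbers from the question (it uses the normalized Gaussian from this answer rather than the question's prefactor; the sample interval and the 5% cutoff used to define a "maximum" frequency are arbitrary choices):
import numpy as np

sigma, t0 = 17.0, 100.0            # parameters from the question's source term
dt = 0.5                           # sample interval (same units as t)
t = np.arange(0.0, 400.0, dt)
g = np.exp(-(t - t0)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

f = np.fft.rfftfreq(len(t), d=dt)
G_num = np.abs(np.fft.rfft(g)) * dt          # one-sided amplitude spectrum
G_ana = np.exp(-2 * np.pi**2 * sigma**2 * f**2)
print(np.max(np.abs(G_num - G_ana)))         # should be tiny

# e.g. take the highest frequency where the spectrum is still 5% of its peak
f_max = f[G_ana >= 0.05][-1]
print(f_max, 4000.0 / f_max)                 # "max" frequency and shortest wavelength (if t is in seconds)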
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Replace a row in a csv file
So i have a csv file:
Today</span><span class='invisible'>3:00 p.m. Nov. 29, 2013
Today</span><span class='invisible'>1:52 p.m. Nov. 29, 2013
Today</span><span class='invisible'>12:50 p.m. Nov. 29, 2013
Today</span><span class='invisible'>11:42 a.m. Nov. 29, 2013
Today</span><span class='invisible'>9:56 a.m. Nov. 29, 2013
Nov. 27, 2013
Nov. 27, 2013
Nov. 27, 2013
Nov. 27, 2013
Nov. 25, 2013
and I need to replace all the rows that start with Today and replace it with the date that is currently in the line. So far I've been running a for-loop:
rownumber=$(wc -l < DateStamp.csv)
for ((i=1; i<=$rownumber; i++))
do
s1=$(awk -v "row=$i" -F'@' 'NR == row { print $1 }' DateStamp.csv)
if [[ "$s1" =~ 'Today' ]]
then
year=$(date +'%Y')
text=$(awk -v "row=$i" -F'@' 'NR == row { print $1 }' DateStamp.csv | grep -o -P "(?<=m\. ).*(?<=$year)")
__SOME COMMAND__
else
break
fi
done
I want my output to be this:
Nov. 29, 2013
Nov. 29, 2013
Nov. 29, 2013
Nov. 29, 2013
Nov. 29, 2013
Nov. 27, 2013
Nov. 27, 2013
Nov. 27, 2013
Nov. 27, 2013
Nov. 25, 2013
Is there a line which I can replace with SOME COMMAND that will replace the row I am in with my variable text? Maybe a sed or awk command?
A:
Assuming that the presence of a.m. or p.m. in front of the date is guaranteed, do you really need to parse and extract the date from the Today lines? The following may suffice
sed 's/.*[ap]\.m\.\s\+\(.*\)$/\1/' DateStamp.csv
This uses a capture group \(.*\) to collect the portion after a.m. or p.m. and replaces the entire input line with the contents of this capture group. Otherwise sed just passes the original line through.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Dynamic add element to the top of the page
PROBLEM: I have a table whose rows are built from collection data. When I insert an element into Mongo and it is published to my page, the element is added at the bottom of the table.
Posts = new Mongo.Collection('posts');
Meteor.publish('allPosts', function(){
return Posts.find({}, { sort: { date: -1 }} );
});
Meteor.subscribe("allPosts");
Posts.insert({
date: 1
})
Posts.insert({
date: 2
})
Posts.insert({
date: 3
})
<table class=" table">
{{#each posts}}
{{> postJobs}}
{{/each}}
</table>
<template name="postJobs">
<tr class="smallInfo">
<td>{{date}}</td>
</tr>
</template>
In DOM I have:
<table>
<tr>
<td>1</td> // must be 3
</tr>
<tr>
<td>2</td> // must be 2
</tr>
<tr>
<td>3</td> // must be 1
</tr>
</table>
All new inserts should be added at the top of my page (table).
EDIT:
My problem is with dynamic inserts into the collection (and I know about the -1 sort syntax).
EXAMPLE :
Meteor.startup(function(){
Meteor.setTimeout(function(){Test("10");}, 1000);
function Test(x)
{
Posts.insert( {
submitDate: moment().format()
});
}
Meteor.setTimeout(function(){Test("10");}, 10000);
If I sort by submitDate it shows like this:
<tr><td>10-10-10</td></tr> // must be 10-10-20
<tr><td>10-10-20</td></tr> // must be 10-10-10
BUT when I refresh my page (F5), everything is OK:
<tr><td>10-10-20</td></tr>
<tr><td>10-10-10</td></tr>
A:
Why do you not sort descending?
{sort: { date: -1 } }
Cheers,
Tom
UPDATE:
You can find a live example at a MeteorPad I prepared:
http://meteorpad.com/pad/Ba5DTe94NjFi3ZTPA/Playground_Flow-Router_Chat
You have to do the sort on the client side, not only within the publish method.
This is why you get sorted results the first time but then new documents appear in insertion order.
If you do the sort in the client-side find(), minimongo will apply it to each new document.
Hope this helps.
Tom
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ГербОвый or гЕрбовый?
Good afternoon! Could you please tell me about the stress in the adjective "гербовый"?
A:
The stress falls on the first syllable throughout the declension: герб - гЕрбовый, гЕрбового, гЕрбовому, гЕрбовым, о гЕрбовом.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why HttpRequest returns NULL while reading for second time?
I am creating webservices in Java using RESTlet API. I am using 2.0 stable version.
I am getting a NullPointerException while reading the request for the second time.
I have applied a Filter to the web services for security purposes. In the filter class I check that the request contains the expected parameters. If the check is successful, the call is passed on to the web service.
While processing in the web service I get NULL from the request. The following is my Filter class -
public class MyFilter extends Filter {
public MyFilter(Context context) {
super(context);
}
@Override
protected int beforeHandle(Request request, Response response) {
int result = STOP;
try
{
String requestSt = request.getEntityAsText();
// Check for validation
} catch (Exception e) {
e.printStackTrace();
}
return result;
}
}
When I read the request for the second time it returns NULL. Even when I just write -
System.out.println(request.getEntityAsText());
before -
String requestSt = request.getEntityAsText();
then it also gives me a NullPointerException at the line String requestSt = ....
So please suggest a solution for reading the request multiple times.
A:
You can wrap the incoming request entity to ensure its content isn't lost after reading it a first time. For this purpose, you can leverage the org.restlet.engine.io.BufferingRepresentation class.
This line should do it:
request.setEntity(new BufferingRepresentation(request.getEntity()));
You might need to update to version 2.1 which is the current stable version.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Resize font size of h1 when browser resolution is below 1260px
I'm stuck. I am designing a webpage at a resolution of 1266px.
I want it to fit on 1024px because 13% of the people are still using that resolution.
So I thought I could change the font size of my h1 and my nav_items,
so that they will fit if the resolution drops below 1260px; otherwise I want them to stay at the fixed px sizes given to them.
This is what I got:
header {
width:100%;
background: #222;
color: white;
height: 80px;
margin: 0px 0px 0px 0px;
padding: 0px 0px 0px 0px;
-webkit-box-shadow: 0 1px 10px rgba(0, 0, 0, 0.6);
-moz-box-shadow: 0 1px 10px rgba(0, 0, 0, 0.6);
box-shadow: 0 1px 10px rgba(0, 0, 0, 0.6);
text-shadow: 2px 2px 0 #000;
}
header h1 { /* TAVERNE DE STADSPOORT */
font-size: 40px;
float:left;
color:white;
margin: 0px 0px 0px 0px; /* top, left, bottom, right*/
padding: 10px 30px 10px 20px; /* top, right, bottom, right*/
}
header ul {
float: right;
width: 680px;
padding: 20px 0px 0px 0px; /* top, right, bottom, left */
}
.nav_items li {
display:inline;
font-size:22px;
margin: 0px 4px 0px 4px; /* top, left, bottom, right (outside) */
padding: 0px 4px 0px 4px; /* top, left, bottom, right*/
}
<div id="logo">
<h1 class="nav_items"><a href="index.html">Taverne De Stadspoort</a></h1>
</div>
<nav id="top">
<ul class="nav_items">
<li><a href="/">Menukaart</a></li>
<li><a href="/">Suggesties</a></li>
<li><a href="/">Onze wijnen</a></li>
<li><a href="/">Ligging/contact</a></li>
<li><a href="/">Gastenboek</a></li>
</ul>
</nav>
Maybe I did something wrong and I could resolve this problem another way.
Or should I use jQuery instead to achieve this? Since I am not familiar with it,
another solution would be great.
I added this (thanks all):
@media all and (max-width: 1259px) { h1 { font-size: 25px; } }
@media all and (min-width: 1260px) { h1 { font-size: 40px; } }
/* http://www.w3.org/TR/css3-mediaqueries/ */
A:
E.g.
@media all and (min-width: 1266px) {
h1 {
font-size: (large-size)px;
}
}
@media all and (max-width: 1265px) {
h1 {
font-size: (smaller-size)px;
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Compile failed - gwt maven
HTTP ERROR 404
Problem accessing /WEB-INF/error/error.jsp. Reason:
/WEB-INF/error/error.jsp
Caused by:
Compile failed; see the compiler error output for details.
at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:933)
at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:757)
at org.apache.jasper.compiler.Compiler.generateClass(Compiler.java:382)
at org.apache.jasper.compiler.Compiler.compile(Compiler.java:472)
at org.apache.jasper.compiler.Compiler.compile(Compiler.java:451)
at org.apache.jasper.compiler.Compiler.compile(Compiler.java:439)
at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:511)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:295)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:292)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:236)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:686)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
A:
Maybe you can see the error when you enable the Apache Jasper logs (as the error message suggests).
You need to change the log level for "org.apache.jasper" to "error" and add a console appender; then the JSP compiler errors appear in the Java console window, as desired.
For example, paste this into log4j.xml, assuming you have an appender named "console"
<logger name="org.apache.jasper">
<level value="error"/>
<appender-ref ref="console"/>
</logger>
A possible problem could be that your Jetty version is not compatible with your Java version.
Here is the compatibility list:
Jetty 9, Servlet 3.1, Java 1.8
Jetty 8, Servlet 3.0, Java 1.6
Jetty 7, Servlet 2.5, Java 1.5
Jetty 6, Servlet 2.5, Java 1.4
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ActiveRecord::StatementInvalid (PG::Error: ERROR: column does not exist
Having issues after deploying to Heroku with tables queried using find statements. They worked perfectly on a local server, but up on Heroku I get the 'We're sorry, but something went wrong' error. I came across some other threads suggesting it may be an issue with case sensitivity in PostgreSQL (not that I know much about it), but I tried making sure the capitalization was consistent throughout and it didn't seem to make a difference.
Heroku Logs
2012-08-14T00:31:30+00:00 app[web.1]: Processing by ApartmentsController#aptMenu as HTML
2012-08-14T00:31:30+00:00 app[web.1]: Completed 500 Internal Server Error in 4ms
2012-08-14T00:31:30+00:00 app[web.1]:
2012-08-14T00:31:30+00:00 app[web.1]: LINE 1: SELECT "apartments".* FROM "apartments" WHERE (bed = 0)
2012-08-14T00:31:30+00:00 app[web.1]: ActiveRecord::StatementInvalid (PG::Error: ERROR: column "bed" does not exist
Controller
def aptMenu
@apartments = Apartment.all
@studio = Apartment.find(:all, :conditions => ["bed = 0"])
Schema
create_table "apartments", :force => true do |t|
t.integer "bed"
I'm pretty confused by this, since as far as I can see the 'bed' column does exist. Any help would be much appreciated.
A:
This problem occurs because the apartments table in the production database is missing the bed column. You need to run the migration on the production side:
heroku run rake db:migrate
See if the new column comes up.
Make sure you restart your heroku as well.
heroku restart
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to refresh a background image depending on the proximity of a beacon?
I want to change the background image of my app, depending on the proximity of a beacon. If I run the app (see code below) on my iPhone, it shows the background image correctly depending on the proximity - but if I move, it doesn't change/refresh the background image.
I tried the same with a background color (see code below) and it worked: the app changes its background color depending on the proximity of the beacon, and if I move it correctly changes/refreshes the background color. Just the background image doesn't work properly.
How can I force the background image to refresh correctly?
func updateDistance(_ distance: CLProximity) {
UIView.animate(withDuration: 0.8) {
switch distance {
case .unknown:
let backgroundImage = UIImageView(frame: UIScreen.main.bounds)
backgroundImage.image = UIImage(named: "1.jpg")
//backgroundImage.reloadInputViews()
backgroundImage.contentMode = UIView.ContentMode.scaleAspectFill
self.view.insertSubview(backgroundImage, at: 0)
//print("nothing here..")
case .far:
let backgroundImage = UIImageView(frame: UIScreen.main.bounds)
//backgroundImage.image = UIImage(named: "2.png")
backgroundImage.image = UIImage(named: "3.png")
//backgroundImage.reloadInputViews()
backgroundImage.contentMode = UIView.ContentMode.scaleAspectFill
self.view.insertSubview(backgroundImage, at: 0)
case .near:
let backgroundImage = UIImageView(frame: UIScreen.main.bounds)
backgroundImage.image = UIImage(named: "4.png")
//backgroundImage.reloadInputViews()
backgroundImage.contentMode = UIView.ContentMode.scaleAspectFill
self.view.insertSubview(backgroundImage, at: 0)
case .immediate:
let backgroundImage = UIImageView(frame: UIScreen.main.bounds)
backgroundImage.image = UIImage(named: "5.jpg")
//backgroundImage.reloadInputViews()
backgroundImage.contentMode = UIView.ContentMode.scaleAspectFill
self.view.insertSubview(backgroundImage, at: 0)
@unknown default:
// fatalError()
let backgroundImage = UIImageView(frame: UIScreen.main.bounds)
backgroundImage.image = UIImage(named: "6.jpg")
//backgroundImage.reloadInputViews()
backgroundImage.contentMode = UIView.ContentMode.scaleAspectFill
self.view.insertSubview(backgroundImage, at: 0)
}
}
func updateDistance(_ distance: CLProximity) {
UIView.animate(withDuration: 0.8) {
switch distance {
case .unknown:
self.view.backgroundColor = UIColor.gray
print("nothing here..")
case .far:
self.view.backgroundColor = UIColor.blue
case .near:
self.view.backgroundColor = UIColor.orange
case .immediate:
self.view.backgroundColor = UIColor.red
// self.sendUserNotification()
@unknown default:
<#fatalError()#>
}
}
}
A:
You are inserting all the images at index zero. First remove the previous background image view if it exists; otherwise switching between proximities keeps inserting image views, all at index zero, and since index zero is the back of the view hierarchy the first image you inserted stays in front of the later ones, so the background never appears to change. Alternatively, just add it as a subview and bring it to the front if needed, instead of inserting at zero.
Color works because the view only ever has one background color, while the image views keep appending one by one. For a better understanding, try view debugging.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I create chrome.notification real Time?
Using the Google Chrome extension API, I have the following code that shows a notification built from JSON. Once the notification is shown and I click on it, it opens multiple tabs with the URL (that's the bug). I need to solve that problem because it should open only one tab with the URL; this is the only problem. See the code below:
var timeChange = 1000, jsons = ['JSON_URL'];
updateValue = function() {
var colorStatus = 0;
chrome.storage.local.get(function (dataStorage) {
$.getJSON(jsons+'?'+$.now(), function (data) {
var jLastPost = {'LastNotification':''}, sizePost = (data.results.length - 1), dataLastPost = data.results[0];
totalEntradas = data.totalEntradas ? data.totalEntradas : '';
$.each(data.results, function (k,v) {
if (($.inArray('post-'+v.id, dataStorage.IDs) !== -1) && (v.date_status > 0)) {
colorStatus = 1;
}
});
chrome.browserAction.setBadgeText({'text': totalEntradas.toString()});
if (dataStorage.LastNotification !== dataLastPost.id)
{
jLastPost.LastNotification = dataLastPost.id;
chrome.storage.local.set(jLastPost);
chrome.notifications.create(dataLastPost.id,{
type: 'basic',
title: dataLastPost.titulo,
message: 'Now for you!',
iconUrl: dataLastPost.image
}, function (id) {});
chrome.notifications.onClicked.addListener(function () {
chrome.tabs.create({url: dataLastPost.uri});
});
chrome.notifications.clear(dataLastPost.id, function() {});
return false;
}
});
});
setTimeout(updateValue, timeChange);
}
updateValue();
A:
You are attaching a chrome.notifications.onClicked listener every second when you run updateValue. At a minimum you should move the listener outside of the method.
Something along these lines should work.
var timeChange = 1000, jsons = ['JSON_URL'];
var lastPostUri;
chrome.notifications.onClicked.addListener(function () {
if (lastPostUri) {
chrome.tabs.create({url: lastPostUri});
}
});
updateValue = function() {
var colorStatus = 0;
chrome.storage.local.get(function (dataStorage) {
$.getJSON(jsons+'?'+$.now(), function (data) {
var jLastPost = {'LastNotification':''}, sizePost = (data.results.length - 1), dataLastPost = data.results[0];
totalEntradas = data.totalEntradas ? data.totalEntradas : '';
$.each(data.results, function (k,v) {
if (($.inArray('post-'+v.id, dataStorage.IDs) !== -1) && (v.date_status > 0)) {
colorStatus = 1;
}
});
chrome.browserAction.setBadgeText({'text': totalEntradas.toString()});
if (dataStorage.LastNotification !== dataLastPost.id)
{
jLastPost.LastNotification = dataLastPost.id;
chrome.storage.local.set(jLastPost);
lastPostUri = dataLastPost.uri
chrome.notifications.create(dataLastPost.id,{
type: 'basic',
title: dataLastPost.titulo,
message: 'Now for you!',
iconUrl: dataLastPost.image
}, function (id) {});
chrome.notifications.clear(dataLastPost.id, function() {});
return false;
}
});
});
setTimeout(updateValue, timeChange);
}
updateValue();
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Django & webapp living together on GAE?
Is it possible to have both frameworks available?
So that I could have
from google.appengine.ext import webapp
from django.template.loader import render_to_string
class MainPage(webapp.RequestHandler):
def get(self):
self.response.out.write(render_to_string('some.template'))
and
from django.http import HttpResponse
def hello(request):
return HttpResponse("Hello world")
running mapped to different URLs?
EDIT:
The question basically boils down to how do I implement
urlpatterns = [
# webapp-style handler
(r'/webapp', views.MainPage),
# django
(r'/django', views.hello),
]
A:
Certainly - as long as you're not using 0.9.6 in one sub-app and 1.0 (via the use_library call) in the other. Just map URL regular expressions to separate handlers in app.yaml and you're good to go.
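For reference, a rough sketch of what that app.yaml mapping could look like on the old Python runtime, where each handler points at a script (the script names and URL patterns here are hypothetical placeholders for your own modules):
handlers:
- url: /webapp.*
  script: webapp_main.py
- url: /django.*
  script: django_main.py

Here webapp_main.py would run your webapp WSGIApplication, and django_main.py would bootstrap Django's handler.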
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can we define all the Named Queries at one place like some property file instead of writing in the entity class
How can I define all named queries in one place (such as a single class) instead of writing @NamedQuery at the top of each entity class?
For example, I have a base class. In this class I can define all named queries.
Base.java
@Entity
@NamedQueries({
@NamedQuery(name="Student.findAll", query="SELECT s FROM Student s"),
@NamedQuery(name="Employee.findAll", query="SELECT e FROM Employee e")})
class Base {
}
Assume now I have two entities, Student and Employee. I want to access the named queries from the Base class. Is that possible, and if so, how?
class Employee {
}
class Student {
}
From the Employee and Student classes, I want to access the named queries defined in the Base class. Is that possible, and if it is, how can I access them?
A:
We did a similar thing on one project. We had one entity that contained all of our named queries and (as @jimfromsa already said) they are accessible to your code wherever you need them because you are calling them by their unique name.
@Entity
@NamedQueries({
@NamedQuery(name="Student.findAll", query="SELECT s FROM Student s"),
@NamedQuery(name="Employee.findAll", query="SELECT e FROM Employee e"),
...
@NamedQuery(name="SomeEntity.someQuery", query="SELECT se FROM SomeEntity se")
})
public class NamedQueryHolder implements Serializable {
// entity needs to have an id
@Id
private Integer id;
// getter and setter for id
}
Note that this entity doesn't need to be mapped to an existing table, as long as you don't use it anywhere in the code. At least this approach worked with EclipseLink, never tried it with Hibernate.
In your case, if you don't want this extra query holder entity, you can put all named queries in Base class and it will work. However, I don't think it is a good idea to access named queries from your entity classes, they are called POJOs for a reason. Then again, it is all a matter of design (take a look at Anemic Domain Model).
A:
If you don't want the named queries to be mixed into your entity code, you can store them in separate XML files like this:
employee.xml:
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm
http://java.sun.com/xml/ns/persistence/orm_1_0.xsd" version="1.0">
<package>mypersistenceunit</package>
<named-query name="Employee.findAll">
<query><![CDATA[SELECT e FROM Employee e]]></query>
</named-query>
</entity-mappings>
student.xml:
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm
http://java.sun.com/xml/ns/persistence/orm_1_0.xsd" version="1.0">
<package>mypersistenceunit</package>
<named-query name="Student.findAll">
<query><![CDATA[SELECT s FROM Student s]]></query>
</named-query>
</entity-mappings>
Then you can include them in your persistence.xml like this:
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
version="1.0">
<persistence-unit name="some-persistence-unit">
<jta-data-source>java:/someDS</jta-data-source>
<!-- Named JPQL queries per entity, but other organization is possible -->
<mapping-file>META-INF/employee.xml</mapping-file>
<mapping-file>META-INF/student.xml</mapping-file>
</persistence-unit>
</persistence>
Now you should be able to call these named queries from your Employee and Student classes by invoking EntityManager.createNamedQuery().
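For example (assuming an injected EntityManager called em; this is just an illustrative call, not part of the mapping files above), on JPA 2.0+ you can use the typed overload:
List<Employee> employees = em.createNamedQuery("Employee.findAll", Employee.class)
                             .getResultList();

On JPA 1.0 you would drop the class argument and cast the result instead.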
This way, you don't have to create a Base class just to store queries.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is second moment of exponential also Memory Less?
I know that the exponential is memory less and that means that:
$$
E[X\mid X>1]=1+E[X]
$$
Now, does the memory less property also hold for the second moment?
Specifically, is the following true? $$
E[X^2\mid X\geq1]=1+E[X^2]
$$
A:
Actually it should say
$$
\operatorname{E}(X^2\mid X\ge 1) = \operatorname{E}((1+X)^2).
$$
The conditional distribution of $X-a$ given that $X\ge a$ is the same as the marginal (i.e. unconditional) distribution of $X$. Hence the conditional distribution of $X$ given that $X\ge a$ is the same as the marginal distribution of $a+X$.
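Expanding the right-hand side with the unconditional moments makes the difference explicit:
$$
\operatorname{E}(X^2\mid X\ge 1) = \operatorname{E}((1+X)^2) = 1 + 2\operatorname{E}(X) + \operatorname{E}(X^2),
$$
so the proposed identity is off by the cross term $2\operatorname{E}(X)$; for an exponential with rate $\lambda$ the conditional second moment is $1 + 2/\lambda + 2/\lambda^2$.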
|
{
"pile_set_name": "StackExchange"
}
|
Q:
c# Windows service with Timer not executing code
I have the following code, which does not execute when the Windows service starts.
I found solutions here but they didn't work for me.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.ServiceProcess;
using System.Configuration;
using System.Threading.Tasks;
namespace TFS_JIRA_sync
{
public partial class SyncProcess : ServiceBase
{
public SyncProcess()
{
InitializeComponent();
}
private System.Timers.Timer timer = new System.Timers.Timer();
protected override void OnStart(string[] args)
{
this.timer.Interval = ScanPeriod.period * 60000; //turn minutes to miliseconds
this.timer.Elapsed += new System.Timers.ElapsedEventHandler(this.OnTimer);//OnTimer;
this.timer.Enabled = true;
this.timer.AutoReset = true;
this.timer.Start();
}
private void OnTimer(object sender, System.Timers.ElapsedEventArgs e)
{
Processing proc = new Processing();
proc.doProcess();
}
protected override void OnStop()
{
}
}
}
In the Program class:
using System;
using System.Collections.Generic;
using System.ServiceProcess;
using System.Threading.Tasks;
[assembly: log4net.Config.XmlConfigurator(Watch = true)]
namespace TFS_JIRA_sync
{
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
static void Main()
{
ServiceBase[] ServicesToRun;
ServicesToRun = new ServiceBase[]
{
new SyncProcess()
};
ServiceBase.Run(ServicesToRun);
//Processing proc = new Processing();
//proc.doProcess();
}
}
}
When I comment out the part starting with "ServiceBase[]..." and uncomment "Processing ..." in the Program class, it works fine.
But when my code runs as a Windows service, nothing happens.
A:
As I see it, your service is not doing work all the time; it only wakes up every ScanPeriod.period minutes (this.timer.Interval = ScanPeriod.period * 60000;). For your scenario, rather than having a Windows service with a timer (and spending time resolving this issue), I'd recommend using a scheduled task (as suggested by this answer).
It references Jon Galloway's article, which gives a lot of reasons why it is better to use a scheduled task.
If you're writing a Windows Service that runs a timer, you should re-evaluate your solution.
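For reference, once the processing is moved into a plain console executable, a recurring task can be registered with schtasks along these lines (task name, path and interval are placeholders):
schtasks /create /tn "TfsJiraSync" /tr "C:\Tools\TfsJiraSync.exe" /sc minute /mo 5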
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SQL Query to return different values of a column having same id in another column in same row
My table 'Details' looks like this,
ID Name City
1 Arun Chennai
2 Arun Mumbai
3 Raj Bangalore
4 Raj Chennai
For each Name, I want to select its different values in the 'City' column combined into a single row.
I have written the below query
select a.id, b.id, a.Name, a.city, b.city
from Details a join Details b on a.Name = b.Name
where a.City <> b.City
For which the output is
id id Name City City
1 2 Arun Chennai Mumbai
2 1 Arun Mumbai Chennai
3 4 Raj Bangalore Chennai
4 3 Raj Chennai Bangalore
But I need the output in one row per Name, with both ids and both cities:
id id Name City City
1 2 Arun Chennai Mumbai
3 4 Raj Bangalore Chennai
Kindly advise
A:
Try this:
select a.id, b.id, a.Name, a.city, b.city
from Details a
inner join Details b on a.Name = b.Name and a.id < b.id
Note: This will work as long as you always have two records per Name. If you have either one, or two records, then you can use LEFT JOIN instead of INNER JOIN.
Edit: If you can have one, two, or three records per Name, then your query can be formulated as follows:
select a.id, b.id, a.Name, a.city, b.city
from Details a
left join Details b on a.Name = b.Name and a.id < b.id
left join Details c on a.Name = c.Name and b.id < c.id
Note: If the number of records varies dynamically then you have to use some sort of grouping with string concatenation, an equivalent of GROUP_CONCAT used in MySQL.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ng-mouseover not working
I am at the very beginning of learning Angular. Right now I am trying to implement a ng-repeat div which is populated from a collection. I also want to implement a mouseOver function which changes the text in a paragraph when I hover over one of the elements.
<!DOCTYPE html>
<html ng-app="MyApp">
<head>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<script type="text/javascript">
var app = angular.module('MyApp', []);
app.controller('RezeptController', function ($scope) {
this.rezepte = rezeptCollection;
this.mouseOverElement = function (element) {
this.msg = "Mouse Over: " + element.name;
}
});
var rezeptCollection = [
{name: 'Okroshka', herkunft: 'Russland'},
{name: 'Sushi', herkunft: 'Japan'}
];
</script>
<title></title>
<meta charset="utf-8" />
</head>
<body class="container" ng-controller="RezeptController as rezepte">
<div ng-repeat="rezept in rezepte.rezepte" >
<div ng-mouseover="mouseOverElement(element)">
{{rezept.name}}
</div>
</div>
<p>{{ msg }}</p>
</body>
</html>
This code does get the job of displaying the elements done. Unfortunately, mouseOverElement does not trigger.
I have to admit that I did not understand the scope concept entirely. So what I tried is to change the app.controller definition to:
app.controller('RezeptController', function ($scope) {
$scope.rezepte = rezeptCollection;
$scope.mouseOverElement = function (element) {
$scope.msg = "Mouse Over: " + element.name;
}
});
This does not fix the problem plus the items are not shown at all. Please help me understand what I am missing here.
A:
I believe your issue stems from the fact that you are using the "RezeptController as rezepte" notation, which is good practice, but then you are being inconsistent about how you access things in that scope.
You need to make sure you are prefixing any scope variable or function call with rezepte. It is also good practice to take the confusion out of this by aliasing this as rezepte in your controller:
<!DOCTYPE html>
<html ng-app="MyApp">
<head>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<script type="text/javascript">
var app = angular.module('MyApp', []);
app.controller('RezeptController', function ($scope) {
var rezepte = this;
rezepte.rezepte = rezeptCollection;
rezepte.mouseOverElement = function (element) {
rezepte.msg = "Mouse Over: " + element.name;
}
});
var rezeptCollection = [
{name: 'Okroshka', herkunft: 'Russland'},
{name: 'Sushi', herkunft: 'Japan'}
];
</script>
<title></title>
<meta charset="utf-8" />
</head>
<body class="container" ng-controller="RezeptController as rezepte">
<div ng-repeat="rezept in rezepte.rezepte" >
<div ng-mouseover="rezepte.mouseOverElement(rezept)">
{{rezept.name}}
</div>
</div>
<p>{{ rezepte.msg }}</p>
</body>
</html>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Printing the contents of a php array
I am trying to print or echo out the contents of the GET and POST arrays. I am doing this for debugging reasons as I need to check exactly what is being passed to my submit form.
I am currently using the following code, but nothing is being displayed even though I can see some GET data in the url of the page.
<php print_r($_POST); print_r($_GET); ?>
On submit I get the following in the url, so the data is going somewhere :
&token=3dce374d23c82eaadc8463bc477a418b5ed2dfa2&name=Mrs Newton&date=27-01-2012&chronoform=addupdatelead&event=submit
A:
You need a question mark before php; the opening tag should be:
<?php
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Using the Rust compiler to prevent forgetting to call a method
I have some code like this:
foo.move_right_by(10);
//do some stuff
foo.move_left_by(10);
It's really important that I perform both of those operations eventually, but I often forget to do the second one after the first. It causes a lot of bugs and I'm wondering if there is an idiomatic Rust way to avoid this problem. Is there a way to get the rust compiler to let me know when I forget?
My idea was to maybe somehow have something like this:
// must_use will prevent us from forgetting this if it is returned by a function
#[must_use]
pub struct MustGoLeft {
steps: usize;
}
impl MustGoLeft {
fn move(&self, foo: &mut Foo) {
foo.move_left_by(self.steps);
}
}
// If we don't use left, we'll get a warning about an unused variable
let left = foo.move_left_by(10);
// Downside: move() can be called multiple times which is still a bug
// Downside: left is still available after this call, it would be nice if it could be dropped when move is called
left.move();
Is there a better way to accomplish this?
Another idea is to implement Drop and panic! if the struct is dropped without having called that method. This isn't as good though because it's a runtime check and that is highly undesirable.
Edit: I realized my example may have been too simple. The logic involved can get quite complex. For example, we have something like this:
foo.move_right_by(10);
foo.open_box(); // like a cardboard box, nothing to do with Box<T>
foo.move_left_by(10);
// do more stuff...
foo.close_box();
Notice how the operations aren't performed in a nice, properly nested order. The only thing that's important is that the inverse operation is always called afterwards. The order sometimes needs to be specified in a certain way in order to make the code work as expected.
We can even have something like this:
foo.move_right_by(10);
foo.open_box(); // like a cardboard box, nothing to do with Box<T>
foo.move_left_by(10);
// do more stuff...
foo.move_right_by(10);
foo.close_box();
foo.move_left_by(10);
// do more stuff...
A:
You can use phantom types to carry around additional information, which can be used for type checking without any runtime cost. A limitation is that move_left_by and move_right_by must return a new owned object because they need to change the type, but often this won't be a problem.
Additionally, the compiler will complain if you don't actually use the types in your struct, so you have to add fields that use them. Rust's std provides the zero-sized PhantomData type as a convenience for this purpose.
Your constraint could be encoded like this:
use std::marker::PhantomData;
pub struct GoneLeft;
pub struct GoneRight;
pub type Completed = (GoneLeft, GoneRight);
pub struct Thing<S = ((), ())> {
pub position: i32,
phantom: PhantomData<S>,
}
// private to control how Thing can be constructed
fn new_thing<S>(position: i32) -> Thing<S> {
Thing {
position: position,
phantom: PhantomData,
}
}
impl Thing {
pub fn new() -> Thing {
new_thing(0)
}
}
impl<L, R> Thing<(L, R)> {
pub fn move_left_by(self, by: i32) -> Thing<(GoneLeft, R)> {
new_thing(self.position - by)
}
pub fn move_right_by(self, by: i32) -> Thing<(L, GoneRight)> {
new_thing(self.position + by)
}
}
You can use it like this:
// This function can only be called if both move_right_by and move_left_by
// have been called on Thing already
fn do_something(thing: &Thing<Completed>) {
println!("It's gone both ways: {:?}", thing.position);
}
fn main() {
let thing = Thing::new()
.move_right_by(4)
.move_left_by(1);
do_something(&thing);
}
And if you miss one of the required methods,
fn main(){
let thing = Thing::new()
.move_right_by(3);
do_something(&thing);
}
then you'll get a compile error:
error[E0308]: mismatched types
--> <anon>:49:18
|
49 | do_something(&thing);
| ^^^^^^ expected struct `GoneLeft`, found ()
|
= note: expected type `&Thing<GoneLeft, GoneRight>`
= note: found type `&Thing<(), GoneRight>`
A:
I don't think #[must_use] is really what you want in this case. Here are two different approaches to solving your problem. The first one is to just wrap up what you need to do in a closure, and abstract away the direct calls:
#[derive(Debug)]
pub struct Foo {
x: isize,
y: isize,
}
impl Foo {
pub fn new(x: isize, y: isize) -> Foo {
Foo { x: x, y: y }
}
fn move_left_by(&mut self, steps: isize) {
self.x -= steps;
}
fn move_right_by(&mut self, steps: isize) {
self.x += steps;
}
pub fn do_while_right<F>(&mut self, steps: isize, f: F)
where F: FnOnce(&mut Self)
{
self.move_right_by(steps);
f(self);
self.move_left_by(steps);
}
}
fn main() {
let mut x = Foo::new(0, 0);
println!("{:?}", x);
x.do_while_right(10, |foo| {
println!("{:?}", foo);
});
println!("{:?}", x);
}
The second approach is to create a wrapper type which calls the function when dropped (similar to how Mutex::lock produces a MutexGuard which unlocks the Mutex when dropped):
#[derive(Debug)]
pub struct Foo {
x: isize,
y: isize,
}
impl Foo {
fn new(x: isize, y: isize) -> Foo {
Foo { x: x, y: y }
}
fn move_left_by(&mut self, steps: isize) {
self.x -= steps;
}
fn move_right_by(&mut self, steps: isize) {
self.x += steps;
}
pub fn returning_move_right(&mut self, x: isize) -> MovedFoo {
self.move_right_by(x);
MovedFoo {
inner: self,
move_x: x,
move_y: 0,
}
}
}
#[derive(Debug)]
pub struct MovedFoo<'a> {
inner: &'a mut Foo,
move_x: isize,
move_y: isize,
}
impl<'a> Drop for MovedFoo<'a> {
fn drop(&mut self) {
self.inner.move_left_by(self.move_x);
}
}
fn main() {
let mut x = Foo::new(0, 0);
println!("{:?}", x);
{
let wrapped = x.returning_move_right(5);
println!("{:?}", wrapped);
}
println!("{:?}", x);
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to properly draw Residual Neural Network blocks with Graphviz?
I am trying to create a graph showing the ConvNet architecture with residual connections. I use the following graph definition.
digraph Model {
node [shape=box];
input [label="Input"];
n1 [label="Conv2d(256, BN, ReLU)"];
n2 [label="Conv2D(256, BN, ReLU)"];
n3 [label="Conv2D(128, BN, ReLU)"];
n4 [label="Conv2D(128, BN, ReLU)"];
n5 [label="GlobalPool2d"];
n6 [label="Flatten"];
top [label="Dense(1, Sigmoid)"];
add1 [label="Add"];
add2 [label="Add"];
input -> n1;
n1 -> n2;
n1 -> add1;
n2 -> add1;
add1 -> n3;
n3 -> n4;
n3 -> add2;
n4 -> add2;
add2 -> n5;
n5 -> n6 -> top;
}
The generated plot is shown in the image below.
The problem is that the residual connections shift the convolution layers to the left. I wonder if it's possible to align the boxes along a vertical axis, so that all the layers are on the same vertical line and the residual connections go around them? I've tried some manipulations with rank and rankdir but without any luck.
Could you help me with it? Or maybe point to the relevant part of the documentation where I can read how to properly do what I need?
A:
The weight attribute will do what you want.
Change two lines to:
n1 -> add1 [weight=0];
n3 -> add2 [weight=0];
You'll get this:
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Using numpy.median on a masked array
I'm a little confused about the output of numpy.median in the case of masked arrays. Here is a simple example (assuming numpy is imported - I have version 1.6.2):
>>> a = [3.0, 4.0, 5.0, 6.0, numpy.nan]
>>> am = numpy.ma.masked_array(a, [numpy.isnan(x) for x in a])
I'd like to be able to use the masked array to ignore nanvalues in the array when calculating the median. This works for mean using either numpy.mean or the mean() method of the masked array:
>>> numpy.mean(a)
nan
>>> numpy.mean(am)
4.5
>>> am.mean()
4.5
However for median I get:
>>> numpy.median(am)
5.0
but I'd expect something more like this result:
>>> numpy.median([x for x in a if not numpy.isnan(x)])
4.5
and unfortunately a masked_array does not have a median method.
A:
Use np.ma.median on a MaskedArray.
[Explanation: If I remember correctly, the np.median does not support subclasses, so it fails to work correctly on np.ma.MaskedArray.]
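For example, reusing the arrays from the question (the value noted in the comment is what I'd expect, since the masked nan entry is ignored):
import numpy
a = [3.0, 4.0, 5.0, 6.0, numpy.nan]
am = numpy.ma.masked_array(a, [numpy.isnan(x) for x in a])
print(numpy.ma.median(am))  # 4.5 -- the masked nan entry is ignored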
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Class design complication (C++)
My classes are
Base
Derived_A
Derived_B
Parent
Child_One
Child_Two
Base has two signature functions:
virtual void foo( const Parent& ) = 0;
virtual void bar( const Base& ) = 0;
, which other parts of the program expect.
The problem is:
Derived_A treats Child_One and Child_Two the same. But Derived_B treats them differently.
How should I implement this?
One way is to find out what kind of object is passed to Derived_B.foo. This would apparently be "a design flaw".
The other way I tried is to change the signature functions as follows:
class Base
{
class Derived_A;
class Derived_B;
// virtual void bar( const Base& ) = 0;
virtual void bar( const Derived_A& ) = 0;
virtual void bar( const Derived_B& ) = 0;
}
class Derived_A: public virtual Base
{
virtual void foo( const Parent& ) = 0;
}
class Derived_B: public virtual Base
{
virtual void foo( const Child_A& ) = 0;
virtual void foo( const Child_B& ) = 0;
}
But now the bar function cannot use Base.foo. So I have to write the bar function twice, although the code is exactly the same.
Are there any other ways to deal with the problem? which one do you suggest?
P.S. I couldn't think of a good title. Please feel free to modify it.
A:
The problem you are describing is called Double Dispatch. The link describes the problem and a few possible approaches to a solution (including polymorphic function signatures and the visitor pattern).
A:
Without details of what the two type hierarchies' relation is with each other and how they interact, it's impossible to say what approach is appropriate. I've composed an overview of the other answers and another viable alternative that can be extended to the visitor pattern which was mentioned in a comment.
Performing the polymorphic behaviour in the children by implementing a virtual function in Parent, as already suggested by Joey Andres, is quite a typical object-oriented solution for this problem in general. Whether it's appropriate depends on the responsibilities of the objects.
The type detection suggested by Olayinka, and already mentioned in your question, certainly smells kludgy, but depending on the details it can be the least of N evils. It can be implemented with a member function returning an enum (I guess that's what Olayinka's answer tries to represent) or with a series of dynamic_casts as shown in one of the answers in the question you linked.
A trivial solution could be to overload foo in Base:
struct Base {
virtual void foo(const Parent&) = 0;
virtual void foo(const Child_Two&) = 0;
};
struct Derived_A: Base {
void foo(const Parent& p) {
// treat same
}
void foo(const Child_Two& p) {
foo(static_cast<const Parent&>(p));
}
};
struct Derived_B: Base {
void foo(const Parent& p) {
// treat Child_One (and other)
}
void foo(const Child_Two& p) {
// treat Child_Two
}
};
If there are other subtypes of Base that treat Child_One and Child_Two the same, then the implementation of foo(const Child_Two&) may be put in Base to avoid duplication.
The catch of this approach is that foo must be called with a reference of proper static type. The call will not resolve based on the dynamic type. That may be better or worse for your design. If you need polymorphic behaviour, you can use the visitor pattern which essentially adds virtual dispatch on top of the solution above:
struct Base {
void foo(Parent& p) {
p.accept(*this);
}
virtual void visit(Child_A&) = 0;
virtual void visit(Child_B&) = 0;
};
struct Parent {
virtual void accept(Base&) = 0;
};
struct Child_A: Parent {
void accept(Base& v) {
v.visit(*this);
}
};
// Child_B similarly
struct Derived_A: Base {
void treat_same(Parent&) {
// ...
}
void visit(Child_A& a) {
treat_same(a);
}
void visit(Child_B& b) {
treat_same(b);
}
};
struct Derived_B: Base {
void visit(Child_A&) {
// ...
}
void visit(Child_B&) {
// ...
}
};
There's a bit more boilerplate, but since you seem very averse to implementing the behaviour in the children, this may be good approach for you.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can Athena search all object versions in S3?
I've done some reading around whether this is possible, but I can't find anything concrete. If my data in S3 consists of versioned JSON files, can I use Athena to search all versions of each object?
Thanks,
A:
There is no specific information about this, but I would say the answer is no.
There would be very little need to load data from multiple versions of an object.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What charset should I use when sending an email with PHP?
I use this charset:
$mailHeader2 .= "Mime-Version: 1.0 \r\n";
$mailHeader2 .= "Content-Type: text/html; charset=utf-8";
A:
Assuming your problem relates to characters with accents and ñ, you most likely need to use the UTF-8 charset.
Using the information from this answer, this would be the code for your header:
$mailHeader2 .= "Content-Type: text/html; charset=utf-8";
And if the content of your email does not contain any HTML, then you can use this header:
$mailHeader2 .= "Content-Type: text/plain;charset=utf-8";
Also, make sure the charset of your PHP file is the same as the one you are setting in the header. That is, if you are going to send the email in UTF-8, your PHP file must also be saved in UTF-8.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Setting alarm via Uri reference
Intent intent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Audio.Media.EXTERNAL_CONTENT_URI);
startActivity(intent);
1. How can I get a Uri of a music file using the code above?
2. If I already have a Uri, how do I set it as a ringtone or alarm?
I also tried:
Convert a file path to Uri in Android
A:
To get a return result from an activity, use startActivityForResult instead of startActivity.
private int REQUEST_ACTION_PICK = 1;
...
Intent intent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Audio.Media.EXTERNAL_CONTENT_URI);
// startActivity(intent);
startActivityForResult(intent, REQUEST_ACTION_PICK); //fixme
// ...
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_ACTION_PICK && resultCode == RESULT_OK) {
Uri uri = data.getData();
Log.d(TAG, uri.toString());
}
}
As for 'how set it as ringtone or alarm?', perhaps this might work?
Set default alarm sound programatically Android
|
{
"pile_set_name": "StackExchange"
}
|
Q:
/usr/bin/env ruby_noexec_wrapper fails with no file or directory
When I try to start chef-solr as service it's failing with the following error
# service chef-solr start
Starting chef-solr: /usr/bin/env: ruby_noexec_wrapper: No such file or directory
[FAILED]
But when I run it manually from command line it's running successfully
# chef-solr -d -c /etc/chef/solr.rb -L /var/log/chef/solr.log -P /var/run/chef/solr.pid
# echo $?
0
# ps -ef | grep chef
root 2691 1 12 04:19 ? 00:00:01 java -Xmx256M -Xms256M -Dsolr.data.dir=/var/lib/chef/solr/data -Dsolr.solr.home=/var/lib/chef/solr/home -jar /var/lib/chef/solr/jetty/start.jar
Here is my rvm info
# rvm info
ruby-1.9.3-p194:
system:
uname: "Linux Console 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 x86_64 x86_64 GNU/Linux"
bash: "/bin/bash => GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)"
zsh: " => not installed"
rvm:
version: "rvm 1.15.6 (stable) by Wayne E. Seguin <[email protected]>, Michal Papis <[email protected]> [https://rvm.io/]"
updated: "7 hours 1 minute 51 seconds ago"
ruby:
interpreter: "ruby"
version: "1.9.3p194"
date: "2012-04-20"
platform: "x86_64-linux"
patchlevel: "2012-04-20 revision 35410"
full_version: "ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]"
homes:
gem: "/usr/local/rvm/gems/ruby-1.9.3-p194"
ruby: "/usr/local/rvm/rubies/ruby-1.9.3-p194"
binaries:
ruby: "/usr/local/rvm/rubies/ruby-1.9.3-p194/bin/ruby"
irb: "/usr/local/rvm/rubies/ruby-1.9.3-p194/bin/irb"
gem: "/usr/local/rvm/rubies/ruby-1.9.3-p194/bin/gem"
rake: "/usr/local/rvm/gems/ruby-1.9.3-p194/bin/rake"
environment:
PATH: "/usr/local/rvm/gems/ruby-1.9.3-p194/bin:/usr/local/rvm/gems/ruby-1.9.3-p194@global/bin:/usr/local/rvm/rubies/ruby-1.9.3-p194/bin:/usr/local/rvm/bin:/usr/lib64/qt-3.3/bin:/usr/java/default/bin:/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/usr/sbin:/usr/bin:/root/bin"
GEM_HOME: "/usr/local/rvm/gems/ruby-1.9.3-p194"
GEM_PATH: "/usr/local/rvm/gems/ruby-1.9.3-p194:/usr/local/rvm/gems/ruby-1.9.3-p194@global"
MY_RUBY_HOME: "/usr/local/rvm/rubies/ruby-1.9.3-p194"
IRBRC: "/usr/local/rvm/rubies/ruby-1.9.3-p194/.irbrc"
RUBYOPT: ""
gemset: ""
Here are the corresponding environmental variables
declare -x GEM_HOME="/usr/local/rvm/gems/ruby-1.9.3-p194"
declare -x GEM_PATH="/usr/local/rvm/gems/ruby-1.9.3-p194:/usr/local/rvm/gems/ruby-1.9.3-p194@global"
declare -x IRBRC="/usr/local/rvm/rubies/ruby-1.9.3-p194/.irbrc"
declare -x MY_RUBY_HOME="/usr/local/rvm/rubies/ruby-1.9.3-p194"
declare -x PATH="/usr/local/rvm/gems/ruby-1.9.3-p194/bin:/usr/local/rvm/gems/ruby-1.9.3-p194@global/bin:/usr/local/rvm/rubies/ruby-1.9.3-p194/bin:/usr/local/rvm/bin:/usr/lib64/qt-3.3/bin:/usr/java/default/bin:/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/usr/sbin:/usr/bin:/root/bin"
declare -x RUBY_VERSION="ruby-1.9.3-p194"
How to get this issue resolved?
A:
Make sure all the variables are set correctly, especially PATH and GEM_PATH. You can use this code to set the environment for you:
source /usr/local/rvm/environments/ruby-1.9.3-p194
Add it to the service script before chef-solr is run.
A:
My problem was similar, and so was my answer:
My problem was
Permission denied - /usr/local/rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper
ruby_noexec_wrapper was in ruby-1.9.3-p194@global not in the listed path
My solution was
source /usr/local/rvm/environments/ruby-1.9.3-p194@global
I upvoted mpapis because his answer was critical in finding mine. Feel free to upvote him rather than me. Just adding an additional answer to try and help anybody with a similar problem.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Relative approximation of morphisms
Let $S$ a qcqs scheme, and let $f : X := \varprojlim_j X_j \to S$ be an inverse limit of qcqs schemes $f_j : X_j \to S$ with affine transition maps.
Suppose $f$ is (P). Is $f_j$ also (P) for all $j$ large enough?
(1) P = flat;
(2) P = finite;
(3) P = quasi-finite.
A:
No (in general).
Let $A$ be a non-zero ring and let $S = \mathrm{Spec}(A[T]/(T^2))$. Let $M$ be a free $A$-module of infinite rank, viewed as an $A[T]/(T^2)$-module via the section $A[T]/(T^2) \rightarrow A$ which sends $T$ to $0$.
Take $X_j = \mathrm{Spec}(A[T]/(T^2) \oplus M)$, where $m^2 = 0$ for any $m \in M$, with affine transition maps given by the composition
$$
A[T]/(T^2) \oplus M \rightarrow A[T]/(T^2) \rightarrow A[T]/(T^2) \oplus M.
$$
The limit $X \rightarrow S$ is an isomorphism, hence satisfies $(1),(2),(3)$, while each $X_j$ satisfies none of the properties $(1),(2),(3)$.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I show the language switcher in both the header and the footer?
How can I show the language switcher in both the header and the footer, but with a different template in the footer than in the header? I would like to have another phtml file where I can customize it...
A:
I have found the solution. I have a footer block that has a template file called footer.phtml. In that file I added this code:
<div class="container">
<?php echo $this->getLayout()->createBlock("Magento\Store\Block\Switcher")->setTemplate("Magento_Store::switch/languages_footer.phtml")->toHtml() ?>
</div>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can I use the C++11 brace initialization syntax to avoid declaring trivial constructors for simple aggregates?
Let's say I have the following code:
#include <vector>
struct Foo
{
int tag = 0;
std::function<void ()> code;
};
int main()
{
std::vector<Foo> v;
}
And now I want to add a new Foo item to the vector with the specific tag and code without explicitly creating a temporary. That means I must add a constructor for Foo:
struct Foo
{
inline Foo(int t, std::function<void ()> c): tag(t), code(c) {}
int tag = 0;
std::function<void ()> code;
};
And now I can use emplace_back:
v.emplace_back(0, [](){});
But when I had to do this again - for the 100th time - with a newly created struct, I thought: can't I use the brace initializer? Like so:
#include <vector>
struct Foo
{
int tag = 0;
std::function<void ()> code;
};
int main()
{
std::vector<Foo> v;
v.push_back(Foo{ 0, [](){} });
}
That gives me a compilation error (cannot convert from 'initializer-list' to 'Foo'), but I hope this can be done and I've just got the syntax wrong.
A:
In C++11, you can't use an aggregate initializer with your struct because you used an equal initializer for the non-static member tag. Remove the = 0 part and it will work:
#include <vector>
#include <functional>
struct Foo
{
int tag;
std::function<void ()> code;
};
int main()
{
std::vector<Foo> v;
v.push_back(Foo{ 0, [](){} });
}
A:
According to the C++11 standard, Foo is not an aggregate, the presence of the brace-or-equal-initializer prevents it from being one.
However, this rule was changed for C++14, so if you compile your code with -std=c++14 (or whatever your compiler's equivalent setting is), Foo will be an aggregate, and your code will compile successfully.
Live demo
For a C++11 compiler, you must either remove the initializer, which will make Foo an aggregate, or provide a two argument constructor.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I strip only the string part from a list in Python?
I have a list that looks like:
{
'J2EE': 0.0202219636,
'financial': 0.2439565346,
'Guru': 0.0202219636,
'AWS': 0.0202219636,
'next generation': 0.12072663160000001,
'Machine Learning': 0.2025762767,
'technology': 0.066936981
}
How do I extract only the text parts and make my list look like:
['J2EE', 'financial', 'Guru', 'AWS', ...]
Should I use Regular expressions?
A:
What you have there is a dictionary, not a list, and what you want are the keys:
your_dict = {'J2EE': 0.0202219636, 'financial': 0.2439565346, 'Guru': 0.0202219636, 'AWS': 0.0202219636, 'next generation': 0.12072663160000001, 'Machine Learning': 0.2025762767, 'technology': 0.066936981}
your_dict_keys = your_dict.keys()
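Note that on Python 3, keys() returns a view object rather than a list; if you specifically want a plain list like the desired output, wrap it in list():
your_dict_keys = list(your_dict.keys())
# e.g. ['J2EE', 'financial', 'Guru', 'AWS', 'next generation', 'Machine Learning', 'technology']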
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Implementing a Maybe for multidimensional data
The scenario is we're working with a REST endpoint that gives us nice JSON objects. We're using requests, and everything works wonderfully. But one day you notice that data you thought was always being pulled back isn't there for some models! You're getting index, key and attribute errors left, right and centre. What used to be obj.phones[0] is now:
(DefaultAttr(obj, None).phones or [None])[0]
Or worse, if obj.phones[0] could actually be None!
And so I decided to build a simple Maybe-like group of classes. The interface in Haskell seemed really clean and simple. However, there's no case statement in Python, and not one that's based on type, so I decided to use Maybe.get(default: T) -> T and Just._value to get the value instead. Since Just._value isn't implemented on either Nothing or Maybe, I decided to make the equality check whether the LHS is an instance of the RHS, and so Just(1) == Just would be True.
This however isn't good enough for the JSON object, and so I subclassed Maybe to create a MaybeNav, that creates a MaybeNav when you get items or attributes. And so if you don't care if obj.phones[0] is there or None, you can use MaybeNav(obj).phones[0].get(). Which is much cleaner, and simpler to read.
I've added type hints to the code, so that it's a little simpler to reason with. And I've added some docstrings, I'd appreciate any way to improve these, as I don't normally use them. And so my skills with them are likely to be very poor.
Due to using f-strings, the code only works in Python 3.6. Finally any and all help is welcome.
maybe.py
from typing import TypeVar, Generic
T = TypeVar('T')
class Maybe(Generic[T]):
"""Simple Maybe abstract base class"""
def __init__(self, *args, **kwargs):
"""
Error, as this should be overridden
The code for this interface is implemented in Just and Nothing.
This is to prevent implementation errors, such as overwriting
__getattr__ in a child class causing an infinite loop.
"""
raise TypeError("'Maybe' needs to be instantiated by a child constructor")
def get(self, default: T = None) -> T:
"""Get the value in Just, or return default if Nothing"""
raise NotImplementedError("'get' should be changed in a subclass")
class Constant(type):
def __call__(self, *args, **kwargs):
"""Simple metaclass to create a constant class"""
# Constants can't be called.
raise TypeError(f"'{self.__name__}' object is not callable")
def __repr__(self) -> str:
# Display the constant, rather than the class location in memory.
return f'{self.__name__}'
class JustBase:
def __init__(self, value: T):
"""Younger sibling class of Maybe for Just classes"""
self.__value = value
def __repr__(self) -> str:
return f'{type(self).__name__}({self._value!r})'
def __eq__(self, other: object) -> bool:
"""
Check if this is an instance of other
This makes checking if the class is a Just simpler.
As it's a common operation.
"""
return isinstance(other, type) and isinstance(self, other)
def get(self, default: T = None) -> T:
"""Get the value in Just, or return default if Nothing"""
return self._value
@property
def _value(self):
return self.__value
def build_maybes(Maybe, just_name, nothing_name):
"""Build a Just and Nothing inheriting from Maybe"""
Just = type(just_name, (JustBase, Maybe), {})
class MaybeConstant(Constant, Maybe):
def get(self, default: T = None) -> T:
"""Get the value in Just, or return default if Nothing"""
return default
Nothing = MaybeConstant(nothing_name, (object,), {})
return Just, Nothing
class MaybeNav(Maybe[T]):
"""Maybe for navigating objects"""
# __getitem__ and __getattr__ actually return MaybeNav[T].
def __getitem__(self, item: object) -> Maybe[T]:
if self == NothingNav:
return NothingNav
val = self._value
try:
val = val[item]
except Exception:
return NothingNav
else:
return JustNav(val)
def __getattr__(self, item: str) -> Maybe[T]:
obj = object()
val = getattr(self.get(obj), item, obj)
if val is obj:
return NothingNav
return JustNav(val)
Just, Nothing = build_maybes(Maybe, 'Just', 'Nothing')
JustNav, NothingNav = build_maybes(MaybeNav, 'JustNav', 'NothingNav')
# Don't delete T, so subclasses can use the same generic type
del TypeVar, Generic, Constant, JustBase
An example of using this can be:
import math
from collections import namedtuple
from typing import Iterable
from maybe import Maybe, Just, Nothing, MaybeNav, JustNav
def safe_log(number: float) -> Maybe[float]:
if number > 0:
return Just(math.log(number))
else:
return Nothing
def user_values(obj: object) -> Iterable[MaybeNav[object]]:
obj = JustNav(obj)
return [
obj.name,
obj.first_name,
obj.last_name,
obj.phones[0]
]
v = safe_log(1000)
print(v == Just, v)
v = safe_log(-1000)
print(v == Just, v)
User = namedtuple('User', 'name first_name last_name phones')
vals = user_values(User('Peilonrayz', 'Peilonrayz', None, []))
print(vals)
print([val.get(None) for val in vals])
I find maybe.py a little hard to read, mostly because I made the inheritance happen in build_maybes. And so the following is a class diagram of the user-defined objects in maybe.py. One thing to note is that the dotted arrows from Nothing and NothingNav to MaybeConstant are there to show they use a metaclass, rather than bog-standard inheritance. The box with some classes in it is to emphasise that they are created in build_maybes.
A:
This may very well be not what you want to hear, but this seems like the wrong kind of solution for a duck-typed language like Python, especially for something like a REST endpoint. You are going to be fighting the language and implementing non-idiomatic solutions to work around the fact that the language is based around a looser object model than you seem to want to impose.
My first reaction to (DefaultAttr(obj, None).phones or [None])[0] would not be fit for this forum, but the next thought would be why obj.phones[0] if len(obj.phones) else None wasn't good enough. It sounds like your data handling might be too generic - you are passing "arbitrary" objects to code which expects an object with a phones attribute. Instead you could pull out the phones-specific code and put that where you are sure of receiving compatible objects. Another option would be to build a generic object handler which finds all non-default attributes and either handles them all or passes them on to attribute-specific handlers.
Generics are much more of a known and expected factor in for example Haskell (as you know) and Java. So implementing this in a different language might better fit your programming style.
Passing type names as strings is a code smell in every language - you instantly lose the ability for automatic code inspection to understand what is happening.
Overriding __repr__ to print the class name is really weird. From the documentation:
If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form <...some useful description...> should be returned. […] This is typically used for debugging, so it is important that the representation is information-rich and unambiguous.
A couple of examples should illustrate this point:
>>> 'foo'.__repr__()
"'foo'"
>>> Exception().__repr__()
'Exception()'
With regard to docstrings, you should use them to explain the why, not the what. For example the one for Maybe.__init__ explains something which should be known to developers familiar with the Python object model. One exception in my opinion is when you cannot express a "what" in the language itself. For example, mentioning that something is an abstract base class is good because it's only implicit by the fact that the constructor raises TypeError (that is, the constructor could conceivably throw TypeError for some other reason).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is the name for the programming paradigm characterized by Go?
I'm intrigued by the way Go abandons class hierarchies and seems to completely abandon the notion of class in the typical object oriented sense. Also, I'm amazed at the way interfaces can be defined without the type which implements that interface needing to know.
Are there any terms which are/can be used to characterize this type of programming methodology and language paradigm (or perhaps specific aspects of it)? Is the Go language paradigm sufficiently new and distinct from the classical OOP paradigm and sufficiently important in the history of computer programming to warrant a unique name?
A:
Message passing between lightweight execution contexts, coupled with ability to create and destroy these contexts dynamically, is basically the actor model.
Programming languages tend to approach the expression problem in one of two ways: OO-languages tend to focus on making it easier to implement the same operations using different data types (e.g. "object I can click on with a mouse" might be a scrollbar, a window, a menu, a text-box, etc. - same operation, different data representations), while functional languages tend to focus on easily implementing new operations against the same underlying data types. By abandoning class hierarchies, Go seems to end up more on the "functional" side of this divide.
As Adam Crossland indicated in his comment, "type-ignorantly-implementing-interface" can be considered a form of duck-typing, which is highly prevalent in dynamic languages. (It's more technically correct, though, to consider this as a structural type system within Go. C++ templates are probably the most popular implementation of a structural type system today.)
Go has plenty of antecedents - I don't think any one of its ideas are original to the language. But I think that's generally the wrong measure for a language intended to be practical. Go looks like it mixes useful ideas from several different domains together in an elegant way, that (I think) would result in more productive programming than C# or Java might yield. I hope it gains traction.
A:
Concurrency primitives.
Communicating Sequential Processes, or CSP, is a formalism by Charles Antony Richard Hoare, described in a 1978 paper and in a book first published in 1985 by Prentice Hall International.
Erlang is a functional language and Go is structured; one might say that Erlang is a predecessor to Go.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to pass an ID to read a Json from assets folder
id = localStorage.getItem('currentUser')
this.http.get('./assets/id/profiles/admin.json')
.subscribe(result => {
this.profile = result.json();
});
I get the id value from local storage and, using that id, I am trying to read the JSON.
How do I pass that id so that the request reads that particular JSON file?
A:
this.http.get('./assets/' + id + '/profiles/admin.json')
.subscribe(result => {
this.profile = result.json();
});
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What do camera ready figures mean nowadays?
A journal I am submitting to requires camera-ready figures. What does this term mean nowadays? Does it imply high resolution? My figures are in png format and have 96 dpi. Enlarging them can produce some jagged edges. Should I convert them to eps or pdf format?
A:
There isn't any real, single standard as to what "camera-ready figures" means. Different journals/publishers may use different publishing processes and workflows, which can result in different requirements for file formats, resolution etc. (Or even color spaces.) However, 96 dpi is indeed quite low. 300 dpi is a more reasonable minimum guideline for raster images - but obviously vector graphics should be preferred whenever possible.
In lieu of a standard, we can look at typical recommendations. A review by J.H. Lee in Science Editing titled Handling digital images for publication states that
The most commonly recommended resolution for printing on paper depends on the nature of the images: 1) 300 dpi for color pictures, 2) 300 to 600 dpi for black and white pictures, 3) 600 to 900 dpi for combination art (photo and text), and 4) 900 to 1,200 dpi for line art.
Note that this is the printed dpi, which isn't necessarily the same dpi as seen in the submitted manuscript. The same author also suggests a universal guide to avoid issues with publishers rescaling figures etc.:
It is the opinion of this author that a universal recommendation could help authors prepare their images. The standard figure size of most academic journals is about 86 mm (single column). The standard pixels per inch for line art is 900 to 1,200 ppi. Therefore, an image file of 900 ppi and 4 inches is of sufficient quality for most publications; this means 3,600 pixels in a horizontal line. It is recommended that authors use this number as a universal guide.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
groovy matcher failing or returning single characters and not whole words
I am attempting to create a matcher to match a regex and return a particular index, but despite trying numerous variations of the code, it either throws an exception, or only prints single characters and not the whole word. All of the examples I find are similar to what I am doing, but my results don't look like the examples. Here's the code:
def RAW = """
policer-profile "GD-1"
bandwidth cir 4992 cbs 32767 eir 4992 ebs 32767
traffic-type all
compensation 0
exit
policer-profile "EIR-1"
bandwidth cir 0 cbs 0 eir 9984 ebs 32767
traffic-type all
compensation 0
exit
shaper-profile "Shaper1"
bandwidth cir 999936 cbs 65535
compensation 0
exit
"""
RAW.split("\n").each() { line ->
def matcher = line =~ /bandwidth cir \d+ cbs \d+/
if (matcher) {
println line[0][2]
}
}
I keep getting either "index out of range" or it simply prints the "n" (third character) in the word "bandwidth" for each line, instead of the numerical value after "cir" (the third word). Any help would be greatly appreciated. Thanks in advance.
A:
I've slightly modified the script:
def RAW = """
policer-profile "GD-1"
bandwidth cir 4992 cbs 32767 eir 4992 ebs 32767
traffic-type all
compensation 0
exit
policer-profile "EIR-1"
bandwidth cir 0 cbs 0 eir 9984 ebs 32767
traffic-type all
compensation 0
exit
shaper-profile "Shaper1"
bandwidth cir 999936 cbs 65535
compensation 0
exit
"""
RAW.split("\n").each() { line ->
def matcher = line =~ /\s+bandwidth cir (\d+) cbs (\d+).*/
if(matcher.matches()) {
println "cir: ${matcher[0][1]}, cbs: ${matcher[0][2]}"
}
}
You had a mistaken regex (it did not allow for the whitespace at the beginning, nor match to the end of the line), and remember to take the output groups from matcher, not from line. Now it should work.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Any recommendations for managing an investigation project?
I'm working on an investigation in the NLP field. Since software engineering and software documentation are not the primary concerns of my investigation, I decided to do it using XP.
My question is, can you recomend:
Any tool you've used to manage XP projects
Any recommendation/link on how to manage investigation projects
I'm very biased toward XP because I've used it before, but if anyone has a better methodology you can post it as well.
A:
XPlanner is my favorite XP/Scrum planning and management tool. It isn't heavy on reporting, but it gives me what I need. I've also used Rally, which is more suited to large organizations and is a bit heavy for the daily team tasks. If you need detailed reports for upper management, it does a good job.
As for how to manage investigation, I have my teams set definable goals. Tasks have definable success/failure/completion states and aren't vague. So we end up with tons of short tasks like "Test REST interface for performance capabilities" and "Examine proxy/firewall restrictions for client" rather than "Explore REST". New tasks go up all the time and forces us to have very short sprints. In the range of 3-5 days. You just can't plan exploration out much further than that in the beginning because you don't know what you don't know.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to extract data from a text file and write into CSV file in Java
I have got a text file containing reference, name, address, amount, dateTo, dateFrom and mandatory columns, in the following format:
"120030125 J Blog 23, SOME HOUSE, 259.44 21-OCT-2013 17-NOV-2013"
" SQUARE, STREET, LEICESTER,"
LE1 2BB
"120030318 R Mxx 37, WOOD CLOSE, BIRMINGHAM, 121.96 16-OCT-2013 17-NOV-2013 Y"
" STREET, NN18 8DF"
"120012174 JE xx 25, SOME HOUSE, QUEENS 259.44 21-OCT-2013 17-NOV-2013"
" SQUARE, STREET, LEICESTER,"
LE1 2BB
"100154992 DL x 23, SOME HOUSE, QUEENS 270.44 21-OCT-2013 17-NOV-2013 Y"
" SQUARE, STREET, LEICESTER,"
LE1 2BC
I am only interested in the first line of each record and want to extract the data in the reference, name, amount, dateTo and dateFrom columns and write them into a CSV file. Currently I've only been able to write the following code, which extracts the first lines and gets rid of the starting and ending double quotes. The input file contains whitespace and so does the output file.
public class ReadTxt {
public static void main(String[] args) throws IOException {
BufferedReader br = new BufferedReader(new FileReader("C:/Users/me/Desktop/input.txt"));
String pattern = "\"\\d\\d\\d\\d";
// Create a Pattern object
Pattern r = Pattern.compile(pattern);
int i;
ArrayList<String> list = new ArrayList<String>();
boolean a = true;
PrintWriter out = new PrintWriter(new PrintWriter("C:/Users/me/Desktop/Output.txt"), a);
try {
String line = br.readLine();
while (line != null) {
Matcher m = r.matcher(line);
if (m.find()) {
String temp;
temp = line.substring(1, line.length() - 1);
list.add(temp);
}
else {
// do nothing
}
line = br.readLine();
}
}
finally {
br.close();
}
for (i = 0; i < list.size(); i++) {
out.println(list.get(i));
}
out.flush();
out.close();
}
}
The above code will create a text file with the following output:
120030125 J Blog 23, SOME HOUSE, QUEENS 259.44 21-OCT-2013 17-NOV-2013
120030318 R Mxx 37, WOOD CLOSE, BIRMINGHAM, 121.96 16-OCT-2013 17-NOV-2013 Y
120012174 JE xx 25, SOME HOUSE, QUEENS 259.44 21-OCT-2013 17-NOV-2013
100154992 DL x 23, SOME HOUSE, QUEENS 259.44 21-OCT-2013 17-NOV-2013 Y
My expected output is as follows, but written to a CSV file:
120030125 J Blog 259.44 21-OCT-2013 17-NOV-2013
120030318 R Mxx 121.96 16-OCT-2013 17-NOV-2013
120012174 JE xx 259.44 21-OCT-2013 17-NOV-2013
100154992 DL x 259.44 21-OCT-2013 17-NOV-2013
Any suggestions, links to tutorials or help would be greatly appreciated, as I am not an expert in Java. I did try looking up tutorials on the internet, but could not find any that were useful in my case.
A:
Here, test this out. I just used an array, but you can adapt the necessary code into yours. I changed some addresses (look at the 2nd and 3rd addresses in the array) to have spaces and no spaces in different locations for testing.
public class SplitData {
public static void main(String[] args) {
String[] array = {"120030125 J Blog 23, SOME HOUSE, QUEENS 259.44 21-OCT-2013 17-NOV-2013",
"120030318 R Mxx 37,WOODCLOSE,BIRMINGHAM, 121.96 16-OCT-2013 17-NOV-2013 Y 0",
"120012174 JE xx 25, SOME HOUSE,QUEENS 259.44 21-OCT-2013 17-NOV-2013",
"100154992 DL x 23, SOME HOUSE, QUEENS 259.44 21-OCT-2013 17-NOV-2013 Y"
};
String s1 = null;
String s2 = null;
String s3 = null;
String s4 = null;
String s5 = null;
for (String s : array) {
String[] split = s.split("\\s+");
s1 = split[0];
s2 = split[1] + " " + split[2];
for (String string: split) {
if (string.matches("\\d+\\.\\d{2}")) {
s3 = string;
break;
}
}
String[] newArray = s.substring(s.indexOf(s3)).split("\\s+");
s4 = newArray[1];
s5 = newArray[2];
System.out.printf("%s\t%s\t%s\t%s\t%s\n", s1, s2, s3, s4, s5);
}
}
}
Output
120030125 J Blog 259.44 21-OCT-2013 17-NOV-2013
120030318 R Mxx 121.96 16-OCT-2013 17-NOV-2013
120012174 JE xx 259.44 21-OCT-2013 17-NOV-2013
100154992 DL x 259.44 21-OCT-2013 17-NOV-2013
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to DRY static repetitive boilerplate code in derived classes?
I have this inheritance model:
public class Animal
{
}
public class Dog : Animal
{
public static List<Action<Dog>> Augmenters = new List<Action<Dog>>();
}
public class Cat : Animal
{
public static List<Action<Cat>> Augmenters = new List<Action<Cat>>();
}
// in another place of the code, augmenters are added and configured
public static void Main (string[] args)
{
Dog.Augmenters.Add(dog =>
{
// doing something with dog
});
Cat.Augmenters.Add(cat =>
{
// doing something with cat
});
}
Augmenters have a lot of static code in each of the Dog/Cat/etc. classes, including null checking, instantiation, concurrency control, performance tuning, etc., that is exactly the same across all derived classes.
Dog augmenters should be static, because they apply to all dogs, not just one dog. The same goes for cat augmenters, etc.
Yet they can't be moved up to the Animal class, because the augmenters of each derived class differ from those of the other classes. If I move Augmenters to the Animal class, then each augmenter that should only belong to cats will be applied to dogs too.
How do you DRY this type of boilerplate code?
I saw something similar for C++ here, it's called CRTP.
A:
Let me try to DRY it up:
class Program
{
public abstract class Animal<T> where T : Animal<T>
{
public static List<Action<T>> Augmenters = new List<Action<T>>();
}
public class Dog : Animal<Dog>
{
}
public class Cat : Animal<Cat>
{
}
// in another place of the code, augmenters are added and configured
public static void Main(string[] args)
{
Dog.Augmenters.Add(dog =>
{
Console.WriteLine("bark");
});
Cat.Augmenters.Add(cat =>
{
Console.WriteLine("meow");
});
Dog.Augmenters[0].Invoke(new Dog());
Cat.Augmenters[0].Invoke(new Cat());
Console.ReadLine();
}
}
I added an abstract generic base class with a self-referencing type constraint (the C# counterpart of CRTP); at least you don't have to repeat the implementation of Augmenters inside the concrete classes.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Mini tasks flow bar
I am working on a project where I need to define a flow of tasks comprising four mini tasks. On top of that, I need to show the end user which mini task they are currently on. For this I am using image tags for every task.
<img>1</img>
<img>2</img>
<img>3</img>
<img>4</img>
Is there any other way to do this, for example with CSS or jQuery rather than images? Any idea is appreciated.
Thanks in advance.
A:
Something like this (let's assume the first 2 tasks are completed):
HTML:
<div class='project'>
<div class='task completed taskName1 '></div>
<div class='task completed taskName2 '></div>
<div class='task current taskName3 '></div>
<div class='task taskName4 '></div>
</div>
CSS:
.project {
/*project styles here...*/
}
.task {
/*task styles here...
...default styles*/
}
.task.completed {
/*completed task styles here...
override .task styles to show completed
*/
}
.task.current {
/*completed task styles here...
override .task styles to show current
*/
}
.taskNameX {
/*use task specific styles here*/
}
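For example, the placeholder rules above could be filled in with something as simple as coloured dots; the values below are purely illustrative:
.project { display: inline-block; }
.task {
    display: inline-block;
    width: 24px;
    height: 24px;
    margin-right: 8px;
    border-radius: 50%;
    background: #ccc;                         /* default: not started */
}
.task.completed { background: #4caf50; }      /* done */
.task.current   { background: #2196f3; }      /* in progress */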
|
{
"pile_set_name": "StackExchange"
}
|
Q:
git rebase -i with gvim doesn't wait for me to save?
When using git rebase -i with core.editor=gvim, the message "Successfully rebased and updated refs/heads/master." appears immediately, before I actually save the file that allows me to pick/reword/squash... commits. And gvim pops up with a message that the file is no longer available.
Is this a config issue on my end, or does setting core.editor to graphical vim just not work?
A:
When gVim runs as a GUI (as opposed to inside the terminal), it typically launches itself in the background, so you can continue to work in the terminal after it has started. This behaviour is normal when Vim is launched through its gvim shortcut. To disable it, use the -f option, which forces Vim to stay in the foreground and not relinquish control back to the calling process until it is finished.
core.editor="gvim -f"
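If you set this through git config rather than editing the config file by hand, the equivalent command would be:
git config --global core.editor "gvim -f"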
|
{
"pile_set_name": "StackExchange"
}
|
Q:
connecting to database
Can anyone help me connect my Java form to my MySQL database?
I have the following code, but it doesn't work...
private void saveActionPerformed(java.awt.event.ActionEvent evt) {
String value1 = textField1.getText();
String value2 = textField2.getText();
String value3 = textField3.getText();
String value4 = textField4.getText();
Connection con = null;
String url = "jdbc:mysql://localhost:3306/Marketing";
String driver = "com.mysql.jdbc.Driver";
String db = "Marketing";
String user = "root";
String pass = "";
System.out.println(value1 + value2 + value3 + value4);
try {
Class.forName(driver);
con = DriverManager.getConnection(url + db, user, pass);
PreparedStatement st = con.prepareStatement("insert into clients (idclients, name, address, contact_person, contact_num) values(?,?,?,?,?)");
st.setString(2, value2);
st.setString(3, value3);
st.setString(4, value4);
st.executeUpdate();
JOptionPane.showMessageDialog(jPanel1, "Data is successfully inserted into database.");
con.close();
} catch (Exception e) {
JOptionPane.showMessageDialog(jPanel1, "Error in submitting data!");
}
}
A:
There is a problem with the connection code:
url = jdbc:mysql://localhost:3306/Marketing
db = Marketing
url + db = jdbc:mysql://localhost:3306/MarketingMarketing
The database name ends up in the URL twice, so you have to remove the extra "Marketing".
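A minimal corrected sketch of the connection part (I'm assuming idclients is auto-generated, so I dropped it from the column list, and I'm guessing how value1..value4 map onto the remaining columns; if idclients isn't auto-generated, keep it and set all five parameters):
String url = "jdbc:mysql://localhost:3306/Marketing";
Class.forName("com.mysql.jdbc.Driver");
// don't append the database name again -- it is already part of the URL
Connection con = DriverManager.getConnection(url, "root", "");
PreparedStatement st = con.prepareStatement(
        "insert into clients (name, address, contact_person, contact_num) values (?,?,?,?)");
st.setString(1, value1);
st.setString(2, value2);
st.setString(3, value3);
st.setString(4, value4);
st.executeUpdate();
con.close();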
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Prove that the complement of an open ball in $\mathbb{R^n}$ has exactly one unbounded component
Question: Let $B^n \subset \mathbb{R^n}$ be open ball in the Euclidean metric. Prove that the complement of $B^n$ in $\mathbb{R^n}$ has exactly one unbounded component (components of a set are class partitions defining largest connected sets...).
This is an exercise from the book 'C. Adam - Topology'. Obviously, it does not hold for $n=1$, since $\mathbb{R}-(-a,a)$ is disconnected and consists of TWO unbounded components. But for the case of $n=2,3$ it is easy to prove that the claim holds (each 'side' of the open ball is homeomorphic to $\mathbb{R^n}$ and has non-empty intersection with its neighbouring side...)
How do I establish the claim for the case of $n\ge 4$, where intuition no longer helps?
Thank you.
EDIT - This is not a duplicate, since my question is about the complement of an open ball, not of a bounded set in general. I read here before I wrote my question; the answer there doesn't prove that $\mathbb{R^n}-B^n$ is connected, which is what I need to prove.
A:
Hint: Given two points in the complement of the ball, you can explicitly write down a path connecting them. This shows the complement is path connected, which shows it is connected. (And it's clearly unbounded.)
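To make the hint concrete (a sketch of one such construction): write $B^n = B(c,r)$ and take $p, q$ with $|p-c| \ge r$ and $|q-c| \ge r$, and set $R = \max(|p-c|, |q-c|)$. The radial path $\gamma(t) = c + \big((1-t)\,|p-c| + tR\big)\,\frac{p-c}{|p-c|}$, $t \in [0,1]$, moves $p$ out to the sphere $S_R = \{x : |x-c| = R\}$ while only increasing the distance to $c$, so it never enters the ball; do the same for $q$. Since $S_R$ is path connected for $n \ge 2$, the two endpoints can be joined by a path inside $S_R$, and concatenating the three pieces gives a path from $p$ to $q$ in $\mathbb{R^n}-B^n$.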
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Notice: Constant XYZ already defined in wp-config.php on a line that doesn't exist
I've been tasked with moving a website to a new domain, and I've encountered this weird issue.
On the homepage, I always see these:
Notice: Constant AUTOSAVE_INTERVAL already defined in /home/gturnat/public_html/wp-config.php on line 99
Notice: Constant WP_POST_REVISIONS already defined in /home/gturnat/public_html/wp-config.php on line 100
What I have tried:
Notice: Constant WP_POST_REVISIONS already defined suggests commenting out the constants in default-constants.php, but that doesn't work.
Setting display_errors to 0, '0' or 'Off' does nothing.
Running error_reporting(0) will still display the errors.
Creating a mu-plugin (As suggested on How can I stop PHP notices from appearing in wordpress?).
Nothing happens and the plugin isn't even loaded.
The errors still keep showing up.
Tried to comment out the lines in wp-config.php, but didn't work. The notices are still there.
Removed the lines entirely and moved them around within wp-config.php, but the warnings still insist they are on lines 99 and 100.
Causing a syntax error inside wp-config.php does lead to an error being logged, which means that the file isn't cached.
Tried to enable and disable debug mode and set display_errors to false, 0, '0' and 'Off', but it doesn't work.
Ran grep -lR WP_POST_REVISIONS * and grep -lR AUTOSAVE_INTERVAL * with the following results:
root@webtest:# grep -lR WP_POST_REVISIONS *
wp-config.php
wp-includes/default-constants.php
wp-includes/revision.php
root@webtest:# grep -lR AUTOSAVE_INTERVAL *
wp-config.php
wp-includes/script-loader.php
wp-includes/default-constants.php
wp-includes/class-wp-customize-manager.php
I really am out of any other idea to try.
I'm using WordPress 4.7.2, running on PHP 5.4.
There is no opcache running on the server.
PHP was configured with the following options:
'./configure' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/opt/alt/php54' '--exec-prefix=/opt/alt/php54' '--bindir=/opt/alt/php54/usr/bin' '--sbindir=/opt/alt/php54/usr/sbin' '--sysconfdir=/opt/alt/php54/etc' '--datadir=/opt/alt/php54/usr/share' '--includedir=/opt/alt/php54/usr/include' '--libdir=/opt/alt/php54/usr/lib64' '--libexecdir=/opt/alt/php54/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/opt/alt/php54/usr/share/man' '--infodir=/opt/alt/php54/usr/share/info' '--cache-file=../config.cache' '--with-libdir=lib64' '--with-config-file-path=/opt/alt/php54/etc' '--with-config-file-scan-dir=/opt/alt/php54/link/conf' '--with-exec-dir=/usr/bin' '--with-layout=GNU' '--disable-debug' '--disable-rpath' '--without-pear' '--without-gdbm' '--with-pic' '--with-zlib' '--with-bz2' '--with-gettext' '--with-gmp' '--with-iconv' '--with-openssl' '--with-kerberos' '--with-mhash' '--with-readline' '--with-pcre-regex=/opt/alt/pcre/usr' '--with-libxml-dir=/opt/alt/libxml2/usr' '--with-curl=/opt/alt/curlssl/usr' '--enable-exif' '--enable-ftp' '--enable-magic-quotes' '--enable-shmop' '--enable-calendar' '--enable-xml' '--enable-force-cgi-redirect' '--enable-fastcgi' '--enable-pcntl' '--enable-bcmath=shared' '--enable-dba=shared' '--with-db4=/usr' '--enable-dbx=shared,/usr' '--enable-dom=shared' '--enable-fileinfo=shared' '--enable-intl=shared' '--enable-json=shared' '--enable-mbstring=shared' '--enable-mbregex' '--enable-pdo=shared' '--enable-phar=shared' '--enable-posix=shared' '--enable-soap=shared' '--enable-sockets=shared' '--enable-sqlite3=shared,/opt/alt/sqlite/usr' '--enable-sysvsem=shared' '--enable-sysvshm=shared' '--enable-sysvmsg=shared' '--enable-wddx=shared' '--enable-xmlreader=shared' '--enable-xmlwriter=shared' '--enable-zip=shared' '--with-gd=shared' '--enable-gd-native-ttf' '--with-jpeg-dir=/usr' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--with-t1lib=/opt/alt/t1lib/usr' '--with-imap=shared' '--with-imap-ssl' '--with-xmlrpc=shared' '--with-ldap=shared' '--with-ldap-sasl' '--with-pgsql=shared' '--with-snmp=shared,/usr' '--enable-ucd-snmp-hack' '--with-xsl=shared,/usr' '--with-pdo-odbc=shared,unixODBC,/usr' '--with-pdo-pgsql=shared,/usr' '--with-pdo-sqlite=shared,/opt/alt/sqlite/usr' '--with-mssql=shared,/opt/alt/freetds/usr' '--with-interbase=shared,/usr' '--with-pdo-firebird=shared,/usr' '--with-pdo-dblib=shared,/opt/alt/freetds/usr' '--with-mcrypt=shared,/usr' '--with-tidy=shared,/usr' '--with-recode=shared,/usr' '--with-enchant=shared,/usr' '--with-pspell=shared' '--with-unixODBC=shared,/usr' '--with-icu-dir=/opt/alt/libicu/usr' '--with-sybase-ct=shared,/opt/alt/freetds/usr'
As a further test, I tried running it on PHP 5.6, with the same results.
A:
tl;dr: Clear your (Comet) Cache!
Long answer:
I only have 2 words: Comet Cache!
Comet Cache was enabled.
Checking the source code showed me a note like this, after the closing </html>:
<!-- *´¨)
¸.•´¸.•*´¨) ¸.•*¨)
(¸.•´ (¸.•` ¤ Comet Cache Notes ¤ ´¨) -->
<!-- Cache File Version Salt: n/a -->
<!-- Cache File URL: http://<my-domain> -->
<!-- Cache File Path: /cache/comet-cache/cache/http/<my-domain>/index.html -->
<!-- Cache File Generated Via: HTTP request -->
<!-- Cache File Generated On: Feb 22nd, 2017 @ 5:37 pm UTC -->
<!-- Cache File Generated In: 4.59149 seconds -->
<!-- Cache File Expires On: Mar 1st, 2017 @ 5:37 pm UTC -->
<!-- Cache File Auto-Rebuild On: Mar 1st, 2017 @ 5:37 pm UTC -->
<!-- *´¨)
¸.•´¸.•*´¨) ¸.•*¨)
(¸.•´ (¸.•` ¤ Comet Cache is Fully Functional ¤ ´¨) -->
<!-- Loaded via Cache On: Feb 22nd, 2017 @ 5:37 pm UTC -->
<!-- Loaded via Cache In: 0.03472 seconds -->
Manually deleting /cache/comet-cache/cache/http/<my-domain>/index.html (path relative to your /wp-content/ directory) solved the issue.
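From the shell, that would be something along these lines (the wp-content location is an assumption based on the default layout, so adjust it to your install; removing the whole comet-cache/cache directory simply clears every cached page):
rm -rf /home/gturnat/public_html/wp-content/cache/comet-cache/cache/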
I feel so stupid for assuming that there was no caching going on. Always blame the cache!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Does the codiagonal functor have adjoints like the diagonal functor? If not, does it have something else instead?
I know that for a given category $\mathcal C$, the diagonal functor $\Delta : \mathcal{C \to C \times C}$ has the product and coproduct functors on $\mathcal C$ (if they exist) as its right and left adjoints respectively. But what about the codiagonal functor $\nabla : \mathcal{C + C \to C}$?
My thought is that $\nabla$ can't have adjoints because any functor $\mathcal{F : C \to C + C}$ has to send each object $X \in obj(\mathcal C)$ to one or the other 'side' of $\mathcal{C + C}$ (call them the 'left' and 'right' sides). Say $X$ is sent to the left side: then taking the right-hand copy of $X$ from $\mathcal{C + C}$ and applying $\mathcal{F \circ \nabla}$ will leave you on the left side of $\mathcal{C + C}$. Because there are no morphisms between the two sides of $\mathcal{C + C}$, there can be no natural transformations between $\mathcal{F \circ \nabla}$ and $\mathrm{id}_{\mathcal{C + C}}$ in either direction, therefore $\mathcal F$ cannot be a left or right adjoint to $\nabla$.
However, this result surprises me a little because I would have assumed there to be some duality between the diagonal and codiagonal functors. Is my reasoning above wrong? If I'm not wrong, and there are no adjoints, then do there exist adjoint-like structures for $\nabla$ that are in some way dual to the adjoints of $\Delta$? Or am I just looking for symmetry where there is none?
A:
Your argument is correct, and you're basically looking for symmetry where there is none. 2-categorical duality, including duality questions involving adjoints, is a bit more complicated than the ordinary case.
The duality that sends the product to the coproduct is the opposite (2-)category: $C+C$ is the product in Cat*, as in any category. Now, an adjunction $f:x\leftrightarrow y:g$ in a 2-category gives rise to an adjunction $g^*:x^*\leftrightarrow y^*:f^*$ in the opposite 2-category. The 2-morphisms have not been turned around, so we still have the 2-morphism corresponding to the unit of the original adjunction, $\eta^*: 1_{x^*}\to (gf)^*=f^*g^*$. Thus our adjunction has given rise to an adjunction in the opposite 2-category between the same objects, with left and right adjoints interchanged.
There's another dual of a 2-category, which is the one realized in Cat by sending $C$ to $C^*$ (note the difference from the duality above!) This one turns around the 2-arrows, so it also preserves adjunctions but switches the left and right adjoints.
So, what happens in your case? The previous paragraph says that the adjoints to $\Delta$ are still adjoints to $\Delta:C^*\to C^*\times C^*$, but with left and right switched: this is the familiar phenomenon that products and coproducts are interchanged between $C$ and $C^*$. The paragraph before that says that we get some adjunctions in Cat*, which is useless.
What we would need is adjoints to the diagonal in Cat*, which would become adjoints to the codiagonal back in Cat. For this you would need some sort of general principle guaranteeing you such adjoints in a sufficiently large class of 2-categories. But notice that, while Cat* certainly has products and coproducts, asking for these adjoints is asking that $C^*$ have products in some internal sense, as an object of Cat*, which is a totally different level of question. What we are seeing is that, just because an object of a 2-category has some limits and colimits in the internal sense, there's no reason at all to believe the same holds of the same object viewed in the opposite 2-category. The dualities are simply not acting at the same level.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to retain this object when inserting to mutable array?
Update: I have found the bug, in the copyWithZone method of A. Thanks everyone.
Update: sorry, I do have the @property declarations; I thought that was obvious, so I skipped them in my original post.
The crash message says objA points to released (zombie) memory when trying to access str.
My data structure looks like this:
@class A
{
NSString *str;
}
@property (retain) NSString *str; // str synthesized in .m
@class B
{
A *objA;
}
@property (copy) A *objA; // objA synthesized in .m
What I am trying to do is:
B *newB = [[B alloc] init];
[someMutableArray addObject: newB];
However, I will crash some times when I try to access like this:
B *myB = [someMutableArray objectAtIndex: index];
someLabel.text = myB.objA.str;
I guess objA and objA.str were not retained when inserting B into the array, but I don't know how to make sure they are retained.
Any help is appreciated
-Leo
A:
You should be using properties for Class A and B:
@interface A : NSObject {
NSString *str;
}
@property (nonatomic, retain) NSString *str;
@end
Then use @synthesize str; in the .m file; this will retain str when it is set through the property. Don't forget to release str in the dealloc method:
@implementation A
@synthesize str;
- (void) dealloc {
[str release], str= nil;
[super dealloc];
}
@end;
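Since the update at the top says the real bug was in A's copyWithZone, for reference here is a minimal sketch of what a correct implementation might look like under manual reference counting (assuming A adopts NSCopying and str is its only property):
- (id)copyWithZone:(NSZone *)zone {
    A *theCopy = [[[self class] allocWithZone:zone] init];
    // go through the property setter so the string is retained (or copied) correctly
    theCopy.str = self.str;
    return theCopy;
}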
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Behavior of expression evaluation in Plot
I'm confused by Mathematica's way of parsing expressions. I've been struggling with this for a while and never found an exhaustive answer: sometimes things don't parse the way I think they would, and I don't really understand why.
As an example, with Mathematica 8:
(* A works *)
Manipulate[
Plot[(y/a) /. {y -> (x - b)}, {x, 0, 2}, PlotRange -> {0, 1}],
{a, 1, 2},
{b, 0, 1}]
(* B doesn't work *)
Manipulate[
Plot[(x/a) /. {x -> (x - b)}, {x, 0, 2}, PlotRange -> {0, 1}],
{a, 1, 2},
{b, 0, 1}]
(* C works *)
(x/a) /. {x -> (x - b)}
Manipulate[
Plot[%, {x, 0, 2}, PlotRange -> {0, 1}],
{a, 1, 2},
{b, 0, 1}]
(* D doesn't work *)
test = (x/a) /. {x -> (x - b)}
Manipulate[
Plot[test, {x, 0, 2}, PlotRange -> {0, 1}]
, {a, 1, 2}, {b, 0, 1}]
(* E works *)
test2[x_, a_, b_] = (x/a) /. {x -> (x - b)}
Manipulate[
Plot[test2[x, a, b], {x, 0, 2}, PlotRange -> {0, 1}],
{a, 1, 2},
{b, 0, 1}]
Case A works, so the substitution is performed fine inside Plot and Manipulate.
But B doesn't, which I could understand as an issue with substituting a variable by an expression containing itself. But then, if you evaluate it beforehand, as in C, everything works again, so I guess it has to be a problem with x being part of Plot.
Then, if you assign the result of the substitution to a variable, you can't directly plot it, so it seems that variables are not evaluated inside Plot unless they are functions (as in E, with pattern matching). But % is, so % is "special" in that it gets evaluated inside Plot while an ordinary symbol assigned to a value does not.
Can someone explain all this to me? I guess it's related to the Hold attributes a function can have?
A:
Plot has Attributes HoldAll, so one possibility to get what you expect is to do just
SetOptions[Plot, Evaluated -> True];
at the beginning.
Another possibility (better documented) would be to use Evaluate inside Plot:
Plot[Evaluate[...], {x, 0, 2}]
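Applied to case B from the question, that would be (only the expression is wrapped; everything else is unchanged):
Manipulate[
 Plot[Evaluate[(x/a) /. {x -> (x - b)}], {x, 0, 2}, PlotRange -> {0, 1}],
 {a, 1, 2},
 {b, 0, 1}]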
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Returning Rows Created in a Cursor From a Stored Procedure
I have a table with three columns, each containing comma-separated data like
AAA,BBB,CCC,DDD,....
I want to return a CROSS JOIN with all the possible combinations of all the tokens from all columns, along with some of the other columns on each row.
I have a Split function which returns the tokens in a table. I pass in each column and get a bunch of rows back.
The best way I have been able to come up with to do this is with a cursor, taking each row one at a time. After doing the CROSS JOIN, I write all the calculated rows to a work/temp table. Once all the rows are processed, I SELECT from the work/temp table to return the calculated rows.
My question: Is there a way to do this without the work/temp table?
The code I have now is:
DECLARE cPKG CURSOR FAST_FORWARD FOR SELECT ID, SEARCH, COUNTY, COMPANY FROM DEV..EXPPKG WITH(NOLOCK)
OPEN cPKG
FETCH NEXT FROM cPKG INTO @ID, @SEARCH, @COUNTY, @COMPANY
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO DEV..PKG_DUMP_WORK
(ID,
PKG_CODE,
PRICE,
CNTY,
CPNY,
SRCH,
SEARCH,
COUNTY,
COMPANY,
SRCH_COUNT,
UPDT_DT,
UPDT_BY,
UPDT_CMT)
SELECT PKG.ID,
PKG.PKG_CODE,
PKG.PRICE,
CNTY.VALUE AS CNTY,
CPNY.VALUE AS CPNY,
SRCH.VALUE AS SRCH,
PKG.SEARCH,
PKG.COUNTY,
PKG.COMPANY,
PKG.SRCH_COUNT,
PKG.UPDT_DT,
PKG.UPDT_BY,
PKG.UPDT_CMT
FROM (SELECT *
FROM DEV..EXPPKG WITH(NOLOCK)
WHERE ID = @ID) PKG
CROSS JOIN DBO.Split(@SEARCH, ',') SRCH -- AAA,BBB,CCC...
CROSS JOIN DBO.Split(@COUNTY, ',') CNTY -- DDD,EEE,FFF..
CROSS JOIN DBO.Split(@COMPANY, ',') CPNY -- GGG,HHH,KKK...
FETCH NEXT FROM cPKG INTO @ID, @SEARCH, @COUNTY, @COMPANY
END
CLOSE cPKG
DEALLOCATE cPKG
Some data:
INSERT INTO [EXPPKG] ( PKG_CODE, PRICE, SEARCH, COUNTY, COMPANY, SRCH_COUNT, UPDT_DT, UPDT_BY, UPDT_CMT ) VALUES ( 'A-2', 999, 'CO,ER,FC,HB,ST,TX', 'BX,KG,QN,RI', ',AAN,ALR,CITI,GRANITE,HARB,LLS,LTTA,MADI,NARROW,REGENCY,', 6, NULL, NULL, NULL );
INSERT INTO [EXPPKG] ( PKG_CODE, PRICE, SEARCH, COUNTY, COMPANY, SRCH_COUNT, UPDT_DT, UPDT_BY, UPDT_CMT ) VALUES ( 'AM-2', 999, 'CO,ER,FC,HB,ST,TX', 'MA', ',ALR,CITI,GRANITE,INTER,LTTA,MADI,SKYLINE,', 6, NULL, NULL, NULL );
INSERT INTO [EXPPKG] ( PKG_CODE, PRICE, SEARCH, COUNTY, COMPANY, SRCH_COUNT, UPDT_DT, UPDT_BY, UPDT_CMT ) VALUES ( 'B-2', 999, 'AR,CO,ER,FC,HB,HI,HL,ST,TX', 'BX,KG,QN,RI', ',C&C,LTTA,', 9, NULL, NULL, NULL );
INSERT INTO [EXPPKG] ( PKG_CODE, PRICE, SEARCH, COUNTY, COMPANY, SRCH_COUNT, UPDT_DT, UPDT_BY, UPDT_CMT ) VALUES ( 'CA-2', 999, 'CO,ER,FC,HB,HI,ST,TX', 'BX,KG,MA,QN,RI', ',CANY,CHATHAM,TRAK,', 7, NULL, NULL, NULL );
INSERT INTO [EXPPKG] ( PKG_CODE, PRICE, SEARCH, COUNTY, COMPANY, SRCH_COUNT, UPDT_DT, UPDT_BY, UPDT_CMT ) VALUES ( 'CT-4', 999, 'CO,ER,FC,HB', 'BX,KG,MA,QN,RI', ',CLTLTNY,CTALB,CTIM,CTIM-711,CTIM-CC,CTIM-Q,CTIM-R,CTIMR-O,FNT,FNT-A,FNT-AG,FNT-N,FNT-R,NYLS,TICOR,TICORROC,FNT-RAM,', 4, NULL, NULL, NULL );
A:
You could replace the entire cursor with a single set-based insert. I would also caution you against using that NOLOCK hint: it can and will return missing and/or duplicate rows, along with a number of other nasty things. http://blogs.sqlsentry.com/aaronbertrand/bad-habits-nolock-everywhere/
INSERT INTO DEV..PKG_DUMP_WORK
(
ID,
PKG_CODE,
PRICE,
CNTY,
CPNY,
SRCH,
SEARCH,
COUNTY,
COMPANY,
SRCH_COUNT,
UPDT_DT,
UPDT_BY,
UPDT_CMT
)
SELECT PKG.ID,
PKG.PKG_CODE,
PKG.PRICE,
CNTY.VALUE AS CNTY,
CPNY.VALUE AS CPNY,
SRCH.VALUE AS SRCH,
PKG.SEARCH,
PKG.COUNTY,
PKG.COMPANY,
PKG.SRCH_COUNT,
PKG.UPDT_DT,
PKG.UPDT_BY,
PKG.UPDT_CMT
FROM DEV..EXPPKG PKG WITH(NOLOCK)
CROSS APPLY DBO.Split(PKG.SEARCH, ',') SRCH -- AAA,BBB,CCC...
CROSS APPLY DBO.Split(PKG.COUNTY, ',') CNTY -- DDD,EEE,FFF..
CROSS APPLY DBO.Split(PKG.COMPANY, ',') CPNY -- GGG,HHH,KKK...
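As a side note, if you are on SQL Server 2016 or later you could drop the user-defined Split function and use the built-in STRING_SPLIT instead. A sketch of just the FROM clause (STRING_SPLIT returns a single column named value, which the SELECT list above already matches):
FROM DEV..EXPPKG PKG
CROSS APPLY STRING_SPLIT(PKG.SEARCH, ',') SRCH -- AAA,BBB,CCC...
CROSS APPLY STRING_SPLIT(PKG.COUNTY, ',') CNTY -- DDD,EEE,FFF..
CROSS APPLY STRING_SPLIT(PKG.COMPANY, ',') CPNY -- GGG,HHH,KKK...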
|
{
"pile_set_name": "StackExchange"
}
|
Q:
To prove the mutual independence of events
If $A$, $B$ and $C$ are random events in a sample space, $A$, $B$ and $C$ are pairwise independent, and $A$ is independent of $(B \cup C)$, is it true that $A$, $B$ and $C$ are mutually independent?
My Attempt : (with questions of this type at my college the answer is usually in the affirmative.)
So to prove that $A,B$ and $C$ are mutually independent, all that remains to show is that $P(A \cap B \cap C) = P(A) \times P(B) \times P(C)$. We know that $P(A \cap (B \cup C))=P(A) \times P(B \cup C)$. How do I go about this?
A:
You have correctly identified the goal : We know about all necessary equations for mutual independence, except the final one : $P(A \cap B \cap C) = P(A) \times P(B) \times P(C)$.
Why not start directly from $P(A \cap (B \cup C))=P(A) \times P(B \cup C)$? From there you can apply the union rule to the right-hand side:
$P(B \cup C)=P(B) + P(C) - P(B \cap C)$
And on the left-hand side you can transform $A \cap (B \cup C)$ into $(A\cap B)\cup(A\cap C)$ and apply the union rule again.
Where do you land from there?
PS : For those reading but not knowing : Mutual independence between events require that all possible intersections between events follow the "multiplication rule", similar to the one we find in pairwise independence :
$P(A\cap B) = P(A) \times P(B) \\
P(A\cap C) = P(A) \times P(C) \\
P(B\cap C) = P(B) \times P(C) \\
P(A \cap B \cap C) = P(A) \times P(B) \times P(C)$
The first three are already known from pairwise independence in the problem statement.
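For completeness, carrying the hint through (just the algebra, spelled out):
$P(A \cap (B \cup C)) = P((A\cap B)\cup(A\cap C)) = P(A\cap B) + P(A\cap C) - P(A\cap B\cap C)$,
while on the other side, using the union rule and pairwise independence,
$P(A)\,P(B \cup C) = P(A)\big(P(B)+P(C)-P(B\cap C)\big) = P(A\cap B) + P(A\cap C) - P(A)P(B)P(C)$.
Setting the two equal and cancelling gives $P(A\cap B\cap C) = P(A)P(B)P(C)$, which is exactly the missing equation.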
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Django Rest Serializer returns empty
I'm following this tutorial (Django Rest Framework) and I'm facing a problem when I try to use serializers. It returns an empty result instead of the 49086 records that it should return. My query is fine, because when I display the data without serializers the data shows up. Please, what am I doing wrong?
models.py
# coding=utf-8
from __future__ import unicode_literals
from django.db import models
class Workers(models.Model):
chapa = models.IntegerField(primary_key=True)
emp_cod = models.IntegerField(primary_key=False)
class Meta:
managed = False
db_table = 'WORKERS'
serializers.py
from rest_framework import serializers
from .models import Workers
class WorkersSerializer(serializers.ModelSerializer):
class Meta:
model = Workers
fields = '__all__'
views.py
...
@api_view(['GET'])
@permission_classes((permissions.AllowAny,))
def get_all_workers(request):
data = Workers.objects.using('rh').all().order_by('emp_cod')
print(data.count()) # Returns 49086
serializer = WorkersSerializer(data)
print(serializer.data) # Returns {}
json = JSONRenderer().render(serializer.data)
return Response(json) # Returns Django Rest standard page with "{}" data
A:
You should use the serializer's many=True argument to serialize multiple objects. Also, you can pass serializer.data directly as the Response argument:
@api_view(['GET'])
@permission_classes((permissions.AllowAny,))
def get_all_workers(request):
data = Workers.objects.using('rh').all().order_by('emp_cod')
serializer = WorkersSerializer(data, many=True)
return Response(serializer.data)
Since your view returns so many objects at once, I suggest you add pagination:
from rest_framework.pagination import PageNumberPagination
@api_view(['GET'])
@permission_classes((permissions.AllowAny,))
def get_all_workers(request):
data = Workers.objects.using('rh').all().order_by('emp_cod')
paginator = PageNumberPagination()
paginator.page_size = 10
result_page = paginator.paginate_queryset(data, request)
serializer = WorkersSerializer(result_page, many=True)
return Response(serializer.data)
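If you also want the usual count/next/previous envelope in the JSON (rather than a bare list), return the paginator's wrapped response instead of the plain Response on the last line:
return paginator.get_paginated_response(serializer.data)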
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Creating an 'edit profile' action CakePHP 2.0?
I’m having a bit of a problem with CakePHP 2.0.2. I want to create an “edit profile” action. Here’s my controller action:
public function edit_profile() {
if ($this->request->is('get')) {
$this->request->data = $this->User->findById($this->Auth->user('id'));
} else {
if ($this->User->save($this->request->data)) {
$this->Session->setFlash(__('Your profile has been updated'));
}
}
}
And here’s my view:
<h2>Edit Profile</h2>
<?php
echo $this->Form->create('User');
echo $this->Form->input('id', array('type' => 'hidden'));
echo $this->Form->input('first_name');
echo $this->Form->input('last_name');
echo $this->Form->input('email');
echo $this->Form->end('Save Profile');
?>
However, when I submit the form, nothing seems to happen. I get no success message and no error message. If I add an else branch to complement if ($this->User->save($this->request->data)), that code block is executed, indicating the User model data is not saved.
Where am I going wrong?
A:
Check $this->User->validationErrors in the else statement if the user is not saved. My bet is that you have extra validation rules defined that are failing for fields not in your form.
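For example, a quick way to see what is failing (debug() is CakePHP's built-in dump helper; remove it once you've found the culprit):
if ($this->User->save($this->request->data)) {
    $this->Session->setFlash(__('Your profile has been updated'));
} else {
    // dump the per-field validation errors to the page
    debug($this->User->validationErrors);
    $this->Session->setFlash(__('Unable to update your profile.'));
}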
|
{
"pile_set_name": "StackExchange"
}
|
Q:
jQuery and MooTools Conflict
Okay, so I got jQuery to get along with MooTools with one script, by adding this at the top of the jQuery script:
var $j = jQuery.noConflict();
and then replacing every:
$(
with
$j(
But how would you get MooTools to play nicely with the following script, which uses jQuery?
Thanks in advance for any input,
Tracy
//Fade In Content Viewer: By JavaScript Kit: http://www.javascriptkit.com
var fadecontentviewer={
csszindex: 100,
fade:function($allcontents, togglerid, selected, speed){
var selected=parseInt(selected)
var $togglerdiv=$("#"+togglerid)
var $target=$allcontents.eq(selected)
if ($target.length==0){ //if no content exists at this index position (ie: stemming from redundant pagination link)
alert("No content exists at page number "+selected+"!")
return
}
if ($togglerdiv.attr('lastselected')==null || parseInt($togglerdiv.attr('lastselected'))!=selected){
var $toc=$("#"+togglerid+" .toc")
var $selectedlink=$toc.eq(selected)
$("#"+togglerid+" .next").attr('nextpage', (selected<$allcontents.length-1)? selected+1+'pg' : 0+'pg')
$("#"+togglerid+" .prev").attr('previouspage', (selected==0)? $allcontents.length-1+'pg' : selected-1+'pg')
$target.css({zIndex: this.csszindex++, visibility: 'visible'})
$target.hide()
$target.fadeIn(speed)
$toc.removeClass('selected')
$selectedlink.addClass('selected')
$togglerdiv.attr('lastselected', selected+'pg')
}
},
setuptoggler:function($allcontents, togglerid, speed){
var $toc=$("#"+togglerid+" .toc")
$toc.each(function(index){
$(this).attr('pagenumber', index+'pg')
})
var $next=$("#"+togglerid+" .next")
var $prev=$("#"+togglerid+" .prev")
$next.click(function(){
fadecontentviewer.fade($allcontents, togglerid, $(this).attr('nextpage'), speed)
return false
})
$prev.click(function(){
fadecontentviewer.fade($allcontents, togglerid, $(this).attr('previouspage'), speed)
return false
})
$toc.click(function(){
fadecontentviewer.fade($allcontents, togglerid, $(this).attr('pagenumber'), speed)
return false
})
},
init:function(fadeid, contentclass, togglerid, selected, speed){
$(document).ready(function(){
var faderheight=$("#"+fadeid).height()
var $fadecontents=$("#"+fadeid+" ."+contentclass)
$fadecontents.css({top: 0, left: 0, height: faderheight, visibility: 'hidden'})
fadecontentviewer.setuptoggler($fadecontents, togglerid, speed)
setTimeout(function(){fadecontentviewer.fade($fadecontents, togglerid, selected, speed)}, 100)
$(window).bind('unload', function(){ //clean up
$("#"+togglerid+" .toc").unbind('click')
$("#"+togglerid+" .next", "#"+togglerid+" .prev").unbind('click')
})
})
}
}
A:
When you have jQuery specific code that is using $, the simplest way is to wrap the code with the following:
// Disable the $ global alias completely
jQuery.noConflict();
// For jQuery scripts
(function($){
// set a local $ variable only available in this block as an alias to jQuery
... here is your jQuery specific code ...
})(jQuery);
// For Mootols scripts
(function($){
// set a local $ variable only available in this block as an alias
// to Mootools document.id
... here is your Mootools specific code ...
})(document.id);
See the second example on noConflict documentation.
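Applied to the script in the question, the idea would be to drop the whole fadecontentviewer object (and whatever call to fadecontentviewer.init your page makes) inside the jQuery wrapper, shown here only as an empty shell:
jQuery.noConflict();
(function($){
    // Paste the fadecontentviewer object and its init call from the question in here.
    // Every $() inside this function resolves to jQuery, while the global $ stays
    // free for MooTools outside the wrapper.
})(jQuery);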
A:
I don't know of a compatibility mode provided by MooTools, but an easy way would be to replace every occurrence of $( in the script with $j( or jQuery(.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to check if BackgroundFetching is enabled on iOS?
I'd like to determine whether the user has turned off Background Fetch in the Settings app. If it is turned off, my app won't work.
A:
Here is the code to do it:
if ([[UIApplication sharedApplication] backgroundRefreshStatus] == UIBackgroundRefreshStatusAvailable) {
NSLog(@"Background fetch is available for the app.");
}else if([[UIApplication sharedApplication] backgroundRefreshStatus] == UIBackgroundRefreshStatusDenied)
{
NSLog(@"Background fetch for this app or for the whole system is disabled.");
}else if([[UIApplication sharedApplication] backgroundRefreshStatus] == UIBackgroundRefreshStatusRestricted)
{
NSLog(@"Background updates are unavailable and the user cannot enable them again. For example, this status can occur when parental controls are in effect for the current user.");
}
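A slightly tidier variant of the same check, using a switch over the enum (behaviour is identical):
switch ([[UIApplication sharedApplication] backgroundRefreshStatus]) {
    case UIBackgroundRefreshStatusAvailable:
        NSLog(@"Background fetch is available for the app.");
        break;
    case UIBackgroundRefreshStatusDenied:
        NSLog(@"Background fetch for this app or for the whole system is disabled.");
        break;
    case UIBackgroundRefreshStatusRestricted:
        NSLog(@"Background updates are unavailable and the user cannot enable them again.");
        break;
}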
|
{
"pile_set_name": "StackExchange"
}
|