Q:
C++ substring matching implementation
I have two strings, "hello" and "eo" for instance, and I wish to find the duplicate characters between the two strings, that is: 'e' and 'o' in this example.
My algorithm would go this way
void find_duplicate(char* str_1, char* str_2, int len1, int len2)
{
    char c;
    if(len1 < len2)
    {
        int* idx_1 = new int[len1]; // record elements in little string
                                    // that are matched in big string
        for(int k = 0 ; k < len1 ; k++)
            idx_1[k] = 0;
        int* idx_2 = new int[len2]; // record if element in str_2 has been
                                    // matched already or not
        for(int k = 0 ; k < len2 ; k++)
            idx_2[k] = 0;
        for(int i = 0 ; i < len2 ; i++)
        {
            c = str_1[i];
            for(int j = 0 ; j < len1 ; j++)
            {
                if(str_2[j] == c)
                {
                    if(idx_2[j] == 0) // this element in str_2 has not been matched yet
                    {
                        idx_1[i] = j + 1; // mark ith element in idx as matched in string 2 at pos j
                        idx_2[j] = 1;
                    }
                }
            }
        }
        // now idx_1 and idx_2 contain matches info, let's remove matches.
        char* str_1_new = new char[len1];
        char* str_2_new = new char[len2];
        int kn = 0;
        for(int k = 0 ; k < len1 ; k++)
        {
            if(idx_1[k] > 0)
            {
                str_1_new[kn] = str_1[k];
                kn++;
            }
        }
        kn = 0;
        for(int k = 0 ; k < len2 ; k++)
        {
            if(idx_2[k] > 0)
            {
                str_2_new[kn] = str_2[k];
                kn++;
            }
        }
    }
    else
    {
        // same here, switching roles (do it yourself)
    }
}
I feel my solution is awkward:
- symmetry of both cases in the first if/else and code duplication
- time complexity: 2*len1*len2 operations for finding duplicates, then len1 + len2 operations for removal
- space complexity: two len1 and two len2 char* buffers.
What if len1 and len2 are not given (with and without resorting to an STL vector)?
Could you provide your implementation of this algorithm?
Thanks.
A:
First of all, it isn't a substring matching problem – it is the problem of finding the common characters between two strings.
Your solution works in O(n*m), where n=len1 and m=len2 in your code. You could easily solve the same problem in O(n+m+c) time by counting the characters in each of the strings (where c is the size of the character set). This is the same counting technique used in counting sort.
Sample code implementing this in your case:
#include <iostream>
#include <cstring> // for strlen and memset

const int CHARLEN = 256; // number of possible chars

using namespace std;

// returns table of char duplicates
char* find_duplicates(const char* str_1, const char* str_2, const int len1, const int len2)
{
    int *count_1 = new int[CHARLEN];
    int *count_2 = new int[CHARLEN];
    char *duplicates = new char[CHARLEN+1]; // we hold duplicate chars here
    int dupl_len = 0; // length of duplicates table, we insert '\0' at the end

    memset(count_1, 0, sizeof(int)*CHARLEN);
    memset(count_2, 0, sizeof(int)*CHARLEN);

    for (int i=0; i<len1; ++i)
    {
        ++count_1[str_1[i]];
    }
    for (int i=0; i<len2; ++i)
    {
        ++count_2[str_2[i]];
    }
    for (int i=0; i<CHARLEN; ++i)
    {
        if (count_1[i] > 0 && count_2[i] > 0)
        {
            duplicates[dupl_len] = i;
            ++dupl_len;
        }
    }
    duplicates[dupl_len] = '\0';

    delete[] count_1; // arrays allocated with new[] must be released with delete[]
    delete[] count_2;
    return duplicates;
}

int main()
{
    const char* str_1 = "foobar";
    const char* str_2 = "xro";
    char* dup = find_duplicates(str_1, str_2, strlen(str_1), strlen(str_2));
    cout << "str_1: \"" << str_1 << "\" str_2: \"" << str_2 << "\"\n";
    cout << "duplicates: \"" << dup << "\"\n";
    delete[] dup;
    return 0;
}
Please note that I am also sorting the output here. If you do not want that, you can skip the counting of characters in the second string and just check for duplicates on the fly.
If you, however, intend to detect multiple duplicates of the same letter (e.g. if "banana" and "arena" should output "aan" instead of "an"), then you can just use the per-character counts in the current solution and adjust the output accordingly.
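A minimal sketch of that multiset variant (my own addition, not from the answer above): reuse the two count arrays and emit each common character min(count_1[i], count_2[i]) times; note the duplicates buffer would then need at least min(len1, len2) + 1 bytes instead of CHARLEN + 1.
#include <algorithm> // for std::min
// replacement for the single-output loop in find_duplicates()
for (int i = 0; i < CHARLEN; ++i)
{
    int common = std::min(count_1[i], count_2[i]); // e.g. "banana"/"arena": 'a' -> 2
    for (int k = 0; k < common; ++k)
    {
        duplicates[dupl_len] = i;
        ++dupl_len;
    }
}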
| {
"pile_set_name": "StackExchange"
} |
Q:
Adding a user control "The Enumerator is not valid"
I'm using Visual Studio 2010 to create a small WPF application. I've created a user control that I am now trying to add to my main form. The user control does show up in toolbox but every time I try to drag the control to the form I get the error:
The enumerator is not valid because the collection changed.
I should know what's wrong and it is bugging me that I can't figure it out.
A:
You have a bug in the constructor of the UserControl: you are using a foreach loop over an IEnumerable, and the IEnumerable is changed while the loop is running, which is not allowed with a foreach loop. Use a for loop instead if you are manipulating the collection you are iterating over; see the sketch below.
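A minimal illustration of the workaround (my own sketch – Items and ShouldRemove are hypothetical names, and the ToList() snapshot needs using System.Linq):
// Mutating Items inside "foreach (var item in Items)" throws the "collection was modified" error.
// Option 1: enumerate a snapshot copy
foreach (var item in Items.ToList())
{
    if (ShouldRemove(item)) Items.Remove(item);
}
// Option 2: index-based for loop, walking backwards so removals don't shift unvisited items
for (int i = Items.Count - 1; i >= 0; i--)
{
    if (ShouldRemove(Items[i])) Items.RemoveAt(i);
}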
A:
In order for a user control to function properly you need to have a constructor that takes zero arguments. This way the form designer has a way to render the control in a "default" manner.
I then overloaded my constructor to take the arguments I needed to actually run the control properly and everything worked as expected.
| {
"pile_set_name": "StackExchange"
} |
Q:
Only load a part of a page with jquery
I have a WordPress website and want to load my pages using jQuery. I only want to load the content div, so I pass the URL like this:
var url = $(this).attr('href') + "#content";
Now when I try to load the page using AJAX, it loads the URL without the #content part, so it loads the complete page into my div instead of only the part that I want.
Maybe it has something to do with my .htaccess:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
# RewriteRule ^index\.php$ - [L]
RewriteRule ^(index\.php/?|intro.html)$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Any ideas?
A:
I think you need a space before the hash.
Here is a .load() example from the jQuery docs:
$('#b').load('article.html #target');
so in your case:
var url = $(this).attr('href') + " #content";
$('#containerDiv').load(url);
A:
You can use something like this:
$.get(url, function(response){
    var content = $("#content", response);
    $('#newContentHere').append(content);
});
| {
"pile_set_name": "StackExchange"
} |
Q:
How to extend DT datatable cells across multiple columns inside table and in header
I'm having extreme difficulty trying to make a character string spread out across multiple columns in a DT::datatable header and in the table itself. I found these solutions:
Extend a table cell across multiple columns,
merge columns in DT:datatable
but can't seem to get them to work for my own purposes.
This is what I am getting:
This is what I want:
Sample Data:
df<-structure(list(`AQS ID` = c(NA, "AQS ID", "340071001", "340170006",
"340010006", "340070002", "340273001", "340150002", "340290006",
"340410007", "340190001", "340030006", "340110007", "340250005",
"340130003", "340315001", "340210005", "340230011", "340219991"
), State = c(NA, "State", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ",
"NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ"
), Site = c(NA, "Site", "Ancora State Hospital", "Bayonne", "Brigantine",
"Camden Spruce St", "Chester", "Clarksboro", "Colliers Mills",
"Columbia", "Flemington", "Leonia", "Millville", "Monmouth University",
"Newark Firehouse", "Ramapo", "Rider University", "Rutgers University",
"Washington Crossing"), `4th Max (ppb)` = c("AQS AMP450 (4-10-19)",
"2015", "72", "77", "64", "79", "70", "76", "75", "66", "73",
"76", "68", "77", "72", "71", "73", "77", "75"), ...5 = c(NA,
2016, 64, 68, 63, 76, 67, 74, 71, 65, 73, 73, 68, 68, 68, 68,
71, 75, 74), ...6 = c(NA, 2017, 68, 67, 63, 76, 70, 73, 74, 64,
72, 74, 63, 60, 64, 66, 69, 75, 71), ...7 = c(NA, 2018, 68, 78,
63, 75, 73, 77, 74, 67, 72, 79, 63, 68, 71, 69, 76, 76, 77),
...8 = c("Envista", "2019", "67", "65", "59", "70", "62",
"68", "68", "58", "66", "71", "68", "67", "65", "64", "66",
"70", "67"), `Design Value 2017` = c("2017", "68", "70",
"63", "77", "69", "74", "73", "65", "72", "74", "66", "68",
"68", "68", "71", "75", "73", NA), `Design Value 2018` = c(2018,
66, 71, 63, 75, 70, 74, 73, 65, 72, 75, 64, 65, 67, 67, 72,
75, 74, NA), `Design Value 2019` = c(2019, 67, 70, 61, 73,
68, 72, 72, 63, 70, 74, 64, 65, 66, 66, 70, 73, 71, NA)), row.names = c(NA,
-19L), class = c("tbl_df", "tbl", "data.frame"))
Sample Code:
library(shiny)
library(shinydashboard)
ui <- dashboardPage(
dashboardHeader(),
dashboardSidebar(),
dashboardBody(DT::dataTableOutput("dailytable")))
server <- function(input, output) {
jsc <- '
function(settings, json) {
$("td:contains(\'AQS AMP450\')").attr("colspan", "4").css("text-align", "center");
$("tbody > tr:fifth-child > td:empty");
}'
output$dailytable<-renderDataTable({
DT::datatable(df,filter = 'top',options = list(dom = "t", ordering = FALSE, initComplete = JS(jsc)),
class = 'cell-border stripe')
})
}
shinyApp(ui, server)
As you can see, columns are getting pushed over to the right which is not what I want. I would appreciate any help or guidance. Thanks.
A:
Not exactly what you want, but close:
library(htmltools)
sketch <- withTags(
table(
class = "display",
thead(
tr(
th(colspan = 3, "2019", style = "border-right: solid 2px;"),
th(colspan = 5, "4th Max ppb", style = "border-right: solid 2px;"),
th(colspan = 3, "Design Values")
),
tr(
th(colspan = 3, "", style = "border-right: solid 2px;"),
th(colspan = 4, "AQS AMP450 (4-10-19)", style = "border-right: solid 2px;"),
th("Envista", style = "border-right: solid 2px;"),
th(colspan = 3, "")
),
tr(
th("AQS ID"),
th("State"),
th("Site", style = "border-right: solid 2px;"),
th("2015"),
th("2016"),
th("2017"),
th("2018", style = "border-right: solid 2px;"),
th("2019", style = "border-right: solid 2px;"),
th("2017"),
th("2018"),
th("2019")
)
)
)
)
dat <- cbind(df[3:nrow(df),1:8], df[2:(nrow(df)-1), 9:11])
library(DT)
datatable(dat, rownames = FALSE, container = sketch,
options = list(
columnDefs = list(
list(targets = "_all", className = "dt-center")
)
)) %>%
formatStyle(c(3,7,8), `border-right` = "solid 2px")
| {
"pile_set_name": "StackExchange"
} |
Q:
scala: override implicit parameter around a call-by-name code block
Is there a way to override an implicit parameter used by functions invoked inside a control structure block? I have some code that looks like this:
def g()(implicit y: Int) {
// do stuff with y
}
class A {
implicit val x: Int = 3
def f() {
overrideImplicit(...) { // <-- overrideImplicit() is permitted to do anything it wants to make it so that g() sees a different implicit val, as long as we do not explicitly declare "implicit" here (though it could happen within the method/function)
g() // somehow sees the new implicit as opposed to x
}
}
}
My understanding is that even if overrideImplicit() sets the implicit inside itself, g() is still going to see the one that was in scope at the time, which is the one declared in A. I realize that one way to get the desired behavior is to explicitly state "implicit val x2: Int = 4" inside f(), but I want to avoid that and hide the fact that implicits are used. Is there any way to do this? Thanks.
A:
This is currently being done in STMs like this:
implicit object globalCtx extends Ctx
val r = ref(0)
atomic { implicit txn =>
r := 5 // resolves `txn` as the implicit parameter instead of globalCtx
}
so to my knowledge, there is no better way to do it. At least not yet - see this discussion on SAM (Single Abstract Method) types and possibly adding them to Scala. It's suggested at one point SAM closures could solve this problem if they were implemented so that implicits inside the SAM closure are resolved once again in the context of the target type.
| {
"pile_set_name": "StackExchange"
} |
Q:
Introspect whether a process created with `start_process` is still running and when it exits
I'm using start-process to run a process upon certain events picked up with hooks
(start-process "foo" "*Foo*" foo-command foo-args)
I would like to do 2 things with this.
Prevent the process from being started if it's already running
Print a message to *Messages* when the process is complete
How can I do this please?
A:
To see if a named process is currently running, you could use the process-status function. It will return nil if the named process is not running ...
process-status is a built-in function in `C source code'.
(process-status PROCESS)
Return the status of PROCESS.
The returned value is one of the following symbols:
run -- for a process that is running.
stop -- for a process stopped but continuable.
exit -- for a process that has exited.
signal -- for a process that has got a fatal signal.
open -- for a network stream connection that is open.
listen -- for a network stream server that is listening.
closed -- for a network stream connection that is closed.
connect -- when waiting for a non-blocking connection to complete.
failed -- when a non-blocking connection has failed.
nil -- if arg is a process name and no such process exists.
PROCESS may be a process, a buffer, the name of a process, or
nil, indicating the current buffer's process.
And as stated here, a process sentinel is a way for code to be invoked (such as displaying a message) when the process ends.
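A minimal sketch combining the two (it reuses the "foo", foo-command and foo-args placeholders from the question; process-status called with a name string returns nil when no such process exists, per the docstring above):
(unless (process-status "foo")            ; nil => no process named "foo" is around
  (let ((proc (start-process "foo" "*Foo*" foo-command foo-args)))
    (set-process-sentinel
     proc
     (lambda (process event)
       (when (memq (process-status process) '(exit signal))
         (message "Process %s finished: %s" process event))))))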
| {
"pile_set_name": "StackExchange"
} |
Q:
Does every mount on FreeBSD need the 10% "root space"?
So, our servers are being managed by a hosting provider. These servers are running FreeBSD.
Each disk we let them mount loses 10% of its disk space.
I know (if I understand correctly) that this is normal and the reason is that the file-system allocates 10% diskspace for the root user.
I read for example this: https://forums.freebsd.org/threads/29336/
Now I do understand that the operating systems needs space here and there to do stuff.
However: when we mount a disk under - let's say - /data/web/my-user/some-sub-directory-somewhere-for-a-specific-goal/
Does that mount need to lose that 10% of disk space? Or can/should the hosting provider use the tunefs -m option in order to save us some money?
Are there any FreeBSD gurus who can recommend something on this matter?
A:
Basically, that reserved space of 5% (the default on Linux) or 10% (on BSD) is used to prevent filesystem fragmentation, which translates into poor performance when the disk partition or disk drive is more than 95% full. I usually use tune2fs -m 1 /dev/partition in order to change the default reserved percentage to a 1% reservation.
If you know that you are going to use no more than 95% of that disk/partition, then you can change the default reserved space to a lower value without an actual impact on the filesystem's performance.
I think this provides a pretty good answer to your question:
https://unix.stackexchange.com/questions/7950/reserved-space-for-root-on-a-filesystem-why
It's about Linux/Unix rather than BSD specifically, but things are pretty much the same.
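On FreeBSD itself (UFS) the equivalent knob is tunefs(8); a quick sketch, assuming an example device name and that the filesystem is unmounted or mounted read-only while tuning:
# show the current tuning parameters, including "minimum percentage of free space"
tunefs -p /dev/ada0p2
# lower the reserved space to 1%
tunefs -m 1 /dev/ada0p2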
| {
"pile_set_name": "StackExchange"
} |
Q:
Can one start by learning Java?
Please don't dismiss me right away – just give me some advice. Before asking this question I searched Google, and the forum search did turn up answers to similar questions. Still, I want to clarify a few points so that I don't waste my time. In Java I'm interested in areas such as junior Java developer and web developer.
Java books say that you need prior programming experience. I'm a second-year programming student, but so far I'm at the level of a barely passing student, so do I need to be good at programming in languages such as C++, Pascal, etc.?
In one article I read that "it is best to start learning Java with problems that match your current level of Java knowledge." Where do I get such exercises? I'm not yet able to come up with them myself.
I've never understood how, given a specific task, to work with the Java documentation.
Thanks in advance for your help.
A:
Yes, you can start by learning Java.
Knowing C++ is not required for learning Java, but it is helpful. It's the same as with natural languages: knowing French is useful for learning Italian, but not necessary.
Find textbooks that contain such exercises – for example, collections of problems and exercises. Search for "Java exercises".
Get a good textbook with examples and read it. Then start writing simple applications. For reference: resources for beginning Java programmers.
A:
Your books are bad.
Look at freelance sites; look for lab and coursework assignments from your university, programming olympiad problems, and the official Java site.
It isn't clear which documentation you mean: RS/UTP or Javadoc. An RS (requirements specification) describes all the functionality the program needs. A UTP (unit test plan) defines a test case for every requirement described in the RS. Javadoc saves you when you are looking for the right tool – e.g. you haven't worked with databases for a while and forgot exactly which method you need, or you forgot which parameter to pass, or where to get the constant to pass into a method, and so on.
Being a programmer does not simply mean knowing some language. You want to become a Java junior leaning toward web – not a problem! Here is a rough list of what you can do for that:
get a solid grasp of OOP – in Java this paradigm is the foundation of the language (classes, interfaces, abstract classes);
learn the core classes so that you don't spend a lot of time searching while writing a program (working with files, networking, writing a GUI, sorting, working with databases); in addition, don't just make a simple little program – also do everything you can to make it run faster (working in this direction will give you deeper knowledge of the language's facilities);
master error handling and working with threads;
study design patterns (at least the creational patterns; after reading about a pattern, try to write the code yourself, and try to find it used in real code – the Java sources);
learn the tools for working with regexps, XML (+ XPath), and XSL (this will come in handy in web development);
applets, servlets, JSP pages (worth learning in exactly that order); write a client and a server for exchanging some kind of data (e.g. a weather server or a currency converter); you can take the data from some public server;
get to know ORM, EJB, and Spring.
P.S. Learn to write good code. If an outsider barely familiar with programming (or unfamiliar with Java) but who knows English can make sense of it, that is one sign of good code. Good code doesn't crash with an IllegalArgumentException. Good code is always well formatted. Good code is always easy to fix and to extend with new functionality. Good code is code you no longer feel the urge to rewrite (refactor)...
A:
For example, at the Technical University of Munich programming is taught from the first year, and specifically in core Java!!!
| {
"pile_set_name": "StackExchange"
} |
Q:
Bootstrap calendar disable not working
I am unable to disable the dates before the current date.
Check my code -
$(function() {
window.prettyPrint && prettyPrint();
//$('#dp1').datepicker({minDate: 0});
/*$('#dp1').datepicker({startDate: '-0m'}).on('changeDate', function() {
$('#dp1').datepicker('hide');
});*/
/*var date = new Date();
date.setDate(date.getDate() - 1);
$('#dp1').datepicker({startDate: date});*/
var date = new Date();
date.setDate(date.getDate() - 1);
$('#dp1').datepicker({
startDate: date
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="http://vanceblackburn.com/Demo/datepicker/js/bootstrap-datepicker.js"></script>
<link href="http://vanceblackburn.com/Demo/datepicker/css/bootstrap.min.css" rel="stylesheet">
<link rel="stylesheet" type="text/css" href="http://vanceblackburn.com/Demo/datepicker/css/datepicker.css" />
<div class="form-group">
<label class="control-label col-sm-4">Default Datepicker</label>
<div class="col-sm-6">
<input id="dp1" type="text" value="" size="16" class="form-control">
</div>
</div>
In my datepicker calendar I want to disable all dates before the current date.
If today is 30-June-2015, then all previous dates must be disabled – still shown, but not clickable.
The same code is here: Bootstrap Datepicker
A:
I think you should initialize a temporary variable with today's date and set it on the datepicker, like this:
JS Code:
var nowDate = new Date();
var today = new Date(nowDate.getFullYear(), nowDate.getMonth(), nowDate.getDate(), 0, 0, 0, 0); //temporary variable with todays date
$('#dp1').datepicker({
startDate: today,
orientation: 'top' // will display the datepicker at bottom, as its
//opposite to the orientation given, refer
// https://github.com/eternicode/bootstrap-datepicker/issues/1035
});
HTML CODE:
<div class="col-sm-6">
<input id="dp1" type="text" value="" size="16" class="form-control" />
</div>
Live Demo @ JSFiddle: jsfiddle.net/dreamweiver/py7uotyf/3
Note: there is already an issue raised about orientation (the keyword behaves opposite to the actual placement) on GitHub: https://github.com/eternicode/bootstrap-datepicker/issues/1035.
| {
"pile_set_name": "StackExchange"
} |
Q:
Inter-processor communication for robotic arm
I'm building a hobby 6-DOF robotic arm and am wondering what the best way is to communicate between the processors (3-4 AVRs, 18 inches max separation). I'd like to have the control loop run on the computer, which sends commands to the microprocessors via an Atmega32u4 USB-to-??? bridge.
Some ideas I'm considering:
RS485
Pros: all processors on same wire, differential signal more robust
Cons: requires additional chips, need to write (or find?) protocol to prevent processors from transmitting at the same time
UART loop (ie, TX of one processor is connected to RX of next)
Pros: simple firmware, processors have UART built in
Cons: last connection has to travel length of robot, each processor has to spend cycles retransmitting messages
CANbus (I know very little about this)
My main considerations are hardware and firmware complexity, performance, and price (I can't buy an expensive out-of-box system).
A:
You want to use USB for communications with the computer. If you have a number of microcontrollers, you will probably only connect one of the microcontrollers directly to the computer. The other microcontrollers will need to get their commands from the main microcontroller.
The communication you choose will depend on a number of factors:
required bandwidth (we will assume you are running them at 16MHz)
complexity (wiring and coding)
bi-directional, or master-slave
Almost all options have built-in support on the AVR microcontroller. There is no option you might reasonably prefer over the built-in options which would require additional hardware. Because they have built-in support, the software complexity is all similar, in that you just configure the port (using registers), put the data to transmit in another register, then trigger the transmission by setting a bit in another register. Any data received is found in another register, and an interrupt is triggered so you can handle it. Whichever option you choose, the only difference is the change in register locations, and some changes to the configuration registers.
A USART loop has the following features:
Maximum baud rate of CLK/16 = 1MHz (at 16MHz clock) which is a transfer rate of around 90KB/s
fully bi-directional communications (no master or slave designation)
requires separate wires between each pair of microcontrollers - the Atmega32u4 supports two USART ports natively, limiting the number of microcontrollers you can connect in a network in practice (or else you end up with a long string of microcontrollers - ie. connected in a linked list manner)
Note: this is also what you would use to get RS232 communication, except that because RS232 requires 10V, it requires a driver to get those voltage levels. For communication between microcontrollers, this is not useful (only voltage levels are changed).
RS485:
Essentially, you use the USART port in a different mode - there is no advantage in bandwidth, and it may only simplify the wiring slightly, but it also complicates it. This is not recommended.
Two-wire interface:
This is also referred to as I2C. This means that all devices share the same two wires.
You need a pull-up resistor on both wires
It is slow (because the pull-up resistors are limited in value, and there is increasing capacitance as the number of devices increases, and the wire length increases). For this AVR microcontroller, the speed is up to 400 kHz - slower than USART (but this speed depends on limiting your capacitance). The reason is that although a device drives the data wire low, the opposite transition is accomplished by letting the wire float high again (the pull-up resistor).
It is even slower when you consider that ALL communication shares the same limited bandwidth. Because all communication shares the same limited bandwidth, there may be delays in communication where data must wait until the network is idle before it can be sent. If other data is constantly being sent, it may also block the data from ever being sent.
It does rely on a master-slave protocol, where a master addresses a slave, then sends a command/request, and the slave replies afterwards. Only one device can communicate at a time, so the slave must wait for the master to finish.
Any device can act as both a master and/or a slave, making it quite flexible.
SPI
This is what I would recommend/use for general communication between microcontrollers.
It is high speed - up to CLK/2 = 8MHz (for CLK at 16MHz), making it the fastest method. This is achievable because of its separate wire solely for the clock.
The MOSI, MISO data, and SCK clock wires are shared across the whole network, which means it has simpler wiring.
It is a master-slave format, but any device can be a master and/or slave. However, because of the slave select complications, for shared wiring (within the network), you should only use it in a hierarchical manner (unlike the two-wire interface). IE. if you organise all devices into a tree, a device should only be master to its children, and only a slave to its parent. That means that in slave mode, a device will always have the same master. Also, to do this correctly, you need to add resistors to MISO/MOSI/SCK to the upstream master, so that if the device is communicating downstream (when not selected as a slave), the communications will not affect communications in other parts of the network (note the number of levels you can do this using resistors is limited, see below for better solution using both SPI ports).
The AVR microcontroller can automatically tri-state the MOSI signal when it is slave-selected, and switch to slave mode (if in master).
Even though it might require a hierarchical network, most networks can be organised in a tree-like manner, so it is usually not an important limitation
The above can be relaxed slightly, because each AVR microcontroller supports two separate SPI ports, so each device can have different positions in two different networks.
Having said this, if you need many levels in your tree/hierarchy (more than 2), the above solution using resistors gets too fiddly to work. In this case, you should change the SPI network between each layer of the tree. This means each device will connect to its children on one SPI network, and its parent on the other SPI network. Although it means you only have a single tree of connections, the advantage is that a device can communicate with both one of its children and its parent at the same time and you don't have fiddly resistors (always hard to choose the right values).
Because it has separate MOSI and MISO wires, both the master and slave can communicate at the same time, giving it a potential factor of two increase in speed. A extra pin is required for the slave-select for each additional slave, but this is not a big burden, even 10 different slaves requires only 10 extra pins, which can be easily accommodated on a typical AVR microcontroller.
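As a concrete illustration of the register-level pattern described earlier (configure the port, write a data register, poll a flag), here is a minimal AVR C sketch of an SPI master transfer. It is only a sketch: the register and bit names are the standard AVR ones, and the bit numbers assume an ATmega32U4-style pinout.
#include <avr/io.h>

// Configure the SPI port as master, clock = F_CPU/2
static void spi_master_init(void)
{
    DDRB |= _BV(2) | _BV(1) | _BV(0);    // MOSI (PB2), SCK (PB1) and /SS (PB0) as outputs
    SPCR  = _BV(SPE) | _BV(MSTR);        // enable SPI, master mode
    SPSR |= _BV(SPI2X);                  // double speed -> F_CPU/2
}

// Clock one byte out and return whatever the slave clocked back on MISO
static uint8_t spi_transfer(uint8_t out)
{
    SPDR = out;                          // start the transmission
    while (!(SPSR & _BV(SPIF)))          // wait for the "transfer complete" flag
        ;
    return SPDR;
}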
CAN is not supported by the AVR microcontroller you have specified. As there are other good options, it is probably not important in this case anyways.
The recommendation is SPI, because it is fast, the wiring isn't too complex, and doesn't require fiddly pull-up resistors. In the rare case where SPI doesn't fully meet your needs (probably in more complicated networks), you can use multiple options (eg. use both SPI ports, along with the two-wire interface, as well as pairing some of the microcontrollers using a USART loop!)
In your case, using SPI means that naturally, the microcontroller with the USB connection to the computer can be the master, and it can just forward the relevant commands from the computer to each slave device. It can also read the updates/measurements from each slave and send these to the computer.
At 8MHz, and 0.5m wire length, I don't think it will become a problem. But if it is, try to be more careful about capacitance (keep ground and signal wires from getting too close, and also be careful of connections between different conductors), and also about signal termination. In the unlikely event that it remains a problem, you can reduce the clock rate, but I don't think it is necessary.
A:
I can highly recommend CAN for inter processor communications. We use it in our robots, with up to 22 processors on the same bus. With good protocol design, you can use up about 90% of the available bandwidth (about 640kbps when you take into account all of the error checking and inter frame spacing). We're able to servo 10 motors at 1000Hz on one CAN bus. This is approaching the limit. You could probably squeeze it to 20 motors if you pack the data very carefully.
Generally CAN needs to have one transceiver chip for each processor (it's just a little 8-pin device). The transceiver gives you the nice differential signal which emits very little interference, and also makes it immune to interference if you're working in an electrically noisy environment (motors, solenoids, and radio transmitters).
However, in limited circumstances, it's actually possible to use CAN with no transceivers.
If you ever feel like implementing a bus with serious bandwidth, I suggest you try EtherCAT. It's a 100Mb bus, which can be connected to your PC's Ethernet port. There are two important parts to the bus:
The Bridge. This converts the Ethernet physical layer to a simpler, lower cost LVDS physical layer, which doesn't require the large connectors, Phy chip, and many components that Ethernet itself does.
The nodes. Each node just needs one ET1200 chip and a microcontroller.
The PC can transmit and receive large chunks of data to and from the nodes at 1kHz or faster. You can control quite a lot of stuff on a single EtherCAT bus.
Added:
Shadow Robot Company now sell a useful EtherCAT Bus system called Ronex. It lets you add quite a lot of I/O, and they're going to be introducing lots of other types of board soon, like motor controllers, and high quality ADCs.
| {
"pile_set_name": "StackExchange"
} |
Q:
Apache Cordova and ADT - Update 'www' folder
I am developing an Android app using Apache Cordova 3.6.0.
I have followed the instructions available here and here to create, build and run my app.
I realized there are two folders which store HTML, CSS and JS files: www and platforms/android/assets/www:
According to this answer, I should modify only www because the command $ cordova run android will overwrite platforms/android/assets/www.
When I run my app through ADT, the folder platforms/android/assets/www is not updated.
How can I make ADT update this directory? What am I missing?
A:
The linked answer is correct. You should only modify the content of the root www folder.
But you will have to build your app before you run it to make your changes take effect in the platforms/android/assets/www folder by running:
$ cordova build
or cordova build android.
| {
"pile_set_name": "StackExchange"
} |
Q:
php preg_match returning unexpected true
Could anyone explain to me why this is returning TRUE? There are 7 z's yet it should be returning FALSE because I've set a max limit of 6 in the regex.
preg_match('/z{4,6}/', "zzzzzzz")
A:
That is because your string includes a substring of 4 to 6 'z's. If you want the match to be against your whole string, you have to put anchors in your regex.
/^z{4,6}$/
or
/\Az{4,6}\z/
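A quick demonstration of the difference (a minimal sketch you can run as-is):
<?php
var_dump(preg_match('/z{4,6}/',   "zzzzzzz")); // int(1): a run of 4-6 z's exists inside the string
var_dump(preg_match('/^z{4,6}$/', "zzzzzzz")); // int(0): the whole string is 7 z's, too long
var_dump(preg_match('/^z{4,6}$/', "zzzzz"));   // int(1): the whole string is 5 z's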
| {
"pile_set_name": "StackExchange"
} |
Q:
c# Visual Studio 2015 - how to create a Setup which uninstalls another application
Initially I created an application that I completely rewrote in a second version. It is a completely different Visual Studio solution.
Now I would like its setup installer to uninstall the previous version, but because it was not created from the same solution, the automatic uninstallation of the previous version does not work.
Is there any way to force the installer to uninstall a certain application based on product name or product code?
I found a WMIC command that works when run from command line
wmic product where name="XXXX" call uninstall /nointeractive
So I created a VBS script which execute a bat file containing the WMIC code and I added it to the Setup project
dim shell
set shell=createobject("wscript.shell")
shell.run "uninstallAll.bat",0,true
set shell=nothing
but when I run the resulting MSI, it fires error 1001, meaning that a service already exists – in other words, the uninstallation didn't work.
The old program is still present, and both versions create a service with the same name. :/
any suggestion?
A:
There are 2 options:
You can increase the version of the MSI project so it will be treated as an upgrade and it will not throw any error while installing.
Another way is to write some code in the installer project as follows:
protected override void OnBeforeInstall(IDictionary savedState)
{
    // Write the uninstall PowerShell script here
    // (e.g. installutil /u <yourproject>.exe)
    using (PowerShell PowerShellInstance = PowerShell.Create())
    {
        PowerShellInstance.AddScript("");
        PowerShellInstance.AddParameter("");
        PowerShellInstance.Invoke(); // Invoke must run while the PowerShell instance is still in scope
    }
}
Note: This InstallUtil is available with the .NET Framework, and its path is %WINDIR%\Microsoft.NET\Framework[64]\<framework_version>.
For example, for the 32-bit version of the .NET Framework 4 or 4.5.*, if your Windows installation directory is C:\Windows, the path is C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe.
For the 64-bit version of the .NET Framework 4 or 4.5.*, the default path is C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it possible to develop (not distribute) iPhone applications on Windows using C/C++?
Possible Duplicate:
iPhone development on Windows
I am very used to coding in C/C++ on Microsoft Visual Studio. However, I'm very interested in learning how to write iPhone apps. I do not have a Mac (other than my iPhone). Is there a legal way to develop on Windows and then when it comes time to test on the real device or distribute the app commercially, that I pay for the Apple iPhone developer fee ($99/yr) and compile/ship the code on an actual Mac?
I just want to know if I can continue to work the technologies I already know in order to make iPhone apps? I came across a framework called Dragonfire SDK which looks exactly like what I'm looking for. However, I'm questioning the legality of all of this and what it will really take to get the code onto a real iPhone/app store. For now though, I mainly just want to be able to work with C/C++ and then test using an iPhone simulator of some sort. Is it possible to do this part on a PC?
Again, I'm willing to pay whatever development fees are required to distribute the app and when the time comes I'll find a real Mac to test/distribute on. Thanks!
A:
DragonFire SDK should work because it gets around the requirement for using a Mac by compiling your code on a Mac server. I don't use it; I program in Objective-C. There aren't really any largely negative reviews I could find online, but be careful because it looks like they are going to charge you a lot to use it.
| {
"pile_set_name": "StackExchange"
} |
Q:
Profile backup script for XP migrate to Vista
I have a .cmd I use to copy our users' local files when they change computers. This works very well on XP/2000, but now I can see that we (a large enterprise) are moving to Vista, and I was wondering if the paths used in my .cmd would work on Vista too...
This is a real cut and paste from the script (shortened to keep the post brief – if the full script is needed/wanted I can insert it...) just to show the paths:
robocopy "C:\DOCUME~1\%userid%\Desktop "%dest%\%Process%" /S /E /COPYALL /V /NP /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\Favorites" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\My Documents" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\Start Menu" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\Application Data\Microsoft\Internet Explorer\Quick Launch" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\Local Settings\Application Data\other" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\%Process%" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\Application Data\Microsoft\Address Book" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\Application Data\Microsoft\Outlook" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\Application Data\Microsoft\Proof" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\Application Data\Microsoft\Signatures" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
robocopy "C:\DOCUME~1\%userid%\Local Settings\Application Data\Microsoft\Outlook" "%dest%\%Process%" /S /E /COPYALL /V /NP /NJH /LOG+:"%logfile%" /R:10 /W:15 /TEE
set dest=x:\%userid% and %process% is the part after %userid%
A:
You can use the environmental variable %USERPROFILE% to access current user's profile.
And that variable is universal and works on all Windows versions from Win95 to Win7.
On a default install of XP it would give you c:\documents and settings\username
on Vista it would give you c:\users\username
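For example, the first robocopy line from the question could become the following (just a sketch – note that %USERPROFILE% refers to whoever runs the script, not an arbitrary %userid%):
robocopy "%USERPROFILE%\Desktop" "%dest%\%Process%" /S /E /COPYALL /V /NP /LOG+:"%logfile%" /R:10 /W:15 /TEE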
| {
"pile_set_name": "StackExchange"
} |
Q:
Video quality in android?
I am using the MediaRecorder class for recording video. I initialize the recorder with the following properties:
recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
But the quality of the video is not the same as video I shoot with the native Android camera; video recorded using MediaRecorder is of poor quality compared to the native one. How can I improve the video quality?
If anyone knows, please help me out. Thanks.
A:
Finally I found the code to record high quality video on Android 2.1 by setting videoEncodingBitRate, audioEncodingBitRate, audioSamplingRate, etc. Using this method you can set whatever properties you want for the video in order to produce high quality output.
For setting high quality and low quality parameters, refer to this page:
http://www.andgps.com/20110410/camcorderprofile-predefined-camcorder-profile-settings-for-camcorder-applications
The code I used with base version Android 2.1 to produce high quality video is shown below:
recorder = new MediaRecorder();
Method[] methods = recorder.getClass().getMethods();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoFrameRate(24);
recorder.setVideoSize(720, 480);
for (Method method : methods) {
    try {
        if (method.getName().equals("setAudioChannels")) {
            method.invoke(recorder, String.format("audio-param-number-of-channels=%d", 1));
        }
        else if (method.getName().equals("setAudioEncodingBitRate")) {
            method.invoke(recorder, 12200);
        }
        else if (method.getName().equals("setVideoEncodingBitRate")) {
            method.invoke(recorder, 3000000);
        }
        else if (method.getName().equals("setAudioSamplingRate")) {
            method.invoke(recorder, 8000);
        }
        else if (method.getName().equals("setVideoFrameRate")) {
            method.invoke(recorder, 24);
        }
    } catch (IllegalArgumentException e) {
        e.printStackTrace();
    } catch (IllegalAccessException e) {
        e.printStackTrace();
    } catch (InvocationTargetException e) {
        e.printStackTrace();
    }
}
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
A:
Use the following settings for video recordings:
private void cameraSettings()
{
mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.DEFAULT);
mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT);
mediaRecorder.setVideoSize(width, height);
mediaRecorder.setVideoFrameRate(videoFramePerSecond);
}
Use videoFramePerSecond = 30, width = 1280 and height = 720.
You can adjust these settings yourself as per your requirements.
| {
"pile_set_name": "StackExchange"
} |
Q:
EnumMap- Using to detect escape characters
I'm bench marking various ways to escape special characters, out of personal interest.
A colleague suggested that an EnumMap might also be quick for checking whether a character is contained within the Map.
I'm trying the following code and it works using containsValue().
However, can this be made to work with containsKey()?
public static enum EscapeChars {
COLON(':'), SLASH('\\'), QUESTION('?'), PLUS('+'), MINUS('-'), EXCLAMATION(
'!'), LEFT_PARENTHESIS('('), RIGHT_PARENTHESIS(')'), LEFT_CURLY(
'{'), RIGHT_CURLY('}'), LEFT_SQUARE('['), RIGHT_SQUARE(']'), UP(
'^'), QUOTE('"'), TILD('~'), ASTERISK('*'), PIPE('|'), AMPERSEND('&');
private final char character;
EscapeChars(char character) {
this.character = character;
}
public char getCharacter() {
return character;
}
}
static EnumMap<EscapeChars, Integer> EnumMap = new EnumMap<EscapeChars,Integer>(
EscapeChars.class);
static {
for (EscapeChars ec : EscapeChars.values()) {
EnumMap.put(ec, (int)ec.character);
}
}
static void method5_Enum() {
String query2="";
for (int j = 0; j < TEST_TIMES; j++) {
query2 = query;
char[] queryCharArray = new char[query.length() * 2];
char c;
int length = query.length();
int currentIndex = 0;
for (int i = 0; i < length; i++) {
c = query.charAt(i);
if (EnumMap.containsValue((int)c)) {
if ('&' == c || '|' == c) {
if (i + 1 < length && query.charAt(i + 1) == c) {
queryCharArray[currentIndex++] = '\\';
queryCharArray[currentIndex++] = c;
queryCharArray[currentIndex++] = c;
i++;
}
} else {
queryCharArray[currentIndex++] = '\\';
queryCharArray[currentIndex++] = c;
}
} else {
queryCharArray[currentIndex++] = c;
}
}
query2 = new String(queryCharArray, 0, currentIndex);
}
System.out.println(query2);
}
Reference: https://softwareengineering.stackexchange.com/questions/212254/optimized-special-character-escaper-vs-matcher-pattern
A:
I don't believe you would want to, because you would first have to convert the char to an EscapeChars, which is exactly the lookup the Map is supposed to do for you. Given your usage, I would suggest using a Map<Integer, EscapeChars> and calling containsKey on that map instead; a sketch follows.
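A minimal sketch of that suggestion (my own code, reusing the EscapeChars enum from the question; I keyed it by Character rather than Integer, which works the same way and avoids the int cast):
import java.util.HashMap;
import java.util.Map;

class EscapeLookup {
    static final Map<Character, EscapeChars> BY_CHAR = new HashMap<Character, EscapeChars>();
    static {
        for (EscapeChars ec : EscapeChars.values()) {
            BY_CHAR.put(ec.getCharacter(), ec);
        }
    }

    static boolean needsEscaping(char c) {
        return BY_CHAR.containsKey(c); // O(1) hash lookup; c is auto-boxed to Character
    }
}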
| {
"pile_set_name": "StackExchange"
} |
Q:
Where does the program's taskbar icon go?
Here's the question. I made a program with two windows. When the program runs, the first window opens; then, if you click the button in that window, another window opens. Nothing unusual, it would seem. The first problem: after the second window opens, the program's icon disappears from the taskbar. How do I make it stay? And precisely because of this problem, the second window doesn't minimize properly – it just slides off into a corner of the desktop. What is going on?
For example, with code like this:
The first window:
from PyQt5 import QtCore, QtGui, QtWidgets
class Window_1(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(300, 146)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.pushButton = QtWidgets.QPushButton(self.centralwidget)
self.pushButton.setGeometry(QtCore.QRect(40, 30, 221, 41))
self.pushButton.setObjectName("pushButton")
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 300, 21))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.pushButton.setText(_translate("MainWindow", "Нажми"))
The second window:
from PyQt5 import QtCore, QtGui, QtWidgets
class Window_2(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(290, 143)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(80, 30, 171, 71))
self.label.setObjectName("label")
MainWindow.setCentralWidget(self.centralwidget)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.label.setText(_translate("MainWindow", "Тут что-то написано"))
And the main script that launches everything:
import sys, os
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5 import QtWidgets, QtGui
from Window1 import Window_1
from Window2 import Window_2
class Window(QMainWindow):
def __init__(self, parent=None):
super(Window, self).__init__(parent)
self.Win = Window_1()
self.Win.setupUi(self)
self.Win.pushButton.clicked.connect(self.check)
def check(self):
des = Window2(parent=self)
self.hide()
class Window2(QMainWindow):
def __init__(self, parent=None):
super(Window2, self).__init__(parent)
self.Win_2 = Window_2()
self.Win_2.setupUi(self)
self.show()
if __name__ == '__main__':
app = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(app.exec_())
A:
Replace:
des = Window2(parent=self)
With:
self.des = Window2()
The point is that the widget object was not referenced by any other object, so it was destroyed as soon as the function returned. Assigning it to self.des attaches the object to the main window, which protects it from being destroyed.
By the way, the proper way is to do it like this (there is no point in creating the widget every time):
...
class Window(QMainWindow):
def __init__(self, parent=None):
super().__init__(parent)
self.Win = Window_1()
self.Win.setupUi(self)
self.Win.pushButton.clicked.connect(self.check)
self.des = Window2()
def check(self):
self.des.show()
self.hide()
class Window2(QMainWindow):
def __init__(self, parent=None):
super().__init__(parent)
self.Win_2 = Window_2()
self.Win_2.setupUi(self)
| {
"pile_set_name": "StackExchange"
} |
Q:
Major notation doubt in calculus.
Which of the following notations is correct?
$$\frac{d}{dx}(y)$$ or $$\frac{dy}{dx}.$$
Please don't think this is trivial.
A:
They are both correct, and they mean the same thing. If there is any difference, it's in the mindset they convey.
$\frac{dy}{dx}$ is a function defined as the derivative of $y$. It's a single symbol. On the other hand, $\frac d{dx}y$ is the result of applying the differentiation operator to the function $y$. It's two symbols (one function, one operator).
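For a concrete check, take $y = x^2$: then $\frac{d}{dx}(y) = \frac{d}{dx}(x^2) = 2x$ and $\frac{dy}{dx} = 2x$ as well – the same function either way, with the first notation emphasizing the operator $\frac{d}{dx}$ being applied to $y$ and the second naming the resulting derivative.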
| {
"pile_set_name": "StackExchange"
} |
Q:
Remove table row doesn't work
I'm trying to remove a table row (tr) from a table with a form inside. I'm using this function to do it.
$('.removeStop').click(function() {
$(this).closest('tr').fadeTo(400, 0, function () {
$(this).remove();
});
return false;
});
The fadeTo works fine, but when it goes to remove the row and reaches the remove call, it goes into an infinite loop of "cannot remove undefined".
Maybe it's just some stupid error I made, but I would think if it can fade the whole row, it should be able to remove the same thing.
Any help would be great :)
Edit: Here is the HTML:
<tr align="left">
<td width="100">School: </td>
<td>
<select name="stops[school][0][studentid]" class="combobox" style="display: none; ">
<option value="">Select a school...</option>
<option value="1" selected="selected">Hogwarts School of Wizardry</option><option value="2">Itchy and Scratchy Land</option><option value="3">Springfield Elementary</option> </select><input class="ui-autocomplete-input ui-widget ui-widget-content ui-corner-left ui-corner-right inputCustom" autocomplete="off" role="textbox" aria-autocomplete="list" aria-haspopup="true">
</td>
<td width="70"></td>
<td width="100">Time (24hr): </td>
<td><input class="ui-widget ui-widget-content ui-corner-left ui-corner-right inputCustom" value="15" name="stops[school][0][hr]" style="width:50px;" type="text"> <input class="ui-widget ui-widget-content ui-corner-left ui-corner-right inputCustom" value="01" name="stops[school][0][min]" style="width:50px;" type="text"></td>
<td><a href="#" class="removeStop">Remove</a></td>
</tr>
A:
Try this:
$('.removeStop').click(function(e) {
e.preventDefault();
$(this).closest('tr').fadeOut(400, function () {
$(this).remove();
});
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Memory Management in H2O
I am curious to know how memory is managed in H2O.
Is it completely 'in-memory' or does it allow swapping in case the memory consumption goes beyond available physical memory? Can I set -mapperXmx parameter to 350GB if I have a total of 384GB of RAM on a node? I do realise that the cluster won't be able to handle anything other than the H2O cluster in this case.
Any pointers are much appreciated, Thanks.
A:
H2O-3 stores data completely in-memory in a distributed column-compressed distributed key-value store.
No swapping to disk is supported.
Since you are alluding to mapperXmx, I assume you are talking about running H2O in a YARN environment. In that case, the total YARN container size allocated per node is:
mapreduce.map.memory.mb = mapperXmx * (1 + extramempercent/100)
extramempercent is another (rarely used) command-line parameter to h2odriver.jar. Note the default extramempercent is 10 (percent).
mapperXmx is the size of the Java heap, and the extra memory referred to above is for additional overhead of the JVM implementation itself (e.g. the C/C++ heap).
YARN is extremely picky about this, and if your container tries to use even one byte over its allocation (mapreduce.map.memory.mb), YARN will immediately terminate the container. (And for H2O-3, since it's an in-memory processing engine, the loss of one container terminates the entire job.)
You can set mapperXmx and extramempercent to as large a value as YARN has space to start containers.
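As a worked example of that formula against the numbers in the question (just an illustration): with -mapperXmx 350g and the default extramempercent of 10, YARN would be asked for 350 * 1.10 = 385 GB per container, which is already more than the node's 384 GB of RAM, so the Xmx (or extramempercent) would have to be lowered until the requested container size fits what YARN can actually allocate on that node.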
| {
"pile_set_name": "StackExchange"
} |
Q:
How are items enumerated in Japanese restaurant menus?
I've been asked to render a restaurant menu in Japanese, even though I'm not very good with the language (don’t ask). The original menu has ingredient lists for each entry, usually like this:
Spam, bacon, sausage, eggs, ham.
But also like this:
Spam, bacon, and eggs.
And sometimes like this:
Spam on bread with sausages, bacon, and ham.
My question is, how are these lists normally presented in Japanese restaurant menus? Should I make them like the original with commas, i.e. this:
スパム、ベーコン、ソーセージ、卵、ハム。
Or do they use a lot of とs like this?
スパムとベーコンとソーセージと卵とハム。
Or something else?
And for the last example above, is it weird to do a direct rendering like this?:
ブレッドにスパムとソーセージとベーコンとハム。
What's a natural/nonawkward way of expressing such enumerations?
A:
Ordinarily, an ingredient list looks like:
スパム、ベーコン、ソーセージ、卵、ハム
No 。is used, because the list is not a sentence.
For
Spam on bread with sausages, bacon, and ham.
It should look like:
スパムの乗ったパン、ソーセージ、ベーコン、ハム
Also, I suggest using ランチョンミート for Spam, because スパム is not common in Japanese. So the last example should be:
ランチョンミートの乗ったパン、ソーセージ、ベーコン、ハム
| {
"pile_set_name": "StackExchange"
} |
Q:
need regex to extract two substrings from one string
I have a string similar to tvm11551.iso that I am trying to parse. In this string, these bolded numbers vary: tvm 11 5 51 .iso (please ignore the spaces here). I wrote the following program in PHP that would extract those two numbers from that string:
$a = "tvm11551.iso";
if(preg_match('/^tvm\d{5}\.iso/',$a)){
$b = preg_match('/tvm(\d\d)\d\d\d\.iso/' , $a);
$c = preg_match('/tvm\d\d\d(\d\d)\.iso/' , $a);
echo "B: " . $b . "<br>";
echo "C: " . $c;
}
However, I am getting the output as:
B: 1
C: 1
How do I fix the RegEx to get the expected output?
A:
The 1 you are seeing is the return value of preg_match(), which is 1 when the pattern matched and 0 when it did not – it is not the captured text. To get the captured groups, you need to pass an array as the third argument to preg_match():
$a = "tvm11551.iso";
$matches = array();
preg_match('/^tvm(\d{2})5(\d{2})\.iso$/', $a, $matches);
var_dump($matches);
array(3) {
[0] =>
string(12) "tvm11551.iso"
[1] =>
string(2) "11"
[2] =>
string(2) "51"
}
A:
Based on the patterns you have, I don't see any reason to be using a regex at all after the initial match.
You would just want this (substr takes a start offset and a length):
if(preg_match('/^tvm\d{5}\.iso/', $a)){
    $b = substr($a, 3, 2); // "11"
    $c = substr($a, 6, 2); // "51"
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Changing date format in SSIS
I am trying to move data from chinook db(oracle) to chinook DW(mysql) using SSIS.
Currently in my Oracle DB I have dates in the format 07-AUG-11, and I need to put them into MySQL as 2011-08-07, but I am not able to do it.
I tried using this in oracle for conversion
select to_char(to_date(InvoiceDate,'DD-MON-YY'), 'YYYY-MM-DD') from CHINOOK.invoice;
and tried using the expression (from TO_CHAR onwards) in SSIS, but it did not work.
(There are also other expressions I tried, but SSIS throws an error saying the function does not exist.)
I just need to know what the expression is in SSIS. If anyone could help...
A:
I would bring it into SSIS as a date, using (in Oracle):
TO_DATE(InvoiceDate,'DD-MON-YY')
Then if you need to convert it into a string with the 'yyyy-MM-dd' format, you can use something like (in SSIS):
(DT_WSTR, 4) YEAR(DateField) + "-" + RIGHT("0" + (DT_WSTR, 2) MONTH(DateField), 2) + "-" + RIGHT("0" + (DT_WSTR, 2) DAY(DateField), 2)
Although, if your destination is a date field, then it should go straight in without any processing/transformation as it is a date already.
| {
"pile_set_name": "StackExchange"
} |
Q:
Understanding JWT
I've spent a couple of weeks trying to wrap my head around JWT objects. The premise makes sense, but where I get confused is the security aspect. If I am a JavaScript client (e.g. Firebase) and want to send a secure request to an API using OAuth, I would encrypt my message with a key. However, since the client source may be viewed, how can I secure my key so malicious requests don't go through? Am I missing something? Is there a way to secure the key?
A:
Joel, I think you got the directions wrong ;)
One would use JWT within the OAuth protocol to achieve what some people might call "Stateless Authentication", meaning that the auth server would issue a signed token (for e.g. a client application or a user) after successful authentication (of the client or user) without storing info about/ of it, which would be required when using opaque token.
The signed token could be used by your JS client to e.g. call a certain REST-API endpoint (on a so-called resource server) that would verify the signature of the token and authorize your request or not, based on the content (the claims) of the JWT.
Both, your client application as well as the resource server are able to introspect the token and verify its signature because they either have a shared secret with the auth server (who used the secret to sign the token in the first place) or know the public key that corresponds to the private key the auth server used to sign the token (as Florent mentioned in his comment).
JWTs can also be encrypted, which is useful if the resource server or the auth server require sensitive information but don't want to store/ access the data. You would not be able to introspect it as long as you don't have the used encryption secret.
... long story short, the OAuth protocol describes client auth against a resource or an auth server. JWT can be used to transfer proof of authentication (as a Bearer token within the Authorization header). However, the idea of using JWT in the OAuth flow is not to "send a secure request to an api".
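A minimal sketch of the sign/verify flow described above (my own illustration, assuming Node.js and the npm "jsonwebtoken" package with a shared secret; in the public/private-key variant you would sign with the private key and verify with the corresponding public key):
const jwt = require('jsonwebtoken');

const secret = 'shared-with-the-auth-server'; // kept on the servers, never shipped to the browser

// Auth server: issue a signed token after successful authentication
const token = jwt.sign({ sub: 'user-123', scope: 'read' }, secret, { expiresIn: '1h' });

// Resource server: verify the signature of the bearer token before authorizing the request
try {
  const claims = jwt.verify(token, secret);
  console.log('authorized for', claims.sub);
} catch (err) {
  console.log('rejected:', err.message);
}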
A:
The encryption process is performed using the public key of the recipient.
Your client has no private key to generate and manage.
If you want to receive and decrypt such JWT, then your client has to create a key pair (private and public) for the session only and then exchange the public key with the server.
Q:
Sphere revolving around another sphere- CSS
I am trying to create a pure CSS design of a sphere revolving (orbiting) around another sphere, like the moon orbiting the earth, to be precise. The image of the earth fits properly into the sphere of the earth, but the image of the moon does not fit into the sphere of the moon.
The image attached might help to understand my question better
Below is my CSS script
.center {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
width: 200px;
height: 200px;
border-radius: 50%;
background: transparent;
}
.center .earth {
position: absolute;
top: 0;
width: 200px;
height: 200px;
background-image: url(https://www.publicdomainpictures.net/pictures/90000/velka/earth-map.jpg);
margin: 3em auto;
border-radius: 50%;
background-size: 630px;
animation: spin 30s linear alternate infinite;
box-shadow: inset 20px 0 80px 6px rgba(0, 0, 0, 1);
color: #000;
}
.center .earth .moon {
position: absolute;
top: calc(50% - 1px);
left: 50%;
width: 200px;
height: 2px;
transform-origin: left;
border-radius: 50%;
/*animation: rotate 10s linear infinite;*/
}
.center .earth .moon::before {
content: url(moon.jpg);
position: absolute;
top: -25px;
right: 0;
width: 50px;
height: 50px;
background: #fff;
border-radius: 50%;
/*animation: rotate 10s linear infinite;*/
}
@keyframes rotate {
0% {
transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
}
}
@keyframes spin {
100% {
background-position: 100%;
}
}
A:
Change content: url(moon.jpg) to content: "";,
use background-image: url(moon.jpg); instead,
and remove background: #fff from the .center .earth .moon::before rule.
body {
background: black;
}
.center {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
width: 200px;
height: 200px;
border-radius: 50%;
background: transparent;
}
.center .earth {
position: absolute;
top: 0;
width: 200px;
height: 200px;
background-image: url(https://www.publicdomainpictures.net/pictures/90000/velka/earth-map.jpg);
margin: 3em auto;
border-radius: 50%;
background-size: 630px;
animation: spin 30s linear alternate infinite;
box-shadow: inset 20px 0 80px 6px rgba(0, 0, 0, 1);
color: #000;
}
.center .earth .moon {
position: absolute;
top: calc(50% - 1px);
left: 50%;
width: 200px;
height: 2px;
transform-origin: left;
border-radius: 50%;
/*animation: rotate 10s linear infinite;*/
}
.center .earth .moon::before {
content: "";
background-image: url(https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRsvjrANMGI8aBJSFbsHteVa04rcB1IjjNsbrhm8vTLflfpiG133g);
position: absolute;
top: -25px;
right: 0;
width: 50px;
height: 50px;
border-radius: 50%;
/*animation: rotate 10s linear infinite;*/
}
@keyframes rotate {
0% {
transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
}
}
@keyframes spin {
100% {
background-position: 100%;
}
}
<div class="center">
<div class="earth">
<div class="moon">
</div>
</div>
</div>
Q:
Small question about calculus
I have this lemma from the paper "Multiplicity results for quasi-linear problems", A. Ayoujil, A. R. El Amrouss, 2008:
We consider the truncated problem $$(\mathcal P_\pm)\begin{cases}-\Delta_pu=f_\pm(x,u) & \text{in }\Omega, \\ u=0 & \text{on }\partial\Omega,\end{cases}$$ where $$f_\pm(x,t)=\begin{cases}f(x,t)&\text{if }\pm t\geqslant0,\\0&\text{otherwise}.\\\end{cases}$$ We denote by $u^+:=\max(u,0)$ and $u^-:=\max(-u,0)$ the positive and negative parts of $u$.
Lemma 2.2. All solutions of $(\mathcal{P}_+)$ (resp. $(\mathcal{P}_-)$) are positive (resp. negative) solutions of $(\mathcal{P})$.
Proof. Define $\Phi_\pm : W_0^{1,p}(\Omega) \to \mathbb{R},$
$$
\Phi_\pm(u) =
\frac{1}{p} \int\limits_\Omega\left|\nabla u\right|^p dx -
\int\limits_\Omega F_\pm(x, u) \, dx,
$$
where $F_\pm(x,t) = \int\limits_0^t f_\pm(x,s)\,ds$. It is well known that, under a subcritical growth condition on $f$, $\Phi_\pm$ is well defined on $W_0^{1,p}(\Omega)$, weakly lower semi-continuous and a $C^1$-functional.
Let $u$ be a solution of $(\mathcal{P}_+)$, or equivalently, $u$ be a critical point of $\Phi_+$. Taking $v = u^-$ in
$$
\langle\Phi_+'(u),v\rangle =
\int\limits_\Omega\left(
\left|\nabla u\right|^{p-2}\nabla u\nabla v - f_+(x,u)v
\right) dx = 0,
$$
shows that $||u^-||=0$, so $u^-=0$ and $u=u^+$ is also a critical point of $\Phi$ with critical value $\Phi(u) = \Phi_+(u)$. Furthermore, by Anane[1], $u\in L^\infty(\Omega)\cap C^1(\Omega)$. The maximum principle implies that either $u > 0$ or $u\equiv 0$. Similarly, nontrivial critical points of $\Phi_-$ are negative solutions of $(\mathcal{P})$. $\quad\square$
I don't understand why $\|u^-\|_{W^{1,p}_0}=0$.
Thank you
A:
Because $f_+(x,u)u_-=0$ and on the RHS of the last display formula you get $\int_\Omega|\nabla u_-|^p dx$.
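In more detail (this computation is added here for clarity; it is the standard argument): since $u^-=0$ on $\{u\geqslant 0\}$ and $\nabla u^-=-\nabla u$ a.e. on $\{u<0\}$, while $f_+(x,u)\,u^-=0$ everywhere, taking $v=u^-$ gives
$$0=\langle\Phi_+'(u),u^-\rangle=\int_{\{u<0\}}|\nabla u|^{p-2}\nabla u\cdot\nabla u^-\,dx=-\int_\Omega|\nabla u^-|^p\,dx,$$
so $\int_\Omega|\nabla u^-|^p\,dx=0$. Since $\|\nabla\cdot\|_{L^p}$ is an equivalent norm on $W_0^{1,p}(\Omega)$ (Poincaré inequality), this means $\|u^-\|_{W_0^{1,p}}=0$, i.e. $u^-=0$.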
Q:
Where should I reorder on a bar graph to make the bar groups follow the same sequence as the data frame?
I have a dataframe like this:
> str(mydata6)
'data.frame': 6 obs. of 4 variables:
$ Comparison : Factor w/ 6 levels "Decreased_Adult",..: 5 2 6 3 4 1
$ differential_IR_number: num 446 305 965 599 1799 ...
$ Stage : Factor w/ 3 levels "AdultvsE11","E14vsE11",..: 2 2 3 3 1 1
$ Change : Factor w/ 2 levels "Decrease","Increase": 2 1 2 1 2 1
Columns 1, 3 and 4 are factors and column 2 is numeric.
I used the following code to do a bargraph:
ggplot(mydata6, aes(x=Stage, y=differential_IR_number, fill=Change)) + #don't need to use "" for x= and y, comparing to the above code
geom_bar(stat = "identity", position = "stack") + #using stack to make decrease and increase stack with each other
theme(axis.text.x = element_text(angle = 90, hjust = 1)) + #using theme function to change the labeling to be vertical
geom_text(aes(label=differential_IR_number), position=position_stack(vjust=0.5))
The result is the following:
But I want the order to be E14vsE11, E18vsE11 and AdultvsE11. I tried to reorder/sort at different positions but none of them works.
Why does it not follow the order of my data frame?
A:
The order is the one of the levels of the factor. You can set the order you want as follows:
mydata6$Stage <- factor(mydata6$Stage, levels = c("E14vsE11", "E18vsE11", "AdultvsE11"))
Q:
Php - url/string contains arabic chars
I am working on something that grabs 1 image from bing images.
For some reason file_get_contents did not work so I searched a bit and got the following method - which works great with English keywords:
$fp = fsockopen("www.bing.com", 80, $errn, $errs);
$ar = "عربي";
$out = "GET /images/search?q=$ar HTTP/1.1\r\n";
echo "$out";
echo "<Br><br>";
$out .= "Host: www.bing.com\r\n";
$out .= "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0\r\n";
$out .= "Connection: close\r\n";
$out .= "\r\n";
fwrite($fp, $out);
$response = "";
while ($line = fread($fp, 4096)) {
$response .= $line;
}
fclose($fp);
$response_body = substr($response, strpos($response, "\r\n\r\n") + 4);
// or
list($response_headers, $response_body) = explode("\r\n\r\n", $response, 2);
So if $ar contains English keyword(s), it works perfectly. However, when I try to put in an Arabic word, the results are convoluted and don't match the results of a Bing image search.
At the top of my PHP file I have:
<meta http-equiv='content-Type' content='text/html; charset=UTF-8'/>
Any help would greatly be appreciated.
TIA
A:
I know you said you tried urlencode() but this worked for me:
$ar = urlencode("عربي");
Q:
How Do I Use an Additional DBContext?
I have an existing ABP 1.0.0 project that connects to SQL Server. Now I'm trying to extend this app to get additional data from Teradata. I created a new EF project and referenced Teradata.Client.Provider instead of EntityFramework.SqlServer.
In the new project, I have a new TdContext class that has both desired DbSet Teradata entities, a model builder that maps the entities to the schema/table and the usual public constructors where I specify the connection string. I'm not sure the providerName so I guessed "Teradata.Client.Provider".
When I call the App Service that attempts to inject the IRepository<ATerraDataTable>, ABP throws the generic An Error has Occurred.
I'm not sure what is failing. Aside from the providerName, I can't find anywhere in ABP where I tell it to instantiate the Teradata client. Furthermore, I cannot remember how an IRepository<TEntity> knows which DbContext to use.
My injected IRepository<ASqlDataTable> works fine. When I wrote this solution 3 years ago, I only had the one DBContext to SQL Server, I didn't put much thought into asking how does an Repository know what connection to use. The ABP docs infer that the UOW does this but they don't go into enough detail for me.
.NET Provider for Teradata reference:
https://downloads.teradata.com/doc/connectivity/tdnetdp/14.11/webhelp/DevelopingNetDataProviderforTeradataApplications.html
A:
Abp 1.0.0 is old, so I am not sure if my answer will be correct. But here goes.
Abp uses a module system.
If you look at your front-end application (if MVC) you will find this class:
appnameWebMvcModule.cs in the Startup folder.
In that class you will find the following line [DependsOn(typeof(xxxx))] where xxxx is another project.
You can follow that chain until you reach appnameEntityFrameworkModule.cs,
which has the following line:
Configuration.Modules.AbpEfCore().AddDbContext....
If your second DbContext references another database with a completely different structure, I would create a separate application and implement an API.
Then have your current application consume that API.
This way you don't break any application and you get easier extensibility.
Q:
SQL Anywhere 12 ODBC returns zero rows when using filters in query
I'm building a .NET C# app that connects to my SQL Anywhere 12 database to get data using the ODBC driver, but I have a weird problem: whenever I use a filter in the query I get nothing in the reader, yet if I run the same query in Sybase Central I get the expected results.
this is an example of my code
connection = new OdbcConnection(conStrMonitor);
connection.Open();
var cmd = new OdbcCommand("Select art_artnr, art_ben from monitor.ARTIKEL WHERE art_artnr='VSV203798'", connection);
var sdr = cmd.ExecuteReader();
while (sdr.Read())
{
SearchArticleNr article = new SearchArticleNr();
article.Article = sdr["art_artnr"].ToString();
article.Ben = sdr["art_ben"].ToString();
SarticleList.Add(article);
}
The reader loop does not execute, and when I look at sdr.HasRows it is set to false.
Using the same query in Sybase Central:
I tried other filters, for example LIKE, and the same problem occurs; I'm at a loss as to why this is happening.
The reference used for my app is System.Data.Odbc.
A:
I finally got it to work, but not using System.Data.Odbc. I found out about the iAnywhere .NET reference that you can install into your project. I found mine under C:\Program Files\SQL Anywhere 12\Assembly\v4.5, added the reference to my project by browsing to this folder, and imported iAnywhere.Data.SQLAnywhere.v4.5.dll. Using this reference I now get results back even when using filters in my queries.
Here's the code now:
public List<SearchArticleNr> SearchArticle(string articlenr)
{
List<SearchArticleNr> SarticleList = new List<SearchArticleNr>();
iAnywhere.Data.SQLAnywhere.SAConnection myConnection = null;
iAnywhere.Data.SQLAnywhere.SACommand myCommand = null;
iAnywhere.Data.SQLAnywhere.SADataReader myDataReader = null;
try
{
myConnection = new iAnywhere.Data.SQLAnywhere.SAConnection(conStrMonitor);
myConnection.Open();
myCommand = myConnection.CreateCommand();
myCommand.CommandText = "Select art_artnr, art_ben from monitor.ARTIKEL where art_artnr = ?";
myCommand.Parameters.Add("@art", articlenr);
myDataReader = myCommand.ExecuteReader();
int i = 0;
while (myDataReader.Read())
{
i++;
SearchArticleNr article = new SearchArticleNr();
article.Article = myDataReader["art_artnr"].ToString();
article.Ben = myDataReader["art_ben"].ToString();
SarticleList.Add(article);
}
if (i == 0)
{
SearchArticleNr article = new SearchArticleNr();
article.Article = "NO OBJECTS FOUND";
article.Ben = "NO OBJECTS FOUND";
SarticleList.Add(article);
}
}
catch (Exception)
{
    // rethrow without nulling the connection, so the cleanup below still works
    throw;
}
finally
{
    if (myDataReader != null) myDataReader.Close();
    if (myConnection != null) myConnection.Close();
    myCommand = null;
}
return SarticleList.ToList();
}
Q:
flask - unit testing session with nose
I am trying to test the following code with nose. The app.py file is as below:
from flask import Flask, session, redirect, url_for, request
app = Flask(__name__)
@app.route('/')
def index():
session['key'] = 'value'
print('>>>> session:', session)
return redirect(url_for("game"))
The test file is below:
from nose.tools import *
from flask import session
from app import app
app.config['TESTING'] = True
web = app.test_client()
def test_index():
with app.test_request_context('/'):
print('>>>>test session:', session)
assert_equal(session.get('key'), 'value')
When I run the test file, I get an assertion error None != 'value'
and the print statement in the test file prints an empty session object. Moreover, the print statement in the app.py file does not print anything. Does this mean that the index function isn't running?
Why is this happening? According to the flask documentation (http://flask.pocoo.org/docs/1.0/testing/#other-testing-tricks),
I should have access to the session contents through test_request_context().
Also, if I write the test_index function like this instead, the test works (and both print statements in the app.py and test files are executed):
def test_index():
with app.test_client() as c:
rv = c.get('/')
print('>>>>test session:', session)
assert_equal(session.get('key'), 'value')
What is the difference between using Flask.test_client() and Flask.test_request_context in a 'with' statement? As I understand it, the point in both is to keep the request context around for longer.
A:
You're only setting up your request context. You need to actually have your app dispatch the request for anything to happen, similar to your c.get() call in the full test client version.
Give the following a try and I think you'll have better luck:
def test_index():
with app.test_request_context('/'):
app.dispatch_request()
print('>>>>test session:', session)
assert_equal(session.get('key'), 'value')
Q:
MySQL workbench SSH connection error [Bad authentication type(allowed_types=['publickey'])]
I have an issue with the SSH connection to my server. When I try to connect, it results in the error: "Bad authentication type (allowed_types=['publickey'])".
Thanks
A:
You need to ensure that your private key is in openssh format. With puttygen you can export as Openssh. This worked for me.
A:
Check your username and public key; these can cause problems.
Attach the private key file with the extension .ppk.
Also verify your connection with PuTTY.
Also check for restrictions on the server.
Q:
Ember Data: how can I delete/unload a record that's stuck in the "inFlight" state?
Say I am trying to save a Foo record to the back-end. For whatever reason, the back-end never returns (neither success nor failure).
From what I can see, it looks like foo stays in the "in flight" state. The problem with this state is it completely locks the record - you can't do anything on it (can't rollback, can't unload). I understand why it is like that (to try and keep things consistent). But is there something you can do about an edge case like this?
A:
I've not tried this but you might find a solution by looking at ember-data's source code, specifically states.js: https://github.com/emberjs/data/blob/master/packages/ember-data/lib/system/model/states.js#L306-L351
Not sure there is a solid best practice here, but my best guess is that you can recover by sending becameInvalid to the model's stateManager.
A:
Building on Mike's suggestion, I ended up with the following:
record.send('becameInvalid');
record.unloadRecord();
Q:
If $h(z)=0$ for all $z\in\mathbb{R}^+$ then $h(z)=0$ in $\mathbb{C}$.
I'm doing a course in probability and while calculating the characteristic function of the Gamma distribution my teacher did the following reasoning: (where $X\sim \mathcal{G}(r,\lambda)$)
By the definition of $\phi_X$:
$$\phi_X(\xi)=\int_0^\infty e^{i\xi x}\frac{\lambda^r}{\Gamma(r)}\:x^{r-1}\:e^{-\lambda x}\:\mathrm{d}x=\frac{\lambda^r}{(\lambda-i\xi)^r}\int_0^\infty \frac{(\lambda-i\xi)^r}{\Gamma(r)}\:x^{r-1}\:e^{-(\lambda-i\xi) x}\:\mathrm{d}x.$$
Let $h:\mathbb{C}\to\mathbb{C}$ be defined as
$$h(z)=\int_0^\infty \frac{z^r}{\Gamma(r)}\:x^{r-1}\:e^{-z x}\:\mathrm{d}x.$$
(That is, $\phi_X(\xi)=\lambda^r h(\lambda-i\xi)/(\lambda-i\xi)^r$.) If $z\in\mathbb{R}^+$, then $h(z)=1$ since this is the integral of the density of $\mathcal{G}(r,z)$.
After that he told us that, since $h$ is holomorphic, $h(z)=1$ for all $z\in\mathbb{C}$. I don't get how we can do that. $\mathbb{R}^+$ is not an open subset of $\mathbb{C}$, so I can't use any extension results that I know.
Is there some result that proves this?
A:
It follows from the identity theorem and from the fact that $\mathbb{R}^+$ has accumulation points.
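Sketching the step a bit more (added for clarity): the integral defining $h$ converges, and defines a holomorphic function, on the open connected set $\{z\in\mathbb{C}:\operatorname{Re}z>0\}$, not on all of $\mathbb{C}$. Since $h-1$ vanishes on $\mathbb{R}^+$, which has accumulation points inside that half-plane, the identity theorem gives $h\equiv 1$ on $\{\operatorname{Re}z>0\}$. This is all that is needed for the computation, because $\lambda-i\xi$ has real part $\lambda>0$.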
Q:
How to make rsync read SRC from STDIN?
I want to dump my MySQL database and make daily backups with rsync.
The first approach I came up with is something like mysqldump -ufoo -pbar baz > /var/tmp/baz.sql && rsync /var/tmp/baz.sql /backup/ && rm /var/tmp/baz.sql.
Then I started to wonder if it is possible not to use the temporary file /var/tmp/baz.sql, but instead to pipe the output of mysqldump directly to rsync.
To be more specific, what I want is quite similar to the command line we use to update the GPG key for apt in Ubuntu: gpg --export --armor CE49EC21 | sudo apt-key add -, where the receiver of the pipe supports this '-' argument indicating it'll read from stdin. I suppose rsync doesn't have a similar argument, but I want to know if there is a workaround.
A:
This is right, it doesn't work this way. It is because rsync is made to transfer complete file trees from A to B.
It cannot work because of the way rsync operates: rsync calculates several checksums before choosing to transfer a particular file (or parts of it), and it does so in only 2 iterations (ping-pong steps).
That means a file has to be read several times. That would not work with a (potentially large) SQL dump because it would have to be buffered somehow. And this buffering is up to the user.
Actually storing the file should be the best workaround, especially if it is a file which only gets gradual differences.
Q:
What is this code doing?
#include <iostream>
#include <string>
typedef unsigned long long ul64;
double m[10] = { 65.45433, 45435.0054, 232543535.9, 0.00043543, 99.99999, 5345434.000, 754354.6, 80008.243, 0.9, 10.01 };
union foo
{
char cs[8];
double x;
ul64 a;
foo(std::string s) { for (int i = 0; i < s.size(); i++) cs[i] = s[i];}
foo(double y) :x(y){}
foo(ul64 b) :a(b){}
};
int main()
{
foo bar(0xe8e711395e65686d);
for (int i = 1; i <= 10; i++)
bar.x *= m[i];
for (int i = 1; i <= 8; i++)
std::cout << bar.cs[i];
std::cout << std::endl;
return 0;
}
I have this code for my exam, but I don't know what it is doing. Help me please! I only know that it has some errors and displays something.
A:
Maybe this is just overwhelming you.
Just start with the first line of main: What does
foo bar(0xe8e711395e65686d);
do? It creates a variable named bar, which has type foo, and a constructor is called with a parameter 0xe8e711395e65686d. So what does this do?
Reading the comment of Mooing Duck: What is it intended to do, and how do you fix it so that it will do what it is intended to do?
You have two for-loops:
for (int i = 1; i <= 10; i++)
for (int i = 1; i <= 8; i++)
Both contain the same beginner's mistake. What is the mistake? Fix it.
Lastly, in the output loop, you need to understand what a union is. The code is weird, but you should still be able to explain what it does.
Q:
ForEach and the $_ empty result
My Azure CLI task is this (... replaces my Azure credentials):
- task: AzureCLI@2
inputs:
azureSubscription: ...
scriptType: pscore
scriptLocation: "inlineScript"
inlineScript: |
$containers=$(az storage container list --connection-string ...)
$containers.ForEach({ echo $_."name" })
$containers is a set of containers, each of which has a name property; writing $containers.ForEach({ echo $_ }) I obtain something like:
First of all, I don't understand why there's the [ in the top left corner, since I'm printing only the objects inside the array, not the array itself. Second, I can't figure out why the cycle doesn't print the name property of the objects when I write $containers.ForEach({ echo $_."name" }): it prints nothing, the output is blank.
A:
Well, because you are getting JSON, not an object, you should do this:
$containers = $(...) | ConvertFrom-Json -Depth 5
$containers.foreach{ $_.name } # don't need echo, it's just a waste of characters
Q:
how to increase the size of UIButtonTypeInfoLight
I have tried the code below, but the size of the button remains the same.
UIButton * pInfoButton = [UIButton buttonWithType:UIButtonTypeInfoLight];
[pInfoButton setFrame:CGRectMake(100,100, 80, 80)];
[pInfoButton setBounds:CGRectMake(0, 0, 80, 80)];
[self.view addSubview:pInfoButton];
Please tell me how to increase its size. Thanks in advance.
A:
You can increase it by applying a CGAffineTransform to the button: pInfoButton.transform = CGAffineTransformMakeScale(1.2, 1.2);. But in this case you'll lose quality.
Another solution is to create a custom button with a background.
Q:
query optimization with condition
I want to optimize the following query so that it does not use a subquery to get the max value:
select c.ida2a2 from table1 m, table2 c
where c.ida3a5 = m.ida2a2
and (c.createstampa2 < (select max(cc.createstampa2)
from table2 cc where cc.ida3a5 = c.ida3a5));
Any idea? Please let me know if you want to get more info.
A:
This may be a more efficient way to write the query:
select c.ida2a2
from table1 m join
(select c.*, MAX(createstampa2) over (partition by ida3a5) as maxcs
from table2 c
) c
on c.ida3a5 = m.ida2a2
where c.createstampa2 < maxcs
I'm pretty sure Oracle optimizes this correctly (filtering the rows before the join). If you wanted to be clearer:
select c.ida2a2
from table1 m join
(select c.*
from (select c.*, MAX(createstampa2) over (partition by ida3a5) as maxcs
from table2 c
) c
where c.createstampa2 < c.maxcs
) c
on c.ida3a5 = m.ida2a2
Q:
loading flash files (.swf) webview in android
Possible Duplicate:
Load an SWF into a WebView
I have a .swf file and I want to open it into a webview and also want to play flash games loaded there in the webview. How do I do that?
I am getting swf file in encoded form in webview instead of a clock.
A:
I think you should enable plugins with the setPluginsEnabled method.
Example:
String url ="file:///android_asset/hoge.swf";
WebView wv=(WebView) findViewById(R.id.WebView01);
wv.getSettings().setPluginsEnabled(true);
wv.loadUrl(url);
Q:
How to continuously restart/loop R script
I want an R script to continuously run and check for files in a folder and do something with those files.
The code simply checks for a file, then moves the file somewhere else and renames it, deleting the old file (in reality it's a bit more elaborate than this).
If I run the script it works fine; however, I want R to automatically detect the files. In other words, is there a way to have R run the script continuously so that I don't have to run the script manually whenever I put files in that folder?
A:
In pure R you just need an infinite repeat loop...
repeat {
print('Checking files')
# Your code to do file manipulation
Sys.sleep(time=5) # to stop execution for 5 sec
}
However there may be better tools suitable to do this kind of file manipulation depending on your OS.
Q:
SQL Server create clustered index on nvarchar column, enforce sorting
I want to have a small table with two columns, [Id] [bigint] and [Name] [nvarchar](63). The table is used for tags and it will contain all tags that exist.
I want to force an alphabetical sorting by the Name column so that a given tag is found more quickly.
Necessary points are:
The Id is my primary key, I use it e.g. for foreign keys.
The Name is unique as well.
I want to sort by Name alphabetically.
I need the SQL command for creating the constraints since I use scripts to create the table.
I know you can sort the table by using a clustered index, but I know that the table is not necessarily in that order.
My query looks like this but I don't understand how to create the clustered index on Name but still keep the Id as Primary Key:
IF NOT EXISTS (SELECT * FROM sys.objects
WHERE object_id = OBJECT_ID(N'[dbo].[Tags]')
AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[Tags]
(
[Id] [bigint] IDENTITY(1,1) PRIMARY KEY NOT NULL,
[Name] [nvarchar](63) NOT NULL,
CONSTRAINT AK_TagName UNIQUE(Name)
)
END
Edit:
I decided to follow paparazzo's advice. So if you have the same problem make sure you read his answer as well.
A:
You can specify that the Primary Key is NONCLUSTERED when declaring it as a constraint, you can then declare the Unique Key as being the CLUSTERED index.
CREATE TABLE [dbo].[Tags] (
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](63) NOT NULL,
CONSTRAINT PK_Tag PRIMARY KEY NONCLUSTERED (Id ASC),
CONSTRAINT AK_TagName UNIQUE CLUSTERED (Name ASC)
);
Also specifying ASC or DESC after the Column name (within the key/index declaration) sets the index sort order. The default is usually ascending.
Q:
rails - Auto create tasks after creating a project (after_create)
A newbie here. I just started to learn development. Any help would be greatly appreciated.
I have two models, Project and Task. Each project will have 7 tasks. I want Rails to auto-create the 7 tasks after I create a project.
My Task Controller
def create
@task = Task.new(task_params)
respond_to do |format|
if @task.save
format.html { redirect_to @task, notice: 'Task was successfully created.' }
format.json { render :show, status: :created, location: @task }
else
format.html { render :new }
format.json { render json: @task.errors, status: :unprocessable_entity }
end
end
end
def task_params
params.require(:task).permit(:title, :description)
end
A:
There are several ways you can do this.
1. Via callbacks
You can use callbacks in the Project model. Personally I do not recommend this approach since this is not the intended use of callbacks, but it may work for you.
class Project < ActiveRecord::Base
after_create :create_tasks
private
def create_tasks
# Logic here to create the tasks. For example:
# tasks.create!(title: "Some task")
end
end
2. Nested attributes
You can build the child objects into the form and Rails will automatically create the child objects for you. Check out accepts_nested_attributes_for. This is more involved than using callbacks.
3. Use a form object
A form object can be a nice middle ground between callbacks and accepts_nested_attributes_for, but it raises the complexity a notch. Read up more about form objects here. There is also a nice Rails Casts episode on the topic, but it requires subscription.
There are other ways to do this as well, so it's up to you to find the right approach.
Q:
Php url parsing and editing
I searched "the whole" Stack Overflow but didn't find a decent answer that works for me. I need to change the host of a URL in PHP.
This url: http://example123.com/query?t=de&p=9372&pl=bb02799a&cat=&sz=400x320&scdid=e7311763324c781cff2d3bc55b2d83327aba111f2db79d0682860162c8a13c24&rnd=29137126
To This: http://example456.com/test?t=de&p=9372&pl=bb02799a&cat=&sz=400x320&scdid=e7311763324c781cff2d3bc55b2d83327aba111f2db79d0682860162c8a13c24&rnd=29137126
I only need to change the domain and the path or file, so far I've got this:
$originalurl = 'http://example123.com/query?t=de&p=9372&pl=bb02799a&cat=&sz=400...';
$parts = parse_url($originalurl);
$parts['host'] = $_SERVER['HTTP_HOST'];
$parts['path'] = '/test';
$modifiedurl = http_build_query($parts);
print_r(urldecode($modifiedurl));
but it echoes
scheme=http&host=localhost&path=/test&query=t=de&p=9372&pl=bb02799a&cat=&sz=400...
Please note that I don't want to use strpos or something like that, as I need it to be variable.
Thanks ;)
A:
$url = 'http://example123.com/query?t=de&p=9372&pl=bb02799a&cat=&sz=400x320&scdid=e7311763324c781cff2d3bc55b2d83327aba111f2db79d0682860162c8a13c24&rnd=29137126';
$query = parse_url($url)['query'];
$newUrl = 'http://www.younewdomain.com/path?' . $query;
Q:
Oracle Regular Expression constraint
I currently have a constraint I am trying to build for an Oracle database column.
ALTER TABLE section ADD CONSTRAINT section_days_chk CHECK (Days = 'M' OR Days = 'T' OR Days = 'W' OR Days = 'R' OR Days = 'F' OR Days = 'S' OR Days = 'U');
However, I also need the constraint to allow for any combination of those characters as well.
I have been trying to figure out how to use a regular expression "like" constraint to do that without allowing any other characters. For example, only the characters M, T, W, R, F, S or U, or any combination of these letters, should be allowed, like "MWF" or "TR" or "M". I've been having trouble finding an example that will constrain the input like this.
This is what I've tried:
ALTER TABLE section ADD CONSTRAINT section_days_chk CHECK (regexp_like(Days,'[MTWRFSU]'));
But it allows any letters as long as they are attached to one included in the set like "MD".
Any ideas? I've been looking for a few days, and this task doesn't seem to be very common. All the examples I have found assume you are looking for words where a certain character is included in it.
A:
Try using the following one:
REGEXP_LIKE (Days, '^[MTWRFSU]+$')
Explanation:
^ and $ are anchors. They require the whole string to match the pattern.
+ means one or more, so one or more characters from the class [MTWRFSU] can appear together. If any other character is present, the match fails.
Check the REGEXP_LIKE documentation.
Q:
C# - Easy method for using CSV/Excel data and choosing specific data?
I am currently working on a Jeopardy-type game and want to include a CSV file with questions, so there are more questions for the game if anyone intends to play it again. I am struggling to figure out a way to separate this CSV file into categories of some sort. So far I have 2 questions for each points button. Whenever a button is pressed I want it to retrieve the question from the file, randomize between these 2 questions, and then display the chosen question in a textbox.
If anyone has suggestions of methods I can use to make this possible, I am all ears!
So far I have tried pretty much all the C# CSV file tutorials on YouTube; the problem is that none of them treat the data in the same way that I want to, which makes it complicated for me.
I will of course provide whatever you need to help me.
And in advance, thank you! :)
A:
EDIT - 2020-04-08 22:57 (UTC+03:00)
Please find the working form below. Just copy / paste into form1.cs
using System;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Windows.Forms;
using CsvHelper;
using CsvHelper.Configuration;
namespace EksamensprojektTest
{
public partial class Form1 : Form
{
static Values[] allRecords;
static Random randomNumberGenerator = new Random();
private string currentCategory = "Danish";
private int currentPoints = 100;
public Form1()
{
InitializeComponent();
}
public sealed class CsvRecordMap : ClassMap<Values>
{
public CsvRecordMap()
{
Map(m => m.ID);
Map(m => m.Question);
Map(m => m.Answer);
Map(m => m.Points);
Map(m => m.Category);
}
}
private void Form1_Load(object sender, EventArgs e)
{
using (var reader = new StreamReader("C:\\Users\\mathi\\OneDrive\\HTX\\3.g\\Programmering\\Eksamensprojekt\\csvfil.csv"))
{
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
csv.Configuration.RegisterClassMap<CsvRecordMap>();
allRecords = csv.GetRecords<Values>().ToArray();
}
}
}
public static Values GetRandomQuestion(string category, int points)
{
var matchingQuestions = allRecords
.Where(x => x.Category == category && x.Points == points)
.ToArray();
if (matchingQuestions.Length == 0) { return null; }
// if length is 2, random returns 0 or 1.
// if length is 3, random returns 0 or 1 or 2, etc.
int randomChoice = randomNumberGenerator.Next(0, matchingQuestions.Length);
return matchingQuestions[randomChoice];
}
private void button1_Click(object sender, EventArgs e)
{
Values pickedRecord = GetRandomQuestion(currentCategory, currentPoints);
textBox1.Text = pickedRecord.Question;
}
}
}
EDIT:
The following method can bu used to get a random question having the specified category and points. The allRecords array is the same as the original answer and is filled the same way. This method is just an addition:
Please put this at the class level:
static Random randomNumberGenerator = new Random();
Then the method:
public static CsvRecord GetRandomQuestion(string category, int points)
{
var matchingQuestions = allRecords
.Where(x => x.Category == category && x.Points == points)
.ToArray();
if (matchingQuestions.Length == 0) { return null; }
// if length is 2, random returns 0 or 1.
// if length is 3, random returns 0 or 1 or 2, etc.
int randomChoice = randomNumberGenerator.Next(0, matchingQuestions.Length);
return matchingQuestions[randomChoice];
}
ORIGINAL POST
I've implemented a fully working solution which loads the whole CSV file into memory as instances of the CsvRecord class.
The csv file I've created looks like the following. Please note that the header (field names) is important. It should match our property names. If not, we still have a workaround, but I assume they will be the same.
The csv data is expected to be comma separated by default. This behavior can also be changed.
Id,Category,Points,Question,Answer
1,History,100,Question 1 (History) - 100 Points,Answer 1 (History) - 100 Points
2,History,100,Question 2 (History) - 100 Points,Answer 2 (History) - 100 Points
3,History,200,Question 3 (History) - 200 Points,Answer 3 (History) - 200 Points
4,History,200,Question 4 (History) - 200 Points,Answer 4 (History) - 200 Points
5,History,300,Question 5 (History) - 300 Points,Answer 5 (History) - 300 Points
6,History,300,Question 6 (History) - 300 Points,Answer 6 (History) - 300 Points
7,History,400,Question 7 (History) - 400 Points,Answer 7 (History) - 400 Points
8,History,400,Question 8 (History) - 400 Points,Answer 8 (History) - 400 Points
9,History,500,Question 9 (History) - 500 Points,Answer 9 (History) - 500 Points
10,History,500,Question 10 (History) - 500 Points,Answer 10 (History) - 500 Points
11,Science,100,Question 11 (Science) - 100 Points,Answer 11 (Science) - 100 Points
12,Science,100,Question 12 (Science) - 100 Points,Answer 12 (Science) - 100 Points
13,Science,200,Question 13 (Science) - 200 Points,Answer 13 (Science) - 200 Points
14,Science,200,Question 14 (Science) - 200 Points,Answer 14 (Science) - 200 Points
15,Science,300,Question 15 (Science) - 300 Points,Answer 15 (Science) - 300 Points
16,Science,300,Question 16 (Science) - 300 Points,Answer 16 (Science) - 300 Points
17,Science,400,Question 17 (Science) - 400 Points,Answer 17 (Science) - 400 Points
18,Science,400,Question 18 (Science) - 400 Points,Answer 18 (Science) - 400 Points
19,Science,500,Question 19 (Science) - 500 Points,Answer 19 (Science) - 500 Points
20,Science,500,Question 20 (Science) - 500 Points,Answer 20 (Science) - 500 Points
21,Geography,100,Question 21 (Geography) - 100 Points,Answer 21 (Geography) - 100 Points
22,Geography,100,Question 22 (Geography) - 100 Points,Answer 22 (Geography) - 100 Points
23,Geography,200,Question 23 (Geography) - 200 Points,Answer 23 (Geography) - 200 Points
24,Geography,200,Question 24 (Geography) - 200 Points,Answer 24 (Geography) - 200 Points
Then, the following class (which is a complete console application) demonstrates loading the csv file, filtering and selecting the question to ask.
I hope this helps. I've also added some comment lines here and there to explain things.
using System;
using System.Globalization;
using System.IO;
using CsvHelper;
using CsvHelper.Configuration;
using System.Linq;
namespace console
{
public class Program
{
public class CsvRecord
{
public string Id { get; set; }
public string Category { get; set; }
public int Points { get; set; }
public string Question { get; set; }
public string Answer { get; set; }
}
public sealed class CsvRecordMap : ClassMap<CsvRecord>
{
public CsvRecordMap()
{
Map(m => m.Id);
Map(m => m.Category);
Map(m => m.Points);
Map(m => m.Question);
Map(m => m.Answer);
}
}
static CsvRecord[] allRecords;
static void Main(string[] args)
{
// Fill all records into a shared array (allRecords)
// CSVHelper reads our csv file and creates a CsvRecord
// instance for each row using our ClassMap implementation
using (var reader = new StreamReader("data.csv"))
{
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
csv.Configuration.RegisterClassMap<CsvRecordMap>();
allRecords = csv.GetRecords<CsvRecord>().ToArray();
}
}
Console.WriteLine("Enter a category and points to get a random question");
Console.Write("Category: "); string category = Console.ReadLine();
Console.Write("Points: "); int points = int.Parse(Console.ReadLine());
// using System.LinQ;
// A simple filter
CsvRecord[] matchingRecords = allRecords.Where(a => a.Category == category && a.Points == points).ToArray();
Console.WriteLine("{0} records found", matchingRecords.Length);
// Randomly select one of the two
Random random = new Random();
CsvRecord pickedRecord = matchingRecords[random.Next(matchingRecords.Length)];
Console.WriteLine("Selected: {0}", pickedRecord.Question);
}
}
}
Q:
Is it OK to use this pattern for value-checking?
For my classes that have methods with similar input-checking logic (for example, a custom multi-dimensional array that has a lot of methods, all of which check if given coordinates are within the array limits), I create a separate private checker that throws runtime exceptions, and also a public checker that just returns a boolean value indicating if a variable is acceptable for this class's methods. Here's an example:
public class Foo {
public void doStuff(Variable v) {
checkVariableUnsafe(v);
... // do stuff
}
private void checkVariableUnsafe(Variable v) throws InvalidVariableException {...}
public boolean checkVariable(Variable v) {
try {
checkVariableUnsafe(v);
return true;
} catch (InvalidVariableException e) {
return false;
}
}
}
Is it OK to use it, or are there some drawbacks that I fail to see? What's the commonly used pattern in such situations?
A:
It's not just a good idea to use the same code for both validity prediction and actual validation, it's the only right idea. And since the first commandment is Don't repeat yourself!, of course you should extract that check into a method of its own. So this is exactly what I usually do.
A:
It is often recommended to avoid using exceptions for normal program flow. Without debating that issue here, if you wanted to follow that advice, then you could put the logic that actually does the check in the public checkVariable method, and have the private checkVariableUnsafe method call checkVariable and throw an exception if it returns false.
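For illustration only, here is a sketch of the inversion just described, written in Python rather than Java for brevity; the class and the validation rule are made up:
class Foo:
    def check_variable(self, v):
        # public predicate: holds the actual validation logic
        return v is not None and v.value >= 0   # made-up rule for the example
    def _check_variable_unsafe(self, v):
        # private checker: delegates to the predicate and raises on failure
        if not self.check_variable(v):
            raise ValueError("invalid variable")
    def do_stuff(self, v):
        self._check_variable_unsafe(v)
        # ... do stuff
Here no exception is raised during normal flow when callers only want a yes/no answer.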
I don't think there is enough context in your question to comment definitively on the appropriateness of the API you have created, but I can see nothing intrinsically wrong with it.
Q:
Centralizing actions not working in React/Redux
I need to perform some clean up as users go from one component to another. In order to keep it simple and consitent, I created a centralized action creator like below:
import { cleanUpX } from 'moduleX';
import { cleanUpY } from 'moduleY';
import { cleanUpZ } from 'moduleZ';
export const cleanUp = () => {
cleanUpX();
cleanUpY();
cleanUpZ();
}
I then call this action creator in the componentWillUnmount() lifecycle method of my components:
class MyComponent1 extends Component {
... Code omitted for brevity
componentWillUnmount() {
this.props.actions.cleanUp();
}
}
Even though I hit cleanUp() and each individual action creator inside it, I don't hit their respective reducers.
In other words, I see that I hit cleanUpY() but it ends there and I never hit the reducer to perform the actual state change.
If I, however, do the following, it works fine:
class MyComponent1 extends Component {
... Code omitted for brevity
componentWillUnmount() {
this.props.actions.cleanUpX();
this.props.actions.cleanUpY();
this.props.actions.cleanUpZ();
}
}
What am I missing here?
A:
Nothing happens because the imported action creators are not bound to dispatch (unlike their counterparts on this.props.actions), so calling them simply returns an action object without triggering the reducer.
You can manually inject dispatch to your props and then pass it to the cleanup function, but the easiest solution is to install redux-thunk, and write your action creator like this:
export const cleanUp = () => (dispatch, getState) => {
dispatch(cleanupX());
dispatch(cleanupY());
dispatch(cleanupZ());
};
Then you bind it at connect like you did for the other action creators, and call it from the component with this.props.actions.cleanUp().
Installation of redux-thunk is straightforward, just install the npm package and add it to the middleware of your store:
import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import yourReducer from ...;
const store = createStore(
yourReducer,
applyMiddleware(thunk)
);
Q:
how to find angle between two latitudes and logtitudes and display them on circle?
Hey, I have a circle as below.
The circle has two points A (latitude1, longitude1) and B (latitude2, longitude2).
Among these two (latitude, longitude) pairs, one (say A) is the center point of the circle. Now I want to put another point on the circle (B) by calculating the angle.
How can I do this on an Android canvas?
Right now, to get the angle, I am using the following function:
public double getAngle(double lat1, double lon1, double lat2, double lon2)
{
double dx = lat2 - lat1;
// Minus to correct for coord re-mapping
double dy = Math.cos(Math.PI/180*lat1)*(lon2 - lon1);
double inRads = Math.atan2(dy,dx);
if (inRads < 0)
inRads = Math.abs(inRads);
else
inRads = 2*Math.PI - inRads;
return inRads;
}
Is the above function correct for getting the angle? And if yes, how can I display that point on the canvas?
A:
That Math.cos() is awfully suspicious. Why isn't dy simply lon2 - lon1? Once you have inRads, why are you fiddling with it? Math.atan2() returns a correct angle between -Pi and +Pi.
Q:
How to manipulate SVG background image with JavaScript
I want to remove the viewBox property of the root svg element in a background image with JavaScript. How would I do that?
.box {
background-image: url(http://upload.wikimedia.org/wikipedia/commons/8/86/CC-logo.svg);
background-repeat: no-repeat;
background-size: auto auto;
width: 300px;
height: 150px;
border: 1px solid;
}
<div class="box">
stretch background
</div>
Here are the first few characters from the SVG file.
<?xml version="1.0" encoding="utf-8"?><svg xmlns="http://www.w3.org/2000/svg" width="100%" height="100%" viewBox="0 0 512 123">
A:
You can use an SVG fragment identifier together with viewBox(none)
Change the URL to
http://upload.wikimedia.org/wikipedia/commons/8/86/CC-logo.svg#svgView(viewBox(none))
This will work in Firefox (and did work in Opera 12). Not sure how many other UAs support viewBox(none) from SVG 1.2 tiny though.
Q:
Override validations of gem devise
In my User model I added the following validations. The problem is that Devise has implemented its own validations. How can I override the validations of Devise?
user.rb
class User < ActiveRecord::Base
# Include default devise modules. Others available are:
# :token_authenticatable, :confirmable, :lockable and :timeoutable
devise :database_authenticatable, :registerable,
:recoverable, :rememberable, :trackable, :validatable
# Setup accessible (or protected) attributes for your model
attr_accessible :email, :password, :password_confirmation, :remember_me
validates_presence_of :email, :message=>"email no puede estar vacio"
validates_presence_of :password, :message=>"password no puede estar vacio"
validates_presence_of :password_confirmation,:message=>"validate confirmation no puede estar vacio"
end
Browser
Email can't be blank #validation of devise
Email email no puede estar vacio #my own validation
Password can't be blank #validation of devise
Password password no puede estar vacio #my own validation
Password confirmation validate confirmation no puede estar vacio
Another problem I have is that Rails shows the name of the attribute before the customized message.
Email email no puede estar vacio #my own validation #wrong
email no puede estar vacio #well
A:
you are using the :validatable devise module...
you can customise the following lines in the confing/initializers/devise.rb
# ==> Configuration for :validatable
# Range for password length.
config.password_length = 8..128
# Email regex used to validate email formats. It simply asserts that
# one (and only one) @ exists in the given string. This is mainly
# to give user feedback and not to assert the e-mail validity.
# config.email_regexp = /\A[^@]+@[^@]+\z/
but if you are trying to override the error message (and solve the issue with the field name mentioned before your translation),
you had better remove your validations (in favour of the Devise ones),
in other words you have to remove these lines
validates_presence_of :email, :message=>"email no puede estar vacio"
validates_presence_of :password, :message=>"password no puede estar vacio"
validates_presence_of :password_confirmation,:message=>"validate confirmation no puede estar vacio"
and you can override the validation messages using the Rails I18n API,
specifically part 4;
in simpler words... you have to provide the locale file for your preferred language (:es) in config/locales/es.yml,
providing those specific translations
# please correct me if i'm wrong.
es:
  activerecord:
    errors:
      messages:
        record_invalid:
          email: email no puede estar vacio
          password: password no puede estar vacio
          password_confirmation: validate confirmation no puede estar vacio
but it's highly advisable to have the field that failed validation there.
note:
if you are looking for just provide another locale for the devise error messages (say :es locales)...
then it's better to copy config/locales/devise.en.yml to config/locales/devise.es.yml and edit it accordingly...
or use the devise wiki (github.com/plataformatec/devise/wiki/I18n)
Q:
What are the details needed to receive euros from my company in Germany to a bank account in India
I want to receive Euros from my company in Germany to my savings bank account (HDFC) in India where I work. What are the exact details I have to give to my company in Germany? My company's bank is Deutsche Bank.
Are the SWIFT code of HDFC, which is HDFCINBBXXX, the bank name & address, and the account number sufficient for this?
A:
It's best to check with your company what details they need. Generally, for SWIFT transfers,
the below is sufficient:
Bank Name
Bank Address
SWIFT BIC
Your Bank Account Number
Your Name
Additionally any transfers to India require a "Purpose of Remittance" to be filled by the company paying to money.
Q:
Question on slice category
Consider the category $\mathcal{C}/Z$ consisting of objects over $Z$, i.e. arrows into $Z$. Let $h: H \rightarrow Z$ be a fixed object in $\mathcal{C}/Z$. Let $F(X)=\operatorname{Hom}_{\mathcal{C}/Z}(h,X)$. Show that under $F$, pullbacks over $Z$ are just products in the category of sets.
A:
Here is a possible direction to solve the exercise:
prove that fiber products of morphisms with common codomain $Z$ are products in the category $\mathcal C/Z$
(Enter the Yoneda) use the fact that representable copresheaves preserve limits to conclude the proof.
If you need additional hints feel free to ask.
Addendum (since the OP asked :) ):
Your functor $F$ is nothing but the representable functor associated to the object $h \colon H \to Z$, hence by general results it preserves products: that is, it sends product diagrams in $\mathcal C/Z$ into product diagrams in $\mathbf{Set}$.
By (1), products in $\mathcal C/Z$ are fiber products in $\mathcal C$, hence, putting it all together, we get that $F$ sends the fiber products of $\mathcal C$ (i.e. the products of $\mathcal C/Z$) into products of $\mathbf{Set}$.
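Concretely (spelling out the conclusion, which is not stated explicitly above), the claim is the natural bijection
$$\operatorname{Hom}_{\mathcal C/Z}(h,\,X\times_Z Y)\;\cong\;\operatorname{Hom}_{\mathcal C/Z}(h,X)\times\operatorname{Hom}_{\mathcal C/Z}(h,Y),$$
where the pullback $X\times_Z Y$ in $\mathcal C$, with its canonical arrow to $Z$, is exactly the product of $X\to Z$ and $Y\to Z$ in $\mathcal C/Z$.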
Q:
Single Entry Linking
OK, my site is made up of Cities and Properties within each city. The usual flow is: on the homepage there is a list of Cities; you click a city and it goes through to a City view, which will then have a list of Properties, each of which you can click through to a Property view.
Well some of my Cities only have 1 property, so I would like the link from City to go directly to the Property view, not the city view.
{exp:channel:entries channel="city" dynamic="no" orderby="city_sort_order" sort="asc"}
<li class="location-box colour-{city_colour}">
<a href="{url_title}">
<div class="location-box colour-{city_colour}">
<figure>
<h1>{title}</h1>
<img src="{city_main_image}" alt="">
</figure>
</div>
</a>
</li>
{/exp:channel:entries}
Any advice?
A:
You can count how many playa:parents the city has.
{exp:channel:entries channel="city" dynamic="no" orderby="city_sort_order" sort="asc"}
<li class="location-box colour-{city_colour}">
<a
{if '{exp:playa:total_parents field="property_city"}' == 1}
{exp:playa:parents field="property_city" var_prefix="property"}href="/{url_title}/{property:url_title}"{/exp:playa:parents}
{if:else}
href="/{url_title}"
{/if}
>
<div class="location-box colour-{city_colour}">
<figure>
<h1>{title}</h1>
<img src="{city_main_image}" alt="">
</figure>
</div>
</a>
</li>
{/exp:channel:entries}
Please, test this code.
Q:
How to get a file close event in python
Using Python 2.7 on a Windows 7 64-bit machine.
How to get a file close event:
when the file is opened in a new process of the file opener (like Notepad or WordPad, which open the file every time in a new process)
when the file is opened in a tab of the file opener (like Notepad++, which opens each file in a new tab while only a single Notepad++ process is running)
So, how do I get a file close event in the above cases? Is it possible to handle both cases with common code? I am dealing with different file types.
A:
This has proven to be a very easy task for *nix systems, but on Windows, getting a file close event is not a simple task. Read below for a summary of common methods grouped by OS.
For Linux
On Linux, filesystem changes can be easily monitored, and in great detail. The best tool for this is the kernel feature called inotify, and there is a Python implementation that uses it, called Pyinotify.
Pyinotify
Pyinotify is a Python module for monitoring filesystems changes. Pyinotify relies on a Linux Kernel feature (merged in kernel 2.6.13) called inotify, which is an event-driven notifier. Its notifications are exported from kernel space to user space through three system calls. Pyinotify binds these system calls and provides an implementation on top of them offering a generic and abstract way to manipulate those functionalities.
Here you can find the list of the events that can be monitored with Pyinotify.
Example usage:
import pyinotify
class EventHandler(pyinotify.ProcessEvent):
def process_IN_CLOSE_NOWRITE(self, event):
print "File was closed without writing: " + event.pathname
def process_IN_CLOSE_WRITE(self, event):
print "File was closed with writing: " + event.pathname
def watch(filename):
wm = pyinotify.WatchManager()
mask = pyinotify.IN_CLOSE_NOWRITE | pyinotify.IN_CLOSE_WRITE
wm.add_watch(filename, mask)
eh = EventHandler()
notifier = pyinotify.Notifier(wm, eh)
notifier.loop()
if __name__ == '__main__':
watch('/path/to/file')
For Windows
The situation for Windows is quite a bit more complex than for Linux. Most libraries rely on the ReadDirectoryChangesW API, which is restricted and can't detect finer details like a file close event. There are, however, other methods for detecting such events, so read on to find out more.
Watcher
Note: Watcher was last updated in February 2011, so it's probably safe to skip this one.
Watcher is a low-level C extension for receiving file system updates using the ReadDirectoryChangesW API on Windows systems. The package also includes a high-level interface to emulate most of the .NET FileSystemWatcher API.
The closest one can get to detecting file close events with Watcher is to monitor the FILE_NOTIFY_CHANGE_LAST_WRITE and/or FILE_NOTIFY_CHANGE_LAST_ACCESS events.
Example usage:
import watcher
w = watcher.Watcher(dir, callback)
w.flags = watcher.FILE_NOTIFY_CHANGE_LAST_WRITE
w.start()
Watchdog
Python API and shell utilities to monitor file system events. Easy install: $ pip install watchdog. For more info visit the documentation.
Watchdog on Windows relies on the ReadDirectoryChangesW API, which brings the same caveats as Watcher and the other libraries relying on that API.
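A minimal usage sketch (added here for illustration; it is not from the original answer, and the watched path is made up):
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
class Handler(FileSystemEventHandler):
    def on_modified(self, event):
        # fires when the watched file's contents or metadata change
        print('Modified: %s' % event.src_path)
observer = Observer()
observer.schedule(Handler(), path='/path/to/dir', recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()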
Pywatch
A Python near-clone of the Linux watch command. The pywatch.watcher.Watcher class can be told to watch a set of files, and given a set of commands to run whenever any of those files change. It can only monitor the file-changed event, since it relies on polling the stat st_mtime.
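The polling idea it relies on can be sketched with the standard library alone (illustration only, not pywatch's actual API; the path and interval are made up):
import os, time
def poll_mtime(path, interval=1.0):
    last = os.stat(path).st_mtime
    while True:
        time.sleep(interval)
        current = os.stat(path).st_mtime
        if current != last:
            print('Changed: %s' % path)   # react to the change here
            last = current
poll_mtime('/path/to/file')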
Bonus for Windows with NTFS:
NTFS USN Journal
The NTFS USN (Update Sequence Number) Journal is a feature of NTFS which maintains a record of changes made to the volume. The reason it is listed as a Bonus is that, unlike the other entries, it is not a specific library, but rather a feature of the NTFS file system. So if you are using other Windows filesystems (like FAT, ReFS, etc.) this does not apply.
The way it works is that the system records all changes made to the volume in the USN Journal file, with each volume having its own instance. Each record in the Change Journal contains the USN, the name of the file, and information about what the change was.
The main reason this method is interesting for this question is that, unlike most of the other methods, this one provides a way to detect a file close event, defined as USN_REASON_CLOSE. More information with a complete list of events can be found in this MSDN article. For a complete documentation about USN Journaling, visit this MSDN page.
There are multiple ways to access the USN Journal from Python, but the only mature option seems to be the ntfsjournal module.
The "proper" way for Windows:
File system filter driver
As descibed on the MSDN page:
A file system filter driver is an optional driver that adds value to
or modifies the behavior of a file system. A file system filter driver
is a kernel-mode component that runs as part of the Windows executive.
A file system filter driver can filter I/O operations for one or more
file systems or file system volumes. Depending on the nature of the
driver, filter can mean log, observe, modify, or even prevent. Typical
applications for file system filter drivers include antivirus
utilities, encryption programs, and hierarchical storage management
systems.
It is not an easy task to implement a file system filter driver, but for someone who would like to give it a try, there is a good introductory tutorial on CodeProject.
P.S. Check @ixe013's answer for some additional info about this method.
Multiplatform
Qt's QFileSystemWatcher
The QFileSystemWatcher class provides an interface for monitoring files and directories for modifications. This class was introduced in Qt 4.2.
Unfortunately, its functionality is fairly limited, as it can only detect when a file has been modified, renamed or deleted, and when a new file was added to a directory.
Example usage:
import sys
from PyQt4 import QtCore
def directory_changed(path):
print('Directory Changed: %s' % path)
def file_changed(path):
print('File Changed: %s' % path)
app = QtCore.QCoreApplication(sys.argv)
paths = ['/path/to/file']
fs_watcher = QtCore.QFileSystemWatcher(paths)
fs_watcher.directoryChanged.connect(directory_changed)
fs_watcher.fileChanged.connect(file_changed)
app.exec_()
A:
The problem you are facing is not with Python, but with Windows. It can be done, but you will have to write some non-trivial C/C++ code for it.
A file open or file close notification does not exist in userland on Windows. That's why the libraries suggested by others do not have file close notification. In Windows, the API to detect changes in userland is ReadDirectoryChangesW. It will alert you of one of the following notifications:
FILE_ACTION_ADDED if a file was added to the directory.
FILE_ACTION_REMOVED if a file was removed from the directory.
FILE_ACTION_MODIFIED if a file was modified. This can be a change in the time stamp or attributes.
FILE_ACTION_RENAMED_OLD_NAME if a file was renamed and this is the old name.
FILE_ACTION_RENAMED_NEW_NAME if a file was renamed and this is the new name.
No amount of Python can change what Windows provides you with.
To get a file close notification, tools like Process Monitor install a Minifilter that lives in the kernel, near the top of other filters like EFS.
To achieve what you want, you would need to:
Install a Minifilter that has the code to send events back to userland. Use Microsoft's Minispy sample, it is stable and fast.
Convert the code from the user program to make it a Python extension (minispy.pyd) that exposes a generator that produces the events. This is the hard part, I will get back to that.
You will have to filter out events; you won't believe the amount of IO that goes on on an idle Windows box!
Your Python program can then import your extension and do its thing.
The whole thing looks something like this:
Of course you can have EFS over NTFS, this is just to show that your minifilter would be above all that.
The hard parts:
Your minifilter will have to be digitally signed by an authority Microsoft trusts. Verisign comes to mind, but there are others.
Debugging requires a separate (virtual) machine, but you can make your interface easy to mock.
You will need to install the minifilter with an account that has administrator rights. Any user will be able to read events.
You will have to deal with multiple users yourself. There is only one minifilter for many users.
You will have to convert the user program from the MiniSpy sample to a DLL, which you will wrap with a Python extension.
The last two are the hardest.
Q:
Convert OpenSSL encryption into native C#
I have a legacy application which uses OpenSSL to encrypt a string using DES3.
These are the parameters that are set for OpenSSL:
OpenSSL enc -des3 -nosalt -a -A -iv 1234567890123456 -K 1234567890123456...48
The key is a string of 48 digits and the iv is a substring of the first 16 digits of this key.
Now, I am trying to replicate this functionality with C#'s System.Cryptography library and without the use of OpenSSL if possible.
My goal is not to have to use OpenSSL and have the encryption done in native C# code.
Here is what I have got so far:
public string Encrypt(string toEncrypt, bool useHashing)
{
var _key = "48...digits...";
byte[] keyArray;
var toEncryptArray = Encoding.UTF8.GetBytes(toEncrypt);
if (useHashing)
{
var hashmd5 = new MD5CryptoServiceProvider();
keyArray = hashmd5.ComputeHash(Encoding.UTF8.GetBytes(_key));
hashmd5.Clear();
}
else
{
keyArray = Encoding.UTF8.GetBytes(_key);
}
var tdes = new TripleDESCryptoServiceProvider();
tdes.Key = keyArray;
// Is this even the correct cipher mode?
tdes.Mode = CipherMode.CBC;
// Should the PaddingMode be None?
tdes.Padding = PaddingMode.PKCS7;
// THIS is the line where I am currently stuck on:
tdes.IV = Encoding.UTF8.GetBytes(_key.Substring(0, 16));
var cTransform = tdes.CreateEncryptor();
var resultArray = cTransform.TransformFinalBlock(toEncryptArray, 0, toEncryptArray.Length);
return Convert.ToBase64String(resultArray, 0, resultArray.Length);
}
As noted in the comments in the code, I am not quite sure whether I am using the correct cipher; maybe the padding mode is also incorrect, and my IV has a length of 16 bytes while only 8 bytes are expected.
Also, I did try my luck already with or without hashing the key/iv.
Is it even possible to convert the above mentioned OpenSSL logic into plain C#?
A:
Key and IV must be specified for openssl enc with -K and -iv as hexadecimal values. This is missing in the C# code, so that essentially the following expressions
tdes.Key = Encoding.UTF8.GetBytes(_key);
tdes.IV = Encoding.UTF8.GetBytes(_key.Substring(0, 16));
would have to be replaced by
tdes.Key = StringToByteArray(_key);
tdes.IV = StringToByteArray(_key.Substring(0, 16));
to produce the same ciphertext for useHashing == false. Here StringToByteArray is a method that converts a hexadecimal string into the corresponding byte array, e.g. here.
It should also be noted that .NET does not accept keys that are too weak, e.g.:
123456789012345612345678901234561234567890123456
In case of such a key a CryptographicException is thrown (Specified key is a known weak key for 'TripleDES' and cannot be used). OpenSSL accepts this key.
Regarding security:
MD5 shouldn't be used nowadays to generate a key, more here.
In addition, MD5 generates a 16 byte hash. Thus keying option 2 is always used, which is weaker than keying option 1 (Keying options).
Generally it's insecure to use the key as IV, more here.
TripleDES is slow compared to today's standard AES, more here.
Q:
Search only document libraries using KeywordQuery
Is it possible using KeywordQuery to pull back all (only) records that are documents in document libraries? Wondering if there is a KeywordQuery.QueryText property I can use for this type of thing.
A:
Just add IsDocument:1 to show only results that are documents.
Q:
Loading images from url using Universal Image Loader gives nullpointer exception in Android
I am trying to retrieve an image from a URL using the Universal Image Loader library. I am getting a NullPointerException. I am using a SimpleAdapter to display the ListView image. I have done the following coding. I am very much new to this library, so please guide me step by step.
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_menu_items);
i=getIntent();
imageLoader.getInstance();
ListView salad_list=(ListView)findViewById(R.id.salads);
ListAdapter k=new SimpleAdapter(MenuItemsActivity.this,items,R.layout.menulist,new String[]{"Item_Name","Desc","Currency","Price"},new int[]{R.id.cat_name,R.id.textView1,R.id.textView2,R.id.textView3})
{
@Override
public View getView(int position, View convertView, ViewGroup parent) {
// TODO Auto-generated method stub
final View v = super.getView(position, convertView, parent);
// TextView picpath=(TextView)v.findViewById(R.id.hide2);
String s="http://166.62.17.208//images_large/caeser-salad.jpg";
ImageView picture=(ImageView)v.findViewById(R.id.imageView1);
ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(getApplicationContext())
.threadPriority(Thread.NORM_PRIORITY - 2)
.denyCacheImageMultipleSizesInMemory()
.discCacheFileNameGenerator(new Md5FileNameGenerator())
.tasksProcessingOrder(QueueProcessingType.LIFO)
.enableLogging()
.build();
ImageLoader.getInstance().init(config);
DisplayImageOptions options = new DisplayImageOptions.Builder().cacheOnDisc()
.build();
imageLoader.displayImage(s, picture, options);
return super.getView(position, convertView, parent);
}
};
salad_list.setAdapter(k);
}
My error logs are as follows:
E/AndroidRuntime(3340): FATAL EXCEPTION: main
E/AndroidRuntime(3340): Process: com.alrimal, PID: 3340
E/AndroidRuntime(3340): java.lang.NullPointerException
E/AndroidRuntime(3340): at com.alrimal.MenuItemsActivity$2.getView(MenuItemsActivity.java:166)
E/AndroidRuntime(3340): at android.widget.AbsListView.obtainView(AbsListView.java:2263)
E/AndroidRuntime(3340): at android.widget.ListView.measureHeightOfChildren(ListView.java:1263)
E/AndroidRuntime(3340): at android.widget.ListView.onMeasure(ListView.java:1175)
E/AndroidRuntime(3340): at android.view.View.measure(View.java:16458)
E/AndroidRuntime(3340): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
E/AndroidRuntime(3340): at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1404)
E/AndroidRuntime(3340): at android.widget.LinearLayout.measureVertical(LinearLayout.java:695)
E/AndroidRuntime(3340): at android.widget.LinearLayout.onMeasure(LinearLayout.java:588)
E/AndroidRuntime(3340): at android.view.View.measure(View.java:16458)
E/AndroidRuntime(3340): at android.widget.RelativeLayout.measureChildHorizontal(RelativeLayout.java:719)
E/AndroidRuntime(3340): at android.widget.RelativeLayout.onMeasure(RelativeLayout.java:455)
E/AndroidRuntime(3340): at android.view.View.measure(View.java:16458)
E/AndroidRuntime(3340): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
E/AndroidRuntime(3340): at android.widget.FrameLayout.onMeasure(FrameLayout.java:310)
E/AndroidRuntime(3340): at android.view.View.measure(View.java:16458)
E/AndroidRuntime(3340): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
E/AndroidRuntime(3340): at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1404)
E/AndroidRuntime(3340): at android.widget.LinearLayout.measureVertical(LinearLayout.java:695)
E/AndroidRuntime(3340): at android.widget.LinearLayout.onMeasure(LinearLayout.java:588)
E/AndroidRuntime(3340): at android.view.View.measure(View.java:16458)
E/AndroidRuntime(3340): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
E/AndroidRuntime(3340): at android.widget.FrameLayout.onMeasure(FrameLayout.java:310)
E/AndroidRuntime(3340): at com.android.internal.policy.impl.PhoneWindow$DecorView.onMeasure(PhoneWindow.java:2289)
E/AndroidRuntime(3340): at android.view.View.measure(View.java:16458)
E/AndroidRuntime(3340): at android.view.ViewRootImpl.performMeasure(ViewRootImpl.java:1914)
E/AndroidRuntime(3340): at android.view.ViewRootImpl.measureHierarchy(ViewRootImpl.java:1111)
E/AndroidRuntime(3340): at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1293)
E/AndroidRuntime(3340): at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:998)
E/AndroidRuntime(3340): at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:5582)
E/AndroidRuntime(3340): at android.view.Choreographer$CallbackRecord.run(Choreographer.java:749)
E/AndroidRuntime(3340): at android.view.Choreographer.doCallbacks(Choreographer.java:562)
E/AndroidRuntime(3340): at android.view.Choreographer.doFrame(Choreographer.java:532)
E/AndroidRuntime(3340): at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:735)
E/AndroidRuntime(3340): at android.os.Handler.handleCallback(Handler.java:733)
E/AndroidRuntime(3340): at android.os.Handler.dispatchMessage(Handler.java:95)
E/AndroidRuntime(3340): at android.os.Looper.loop(Looper.java:137)
E/AndroidRuntime(3340): at android.app.ActivityThread.main(ActivityThread.java:4998)
E/AndroidRuntime(3340): at java.lang.reflect.Method.invokeNative(Native Method)
E/AndroidRuntime(3340): at java.lang.reflect.Method.invoke(Method.java:515)
E/AndroidRuntime(3340): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:777)
E/AndroidRuntime(3340): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:593)
E/AndroidRuntime(3340): at dalvik.system.NativeStart.main(Native Method)
A:
You should set up the configuration before doing anything else; a good option is to do it in an Application class and declare that class as the application name in the manifest.
public class UILInitiator extends Application {
private static Context context;
@Override
public void onCreate() {
super.onCreate();
context = this;
File cacheDir = StorageUtils.getOwnCacheDirectory(
getApplicationContext(),
"/sdcard/Android/data/random_folder_name_for_cache");
DisplayImageOptions options = new DisplayImageOptions.Builder()
.cacheInMemory(true).cacheOnDisc(true).build();
ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(
getApplicationContext()).defaultDisplayImageOptions(options)
.discCache(new FileCountLimitedDiscCache(cacheDir, 100))
.build();
ImageLoader.getInstance().init(config);
}
}
Then, only use your imageLoader to display images as:
ImageView thumbnailImage = (ImageView) view.findViewById(R.id.overview_image);
String imageUrl = "url for image ";
ImageLoader imageLoader = ImageLoader.getInstance();
imageLoader.displayImage(imageUrl, thumbnailImage);
Q:
CSS: 100% font size - 100% of what?
There are many articles and questions about percentage-sized vs other-sized fonts. However, I can not find out WHAT the reference of the percent-value is supposed to be. I understand this is 'the same size in all browsers'. I also read this, for instance:
Percent (%): The percent unit is much like the “em” unit, save for a few fundamental differences. First and foremost, the current font-size is equal to 100% (i.e. 12pt = 100%). While using the percent unit, your text remains fully scalable for mobile devices and for accessibility.
Source: http://kyleschaeffer.com/best-practices/css-font-size-em-vs-px-vs-pt-vs/
But if you say "ie 12 pt = 100%", then it means you first have to define font-size: 12pt. Is that how it works? You first define a size in an absolute measure, and then refer to this as '100%'? Does not make a lot of sense, as many samples say it is useful to put:
body {
font-size: 100%;
}
So by doing this, WHAT is the font size relative to? I notice that the size I see on my screen differs for every font. Arial looks way bigger than Times New Roman, for instance. Also, if I would just do this, body size = 100%, would that mean that it will be the same on all browsers? Or only if I first define an absolute value?
UPDATE, SAT JUL 23
I am getting there, but please bear with me.
So, the % value relates to the default browser font size, if I understand correctly. Well, that is nice but gives me again several other questions:
Is this standard size always the same for every browser version, or do they vary between versions?
I found (see image below) the settings for Google Chrome (never looked at this before!), and I see standard "serif", "sans-serif" and "monospace" settings. But how do I interpret this for other fonts? Say I define font: 100% Georgia; — what size will the browser take? Will it look up the standard size for serif, or does the "Georgia" font have a standard size for the browser?
On several websites I read things like "Sizing text and line-height in ems, with a percentage specified on the body [..], was shown to provide accurate, resizable text across all browsers in common use today." But from what I am learning now, I believe that you should actually choose between either resizable text (using % or em, like what they recommend in this quote), or having accurate, consistent font sizes across browsers (by using px or pt as a base). Is this correct?
Google Settings:
This is how I think things could look like if you do not define the size in absolute values.
A:
It is relative to the browser default, which is something like 16px for Firefox. You can check by going into Firefox options, clicking the Content tab, and checking the font size. You can do the same for other browsers as well.
I personally like to control the default font size of my websites, so in a CSS file that is included in every page I will set the BODY default, like so:
body {
font-family: Helvetica, Arial, sans-serif;
font-size: 14px
}
Now the font-size of all my HTML tags will inherit a font-size of 14px.
Say that I want a all divs to have a font size 10% bigger than body, I simply do:
div {
font-size: 110%
}
Now any browser that views my pages will automatically make all divs 10% bigger than the body, which should be something like 15.4px.
If I want the font-size of all div's to be 10% smaller, I do:
div {
font-size: 90%
}
This will make all divs have a font-size of 12.6px.
Also you should know that since font-size is inherited, that each nested div will decrease in font size by 10%, so:
<div>Outer DIV.
<div>Inner DIV</div>
</div>
The inner div will have a font-size of 11.34px (90% of 12.6px), which may not have been intended.
This can help in the explanation:
http://www.w3.org/TR/2011/REC-CSS2-20110607/syndata.html#value-def-percentage
A:
My understanding is that when the font is set as follows
body {
font-size: 100%;
}
the browser will render the font as per the user settings for that browser.
The spec says that % is rendered
relative to parent element's font size
http://www.w3.org/TR/CSS1/#font-size
In this case, I take that to mean what the browser is set to.
A:
A percentage in the value of the font-size property is relative to the parent element’s font size. CSS 2.1 says this obscurely and confusingly (referring to “inherited font size”), but CSS3 Text says it very clearly.
The parent of the body element is the root element, i.e. the html element. Unless set in a style sheet, the font size of the root element is implementation-dependent. It typically depends on user settings.
Setting font-size: 100% is pointless in many cases, as an element inherits its parent’s font size (leading to the same result), if no style sheet sets its own font size. However, it can be useful to override settings in other style sheets (including browser default style sheets).
For example, an input element typically has a setting in browser style sheet, making its default font size smaller than that of copy text. If you wish to make the font size the same, you can set
input { font-size: 100% }
For the body element, the logically redundant setting font-size: 100% is used fairly often, as it is believed to help against some browser bugs (in browsers that probably have lost their significance now).
Q:
Nested wildcards with lower bounds
So I read through the main Java Generics FAQ, and the single thing which is holding me up is nested wildcards with lower bounds. I want to give you an example of what I do understand, something specifically which works, and how I view it. Maybe you could tell me whether the way I am thinking about this is wrong, even though the compiler isn't complaining in the "good" case.
Example 1 (makes sense):
static void WildcardsMethod(List<? extends Pair<? extends Number>> list)
{
System.out.println("It worked");
}
static void TestWildcardsMethod()
{
List<Pair<Integer>> list = null;
WildcardsMethod(list);
}
I first look at the deepest wildcard and bound in WildcardMethod's signature. It is looking for Pair<? extends Number>. Therefore, I could use Pair<Integer>, Pair<Double> and so on. Now I have something which looks like the below code in my mind if I decided to substitute Pair<Integer> for Pair<? extends Number>:
List<? extends Pair<Integer>>
Now, the wildcard represents a type/subtype of the parametrized type Pair<Integer>. Therefore, I can either pass a Pair<Integer> or SubPair<Integer> to WildcardsMethod.
Example 2 (makes sense):
static void WildcardsMethod(List<? extends Pair<? super Number>> list)
{
System.out.println("It worked");
}
static void TestWildcardsMethod()
{
List<Pair<Number>> list = null;
WildcardsMethod(list);
}
I look and see that I first need a Pair<? super Number> so I decide to pass in Pair<Number> resulting in the following code:
? extends Pair<Number>
I then look at the leftmost wildcard and see that I can use either Pair<Number> or SubPair<Number>. I end up passing List<Pair<Number>>.
So in other words, I see the deepest wildcard as asking for a subtype or supertype of the innermost bound (Number). I then go to the top level wildcard and look for a subtype/supertype of the generic type (Pair).
Example 3 (doesn't make sense):
static void WildcardsMethod(List<? super Pair<? super Number>> list)
{
System.out.println("It worked");
}
static void TestWildcardsMethod()
{
List<Pair<Object>> list = null;
WildcardsMethod(list);
}
Well, in terms of Pair<? super Number>, Object is definitely a supertype of Number so Pair<Object> should work just as it did for the previous examples. The following is what I think of when trying to understand this:
? super Pair<Object>
So I am limited to either Pair<Object> or SuperPair<Object>. However, none of this works.
Example 4 (doesn't make sense):
static void WildcardsMethod(List<? super Pair<? extends Number>> list)
{
System.out.println("It worked");
}
static void TestWildcardsMethod()
{
List<Pair<Integer>> list = null;
WildcardsMethod(list);
}
It's the same thing here. Pair<Integer> belongs to the family of Pair<? extends Number> resulting in the following:
? super Pair<Integer>
I can then pass in either Pair<Integer> or SuperPair<Integer> However, this too does not work.
So I am either thinking of this wrong and somehow that model works for extends but not for super or I am simply missing something about lowerbounds and nested wildcards.
A:
Example 1:
Is List<Pair<Integer>> a subtype of List<? extends Pair<? extends Number>>?
It would be if Pair<Integer> is a subtype of Pair<? extends Number>. Is it?
Yes, because Integer is a subtype of Number.
Example 2:
Is List<Pair<Number>> a subtype of List<? extends Pair<? super Number>>?
It would be if Pair<Number> is a subtype of Pair<? super Number>. Is it?
Yes, because Number is a supertype of Number.
Example 3:
Is List<Pair<Object>> a subtype of List<? super Pair<? super Number>>?
It would be if Pair<Object> is a supertype of Pair<? super Number>. Is it?
No, it is not. A parameterized type with a specific parameter can never be a supertype of a parameterized type with a wildcard.
Example 4:
Is List<Pair<Integer>> a subtype of List<? super Pair<? extends Number>>?
It would be if Pair<Integer> is a supertype of Pair<? extends Number>. Is it?
No, it is not. A parameterized type with a specific parameter can never be a supertype of a parameterized type with a wildcard.
Q:
Formula for calculating $1\cdot 2\cdot3\cdot4\cdot5\cdot\ldots\cdot n$
I'm looking for a formula to calculate the (product?) of an arithmetic series. Something like this:
$$\frac{n(a_1+a_n)}{2}$$
which is used to get the sum of the series, expect instead of all the elements added togethor, it would give all of the elements multiplied by each other.
Is there a formula for this? I've looked on the internet, but I don't know a lot of math terms so I don't know what to search.
A:
This is the factorial function, $n\mapsto n!$. There is no neat formula for it as you might find for the sum, but there is Stirling's approximation
$$n!\sim \sqrt{2\pi n}\,n^ne^{-n}.$$
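If you want to see how close it gets, here is a quick numeric check in Python (n = 10 is an arbitrary choice):
import math

n = 10
exact = math.factorial(n)                                  # 3628800
approx = math.sqrt(2 * math.pi * n) * (n / math.e) ** n    # about 3598696
print(approx / exact)                                      # about 0.9917; the ratio tends to 1 as n grows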
Q:
Heroku error with with amazon s3
For image uploads in my Ruby on Rails application I use the Ruby gem paperclip, which works great locally. When it comes to web hosting on Heroku, I want to use Amazon S3 to store pictures. But every time I upload an image, I get the message
We're sorry, but something went wrong. If you are the application owner check the logs for more information.
On the web somebody said that I'd have to use an 'aws-sdk' older than v2.0, but unfortunately my console then says
uninitialized constant aws
so that the website does not run on localhost anymore, but also not on Heroku (I get an application error).
So I stuck with 2.3, which is also used on the Heroku website.
The AWS information (AWS_ACCESS_KEY_ID, AWS_BUCKET, AWS_REGION, AWS_SECRET_ACCESS_KEY...) and the write/read permission should be correct
The production.rb part looks like this
config.paperclip_defaults = {
storage: :s3,
s3_credentials: {
bucket: ENV.fetch('AWS_BUCKET'),
access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
s3_region: ENV.fetch('AWS_REGION'),
}
}
The Gemfile
gem 'paperclip', '~> 4.3', '>= 4.3.6'
gem 'aws-sdk', '~> 2.3'
Anybody an idea what I could do to make it work?
A:
Problem solved.
It works if you run the following gems in parallel.
gem 'aws-sdk', '~> 2.3'
gem 'aws-sdk-v1'
That's it.
Q:
CSS3 - mobile website and media queries
Just for testing and learning CSS3 I am trying to create a small mobile website. But for now I have a small problem with targeting stylesheets. I'm doing it in the following way:
<link rel="stylesheet" media="only screen and (max-width: 180px)" href="xsmall.css" />
<link rel="stylesheet" media="only screen and (min-width: 240px)" href="small.css" />
<link rel="stylesheet" media="only screen and (min-width: 320px)" href="medium.css" />
<link rel="stylesheet" media="only screen and (min-width: 480px)" href="large.css" />
<link rel="stylesheet" media="only screen and (min-width: 540px)" href="wide.css" />
Unfortunately, after changing xsmall.css the change is also visible in the other versions (so for 480px, 540px, etc.). I test the (mobile) website in the Opera Mobile Emulator. What am I doing wrong?
thanks
A:
What you are doing wrong is to think that your stylesheet selection includes only one of the style sheets.
A style sheet that you include with min-width will be included for any resolution that is larger, so if I for example have a 600px wide screen, I will get small.css, medium.css, large.css and wide.css, not only wide.css.
(Also, if I have a 200px wide screen, it would not include any style sheet at all...)
You would need to use both min-width and max-width to make it only include one of the style sheets.
Q:
How to pass parameter values from test case in TFS to test method in unit test method using MTM?
I would like to pass parameter values from test cases which are present in Team Foundation Server. I do the automation with the help of Microsoft Test Manager.
Below is example test method created using Unit Test Project.
namespace UnitTestProject1
{
[TestClass]
public class UnitTest1
{
[TestMethod]
public void TestMethod1(int a, int b, int expectedResult)
{
var sut = new Class1();
var result = sut.Add(a,b);
Assert.AreEqual(expectedResult, result);
}
}
}
Now when I try to build this, I get the below error:
UTA007: Method TestMethod1 defined in class UnitTestProject1.UnitTest1 does not have correct signature. Test method marked with the [TestMethod] attribute must be non-static, public, does not return a value and should not take any parameter. for example: public void Test.Class1.Test(). Additionally, return-type must be Task if you are running async unit tests. Example: public async Task Test.Class1.Test2().
How to achieve parameter passing in this scenario?
A:
To read parameter values from a TestCase in TFS, you could use Data-Driven unit testing:
public TestContext TestContext { get; set; }
public DataRow DataRow { get; set; }
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", "http://serverName:8080/tfs/MyCollection;teamProjectName", "541", DataAccessMethod.Sequential)]
[TestMethod]
public void TestMethod()
{
int a = int.Parse(TestContext.DataRow[0].ToString()); // read parameters by column index
int b = int.Parse(TestContext.DataRow[1].ToString());
int expected = int.Parse(TestContext.DataRow[2].ToString());
var sut = new Class1();
var result = sut.Add(a, b);
Assert.AreEqual(expected, result);
}
Note: 541 is the TestCase id.
Q:
Weird issue with URLSearchParams() in Angular2
I have a weird issue with URLSearchParams(): it only works when I define var search; if I change the variable name to var otherName, it does not work!
For some reason it works only when the name is search, which seems totally illogical.
Any idea on what's happening here?
constructor(http) {
this.http = http;
this.genre = null;
this.dishes = null;
//this one works fine
var search = new URLSearchParams();
search.set('order', '-ordersNo');
//Here is the issue (to make it work, I need to remove the previous search declaration, and rename the below var limit to search)
var limit = new URLSearchParams();
limit.set('limit', '2');
this.http.get('https://example.com/classes/Mn', { limit }).subscribe(data => {
this.dishes = data.json().results;
});
this.http.get('https://example.com/classes/Genre',{ search }).subscribe(data => {
this.genre = data.json().results;
});
A:
I guess you actually want to have it like this:
{ search: search }
//and
{ search: limit }
options parameter of http get method must have the following signature:
export interface RequestOptionsArgs {
url?: string;
method?: string | RequestMethod;
search?: string | URLSearchParams;
headers?: Headers;
body?: string;
}
but { limit } is actually the same as { limit: limit }.
Q:
Would you typically compile a java file before sharing it?
I'm new to Java and new to compiling. What are the pros/cons of sharing Java code that is compiled vs not compiled?
A:
Normally you'd put the code in a source repository of some kind and that's how you share the code.
If you want to share the finished product: if it's a standalone app, build it into a full executable entity (using build systems, launch4j, etc. – it gets a bit complicated to produce a fully stand-alone thing that any end user with no knowledge of programming and nothing installed can just install and use) – and share that. If it's a webapp, host it someplace and share the URL.
Q:
Unable to plot data
I'm trying to plot some data with a function, following this package's tutorial instructions.
This is the plot code:
def plot_frequency_recency_matrix(model,
T=1,
max_frequency=None,
max_recency=None,
title=None,
xlabel="Customer's Historical Frequency",
ylabel="Customer's Recency",
**kwargs):
"""
Plot recency frequency matrix as heatmap.
Plot a figure of expected transactions in T next units of time by a customer's frequency and recency.
Parameters
----------
model: lifetimes model
A fitted lifetimes model.
T: float, optional
Next units of time to make predictions for
max_frequency: int, optional
The maximum frequency to plot. Default is max observed frequency.
max_recency: int, optional
The maximum recency to plot. This also determines the age of the customer.
Default to max observed age.
title: str, optional
Figure title
xlabel: str, optional
Figure xlabel
ylabel: str, optional
Figure ylabel
kwargs
Passed into the matplotlib.imshow command.
Returns
-------
axes: matplotlib.AxesSubplot
"""
from matplotlib import pyplot as plt
import numpy as np
if max_frequency is None:
max_frequency = int(model.data['frequency'].max())
if max_recency is None:
max_recency = int(model.data['T'].max())
Z = np.zeros((max_recency + 1, max_frequency + 1))
for i, recency in enumerate(np.arange(max_recency + 1)):
for j, frequency in enumerate(np.arange(max_frequency + 1)):
Z[i, j] = model.conditional_expected_number_of_purchases_up_to_time(T, frequency, recency, max_recency)
interpolation = kwargs.pop('interpolation', 'none')
ax = plt.subplot(111)
PCM = ax.imshow(Z, interpolation=interpolation, **kwargs)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
if title is None:
title = 'Expected Number of Future Purchases for {} Unit{} of Time,'. \
format(T, "s"[T == 1:]) + '\nby Frequency and Recency of a Customer'
plt.title(title)
# turn matrix into square
forceAspect(ax)
# plot colorbar beside matrix
plt.colorbar(PCM, ax=ax)
return ax
def forceAspect(ax, aspect=1):
im = ax.get_images()
extent = im[0].get_extent()
ax.set_aspect(abs((extent[1] - extent[0]) / (extent[3] - extent[2])) / aspect)
But when I run:
from lifetimes.plotting import plot_frequency_recency_matrix
plot_frequency_recency_matrix(bgf)
The sample data that I'm trying to plot:
frequency recency T
ID
1 2 30.43 38.86
2 1 1.71 38.86
3 0 0.00 38.86
4 0 0.00 38.86
5 0 0.00 38.86
How can I show the plot?
Thanks!
A:
You need to call plt.show() to show your plot:
from matplotlib import pyplot as plt
plot_frequency_recency_matrix(bgf)
plt.show()
Q:
What can we say about the differences between roots of a polynomial with large Galois group?
Suppose that $K$ is a number field and $L$ is the splitting field of a monic polynomial in $\mathcal{O}_{K}[x]$ of degree $d \geq 5$ with roots $\alpha_{1}, ... , \alpha_{d}$. Assume that the $\mathrm{Gal}(L / K)$ is the full symmetric group $S_{d}$ (there is an obvious variant of this question where we assume that the Galois group is $A_{d}$ instead).
Vaguely stated, my question is, given the relative lack of algebraic relations among the roots, what can one conclude about whether there exist primes $\mathfrak{P}$ of $L$ such that certain subsets of the roots become equal modulo $\mathfrak{P}$? More specifically, could one conclude, for instance, that there exists a prime $\mathfrak{P}$ of $L$ such that $\alpha_{i} - \alpha_{j} \in \mathfrak{P}$ for exactly one choice of $\{i,j\} \subset \{1, ... , d\}$?
Here is a (possibly) closely related question: what can we say about the powers of primes of $K$ which contain the discriminant $D$ of this polynomial (again assuming its Galois group is $S_{d}$)? Can one conclude that there exists a prime $\mathfrak{p}$ of $K$ such that $D \in \mathfrak{p} \setminus \mathfrak{p}^{2}$? Is it possible for $D \in (K^{\times})^{n}$ for some $n \geq 3$?
I'm sorry that I'm asking several vaguely related questions here rather than narrowing things down to be more concrete. But I've been trying but failing to get small results on this theme using elementary algebraic number theory for a while now, and I'm curious as to whether any statements of this kind are already known. (Perhaps unsolvability and $2$-transitivity are the only properties of $S_{n}$ which we really need to use here.)
A:
Kedlaya proved that for every $n$ there is a monic polynomial $f\in\mathbb Z[X]$ with square-free discriminant and Galois group $S_n$. See here (published version) or here (preprint).
Q:
Where is the stylesheet code?
I want to change some values in this code on the header of my site:
<link rel='stylesheet' id='style-css' href='localhost/wp-content/themes/xxxx/style.css'
type='text/css' media='all' />
I can't find this line in any of the source code, or the database. Where is this code?
A:
The most common would be /wp-content/themes/your-theme-name/style.css but this may not be the case depending upon the theme you are using.
The easiest way to find out is to use Chrome Developer Tools.
In Chrome open your website and right click on the element you would like to alter the CSS for, then click 'Inspect Element'. The CSS files and classes which apply to that element will be listed in the right hand pane.
Q:
class object array inside struct
I need to put an array of class objects inside a struct.
The class inside a header file:
class aClass
{
private:
int num;
public:
aClass();
~aClass();
int getNum();
void setNum(int num);
};
The typedef inside a another header file
#include "aClass.hpp"
typedef struct
{
aClass* classObject[3];
} newType_t;
At least the application
newType_t l_obj;
l_obj.classObject[1]->getNum();
The compiler works, but at execution it comes to a segmentation fault. How do I define the type correctly?
Thanks a lot
Alex
2nd try:
aClass.hpp
class aClass
{
private:
int num;
public:
aClass();
~aClass();
int getNum();
void setNum(int num);
};
app.hpp
class aClass;
typedef struct
{
aClass classObject[3];
} newType_t;
app.cpp
newType_t l_obj;
l_obj.classObject[1].getNum();
g++
error: field 'classObject' has incomplete type
another try:
aClass.hpp
class aClass
{
private:
int num;
public:
aClass();
~aClass();
int getNum();
void setNum(int num);
};
app.hpp
#include "aClass.hpp"
typedef struct
{
aClass classObject[3];
} newType_t;
app.cpp
newType_t l_obj;
l_obj.classObject[1].getNum();
g++
error: 'aClass' does not name a type
A:
You have an array of pointers, which are uninitialized since you haven't told them where to point to. Using uninitialized pointers causes undefined behavior, which has manifested itself as a crash in your case.
You should have an array of objects since there is no use for pointers here:
aClass classObject[3];
Notice the removal of the * in the declaration. You can call the method by doing:
l_obj.classObject[1].getNum();
Also, typedef struct is unnecessary. You can name your structure directly:
struct newType_t { ... };
Q:
pandas: how to get the index after using the function pandas.Series.value_counts?
Here is my code:
..
code_c = data.code.value_counts()
print code_c
ss = code_c.loc[code_c.values == 15]
print ss
get:
>>>code_c
600644 16
600101 16
600652 15
600256 15
717 15
600282 15
543 15
709 15
..
2352 5
2478 5
2379 5
>>>ss
600652 15
600256 15
807 15
600868 15
531 15
795 15
600188 15
..
I have the trouble getting the list(600652,600256,807,600686,...)
Can you help me?thanks a lot.
A:
You can get the index via the index attribute.
ss.index.tolist() # to return a list
or
# to return a pandas series with index values as series values
ss.index.to_series()
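Putting it together with data shaped like the question's (the counts below are a made-up sample):
import pandas as pd

data = pd.DataFrame({'code': [600644] * 16 + [600652] * 15 + [600256] * 15 + [2352] * 5})
code_c = data.code.value_counts()
ss = code_c[code_c == 15]        # boolean mask, same result as .loc[code_c.values == 15]
print(ss.index.tolist())         # e.g. [600652, 600256]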
Q:
Auto-remove characters that are not valid
I need to make sure the input fields on the client side are valid, but I want to prevent a lot of characters (I have a lot of conditions for an input field to be valid). My JavaScript is not currently working, but I had an idea for a simpler method: the JavaScript will auto-remove invalid characters as the user types. For example, "d#an" will be auto-fixed to "dan".
I have written a JS method that works in "the old way" of taking the input fields, checking them, and returning true or false,
but this is not my goal.
The invalid characters are: capital letters (if they are not at the start), special characters, a name that is shorter than two letters (in this case the system will auto-add two letters), a name without "ouiea" (the system will auto-add one of these randomly), and non-English letters like: שלום
A:
A workmate figured it out. JavaScript:
function Valid(input)
{
var str = /[^a-z]/gi;
input.value = input.value.replace(str, "");
}
html:
<form>
<input type="text" name="fname" id="fname" onkeyup="Valid(this)" />
</form>
Q:
Forecasts with specific VAR model lags
Thanks in advance. In this post, I asked the question of how to choose specific lags in a VAR model. After a quick reply and information about the 'restrict' and 'coef' functions, I was able to successfully run a VAR model with the specific lags I wanted. However, what code do I need to use the restricted VAR model to make forecasts?
Sample of my code is below:
##Attempt to Restrict VAR Coefficients
##VAR has 5 lags with three variables plus constant and 11 seasonal dummies.
library("vars")
var1 <- VAR(DVARmat, p = 5, type ="const", season = 12)
restrict <- matrix (c(1,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,0,0,0,0,
1,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,0,0,0,0,
1,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,0,0,0,0),
nrow = 3, ncol = 27, byrow = T)
var1_restrict <- coef(restrict(var1, method ="man", resmat = restrict))
var1_restrict
I know the forecast code after a normal VAR, but can't seem to fudge the restricted VAR into it. Thanks again.
A:
After generating the restriction matrix restrict, you can use predict on restrict(...), since restrict(...) also returns an object of class varest:
##Attempt to Restrict VAR Coefficients
##VAR has 5 lags with three variables plus constant and 11 seasonal dummies.
library("vars")
var1 <- VAR(DVARmat, p = 5, type ="const", season = 12)
restrict <- matrix (c(1,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,0,0,0,0,
1,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,0,0,0,0,
1,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,0,0,0,0),
nrow = 3, ncol = 27, byrow = T)
var1_restrict <- coef(restrict(var1, method ="man", resmat = restrict))
var1_restrict
expostrestrict <- predict(restrict(var1, method="man", resmat = restrict), n.ahead = 13, ci=.95)
Note that restrict(var1, method="man", resmat = restrict) is an object that can be created once and stored, so if one prefers, they may also use the following:
restrict_var <- restrict(var1, method="man", resmat = restrict)
expostrestrict <- predict(restrict_var, n.ahead = 13, ci=.95)
Q:
What are FIX protocol tags between 10000 and 19999 for?
I know that the FIX protocol specifies user defined fields that cover the range 5000 to 9999. The same specification says, that you can use the tags 20000 to 39999 bilaterally between parties.
In December 2009 the Global Technical Committee Governance Board approved the use of tag numbers in the 20000 to 39999 range for use as user defined tags to be used bilaterally between parties.
But tags between 10000 and 19999 are also used - for example Trading Technologies uses tag 18214 as IncludeNumberOfOrders in MarketDataRequest (V).
Can somebody explain the usage of tags 10000 to 19999 and give an overview to the current tag ranges of the FIX protocol specification?
A:
Tag numbers from 10000 to 19999 are also user defined but should only be used internally, i.e. actually you should not talk to counterparties using these tag numbers. But if for example you have intra-firm FIX connections you could use these tag numbers to convey information.
The tag numbers greater than or equal to 10000 have been reserved for
internal use (within a single firm) and do not need to be
registered/reserved via the FIX website.
Source: FIX Protocol website: user-defined fields
Edit: so if Trading Technologies uses a tag between 10000 and 19999 for external communication, that is actually discouraged.
Q:
Wrong answer to dynamic programming problem of finding minimum stair cost
I am trying to solve the following problem on Leetcode:
On a staircase, the i-th step has some non-negative cost cost[i] assigned (0 indexed).
Once you pay the cost, you can either climb one or two steps.
You need to find minimum cost to reach the top of the floor, and you can either start from the step with index 0, or the step with index 1.
This is my solution so far. I believe I'm not correctly taking into account the fact that I can start at stair 0, or stair 1, and I'm not sure how to do so.
class Solution {
public:
int minCostClimbingStairs(vector<int>& cost) {
return helper(cost, cost.size() - 1);
}
int helper(vector<int>& cost, int currStair) {
static vector<double> minCost(cost.size(), 0);
minCost[0] = cost[0];
minCost[1] = cost[1];
if (currStair < 0 || cost.size() <= 1) {
return 0;
}
if (minCost[currStair] > 0) {
return minCost[currStair];
}
return minCost[currStair] = min(helper(cost, currStair - 1), helper(cost, currStair - 2)) + cost[currStair];
}
};
A:
This is very much the right idea, but I think it's sort of a conflation of top-down and bottom-up approaches.
Since the problem tells us we can start on steps 0 or 1, I think it's more intuitive to work through the cost array from front to back--you can still use a top-down recursive DP approach as you're doing. This makes it easier to distinguish between starting at the 0th or 1st step. The final solution that's returned by your code is always minCost[minCost.size()-1] which doesn't take this into account.
Using a static vector makes the function non-idempotent, so it'll persist stale values on a second run. This doesn't impact correctness as far as Leetcode is concerned because it seems to create a new instance of your class per test case. Nonetheless, it seems related to the above general misunderstanding; initializing 0 and 1 indices isn't setting a correct base case as you may think (this is how you'd set the base case in a bottom-up approach).
With this in mind, approach the problem from the first stair and walk forward to the last. Initialize the cache vector non-statically, then populate the cache recursively from index 0. The prohibitive 2^n branching factor will be handled by the cache, reducing the complexity to linear, and the final result will be the min of the cost of starting at stair 0 or 1. The fact that the problem constrains the input cost vector to 2 <= cost.size() is a big hint; we know minCost[0] and minCost[1] will always be available to choose between without preconditions.
Another minor point is that using 0 as the empty cache flag could time out on huge vectors filled with zeroes. Since we need to distinguish between an unset index and a 0, we should use -1 as the flag to indicate an unset cache index.
class Solution {
public:
int minCostClimbingStairs(vector<int>& cost) {
vector<int> minCost(cost.size(), -1);
helper(cost, 0, minCost);
return min(minCost[0], minCost[1]);
}
int helper(vector<int>& cost, int currStair, vector<int>& minCost) {
if (currStair >= cost.size()) return 0;
else if (minCost[currStair] >= 0) {
return minCost[currStair];
}
return minCost[currStair] = cost[currStair] +
min(helper(cost, currStair + 1, minCost),
helper(cost, currStair + 2, minCost));
}
};
Q:
Dozer map between two strings knowing what is source and what is target
I have two objects that have two separate representations of a string and I am using Dozer to perform object-to-object mapping. I am having a problem running bi-directional data conversion when a string on one object is mapped to a string on another using a custom converter.
Say for example you have:
public class ClassA { private String string1; }
and
public class ClassB { private String string1; }
The data conversion is setup as follows:
ClassA String ClassB String
--------------- ---------------
STRING_A_1 <-> STRING_A_2
STRING_B_1 <-> STRING_B_2
STRING_C_xxx <-> STRING_C_xxx
My mapper is set up as follows:
public class CustomConverter extends DozerConverter<String, String> implements MapperAware {
public CustomConverter() {
super(String.class, String.class);
}
@Override
public String convertTo(String source, String target) {
return MyEnum.toA(source);
}
@Override
public String convertFrom(String source, String target) {
return MyEnum.toB(source);
}
}
The only method that gets called is convertFrom(String, String). I tried implementing the MapperAware interface but did not see any means of loading the source and target class types. I was hoping to detect in either method what is being called to figure out the appropriate mapping direction to use.
How can I use my converter to detect what the actual direction of the mapping should be?
A:
In a Dozer converter, convertFrom and convertTo are called solely based on their parameter type. The order of class-a and class-b in the mapping configuration are not considered.
Thus as you noted, only convertFrom is called.
The issue here is that Dozer is doing class instance conversion, while you really require string conversion.
Thus you will need to identify the format of the source string and do the conversion manually.
Alternatively, if you can use JSON, then a JSON parsing library will do this for you. E.g. in Jackson:
jsonMapper = new ObjectMapper();
A a = jsonMapper.readValue(new StringReader(source), A.class);
B b = dozerMapper.map(a, B.class);
StringWriter sw = new StringWriter();
jsonMapper.writeValue(sw, b);
target = sw.toString();
Q:
Problem with nested fetch request in React
New to React, I'm currently trying to create a data table with data from an API.
I want to have a first fetch, and then run another with the response from the first (an id) in order to complete my table.
Here is my code :
class HomePage extends React.Component {
constructor(props) {
super(props);
this.state = {
user: {},
data: []
};
}
componentDidMount() {
this.setState({
user: JSON.parse(localStorage.getItem('user'))
}, function () {
this.loadAllObjectsInfo()
});
}
// Fetch all object info in order to fill the table
loadAllObjectsInfo() {
const requestOptions = {
method: 'GET',
headers: {
'Content-Type': 'application/json',
'bbuser': this.state.user.userId,
'bbtoken': this.state.user.secret
},
};
fetch('https://xxxxx/api/objects', requestOptions)
.then(response => response.json())
.then((data) => {
this.setState({ data: data })
})
}
With this code, I have the data I want to render my table but I need to run another fetch to get other info with the id coming from the first request.
How can I do that nested fetch request ?
Thanks a lot,
Matthieu
A:
You can easily manage this with async/await:
async loadAllObjectsInfo() {
const requestOptions = {
method: 'GET',
headers: {
'Content-Type': 'application/json',
'bbuser': this.state.user.userId,
'bbtoken': this.state.user.secret
},
};
let response = await fetch('https://xxxxx/api/objects', requestOptions);
let data = await response.json();
// here is another fetch - change to fit your request parameters (this is just an example)
let infoResponse = await fetch('https://xxxxx/api/objects/' + data.id);
let info = await infoResponse.json(); // parse the second response and use it as needed
this.setState({ data });
}
You can read more about async function.
A:
You can write the code as below.
fetch('https://xxxxx/api/objects', requestOptions)
.then(response => response.json())
.then((res1) => {
fetch('https://xxxxx/api/objects', requestOptions)
.then(response => response.json())
.then((res2) => {
this.setState({ data: res2 });
});
});
Hope this will work for you!
A:
@JourdanM, you should return a new fetch request from one of the then handlers. I've made a simple snippet for you. There are no data validators and spinners. This is a simple showcase. =)
A fetch request returns a promise, and you can chain promises by simply returning them from the then handlers. Here is a good article about it, it has great examples: https://javascript.info/promise-chaining
function fetchUser (user) {
return fetch(`https://api.github.com/users/${user.login}`)
}
class User extends React.Component {
state = {
user: null
}
componentDidMount () {
fetch("https://api.github.com/users")
.then(response => response.json())
.then(users => fetchUser(users[0]))
.then(response => response.json())
.then(user => {
this.setState({user})
})
}
render () {
return (
<div>
<pre>{JSON.stringify(this.state.user, null, 2)}</pre>
</div>
)
}
}
ReactDOM.render(<User />, document.querySelector("#root"));
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script>
<div id="root"></div>
Q:
Systemd Socket Activation Trigger a bash script
I would like to periodically trigger a remote bash script. How this needs to work is that a 3rd party application will connect to a CentOS 7 system on a specific TCP port and send a short text message. SSH is not an option because of the 3rd party application.
When the message is received it needs to pass the IP address to the bash script. I would like the bash script to run and then go dormant until the next message. I would prefer not to write a daemon. I just want to keep this simple.
These messages may come a few times per week or less. We had this running using xinetd, but I am not sure exactly how to make this work with systemd.
Here is what I have so far:
/etc/systemd/system/foo.service
[Unit]
Description=Foo Service
After=network.target foo.socket
Requires=foo.socket
[Service]
Type=oneshot
ExecStart=/bin/bash /opt/foo/foo.sh
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
/etc/systemd/system/foo.socket
[Unit]
Description=Foo Socket
PartOf=foo.service
[Socket]
ListenStream=127.0.0.1:7780
[Install]
WantedBy=sockets.target
/opt/foo/foo.sh
#!/bin/bash
# Not sure how to get IP
logger -t FOO "Connection received:"
# Do some action
This is what I see in the log:
Jul 10 17:29:32 localhost systemd: Listening on Foo Socket.
Jul 10 17:29:32 localhost systemd: Starting Foo Socket.
Jul 10 17:29:32 localhost systemd: Started Foo Service.
Jul 10 17:29:32 localhost systemd: Starting Foo Service...
Jul 10 17:29:32 localhost FOO: Connection received
Jul 10 17:29:53 localhost systemd: Started Session 4 of user vagrant.
Jul 10 17:29:53 localhost systemd-logind: New session 4 of user vagrant.
Jul 10 17:29:53 localhost systemd: Starting Session 4 of user vagrant.
Jul 10 17:29:56 localhost su: (to root) vagrant on pts/1
Jul 10 17:30:11 localhost systemd: Started Foo Service.
Jul 10 17:30:11 localhost systemd: Starting Foo Service...
Jul 10 17:30:11 localhost FOO: Connection received
Jul 10 17:30:11 localhost systemd: Started Foo Service.
Jul 10 17:30:11 localhost systemd: Starting Foo Service...
Jul 10 17:30:11 localhost FOO: Connection received
Jul 10 17:30:11 localhost systemd: Started Foo Service.
Jul 10 17:30:11 localhost systemd: Starting Foo Service...
Jul 10 17:30:11 localhost FOO: Connection received
Jul 10 17:30:11 localhost systemd: Started Foo Service.
Jul 10 17:30:11 localhost systemd: Starting Foo Service...
Jul 10 17:30:11 localhost FOO: Connection received
Jul 10 17:30:11 localhost systemd: Started Foo Service.
Jul 10 17:30:11 localhost systemd: Starting Foo Service...
Jul 10 17:30:11 localhost FOO: Connection received
Jul 10 17:30:11 localhost systemd: start request repeated too quickly for foo.service
Jul 10 17:30:11 localhost systemd: Failed to start Foo Service.
Jul 10 17:30:11 localhost systemd: Unit foo.socket entered failed state.
Jul 10 17:30:11 localhost systemd: Unit foo.service entered failed state.
Jul 10 17:30:11 localhost systemd: foo.service failed.
Any suggestions on how to make systemd run the script once and then wait for the next message before running it again?
For testing I'm just running:
echo "Hello" | nc 127.0.0.1 7780
Updated Working Configuration
/etc/systemd/[email protected]
Note the @.
[Unit]
Description=Foo Service
After=network.target systemfoo.socket
Requires=systemfoo.socket
[Service]
Type=oneshot
ExecStart=/bin/bash /opt/foo/foo.sh
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
/etc/systemd/system/systemfoo.socket
[Unit]
Description=Foo Socket
[email protected]
[Socket]
ListenStream=127.0.0.1:7780
Accept=Yes
[Install]
WantedBy=sockets.target
/opt/foo/foo.sh
#!/bin/bash
# Remote address and port are provided by systemd via the environment
logger -t FOO "Connection received: $REMOTE_ADDR $REMOTE_PORT"
# Do some action
Required to load the configuration into systemd:
systemctl enable systemfoo.socket
systemctl start systemfoo.socket
A:
You should add Accept=yes to the socket unit, indicating that systemd should start separate service instances for each connection, and then turn the service unit into a template ([email protected]), to be instantiated separately for each connection. Then the remote address and port should be available in the REMOTE_ADDR and REMOTE_PORT environment variables, according to the systemd.socket(5) manpage.
Q:
Java classes in Android app
I want to use Java classes in my Android project. Is that possible and if it is possible, what do I have to do to use the following classes?
BufferedImage
ImageIO
ImageOutputStream
FileImageOutputStream
GifSequenceWriter
A:
Android doesn't have an implementation of AWT. The closest parallel is android.graphics.
Q:
How do we create a block (reusable set of functions) in Keras?
I am using Keras, actually tensorflow.keras to be specific and want to know if it is possible to create reusable blocks of inbuilt Keras layers. For example I would like to repeatedly use the following block at different times in my model.
conv1a = Conv3D(filters=32, strides=(1, 1, 1), kernel_size=(3, 3, 3), padding='same')(inputs)
bn1a = BatchNormalization()(conv1a)
relu1a = ReLU()(bn1a)
conv1b = Conv3D(filters=32, strides=(1, 1, 1), kernel_size=(3, 3, 3), padding='same')(relu1a)
bn1b = BatchNormalization()(conv1b)
relu1b = ReLU()(bn1b)
I have read about creating custom layers in Keras but I did not find the documentation to be clear enough.
Any help would be appreciated.
A:
You could simply put it inside a function then use like:
relu1a = my_block(inputs)
relu1b = my_block(relu1a)
Also consider adding something such as with K.name_scope('MyBlock'): in the beginning of your function, so that things get wrapped in the graph as well.
So you'd have something like:
def my_block(inputs, block_name='MyBlock'):
with K.name_scope(block_name):
conv = Conv3D(filters=32, strides=(1, 1, 1), kernel_size=(3, 3, 3), padding='same')(inputs)
bn = BatchNormalization()(conv)
relu = ReLU()(bn)
return relu
If you specify block names:
relu1a = my_block(inputs, 'Block1')
relu1b = my_block(relu1a, 'Block2')
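For completeness, here is a sketch of the block wired into a small functional-API model (the input shape and the two block names are arbitrary, just for illustration):
from tensorflow.keras import Input, Model
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Conv3D, BatchNormalization, ReLU

def my_block(inputs, block_name='MyBlock'):
    with K.name_scope(block_name):
        conv = Conv3D(filters=32, strides=(1, 1, 1), kernel_size=(3, 3, 3), padding='same')(inputs)
        bn = BatchNormalization()(conv)
        relu = ReLU()(bn)
    return relu

inputs = Input(shape=(16, 16, 16, 1))   # arbitrary 3D input shape for illustration
x = my_block(inputs, 'Block1')
x = my_block(x, 'Block2')
model = Model(inputs, x)
model.summary()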
Q:
Automatically change the version of only the assemblies which have code changes in .NET
I am creating a patch for my application to deploy small changes to a customer. My application contains 100 C# projects. I made code changes in class libraries A, B and C, and library D references A, B and C. Is there a way to have the build automatically change the version of only the libraries that have changes (A, B, C and D)? I can change the versions manually, but I need an automated way.
A:
Is there a way to have the build automatically change the version of only
the libraries that have changes (A, B, C and D)? I can change the versions
manually, but I need an automated way.
Actually, a Directory.Build.targets file can probably do this. When you create a file called Directory.Build.targets, MSBuild finds it and applies it to all projects in that folder and its subfolders. See this document about its search scope. Directory.Build.targets works much like Directory.Build.props, except that Directory.Build.targets is imported late in the build while Directory.Build.props is imported before the build starts.
In your situation you want to override the Version property, so you should use Directory.Build.targets.
Then, when you build a project, its content is automatically brought into the scope of that project.
Solution
1) Create a file called Directory.Build.targets under your solution folder or an upper-level directory of the 100 projects.
2) Add this to the Directory.Build.targets file:
<Target Name="change_property" AfterTargets="CoreCompile">
<PropertyGroup>
<Version>xxxxx</Version>
</PropertyGroup>
</Target>
3) The version of every changed project is then set to the value that you specify.
This makes the same property change for every project that gets rebuilt.
Note that it has a flaw
If you also modify unrelated projects (code changes, adding a new item, a picture or anything else), this approach changes their version too; that is a side effect of hooking into the CoreCompile target, which runs for every project that is rebuilt.
Suggestion
Therefore, take care not to touch other projects, or select all projects (Alt+A) --> Unload Project, then reload only the required projects and modify them, so that no unrelated project is rebuilt.
Or put the required projects into a new folder, for example Modified Projects, under the solution folder (the one containing the xxx.sln file) and place Directory.Build.targets inside that Modified Projects folder.
Use approaches like these to avoid bumping the version of unrelated projects.
Update 1
1) If your projects are under the same solution folder, you can create a file called Directory.Build.targets there.
It applies to all projects in that folder and its subdirectories.
Add the sample code below to this file:
<Project>
<Target Name="change_property" AfterTargets="CoreCompile">
<PropertyGroup>
<Version>xxxxx</Version>
</PropertyGroup>
</Target>
</Project>
Then modify your NuGet projects; when you have finished the related projects, build the solution (or use MSBuild.exe to build xxx.sln) and this file is executed automatically, changing the version of the modified NuGet projects.
Hope this helps; any further feedback is welcome.
Update 2
Please try the following in Directory.Build.targets; it works across the whole solution and applies to every version number.
<Project>
<Target Name="change_property" AfterTargets="CoreCompile">
<PropertyGroup>
<Version>xxxxx</Version>
</PropertyGroup>
</Target>
</Project>
<PropertyGroup>
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
</PropertyGroup>
Also make sure that <GeneratePackageOnBuild>true</GeneratePackageOnBuild> is set in Directory.Build.targets, otherwise it will fail. So please remove this node from the xxx.csproj file and add <GeneratePackageOnBuild>true</GeneratePackageOnBuild> in Directory.Build.targets instead.
Q:
Sharing code across two separate apps in React Native
I am currently developing two iOS applications: one for employees and one for managers.
I am using a different target for each app so that I can re-use code (like the model for a user, as employees & managers are all considered a user and only have different permission levels).
The two applications were started fairly recently, so I am considering re-writing them from scratch in React Native (because I will be building the exact same apps for Android when the iOS apps are finished).
My question is: How easy is it to share code across two separate applications (with different functionalities) using React Native? Are there features (or libraries) that are meant for this purpose?
By the end of the project, I will have four separate apps: two for Android and two for iOS.
I have seen some answers to this question, but they mainly seem to apply to a single app having a 'Production', 'Development', and 'Staging' target, with each app being basically the same.
A:
So I think what you actually may want here is 3 projects. 1 app for employees and 1 app for managers obviously. But your 3rd project would essentially be a common one for components that will be shared between the 2 apps. Your user class, for example, would go into this project. Then when you are developing your app projects, you can import the common one as a dependency (via package.json). This would add your reusable classes to the node_modules in your app projects and you can access those by importing them in your app specific classes.
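A minimal sketch of that shared project (all names here are made up):
// my-app-common/index.js - the shared project
export class User {
  constructor(id, name, role) {
    this.id = id;
    this.name = name;
    this.role = role; // e.g. 'employee' or 'manager'
  }

  isManager() {
    return this.role === 'manager';
  }
}
Each app then lists the common package in its own package.json (for example "my-app-common": "file:../my-app-common", or a git/npm reference) and imports from it with import { User } from 'my-app-common';.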
Of course, you could also take Matt's suggestion and build a single app with everything that would just show different routes for employee or managers based on permissions. Just depends on how you want to approach it.
Q:
how to use setInterval in vue component
I define a timer in each my-progress component to update the value shown in the view, but while the console shows the value changing, the view is never updated. What do I need to do inside the timer so that the view's value changes?
Vue.component('my-progress', {
template: '\
<div class="progress progress-bar-vertical" data-toggle="tooltip" data-placement="top">\
<div class="progress-bar" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100" :style="{height: pgvalue}">{{pgvalue}}\
</div>\
</div>\
',
data : function(){
return {
pgvalue : '50%',
intervalid1:'',
}
},
computed:{
changes : {
get : function(){
return this.pgvalue;
},
set : function(v){
this.pgvalue = v;
}
}
},
mounted : function(){
this.todo()
},
beforeDestroy () {
clearInterval(this.intervalid1)
},
methods : {
todo : function(){
this.intervalid1 = setInterval(function(){
this.changes = ((Math.random() * 100).toFixed(2))+'%';
console.log (this.changes);
}, 3000);
}
},
})
here is the link:
jsbin.com/safolom
A:
Inside the setInterval callback, this is not pointing to the Vue instance. Try
todo: function(){
this.intervalid1 = setInterval(function(){
this.changes = ((Math.random() * 100).toFixed(2))+'%';
console.log (this.changes);
}.bind(this), 3000);
}
or
todo: function(){
const self = this;
this.intervalid1 = setInterval(function(){
self.changes = ((Math.random() * 100).toFixed(2))+'%';
console.log (self.changes);
}, 3000);
}
or
todo: function(){
this.intervalid1 = setInterval(() => {
this.changes = ((Math.random() * 100).toFixed(2))+'%';
console.log (this.changes);
}, 3000);
}
See How to access the correct this inside a callback?
A:
check this example:
Vue.component('my-progress-bar',{
template:
`<div class="progress">
<div
class="progress-bar"
role="progressbar"
:style="'width: ' + percent+'%;'"
:aria-valuenow="percent"
aria-valuemin="0"
aria-valuemax="100">
{{ percent }}%
</div>
</div>`,
props: { percent: {default: 0} }
});
new Vue({
el: '#app',
data: {p: 50},
created: function() {
var self = this;
setInterval(function() {
if (self.p<100) {
self.p++;
}
}, 100);
}
});
<script src="https://cdn.jsdelivr.net/npm/vue"></script>
<link href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" rel="stylesheet">
<div id='app'>
<my-progress-bar :percent.sync='p'>
</my-progress-bar>
<hr>
<button @click='p=0' class='btn btn-danger bt-lg btn-block'>
Reset Bar Progress
</button>
</div>
Q:
How can I change the baseline value for the negative serial chart?
http://www.amcharts.com/demos/date-based-line-chart/
So I basically have this working. I would like to change the baseline of this graph (0) so that it starts with 22.21 instead. Anything below 22.21 goes into the negative area of the chart, and anything above goes into the positive area of the chart.
I can't seem to find out how to change that baseline, or starting index, or base Axis in the documentation.
The context is this:
I have a suggested price (22.21) - if the price for the item is above 22.21, then it should be in the positive range. If it's lower than 22.21, then it should be in the negative range.
Thank you for any help.
A:
You should add: "negativeBase":22.21 to graphs config.
Q:
How to Count enabled CheckBoxList items checked using Jquery
I have this simple jQuery procedure running to make sure a user checks at least 1 checkbox.
var AllAppsCheck = $('#<%= FillInfo2.FindControl("AllAppsCheck").ClientID %> input:checked').length;
if (AllAppsCheck == 0 ) {
alert("Please select atleast 1 role!");
return false;
}
I would like to add to this code to count only the items in the checkboxlist which are enabled and to disregard items which are disabled.
A:
if($('input[type="checkbox"]:enabled:checked').length) {
// at least one checked
} else {
// none checked
}
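Applied to the selector from your question, that might look like the following sketch (keeping the FindControl scoping; adjust it to your markup):
var AllAppsCheck = $('#<%= FillInfo2.FindControl("AllAppsCheck").ClientID %> input[type="checkbox"]:enabled:checked').length;
if (AllAppsCheck == 0) {
    alert("Please select at least 1 role!");
    return false;
}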
Q:
finding highest cumulative percent change
I have a dataframe where daily sales are recorded. I need to know the fastest-growing product. For example, in this case the ice-cream sales increase during 22-23 Jan was the highest across all products.
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
myst="""
20-01-17 pizza 90
21-01-17 pizza 120
22-01-17 pizza 239
23-01-17 pizza 200
20-01-17 fried-rice 100
21-01-17 fried-rice 120
22-01-17 fried-rice 110
23-01-17 fried-rice 190
20-01-17 ice-cream 8
21-01-17 ice-cream 23
22-01-17 ice-cream 21
23-01-17 ice-cream 100
"""
u_cols=['date', 'product', 'sales']
And this is how I created the dataframe:
myf = StringIO(myst)
import pandas as pd
df = pd.read_csv(StringIO(myst), sep='\t', names = u_cols)
It will look like this in a spreadsheet. How will pandas handle it?
A:
I think you need pct_change:
df['new'] = df.groupby('product')['sales'].pct_change().mul(100)
print (df)
date product sales new
0 20-01-17 pizza 90 NaN
1 21-01-17 pizza 120 33.333333
2 22-01-17 pizza 239 99.166667
3 23-01-17 pizza 200 -16.317992
4 20-01-17 fried-rice 100 NaN
5 21-01-17 fried-rice 120 20.000000
6 22-01-17 fried-rice 110 -8.333333
7 23-01-17 fried-rice 190 72.727273
8 20-01-17 ice-cream 8 NaN
9 21-01-17 ice-cream 23 187.500000
10 22-01-17 ice-cream 21 -8.695652
11 23-01-17 ice-cream 100 376.190476
a = df.groupby('product')['sales'].pct_change().idxmax()
print (a)
11
b = 'sale: {}, during: from {} to {}'.format(df.loc[a, 'product'],
df.loc[a-1, 'date'],
df.loc[a, 'date'])
print (b)
sale: ice-cream, during: from 22-01-17 to 23-01-17
Q:
Greater than and less than symbol in regular expressions
I am new to regular expressions; I am tired from studying all of the regex characters, and I need to know the purpose of the greater-than symbol in a regex, for example:
preg_match('/(?<=<).*?(?=>)/', 'sadfas<[email protected]>', $email);
Please tell me the use of the greater-than and less-than symbols in a regex.
Any help would be greatly appreciated. :)
A:
The greater than symbol simply matches the literal > at the end of your target string.
The less than symbol is not so simple. First let's review the lookaround syntax:
The pattern (?<={pattern}) is a positive lookbehind assertion, it tests whether the currently matched string is preceded by a string matching {pattern}.
The pattern (?={pattern}) is a positive lookahead assertion, it tests whether the currently matched string is followed by a string matching {pattern}.
So breaking down your expression
(?<=<) assert that the currently matched string is preceded by a literal <
.*? match anything zero or more times, lazily
(?=>) assert than the currently matched string is followed by a literal >
Putting it all together the pattern will extract [email protected] from the input string you have given it.
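For completeness, preg_match puts the full match in the first element of the output array, so a quick check might look like this (just a sketch):
<?php
preg_match('/(?<=<).*?(?=>)/', 'sadfas<[email protected]>', $email);
echo $email[0]; // prints: [email protected]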
Q:
Javascript on Individual Tumblr Post
I am aware that you can edit the HTML/AngularJS that renders the all-posts homepage of a Tumblr blog. But is there any way to add a custom <script>...</script> to individual posts? I want to do some JavaScript work on a post-by-post basis, and can't seem to find where that code can be edited (or whether it even can be)...
A:
When writing a post, on the menu bar where you can click bold, italic, strikethrough, etc, there is a button that says <html>. If you click this, it will bring up the HTML for the post. Then, all you do is have to add a script tag with the javascript code you want. For example:
<script>
alert("Hello World!");
</script>
NOTE: I believe the JavaScript will only run when users navigate to your actual page, not when the post is in their feed, unless the box shown to represent extra content is clicked. This is to prevent unnecessary content from loading and to keep feed load times low.
EDIT: If you want the same JavaScript to apply to all posts you write, I'd put it into the theme's HTML. To do so, go to Customize -> Edit HTML (of the theme) -> put a <script> at the bottom of the body (generally speaking) or add to an existing <script>. Each post has the class .post, so use that selector to obtain each one (see the sketch below). To find out the classes of individual post types you can look at the Tumblr API or use inspect element to find out for yourself.
NOTE: A script added to the theme will not affect posts in users' feeds. It is a change to the theme, so it will not affect individual posts when they are not viewed directly on the site.
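As a rough sketch of the theme-level approach - the .post selector is an assumption, so check your theme's markup for the actual class name:
<script>
  var posts = document.querySelectorAll('.post');
  for (var i = 0; i < posts.length; i++) {
    // per-post javascript goes here, e.g. posts[i].style.border = '1px solid #ccc';
  }
</script>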
Q:
I get the error "no main manifest attribute", when compiling and running a simple program in Kotlin
Here's the program
data class Resultado (val resultado: Int, val tesoro: Boolean)
fun main() {
val busca = fun(intento: Int): Resultado
{
val cosas = listOf( 3, 33, 333, 42, 1, 1, 111 )
if ( intento == 4 ) {
return Resultado( 42, true )
} else {
return Resultado( cosas[intento], false )
}
}
val (valor1, premio1) = busca( 2 )
println( "2 devuelve " + valor1 + " y tiene premio " + premio1 )
val (valor2, premio2) = busca( 4 )
println( "4 devuelve " + valor2 + " y tiene premio " + premio2 )
}
It compiles correctly either directly or with
kotlinc code/tesoro.kt -include-runtime -d tesoro.jar
Leaving all kind of files in the directory:
ls *.class *.jar
Resultado.class tesoro.jar TesoroKt.class TesoroKt$main$busca$1.class
However, it does not run
java -jar tesoro.jar
no hay ningún atributo de manifiesto principal en tesoro.jar
Which means pretty much as said above, "No main manifest attribute". This is
java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
Kotlin version is 1.1.3-2
Is the program missing something?
A:
Be sure to use the latest stable compiler. With kotlinc 1.3.31 it works correctly, including the parameterless fun main(), which older compilers such as 1.1.x do not support.
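If you want to verify whether the jar you built actually carries an entry point, inspecting its manifest is a quick check (the class name below is an assumption based on the file name tesoro.kt):
unzip -p tesoro.jar META-INF/MANIFEST.MF
# With -include-runtime and a recent compiler the output should contain something like:
#   Main-Class: TesoroKt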
Q:
Is it possible to automate use of passphrase-encrypted private keys?
Suppose I have a public/private keypair, which I would like to use to serve HTTPS from a webserver. The part I'm worried about is this: the private key is encrypted with a passphrase, so that if an attacker gains access to my webserver he doesn't get the private key. But I want some automated system to launch the webserver: say, monit to restart the server if it crashes, or puppet to deploy it to many different machines. Is this possible? It seems to me that I have three choices:
Decrypt the private key and store it in plain text somewhere on the machine, in an area that I hope an attacker will be unable to compromise.
Store and use the encrypted key, but also store the passphrase for use by automated systems. This doesn't seem fundamentally any more secure than (1).
Store only the encrypted key, and require manual intervention anytime a server needs to be started.
Are there any options I'm missing? I can't imagine large-scale websites like Wikipedia or Google require manual intervention to start a server, so I'm guessing they must be storing the cleartext keys; but that seems like it must be a bad security practice.
A:
A server restart is a serious occurrence, and should never happen on its own. Ideally there should be an admin logged on to trigger the restart - or at least to restart the server after he/she has diagnosed why it went down in the first place, if it was an unscheduled down.
So the admin may supply the passphrase manually.
As a weaker alternative, the SSLPassPhraseDialog may be used to supply the passphrase.
To strengthen the scenario, the script itself has to be secured (otherwise it's no better than a cleartext passphrase, or no passphrase at all), so it would have to request the passphrase from a second server that has some means to ascertain whether the request is legitimate or not.
For example, if there is a diagnosed down on the server, then and only then a SSH login for the servermaint user from that one IP is granted and the correct password echoed:
ssh -i identity_file servermaint@passphraseserver "getpassphrase"
And the identity_file only identifies that one server with a known IP address.
An attacker gaining access to the Web server would find no passphrase, and would be unable to login to passphraseserver (actually he would, but such an attempt would trigger an alert and log him instantly off instead of supplying the passphrase - and would also mark the server as compromised).
An attacker would then have not only to break into the server, but also to cause a webserver shutdown, and remain in control of the webserver while shutdown diagnostics and forensics are carried out - an unlikely event at best: with that kind of control, he might be able to exploit the Apache process and get a copy of the passphrase when the real sysadmin types it in.
Of course, in a simpler and way more common scenario ("I just want my server to go back up if it goes down, as fast as possible - don't care why it went down, it will always happen every now and then"), it would be easier to leave out the passphrase altogether.
A:
Your assessment that 2 isn't any more secure than 1 is correct. It's another step for an attacker to untangle, but not that challenging.
Option 3 is better than 1 and 2 but a smart attacker with root-level privileges can probably pry your certificate out of memory.
Bottom line: your server needs to have your certificate in unencrypted form to serve HTTPS traffic and someone with root-level access is going to be able to get to it. It's just a matter of time and effort.
Your best option is to protect your certificate on the host with file permissions, run your server with sub-root privs (having it drop from root to nobody or http after it's bound to port 80 and 443 and loaded the cert) and monitor the host for unusual behavior.
If the worst should happen and your cert is compromised, that's what certificate revocation lists are for.
[Edit] One additional thing... the protections you put around your cert should depend on the level of risk you're willing to accept.
Q:
Rails 3 accessing has_many through join model in another controller
I'm doing an online judge application, so I have a User model, a Problem model and a Solution model to make the many to many relation. In that Solution model I have an extra column called "state" where I plan to store the state of a problem for a certain user: solved, wrong anwser, not solved.
I'm trying to modify the index action in my problems controller to render the state of the problem in the problem list (so a user can see if he has solved a problem or not, like I said before). Nevertheless I'm having an "uninitialized constant Admin::ProblemsController::Solution" error when I access the view.
I'm really new to RoR and my experience so far has been really harsh, so I'll appreciate any leads. Here is the code in the controller and the view:
problems_controller.rb
def index
@problems = Problem.all
if current_user
@solutions = Solution.includes(:problem).where(:user_id => current_user.id)
end
respond_to do |format|
format.html # index.html.erb
format.json { render json: @problems }
end
end
views/problems/index.html.erb
<% @problems.each do |problem| %>
<tr>
<td><%= problem.name %></td>
<td><%= problem.code %></td>
<td><%= problem.description %></td>
<% if current_user %>
<%= for solution in @solutions do %>
<% if solution %>
<td><%= solution.state%></td>
<% else %>
<td>Not Solved</td>
<% end %>
<% end %>
<% end %>
<td><%= link_to 'Show', problem %></td>
<% if current_user && current_user.is_admin? %>
<td><%= link_to 'Edit', edit_problem_path(problem) %></td>
<td><%= link_to 'Delete', problem, method: :delete, data: { confirm: 'Are you sure?' } %></td>
<% end %>
</tr>
<% end %>
I'm not sure if that's the best way I should be accessing the Solutions table or if I should be doing that in another controller (in the users controllers? in a solutions controller file perhaps?).
I want to be clear of how to use that "Solutions" join table. I had a has_and_belongs_to_many before and changed it because of the extra column. I've read a lot about many to many relationships, but I can't understand it for this case =(
A:
Just need to use:
problem.solution.state
Unless a problem may have many solutions, then it would need to be something like:
problem.solutions.first.state
However, this will just give the state of the first one, so I'd define a method in Problem which calculates a status (e.g. if any of the solutions solves it, then the problem is solved) - see the sketch at the end of this answer.
For 1 problem, many solutions for a given user.
In Solution.rb
scope :for_user, lambda { |user_id| where(:user_id => user_id) }
Then we can call:
problem.solutions.for_user(current_user.id).first.state
It might look a bit long but it's highly flexible.
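Putting the scope and the suggested status method together, a sketch could look like this (the method name and the 'Not Solved' default are assumptions based on your view code, not a definitive implementation):
# app/models/problem.rb
class Problem < ActiveRecord::Base
  has_many :solutions
  has_many :users, :through => :solutions

  # Returns the state of this problem for the given user, or a default.
  def state_for(user)
    solution = solutions.for_user(user.id).first
    solution ? solution.state : 'Not Solved'
  end
end
The view then becomes a single call per row: <%= problem.state_for(current_user) %>.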
Q:
Get timestamp from date & time in UTC
I have a table that has a date and a time with time zone.
date_time_table
+------------+--------------+
| date_t | time_tz |
+------------+--------------+
| 2016-05-13 | 23:00:00 -02 |
| 2016-05-14 | 13:00:00 +06 |
+------------+--------------+
After that I run SQL to query the time in UTC:
SELECT time_tz AT TIME ZONE 'UTC' FROM date_time_table
The result is:
+--------------+
| time_tz |
+--------------+
| 01:00:00 +00 |
| 07:00:00 +00 |
+--------------+
Can I write SQL that combines date_t with time_tz so that the date is calculated as well?
The result that I expect is:
+--------------------------+
| date_time_tz             |
+--------------------------+
| 2016-05-14 01:00:00 +00  |
| 2016-05-14 07:00:00 +00  |
+--------------------------+
I try with :
SELECT concat(date_t , ' ' ,time_tz) AT TIME ZONE 'UTC' FROM date_time_table
But it does not work.
A:
Try casting like this:
SELECT concat(date_t, ' ', time_tz)::timestamptz AT TIME ZONE 'UTC' FROM date_time_table
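Alternatively - and this is an assumption about your schema rather than part of the original answer - PostgreSQL can add a date and a time with time zone directly, which avoids the text round-trip:
-- date + timetz yields a timestamptz; AT TIME ZONE 'UTC' then renders it as a UTC timestamp
SELECT (date_t + time_tz) AT TIME ZONE 'UTC' AS date_time_utc
FROM date_time_table;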
Q:
Are there any equivalents of JMock in Flex?
Are there any equivalents of JMock in Flex? There are FlexMock libraries, but they are for Python and Ruby.
A:
There are several. My favorite is Mockito, but you can take your pick:
Mockito
Mockolate
asMock
Q:
Limit the count and get the rest as "Others"
I'm counting something for display in a pie chart. I want to display only the 6 highest counts with their actual investmentType, and the rest grouped under an investmentType of "Others".
SELECT i.investment_type AS investmentType,
COUNT(*) AS investmentCount
FROM investment i,
vertical v
WHERE v.vertical_name = i.investment_type
AND v.type = 'STR'
AND i.status_funding = 'o'
GROUP
BY i.investment_type
ORDER
BY investmentCount desc
The above query gives me a result
By adding limit 6 to the query I get
What I need is one more row with investmentType "Others" and investmentCount "7".
A:
You might want to have a try at something like:
(SELECT i.investment_type as investmentType, COUNT(*) as investmentCount FROM investment i, vertical v
WHERE v.vertical_name = i.investment_type AND v.type = 'STR' AND i.status_funding = 'o'
group by i.investment_type order by investmentCount desc
limit 6)
UNION
SELECT 'Others' as investmentType, SUM(othersInvestmentCount) as investmentCount FROM (
SELECT COUNT(*) as othersInvestmentCount FROM investment i, vertical v
WHERE v.vertical_name = i.investment_type AND v.type = 'STR' AND i.status_funding = 'o'
group by i.investment_type order by othersInvestmentCount desc
limit 6, 4294967296
) AS others
I did not test this query, you can edit it if you find syntax problems. Three actual queries involved, but it should not be crazy slow (and if faster is not needed, then no need to try faster).
I am assuming that you have less than 2^32 records in your database, which seems like a very reasonable assumption for a MySQL database (but just replace it by 2^64 if you feel insecure).
Q:
iOS Stretch Animation - Where should I start looking?
How can I achieve a stretch-animation look?
Where do I need to start looking? CABasicAnimation does not seem to do the trick. Something like this is the desired effect:
http://inspirationmobile.tumblr.com/post/112168531484/sidebar-animation-by-jacub-antalik-ramotion-com
A:
There is a lot more going on than a simple animation in this view.
It uses UIKit Dynamics, but the clever part is that the final effect is composed of the individual effects of small invisible UIViews inside that view.
The border is a CAShapeLayer whose path is built from a combination of Bézier paths that interpolate those small views.
Each drawing cycle, the CAShapeLayer path is refreshed based on the positions of those views.
You can find more about this effect here.
Q:
Can I simulate or somehow create a WPF UI thread within a Console Application or Class Library?
I have to use an external API that for whatever reason only works as long as it is initialized and run on a WPF app's UI thread. If I spin up a task/thread that does not use the UI SynchronizationContext, even within a WPF test app, the API does not work.
I need to make it work within a Console App, Windows Service, class library...but not WPF application.
A:
This works for me in a console application.
I'm not sure whether Dispatcher is needed for your case or the code simply requires STA apartment state.
class Program
{
static void Main(string[] args)
{
var thread = new Thread(() =>
{
Dispatcher.CurrentDispatcher.BeginInvoke(new Action(() =>
{
Console.WriteLine("Hello world from Dispatcher");
}));
Dispatcher.Run();
});
thread.SetApartmentState(ApartmentState.STA);
thread.Start();
thread.Join();
}
}
You just need to add a reference to WindowsBase.dll for Dispatcher.
Q:
Unable to debug FireBreath plugin APIs using Visual Studio
I have tried the following steps to debug my FireBreath plugin, but I am unable to set a working breakpoint in the plugin APIs.
Steps:
1) Launch a “sample page” on Firefox browser which loads my FB plugin.
2) Go to Debug -> "Attach to Process" in Visual studio 2008.
3) Attach the FireFox.exe process which has the “sample page” title.
4) Unable to put a breakpoint in the plugin API as my plugin DLL symbols are not loaded in Visual Studio.
I am not sure why my Plugin DLL symbols are not loaded. Please help me out.
NOTE: FB Plugin is built in Debug mode.
Thank you, Sande
A:
Your problem is that Firefox runs plugins in a separate process; I think it's called something like plugin-container.exe.
More info on FireBreath's debugging plugins page
Q:
Benefits of random search over other optimization methods in Neural Network hyperparameter tuning
When reading about methods of hyperparameter tuning for neural networks, I have mainly come across grid search and random search in textbooks and articles online. I was wondering why other optimization methods such as Simulated Annealing or Genetic Algorithms are not used. Are there any benefits to random search over the others?
A:
Random search is trivial to implement; that's why it is so popular. Moreover, there are mixed results when comparing the efficiency of different hyperparameter tuning algorithms: in many cases more advanced approaches like Bayesian optimization (based on Gaussian processes or the Tree Parzen Estimator) work better, but there are also results showing that random search gives comparable results, or is not much worse, especially if you double the number of iterations (for discussion see this blog entry by Kevin Jamieson and Ben Recht and this PyData Berlin 2018 talk by Thorben Jensen). Long story short, if you want something that is "not bad" for hyperparameter tuning, but you don't want to bother too much with technicalities, then random search is one of the possible choices.
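To illustrate how trivial it is: a bare-bones random search is just a sampling loop. The search space and the train_and_evaluate stub below are placeholders, not a reference implementation:
import random

def train_and_evaluate(params):
    # Placeholder: train your network with these hyperparameters and return a validation score.
    # A random number stands in here so the sketch runs on its own.
    return random.random()

search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),
    "hidden_units": lambda: random.choice([32, 64, 128, 256]),
    "dropout": lambda: random.uniform(0.0, 0.5),
}

best_score, best_params = float("-inf"), None
for _ in range(60):  # the number of trials is essentially the only knob
    params = {name: sample() for name, sample in search_space.items()}
    score = train_and_evaluate(params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)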
Q:
Angular 2 ngrx : what is the proper way to immutably update an array of objects in reducers?
In my Angular2 application, I use ngrx to manage states, so when I receive data from the server, I dispatch an action to a reducer.
MyExampleReducer.ts :
export const Reviews: ActionReducer<any> = (state: any[] = [], action: Action) => {
switch (action.type) {
case GET_REVIEWS:
return action.payload;
case ADD_REVIEW :
return [...state, {review : action.payload, replays : []}];
case UPDATE_REVIEW:
return '' // return what ?
case DELETE_REVIEW:
return state.filter(item => {
return item.id !== action.payload.id;
});
default:
return state;
}
};
The problem is: when I have to update an item in my Reviews array, what is the best way to do it the Redux way?
A:
Assuming that your state is just an array full of reviews you could do the following:
export const Reviews: ActionReducer<any> = (state: any[] = [], action: Action) => {
switch (action.type) {
case GET_REVIEWS:
return action.payload;
case ADD_REVIEW :
return [...state, {review : action.payload, replays : []}];
case UPDATE_REVIEW:
// get an array of all ids and find the index of the required review
let index = state.map(review => review.id)
.indexOf(action.payload.id);
return [
...state.slice(0, index),
Object.assign({}, state[index], action.payload),
...state.slice(index + 1)
]
case DELETE_REVIEW:
return state.filter(item => {
return item.id !== action.payload.id;
});
default:
return state;
}
}
First you need to find the index of the review that should be updated. After that you can create a new array where you replace the object at the index's position.
A great resource for this kind of mutation is this video.
A:
You can use map to return an array that has the element that corresponds to the action updated:
export const Reviews: ActionReducer<any> = (state: any[] = [], action: Action) => {
switch (action.type) {
case ADD_REVIEW:
return [...state, { review: action.payload, replays: [] }];
case UPDATE_REVIEW:
return state.map(item => item.id === action.payload.id ? { review: action.payload, replays: [] } : item);
case DELETE_REVIEW:
return state.filter(item => item.id !== action.payload.id);
default:
return state;
}
}
Also, you can simplify the reviews reducer by using the review reducer to perform the ADD_REVIEW and UPDATE_REVIEW actions - the reviews reducer is then only concerned with managing the list of reviews and not the reviews themselves:
import { reviewReducer } from '...';
export const Reviews: ActionReducer<any> = (state: any[] = [], action: Action) => {
switch (action.type) {
case ADD_REVIEW:
return [...state, reviewReducer(undefined, action)];
case UPDATE_REVIEW:
return state.map(item => item.id === action.payload.id ? reviewReducer(item, action) : item);
case DELETE_REVIEW:
return state.filter(item => item.id !== action.payload.id);
default:
return state;
}
}