Q:
pop() is not working properly
Here is a C++ implementation of a stack. Ignore the extra code here.
#include<iostream>
using namespace std;
class mystack
{
private:
int top;
int size;
int * s;
public:
void initialize()
{
top=-1;
cin>>size;
s=new int[size];
}
~mystack(){delete [] s;}
void push()
{
int x;
if(top==size-1)
cout<<"stack overflow!"<<endl;
else
{
cout<<"Enter element to be pushed:";
cin>>x;
top++;
s[top]=x;
cout<<s[top]<<endl;
}
}
int pop()
{
int p=s[top];
if(top==-1)
return 0;
else
{
top--;
return p;
}
}
int maxsize()
{
return size;
}
int isempty()
{
if(top==-1)
return 0;
else
return 1;
}
void display()
{
int i,p=top;
cout<<s[0]<<endl;
for(i=0;i<=p;i++)
cout<<s[i]<<endl;
}
};
int main()
{
int n,i;
cout<<"Enter no. of stacks:";
cin>>n;
mystack * st=new mystack[n];
for(i=0;i<n;i++)
{
cout<<"Enter size of stack "<<i+1<<":";
st[i].initialize();
}
int c,s;
while(1)
{
cout<<"*****Operations*****"<<endl;
cout<<"1.Push 2.Pop 3.Maxsize 4.isempty 5.Display 6.Quit"<<endl;
cout<<"Enter your choice:";
cin>>c;
if(n>1)
{
cout<<"Operation on which stack:";
cin>>s;
}
else
s=1;
if(c==1)
st[s-1].push();
else if(c==2)
{
if(st[s-1].pop()==0)
cout<<"stack underflow!"<<endl;
else
cout<<st[s-1].pop()<<endl;
}
else if(c==3)
cout<<st[s-1].maxsize()<<endl;
else if(c==4)
{
if(st[s-1].isempty()==0)
cout<<"True"<<endl;
else
cout<<"False"<<endl;
}
else if(c==5)
st[s-1].display();
else if(c==6)
break;
else
{
cout<<"Wrong input!"<<endl;
continue;
}
}
return 0;
}
Here the pop operation gives the element at top-1 and I can't understand why. What should I do? When I use return s[top--] the same thing happens.
A:
Since you haven't gotten back to this, I am going to presume you've already found your logic error.
So here is the one error I found. There may be more, I quit looking ...
In the following code, how many times is pop() being called?
else if(c==2)
{
if(st[s-1].pop()==0)
cout<<"stack underflow!"<<endl;
else
cout<<st[s-1].pop()<<endl;
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
General Solution of Euler's Equation
Find the general solution to the Euler's Equation:
$$
x^2\frac{d^2y}{dx^2}+2x\frac{dy}{dx}-6y=0
$$
using change of independent variable given by transformation:
$$
x = e^z
$$
Any help would be greatly appreciated. Thanks :)
A:
Hint Use the following
$$x\frac{dy}{dx}=e^z\frac{dy}{dz}\cdot\frac{dz}{dx}
= e^z\frac{dy}{dz}\cdot\frac{1}{e^z}=\frac{dy}{dz}$$
$$x^2\frac{d^2y}{dx^2}=e^{2z}\left(\frac{d}{dx}\frac{dy}{dx}\right)=e^{2z}\frac{d}{dx}\left(e^{-z}\frac{dy}{dz}\right)$$
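Carrying the hint through (a sketch of the remaining steps, not part of the original answer): the first identity gives $x\frac{dy}{dx}=\frac{dy}{dz}$, and evaluating the second gives $x^2\frac{d^2y}{dx^2}=\frac{d^2y}{dz^2}-\frac{dy}{dz}$, so the equation becomes constant-coefficient in $z$:

```latex
\begin{align*}
\left(\frac{d^2y}{dz^2}-\frac{dy}{dz}\right)+2\frac{dy}{dz}-6y&=0
\quad\Longrightarrow\quad \frac{d^2y}{dz^2}+\frac{dy}{dz}-6y=0\\
r^2+r-6&=0
\quad\Longrightarrow\quad r=2,\ r=-3\\
y&=c_1e^{2z}+c_2e^{-3z}=c_1x^2+c_2x^{-3}
\end{align*}
```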
A:
Another approach:
Let's look for solutions of the form $y=x^k,$ so
$$\frac{dy}{dx}=kx^{k-1}\qquad\text{and}\qquad \frac{d^2y}{dx^2}=k(k-1)x^{k-2}$$
Then
\begin{align*}
x^2\frac{d^2y}{dx^2}+2x\frac{dy}{dx}-6y=0\quad&\iff\quad x^2k(k-1)x^{k-2}+2xkx^{k-1}-6x^k=0\\
&\iff\quad x^k(k^2+k-6)=0 \quad\text{for all }x>0\\
&\iff\quad k^2+k-6=0
\end{align*}
It follows that $k=2$ or $k=-3$, so $$y=c_1x^2+c_2x^{-3}\qquad \text{for }x>0.$$
Q:
How to run shell script commands manually
I'm running Ubuntu on my Samsung ARM Chromebook in chroot via crouton. I'm trying to run Cisco AnyConnect VPN in Ubuntu and ran into an issue. It installs but the daemon won't start. I found a description of the issue here: https://github.com/dnschneid/crouton/issues/15
So I found the shell script for AnyConnect in /etc/init.d but I'm not smart enough to figure out how to run these commands manually. I'm hoping someone can point me in the right direction.
Here are the contents of the vpnagentd_init file:
#!/bin/sh
#
# chkconfig: 345 85 25
# description: vpnagentd is used for managing the cisco vpn client datapath.
# processname: vpnagentd
# Source function library.
if [ -e "/etc/init.d/functions" ]; then
. /etc/init.d/functions
fi
RETVAL=0
start() {
# If TUN isn't supported by the kernel, try loading the module...
/sbin/lsmod | grep tun > /dev/null
if [ $? -ne 0 ]; then
/sbin/modprobe tun > /dev/null 2> /dev/null
if [ $? -ne 0 ]; then
# check for /dev/net/tun
[ -c "/dev/net/tun" ] || echo Warning: Unable to verify that the tun/tap driver is loaded. Contact your system administrator for assistance.
fi
fi
echo -n $"Starting up Cisco VPN daemon "
/opt/cisco/vpn/bin/vpnagentd
RETVAL=$?
echo
return $RETVAL
}
stop() {
echo -n $"Shutting down Cisco VPN daemon "
killall vpnagentd
RETVAL=$?
echo
return $RETVAL
}
dostatus() {
status vpnagentd
}
restart() {
stop
start
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
status)
dostatus
;;
*)
echo $"Usage: vpnagent {start|stop|restart|status}"
exit 1
esac
exit $RETVAL
A:
As an example, to run the 'start daemon' section, copy the contents of the file between "start() {" and "}" and put them in a text file, startScript for instance. Use chmod +x startScript to make the script executable, then use ./startScript to run it. The same can be done for the stop and status sections if desired.
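A concrete way to do that extraction with sed (a sketch: the demo below works on a throwaway copy so it is runnable anywhere; on the real system, point INIT at /etc/init.d/vpnagentd_init and run the resulting script with sudo):

```shell
#!/bin/sh
# Demo: pull the body of start() out of an init script into its own script.
INIT=/tmp/vpnagentd_demo   # stand-in for /etc/init.d/vpnagentd_init
cat > "$INIT" <<'EOF'
start() {
echo started
}
stop() {
echo stopped
}
EOF

# Print everything from the "start() {" line to the first "}" line,
# then drop those two delimiter lines, leaving only the function body.
sed -n '/^start() {/,/^}/p' "$INIT" | sed '1d;$d' > /tmp/startScript
chmod +x /tmp/startScript

sh /tmp/startScript    # runs the extracted commands
```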
Q:
How can I use HTMLPurifier inside a PHP class?
As the title states; how can I use the HTMLPurifier library inside my class?
I'm trying to get to grips with OOP and PHP classes for the first time and have successfully built a class that connects to my database using my database class and returns a blog article.
I would now like to parse the HTML markup for the blog article using HTMLPurifier but I would like to achieve this inside my blog class and I'm wondering how it can be achieved, as HTMLPurifier is a class.
My class so far:
namespace Blog\Reader;
use PDO;
use HTMLPurifier_Config; <--- trying to include it here
use \Database\Connect;
class BlogReader {
private static $instance = null;
private static $article = null;
private static $config = null;
private static $db = null;
private static function InitDB() {
if (self::$db) return;
try {
$connect = Connect::getInstance(self::$config['database']);
self::$db = $connect->getConnection();
} catch (Throwable $t) {}
}
private function __construct($config) {
self::$config = $config;
}
public static function getInstance($config) {
if (!self::$instance) {
self::$instance = new BlogReader($config);
}
return self::$instance;
}
public static function getArticle($id) {
self::InitDB();
try {
if (self::$db) {
$q = self::$db->prepare("
// sql
");
$q->bindValue(':id', (int) $id, PDO::PARAM_INT);
$q->execute();
self::$article = $q->fetchAll(PDO::FETCH_ASSOC);
//////////// <----- and trying to use it here
$HTMLPurifier_Config = HTMLPurifier_Config::createDefault();
$purifier = new HTMLPurifier($HTMLPurifier_Config);
///////////
} else {
throw new Exception("No database connection found.");
self::$article = null;
}
} catch (Throwable $t) {
self::$article = null;
}
return self::$article;
}
private function __clone() {}
private function __sleep() {}
private function __wakeup() {}
}
However, I get the following error log when trying anything like this:
Uncaught Error: Class 'HTMLPurifier_Config' not found in
....../php/classes/blog/reader/blogreader.class.php
And the line number of the error is on this line:
$HTMLPurifier_Config = HTMLPurifier_Config::createDefault();
My class directory structure:
[root]
[blog]
blog.php <--- using classes here
[php]
afs-autoload.php
[classes]
[blog]
[database]
[vendor]
[htmlpurifier-4.10.0]
[library]
HTMLPurifier.auto.php <--- this is what I used to `include` on blog.php to autoload HTMLPurifier_Config::createDefault() and new HTMLPurifier($purifier_config).
My Autoloader (afs-autoload.php) file:
define('CLASS_ROOT', dirname(__FILE__));
spl_autoload_register(function ($class) {
$file = CLASS_ROOT . '/classes/' . str_replace('\\', '/', strtolower($class)) . '.class.php';
if (file_exists($file)) {
require $file;
}
});
I literally started learning classes today, so I'm really baffled as to how I can achieve this, especially with the namespace system I used.
I hope somebody with better experience can guide me in the right direction.
A:
Rewritten answer:
1) Your auto loader is looking for <class>.class.php files; but your HTMLPurifier_Config is in a HTMLPurifier.auto.php file.
2) Still in your autoloader: str_replace('\\', ...) is actually correct as written — in single-quoted PHP strings a backslash must still be escaped by another backslash, so '\\' is the right way to write a single backslash (a lone '\' would be a parse error).
3) This excellent answer should help you learn when and how to use the use PHP keyword.
4) Your issue is not the scope of your use (I don't think you even need to use use). But is that your autoloader is looking for the wrong type of files. Try manually loading the class using require and seeing if it works properly.
Original Answer
namespace Blog\Reader;
use PDO;
use HTMLPurifier_Config;
What you're actually doing here is importing the name HTMLPurifier_Config for use inside the Blog\Reader namespace. Note that use statements are always resolved from the global namespace, so use HTMLPurifier_Config; is equivalent to:
namespace Blog\Reader;
use PDO;
use \HTMLPurifier_Config;
If HTMLPurifier_Config were defined inside a namespace of its own, say HTMLPurifier, you would instead need:
namespace Blog\Reader;
use PDO;
use \HTMLPurifier\HTMLPurifier_Config;
to reference the correct class. Since HTMLPurifier_Config lives in the global namespace, your use line is already correct, and the "class not found" error means the class file was simply never loaded — which points back at the autoloader.
Q:
Urban Airship device tokens remain active between installs
So I'm running into a peculiar problem that I have not been able to find much information on. Looking for any input or experience at all.
I have recorded the deviceToken of an existing app install using Urban Airship. Then deleting the app and reinstalling, I recorded the new device token as well. These tokens are different. From the UA test panel, I am able to send a test push to both of these tokens and the device receives 2 pushes, one for each token, even though the first token has since been uninstalled. But, in UA device lookup, both tokens are marked as active.
This was only caught after getting our push server running which triggers a push once every morning at most, based on a hosted file that determines the push contents and if one should happen. My development device is now getting up to 8 pushes at once from the server.
There are ways to unsubscribe or unregister for push notifications with Apple, UA, and the server, but I'm wondering on the best practices for this. There is no way to get the uninstall event either which would be the only time to unsubscribe. Is the best solution just to wait for UA to determine a token is inactive? I have found this list here for reasons a token could be inactive: http://docs.urbanairship.com/reference/troubleshooting/ios-push.html#inactive-device-token
But none seem to apply here, especially because some of these device tokens are nearly a month old and still sending to my test device. The app uses an Enterprise profile so this is happening in a production environment.
A:
Are you getting the same channel every time? Usually reinstalls will generate the same channel, which is tied to a single device token. Then when Apple generates a new token it will update the channel's token. You are probably better off contacting support directly. They will be able to help gather all the device info they need and look up registration and push records to figure out what's going on.
Q:
Using mPDF to create a PDF from a HTML form
I want to use mPDF to create my PDF files because I use Norwegian letters such as ÆØÅ. The information in the PDF file would mostly consist of text written by the user in an HTML form. But I have some problems.
When using this code:
$mpdf->WriteHTML('Text with ÆØÅ');
The PDF will show the special characters.
But when using this:
<?php
include('mpdf/mpdf.php');
$name = 'Name - <b>' . $_POST['name'] . '</b>';
$mpdf = new mPDF();
$mpdf->WriteHTML($name);
$mpdf->Output();
exit;
?>
The special characters will not show.
The HTML form looks like this:
<form action="hidden.php" method="POST">
<p>Name:</p>
<input type="text" name="name">
<input type="submit" value="Send"><input type="reset" value="Clear">
</form>
Why won't the special characters show with this method? And which method should I use?
A:
Since echoing the POST data back onto the website does not show the characters correctly either, this clearly isn't an issue with mPDF. When working with content that includes non-ASCII characters, special care has to be taken with the website's character encoding.
From the mpdf-documentation it can be seen that it supports UTF-8 encoding, so you might want to use that for your data. POST-data is received in the same encoding that is used by the website. So if the website is in latin-1, you will need to call utf8_encode() to convert the POST-data to unicode. If the website already uses UTF-8 you should be just fine.
If you don't set a specific encoding in the website header (which you should always do, to avoid this kind of trouble), the encoding might depend on several factors such as the operating system and configuration of the server, or the encoding of the original PHP source file, which in turn is influenced by your own OS configuration and choice of editor.
Q:
How to change button text colour in a Toolbar?
Is it possible to change the colour of Button text, where the button exists in a Toolbar?
I tried the following in app.xaml
<Style TargetType="{x:Type Toolbar}"> <!-- this changes the background colour -->
<Setter Property="Background" Value="AliceBlue"/> <!-- works -->
<Setter Property="Foreground" Value="Red"/> <!-- doesn't work -->
</Style>
and I've tried
<Style TargetType="{x:Type Button}"> <!-- this changes both colours -->
<Setter Property="Background" Value="AliceBlue"/> <!-- works -->
<Setter Property="Foreground" Value="Red"/> <!-- work -->
</Style>
The toolbar is defined as:
<ToolBarTray Background="White" Width="Auto">
<ToolBar UseLayoutRounding="True" >
<Button Content="Options" Name="btnOptionsSettings" Click="btnOptionsSettings_Click" ></Button>
<Button Content="Timer" Name="btnTimerSettings" Click="btnTimerSettings_Click" ></Button>
<Button Content="Blocks" Name="btnBlocks" Click="btnBlocks_Click" ></Button>
</ToolBar>
</ToolBarTray>
A:
You can do this with a style defined in an external resource dictionary.
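A sketch of what that can look like in practice (this is an assumption based on standard WPF resource scoping, not something the answer spells out): ToolBar re-styles its child buttons through the ToolBar.ButtonStyleKey resource key, which is why a plain implicit Button style never reaches them. Keying a style on ButtonStyleKey inside the ToolBar's resources targets exactly those buttons:

```xml
<ToolBar UseLayoutRounding="True">
    <ToolBar.Resources>
        <!-- ToolBar applies ButtonStyleKey to hosted buttons, so an
             ordinary implicit Button style is ignored inside it. -->
        <Style x:Key="{x:Static ToolBar.ButtonStyleKey}"
               TargetType="{x:Type Button}">
            <Setter Property="Foreground" Value="Red"/>
        </Style>
    </ToolBar.Resources>
    <Button Content="Options" Name="btnOptionsSettings" Click="btnOptionsSettings_Click"/>
</ToolBar>
```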
Q:
How can I track the version of Python in Raspberry PI O/S
I'm trying to keep both a Mac (Mojave - 2.7.16 by default) and a VM Raspberry Pi Desktop up to date with Python so that it matches the version on the Pi, but other than firing up a Pi and running
python3 --version
I'm a bit stuck to find the version released in the latest Buster releases.
I've looked in the release notes and checked on GITHUB but I must be missing something or my GoogleFu is weak today!
I'm not sure if this level of accuracy for the core Python programs is needed (I dabble at the mo) but I'm having issues with library versions so think I may as well go the whole hog and match the core as well.
Longer term I'm going to go back to Visual Studio Code and remote debugging but for now I'm deploying on Zeros and cannot run the remote debug due to ARM version limitations.
I do not want to get involved in something like BuildRoot if there is a simple way to find out...
I know another option is to freeze the deployment platform, but that feels like running Windows 7 when 10 is out... No comments on Catalina / Big Sur vs Mojave; I like a stable development platform.
A:
https://wiki.debian.org/Python
Debian Bullseye contains 2.7, 3.7, 3.8
Debian Buster contains Python 2.7, 3.7
Debian Stretch contains Python 2.7, 3.5
Debian Jessie contains Python 2.7, 3.4
Q:
Equivalence definition for convergence in probability
Let $(X_n)$ be a sequence of random variables in the probability space $(\Omega, \mathscr{F}, \mathbb{P}).$ Then $X_n \rightarrow X$ in probability if and only if $\mathbb{E} \min(|X_n - X|, 1) \rightarrow 0$ as $n \uparrow \infty.$
($\rightarrow)$ Let $Y_n = \min(|X_n - X|,1).$ Then $|Y_n| \leq 1$. So by the dominated convergence theorem, $$\lim \mathbb{E}Y_n = \mathbb{E} \lim Y_n.$$ Since $X_n \rightarrow X$ in probability, for almost every $x$,
$$Y_n(x) = |X_n-X|(x)$$ for $n$ large. So it should be like $$Y(x) = \lim |X_n - X|(x)$$ for almost every $x$. I feel that $\lim |X_n - X|$ should be $0$, but I cannot find a good reason to support this (I only know $X_n \rightarrow X$ in probability, not $X_n \rightarrow X$ in the usual sequence sense.)
Any help for this direction ?
A:
Note that convergence in probability does not give you that $|X_n - X|$ is eventually small for almost every $x$. It just guarantees that the set where $|X_n - X|$ is large has small (not zero!) probability.
Recall that $X_n \to X$ in probability means
$$ \forall \epsilon > 0: \def\P{\mathbf P}\def\E{\mathbf E}\P\bigl(|X_n - X| > \epsilon\bigr) \to 0 $$
Suppose this is true and we want to prove $\E(Y_n) \to 0$. Let $\epsilon \in(0, 1)$. Choose $N \in \mathbf N$ such that
$$ \P\bigl(|X_n - X| > \epsilon\bigr) < \epsilon, \qquad n \ge N.$$
We have
\begin{align*}
Y_n &= \min(1, |X_n - X|)\\
&\le \epsilon \chi_{\{|X_n - X|\le \epsilon\}} + 1\chi_{\{|X_n - X| > \epsilon\}}
\end{align*}
Taking the expected value, we have
\begin{align*}
\E Y_n &\le \epsilon \P(|X_n - X| \le \epsilon) + \P(|X_n - X| > \epsilon)\\
&\le \epsilon + \epsilon\\
&= 2\epsilon
\end{align*}
Hence $\E Y_n \to 0$.
For the other direction suppose $\E Y_n \to 0$, let $\epsilon \in (0,1)$. We have by Markov
\begin{align*}
\P(|X_n - X| > \epsilon) &= \P(\min\{1,|X_n - X|\}> \epsilon)\\
&= \P(Y_n > \epsilon)\\
&\le \frac 1\epsilon \E Y_n\\
&\to 0.
\end{align*}
Q:
Passing parameters to constructor of a service using Dependency Injection?
I have custom service class:
@Injectable()
export class CustomService {
constructor(num: number) {
}
}
This class is injected in constructor of component like this:
constructor(private cs: CustomService) {
}
But how to pass parameter num to service in constructor described above?
Something like that:
constructor(private cs: CustomService(1)) {
}
I know that as a solution I can use the Factory pattern, but is that the only way to do it?
A:
If CustomService instances should not be injector singletons, it is:
providers: [{ provide: CustomService, useValue: CustomService }]
...
private cs;
constructor(@Inject(CustomService) private CustomService: typeof CustomService) {
this.cs = new CustomService(1);
}
If CustomService is supposed to be memoized to return singletons for respective parameters, instances should be retrieved through an additional cache service:
class CustomServiceStorage {
private storage = new Map();
constructor(@Inject(CustomService) private CustomService: typeof CustomService) {}
get(num) {
if (!this.storage.has(num))
this.storage.set(num, new this.CustomService(num));
return this.storage.get(num);
}
}
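A more common alternative (not mentioned in the answer) is a factory provider — in Angular syntax, { provide: CustomService, useFactory: () => new CustomService(1) }. The idea boils down to registering a construction function rather than an instance; a framework-free sketch of that mechanism (TinyInjector and everything besides CustomService are illustrative, not Angular APIs):

```typescript
class CustomService {
  constructor(public num: number) {}
}

type Factory<T> = () => T;

// Toy injector illustrating useFactory: it stores a builder function and
// creates (then caches) the instance on first lookup.
class TinyInjector {
  private factories = new Map<unknown, Factory<unknown>>();
  private cache = new Map<unknown, unknown>();

  provide<T>(token: unknown, useFactory: Factory<T>): void {
    this.factories.set(token, useFactory);
  }

  get<T>(token: unknown): T {
    if (!this.cache.has(token)) {
      const factory = this.factories.get(token);
      if (!factory) throw new Error("No provider for token");
      this.cache.set(token, factory());
    }
    return this.cache.get(token) as T;
  }
}

const injector = new TinyInjector();
// Angular equivalent: providers: [{ provide: CustomService, useFactory: () => new CustomService(1) }]
injector.provide(CustomService, () => new CustomService(1));

const cs = injector.get<CustomService>(CustomService);
console.log(cs.num); // 1
```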
Q:
Maximum setlocal recursion level reached in batch
The task is to replace the reference path with some new path in collection of html files. I used the below code for that and it throws the maximum setlocal recursion level reached error,
@echo off
for /r ".\" %%f in (\html\*.htm) do (
SETLOCAL
call :SUB ../icons ../../icons "%%f">"%%f_new"
del "%%f"
)
for /r ".\" %%f in (*.htm_new) do rename "%%f" "*.htm"
ENDLOCAL
exit /b
:SUB
call
if "%~1"=="" findstr "^::" "%~f0"&GOTO:EOF
for /f "tokens=1,* delims=]" %%A in ('"type %3|find /n /v """') do (
set "line=%%B"
call set "line=echo.%%line:%~1=%~2%%"
for /f "delims=" %%X in ('"echo."%%line%%""') do %%~X
)
exit /b
Can anyone please let me know how to resolve this error?
Thanks in advance..
A:
First, you should indent your code.
The parenthesis aren't balanced, there are more opened than closed parenthesis.
You call a label/function which is a part of your first FOR /r loop, that will never work.
Perhaps this is what you want (though I can't even guess what you're trying to do with your code):
@echo off
for /r ".\" %%f in (\html\*.htm) do (
SETLOCAL
call :SUB ../icons ../../icons "%%f">"%%f_new"
del "%%f"
ENDLOCAL
)
for /r ".\" %%f in (*.htm_new) do rename "%%f" "*.htm"
exit /b
:SUB
if "%~1"=="" findstr "^::" "%~f0"&GOTO:EOF
for /f "tokens=1,* delims=]" %%A in ('"type %3|find /n /v """') do (
set "line=%%B"
call set "line=echo.%%line:%~1=%~2%%"
for /f "delims=" %%X in ('"echo."%%line%%""') do %%~X
)
exit /b
After edited your code:
The setlocal/endlocal should be in the same block, in your case you call SETLOCAL for every html file, but only call ENDLOCAL once.
But each SETLOCAL needs an ENDLOCAL
After your comment:
You try to modify an HTML file with percent expansion; that will fail in many cases, as it's tricky to handle the special characters in an HTML file like <>&|.
Btw. Your For /f loop to read the file content will fail when a line begins with ].
This one should work
:SUB
if "%~1"=="" findstr "^::" "%~f0"&GOTO:EOF
setlocal DisableDelayedExpansion
for /f "tokens=* delims=" %%A in ('"type %3|find /n /v """') do (
set "line=%%B"
setlocal EnableDelayedExpansion
set "line=!line:%~1=%~2!"
set "line=!line:*]=!"
echo(!line!
endlocal
)
exit /b
But there is a much simpler solution using the repl.bat tool from dbenham
Q:
node.js/mongoDB: Return custom value if field is null
What I want to do is to query my database for documents and, if a field is not set, return a custom value instead of it.
Imagine I'm having a collection of songs and some of these have the field pathToCover set and some don't. For these, I want to return the URL to a placeholder image that I store in a config variable.
I am running node.js and mongoDB with express and mongoose.
I am not sure what the best approach for this is. An obvious solution would be to query for the documents and then in the callback iterate over them to check if the field is set. But this feels quite superfluous.
Currently, my code looks like this:
exports.getByAlbum = function listByAlbum(query, callback) {
Song.aggregate({
$group: {
_id: '$album',
artists: { $addToSet: '$artist' },
songs: { $push: '$title' },
covers: { $addToSet: '$pathToCover'},
genres: { $addToSet: '$genre'},
pathToCover: { $first: '$pathToCover'}
}
},
function (err, result) {
if (err) return handleError(err);
result.forEach(function(album) {
if ( album.pathToCover == null) {
album.pathToCover = config.library.placeholders.get('album');
}
})
callback(result);
});
}
What is the best approach for this?
Thanks a lot in advance!
A:
Where the value of the field is either null or unset (and possibly missing from the document altogether), you use $ifNull in your aggregation to substitute the alternate value:
exports.getByAlbum = function listByAlbum(query, callback) {
var defaultCover = config.library.placeholders.get('album');
Song.aggregate(
[
{ "$group": {
"_id": "$album",
"artists": { "$addToSet": "$artist" },
"songs": { "$push": "$title" },
"covers": { "$addToSet": { "$ifNull": [ "$pathToCover", defaultCover ] } },
"genres": { "$addToSet": "$genre" },
"pathToCover": { "$first": { "$ifNull": [ "$pathToCover", defaultCover ] } }
}}
],
function (err, result) {
if (err) return handleError(err);
callback(result);
}
);
}
If it is an empty string then use the $cond statement with an $eq test in place of $ifNull:
{ "$cond": [ { "$eq": [ "$pathToCover", "" ] }, defaultCover, "$pathToCover" ] }
Either statement can be used inside of a grouping operator to replace the value that is considered.
If you are just worried that perhaps not "all" of the values are set on the song, then use something like $min or $max as appropriate to your data to just pick one of the values instead of using $first.
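The substitution $ifNull performs can be pictured per document like this (a plain-JS sketch just to show the semantics — the real substitution happens server-side inside the aggregation, and the placeholder path is made up):

```javascript
const defaultCover = "/img/album-placeholder.png"; // illustrative placeholder

// $ifNull substitutes the fallback when the field is null or missing.
function ifNull(value, fallback) {
  return value === null || value === undefined ? fallback : value;
}

const songs = [
  { title: "A", pathToCover: "/covers/a.jpg" },
  { title: "B", pathToCover: null },
  { title: "C" } // field missing entirely
];

const covers = songs.map(s => ifNull(s.pathToCover, defaultCover));
console.log(covers);
// [ '/covers/a.jpg', '/img/album-placeholder.png', '/img/album-placeholder.png' ]
```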
Q:
UWP ScrollViewer and StackPanels don't work properly
The UWP ScrollViewer element isn't scrolling at all with my StackPanel. I've tried Grids and the row definitions but that didn't work either. Here is my current XAML.
GIF showing scrolling not working properly
<Page
x:Class="Thunderstorm.Pages.Home"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:Thunderstorm.Pages"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">
<ScrollViewer VerticalScrollBarVisibility="Auto" Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<StackPanel>
<StackPanel x:Name="Weather"
Background="LightBlue"
Height="500"
MaxHeight="750"
VerticalAlignment="Top"
Padding="10,0,0,0">
<TextBlock Text="25℉" Style="{ThemeResource HeaderTextBlockStyle}" FontSize="85" Margin="25,25,0,0" MaxWidth="500" HorizontalAlignment="Left"/>
</StackPanel>
<StackPanel>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
<TextBlock Text="Scroll test string to overflow"/>
</StackPanel>
</StackPanel>
</ScrollViewer>
</Page>
(I apologize if this is a stupid question - I'm new to UWP development, also disregard me explicitly defining font size on the TextBlock)
A:
The frame I was calling navigate on was inside a StackPanel and I never noticed it until now. I removed the StackPanel and scrolling worked with my original code.
Q:
Why does a convex set have trivial fundamental group?
Why does a convex set have trivial fundamental group?
Without using the definition of contractible spaces, could anyone explain for me why that is true?
A:
Take a loop $\gamma :[0,1]\to C$ based at $x_0\in C$. Define the homotopy
$$H:[0,1]\times [0,1]\to C$$
by the formula
$$H(t,s)=sx_0+(1-s)\gamma(t).$$
This is well defined because $C$ is convex, and since $\gamma(0)=\gamma(1)=x_0$ it is a based homotopy between any loop $\gamma$ and the constant loop at $x_0$, so the fundamental group of $C$ is trivial.
Q:
Django 1.11: Flip image horizontally before saving into a Django model
I'm building an application where I take a user's image and then flip it horizontally using ImageOps from the Pillow library. To do so I made the following model:
from django.db import models
class ImageClient(models.Model):
image = models.ImageField(null=False, blank=False)
I made a form using ImageField, with an HTML form using enctype="multipart/form-data", and in my views I did the following:
from django.shortcuts import render, redirect
from .forms import ImageForm
from .models import ImageClient
from PIL import Image, ImageOps
def new(request):
"""
Returns mirror image from client.
"""
if request.method == 'POST':
form = ImageForm(request.POST, request.FILES)
if form.is_valid():
image = Image.open(form.cleaned_data['image'])
image = ImageOps.mirror(image)
form_image = ImageClient(image=image)
form_image.save()
return redirect('img:detail', pk=form_image.id)
else:
form = ImageForm()
return render(request, 'img/new_egami.html', {'form':form})
....
As you see, when I check if the form is valid, I open the form's image and flip it horizontally (using ImageOps.mirror()), then I save it. But I always get this error: 'Image' object has no attribute '_committed'. I know the Image object is from Pillow, but I do not understand this error. Can someone explain and/or solve this error?
A:
The error is raised because the image is a PIL Image object, whereas Django is expecting and requires its own File object. You could save the Image object to an absolute file path and then refer to it, but there are more efficient ways here and here. Here is an adaptation of @madzohan's answer in the latter link for your image operation:
# models.py
from io import BytesIO
from django.core.files.base import ContentFile
from PIL import Image, ImageOps
class ImageClient(models.Model):
image = models.ImageField(null=False, blank=False, upload_to="image/path/")
def save(self, *args, **kwargs):
pil_image_obj = Image.open(self.image)
new_image = ImageOps.mirror(pil_image_obj)
new_image_io = BytesIO()
new_image.save(new_image_io, format='JPEG')
temp_name = self.image.name
self.image.delete(save=False)
self.image.save(
temp_name,
content=ContentFile(new_image_io.getvalue()),
save=False
)
super(ImageClient, self).save(*args, **kwargs)
and views.py:
...
if form.is_valid():
new_image = form.save()
return redirect('img:detail', pk=new_image.id)
Q:
How to name an address validation api based on the naming standards
I have created an API which takes address information from the user and sends it to the HERE Maps API to get the corresponding timezone and lat/long information.
My aim is to verify whether the address has corresponding lat/long information or not.
If there is no information for that address, HERE Maps would fail and I would generate a 404 error for that address.
Basically I am trying to create an address validation API which returns success if the address is correct and 404 for an incorrect address.
POST - api/v1/timezone-address/validate
Is this api naming correct or can I improve?
A:
Including the version number is considered good practice. Also check out this doc, which I found helpful when I needed it:
https://cloud.google.com/apis/design/naming_convention
Q:
How to set text of a view defined in the Main Activity (no XML)
I have a textView that is first created in the Main Activity's onCreate method, like this:
Button btn = new Button(this);
And then I set some attributes of it like an Id and some LayoutParams
btn.setText("Button");
btw.setId("btn");
btn.setLayoutParams(new ViewGroup.LayoutParams(
ViewGroup.LayoutParams.WRAP_CONTENT,
ViewGroup.LayoutParams.WRAP_CONTENT));
So my question is how can I access this Id within the same file to change the text of the button. Like
findViewById(R.id.btn);
Except when I do this I get an error. I am assuming it's because this is not defined within the XML.
EDIT: I cannot pre define these buttons in xml as they are generated based on other factors of the program.
Thank you.
A:
findViewById() gets an integer, so in order to get a reference to your dynamically created TextView, it's enough to pass the same ID to this method.
// Assign it ID 100, for example, when you're creating it
btw.setId(100);
In this example you can find it using:
TextView textView = (TextView) findViewById(100);
Note that findViewById only finds views attached to the hierarchy, so you should make sure you have attached your dynamically created TextView to the hierarchy.
Alternate way: Saving the reference as a class member.
First define a private class member of your MainActivity class. And then initialize it in your onCreate method. So, wherever you need this TextView within the activity class, it's enough to use that class member.
public class MainActivity extends Activity{
// This is the class member, I was talking about
private TextView mTextView;
@Override
protected void onCreate(Bundle savedInstanceState) {
mTextView = new TextView(this);
mTextView.setText("Button");
mTextView.setLayoutParams(new ViewGroup.LayoutParams(
ViewGroup.LayoutParams.WRAP_CONTENT,
ViewGroup.LayoutParams.WRAP_CONTENT));
}
public void someMethod(){
// Here you can reach to that TextView by using its reference saved in mTextView
mTextView.setText("Hi there!");
}
}
Q:
wp_redirect only works on main site and not on other sites
My main site is https://www.domain.com/; all my multisites have URLs like https://www.domain.com/multisite1/. I made a function in functions.php which checks whether an option (in the admin panel) is true or false. If it is true, all sites should redirect to https://maintenance.domain.com.
PHP 7:
// Check if user is on login page
function is_login_page() {
return in_array($GLOBALS['pagenow'], array('wp-login.php', 'wp-register.php'));
}
// Check maintenance mode
function nmai_maintenance() {
$options = get_option('nmai_section_enable_maintenance_id');
$redirecturl = get_option('nmai_section_maintenance_url_id');
// Redirect only front-end visitors who are not logged in
if ($options == 1 && ! is_admin() && ! is_network_admin() && ! is_user_logged_in() && ! is_login_page()) {
// Check the url
if (empty($redirecturl)) {
wp_redirect('https://maintenance.domain.com');
exit();
}
else {
wp_redirect($redirecturl);
exit();
}
}
}
add_action('wp_loaded', 'nmai_maintenance');
This code works for the main site (https://www.domain.com/) and redirects it, but all my other sites just go to their home page. I have tried wp_safe_redirect and used init instead of wp_loaded, but this makes no difference. Putting this PHP code at the top of functions.php, or using only this code and deleting all the other code, makes no difference either.
A:
get_option is a per sub-site function. In other words it gives you only the value of the option in the sub-site.
If you want to have a network-wide option you should use get_site_option and update_site_option.
You can also query an option from a specific sub-site by using get_blog_option and passing as parameter to it the main sub-site id, but in my opinion, get_site_option should be preferred.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Actionscript 3 array scope / multidimentional array questions
I seem to have an array scope issue. I have a global variable;
var itemConnect:Array = new Array();
Which is initialized at the start. I then have a function to populate it as a 2-d array:
// Draw connections
function initConnections() {
for (var i:Number = 0; i < anotherArray.length; i++) {
for (var j:Number = 0; j < anotherArray[i].length; j++) {
itemConnect[i] = new Array();
itemConnect[i][j] = new Shape();
}
}
}
The data structure looks something like:
CREATE: i = 0, j = 1, val = [object Shape]
CREATE: i = 0, j = 14, val = [object Shape]
CREATE: i = 1, j = 2, val = [object Shape]
CREATE: i = 1, j = 3, val = [object Shape]
CREATE: i = 1, j = 4, val = [object Shape]
CREATE: i = 1, j = 5, val = [object Shape]
CREATE: i = 1, j = 6, val = [object Shape]
...
If I try to access this array in another function, I just get this:
i = 0, j = 14, val = [object Shape]
i = 1, j = 51, val = [object Shape]
TypeError: Error #1010: A term is undefined and has no properties.
at main_fla::MainTimeline/mouseDownHandler()
I tried to initialize the array at the start as a 2-d array as follows:
var itemConnect:Array = new Array();
for (var counti = 0; counti < anotherArray.length; counti++) {
itemConnect[counti] = new Array();
}
Which produces slightly better results, but still misses many of the nodes:
i = 0, j = 14, val = [object Shape]
i = 1, j = 51, val = [object Shape]
i = 3, j = 47, val = [object Shape]
i = 6, j = 42, val = [object Shape]
i = 7, j = 42, val = [object Shape]
i = 8, j = 45, val = [object Shape]
i = 9, j = 42, val = [object Shape]
...
It seems to have scope access to just one of each of the [i] nodes, so [1][2], [1][3], [1][4] are missing - only the last [j] element appears.
What is the correct way of doing this? I also don't know the exact size of the array at the start which may be an issue.
Thanks
A:
Isn't your nested loop meant to look more like this?
function initConnections() {
for (var i:Number = 0; i < anotherArray.length; i++) {
itemConnect[i] = new Array();
for (var j:Number = 0; j < anotherArray[i].length; j++) {
itemConnect[i][j] = new Shape();
}
}
}
Notice that in this version the inner array is constructed outside the loop that iterates over it, so each row array is created exactly once instead of being replaced on every inner iteration.
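The allocation rule can be sketched in plain JavaScript (makeGrid is an illustrative name, not from the question; ActionScript 3 arrays behave the same way here): the row array must be created once per outer iteration, before the inner loop fills it.

```javascript
// Build a 2-D array: one inner array per row, created once per row.
function makeGrid(rows, cols, fill) {
  var grid = [];
  for (var i = 0; i < rows; i++) {
    grid[i] = [];            // allocate the row ONCE, outside the inner loop
    for (var j = 0; j < cols; j++) {
      grid[i][j] = fill;     // every cell survives; nothing is overwritten
    }
  }
  return grid;
}

// Had `grid[i] = []` been inside the inner loop, each assignment would
// replace the whole row, leaving only the last column populated --
// exactly the symptom described in the question.
var g = makeGrid(2, 3, 0);
```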
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Airflow - is depends_on_past and trigger_rule the same?
In airflow.models.BaseOperator you have two default parameters:
depends_on_past=False and trigger_rule=u'all_success'
According to doc
depends_on_past (bool) – when set to true, task instances will run sequentially while relying on the previous task’s schedule to succeed.
trigger_rule (str) – defines the rule by which dependencies are applied for the task to get triggered.
Aren't both the same thing? I don't get why there are redundant parameters.
A:
No, both are entirely different. depends_on_past (boolean) determines whether a task instance runs at all, depending on the state of the same task in the previous DAG run (the last run). trigger_rule determines when a task is triggered, depending on the state of its parent task(s) within the same run.
Refer to the official documentation.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
So I'm writing a voting program in C and I can't come up with a for loop to count total votes
So below is the program I am trying to run. The part I am having trouble with is the for loop that counts the total number of votes. I don't understand why total_votes += col_totals[i] doesn't work. When I run the program I get a total that makes no sense, e.g. 42003993.
int main()
{
int votes[5][4]={192,48,206,37,
147,90,312,21,
186,12,121,38,
114,21,408,39,
267,13,382,29};
char cand[4]={'A','B','C','D'};
int row_totals[5];
int col_totals[4];
int row, col;
for(col=0;col<=3;col++) {
col_totals[col]=0;
for(row=0;row<=4;row++) {
col_totals[col] += votes[row][col];
}
}
total_votes is what I'm having trouble with. I have to create a for loop using either row_totals or col_totals to calculate the total votes. This is what I have:
int total_votes;
int i;
for (i=0; i<=3; i++) {
total_votes += col_totals[i];
}
float percent[4]; //Not finished yet
for(row=0;row<=4;row++)
{
row_totals[row]=0;
for(col=0;col<=3;col++)
{
row_totals[row] += votes[row][col];
}
}
Here's the rest of the code, just for reference:
printf(" Candidate Candidate Candidate Candidate Total\n");
printf(" Precinct A B C D Votes\n");
for(row=0;row<=4;row++)
{
printf("%6d",row+1);
for(col=0;col<=3;col++)
{
printf("%12d",votes[row][col]);
}
printf("%11d\n",row_totals[row]);
}
printf("\nTotal:");
for (col = 0; col <=3; col++) {
for(row=0;row<=4;row++);
{
printf("%12d",col_totals[col]);
}
}
printf("%11d\n", total_votes);
return 0;
}
A:
int total_votes = 0;
You start by doing += on total_votes, but what is total_votes initially? Without setting it to 0, it holds whatever indeterminate value happened to be at that memory location before the variable was created.
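A minimal sketch of the fix, using the question's own variable names: the accumulator must be initialized to zero before the loop, otherwise it starts from an indeterminate value.

```c
/* Sums the per-candidate column totals into a grand total.
   total_votes starts at 0 -- without this initialization the
   local variable holds an indeterminate value, which is where
   garbage results like 42003993 come from. */
int total_of(const int *col_totals, int n) {
    int total_votes = 0;               /* the missing line */
    for (int i = 0; i < n; i++) {
        total_votes += col_totals[i];
    }
    return total_votes;
}
```

With the question's vote data the column totals are {906, 184, 1429, 164}, so total_of(col_totals, 4) returns 2683.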
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Best Solution for Caching
Where is the best place to implement caching in a web based app?
At the presentation layer (hope not)?
At the Business Logic Layer?
At the data layer?
I'm going to use something like memcached or MS Velocity under the hood.
I'm just finding myself writing so much code to update the cache here and there in the business logic layer, so would it be better to create a fabric in between the data access layer at the Database Server to cache data?
I think these complications are down to the fact, most of the data we are caching is user specific and we are duplicating data in the cache. We struggling to find the best solution.
A:
Cache is an important part or a web app, but there is no magical solution that will fit every project. Working on optimisations before your app is working is usually a bad idea. Before asking yourself where you should implement a cache layer, the first step is to be sure that your application works well (even if slowly) without any cache optimisation.
When this first step is achieved, you can start profiling the app, listing the features that seem to be using a lot of resources (may it be CPU, memory, i/o, database access) or taking a lot of time to complete (usually because of the same symptoms).
Once you have a list of features that you think can be optimized with a cache system, there are two questions you need to ask yourself :
"How can I improve all these features at the same time" (macro focus): An obvious answer to this one is often data-access cache. You usually don't want to send the same query to your database server over and over again if the data you get in return is always the same. So storing this type of data in cache, with a clever lifespan, will always be a good idea.
"How can I improve each feature" (micro focus): This is tricky, and you need to understand your application very well to figure this one out. Some data can be cached, some shouldn't, some mustn't. A debugger and a profiler are usually great tools for this step, because they help you being sure why a feature is slow, and give you hints about how they should be optimized.
The optimisations you're going to figure out could be related to any layer of your application (presentation, business logic, data), but that doesn't mean you should implement them all. There are several important things you should take into account:
Does this feature really need to be optimized? (is it a noticeable gain for the customer? For the hardware? For the whole app? For other apps?)
What performance gain can I achieve? (1%, 200%, ...)
How much time will it take me to optimize it? (1 hour, 12 days, ...)
How risky is it to optimize it? (could it break things for the app? For the customer?)
Once you have the answers to these questions, it's time to talk about this with your project manager, with your colleagues, or even with people that don't work on the application with you. Having neutral opinion is good, as well as having non-technical (or less technical) opinions. Talking with these people should help you figure out what should be done and what shouldn't.
At this point you should have a list of optimisations that's pretty clear, that you thought through several times, and you should have no problem coding and testing them.
A:
Caching is a performance optimization, so do it where the bottleneck is. You know where the bottleneck is by measuring it three times and then once more.
Beware of the relaxed consistency of your data when you cache, e.g. you don't want to cache all of your stock trading app.
A:
You should consider caching at EVERY layer.
The best place to cache is as near to the client request as possible (so you do as little work as possible to serve the response). In web apps, yes at the presentation layer, at the business layer and the data layer.
(Side note: If you are basically peppering your business logic code with caching logic here and there, you should really look into seperation of concerns to avoid your code becoming a big ball of mud :-) )
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Use xhr code error on switch/case (javascript)
I'm trying to display some modal depending on the xhr error code, but I don't know why my switch is failing:
error: function (xhr) {
var codigo_error = parseInt(xhr.status);
console.log("codigo_error: " + codigo_error);//this shows 404
switch (codigo_error){
case 404: body = "Error 404";
default: body = "Other";
}
modal(body);
}
I always get the default case, but in console.log I can see the 404. I tried with case '404' but there's no difference.
And if I put if (codigo_error == 404) alert(codigo_error); I can see the alert with 404.
A:
I always use a break in my switch cases. Without break, execution falls through into the next case, so after case 404 sets body, the default case runs as well and overwrites it with "Other".
switch(codigo_error){
case 404:
body = "Error 404";
break;
default:
body = "Other";
break;
}
modal(body);
Try that and see if it works.
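To make the fallthrough concrete, here is a small self-contained sketch (statusBody is an illustrative name, not the poster's exact handler): without break, control always falls into default, which overwrites body.

```javascript
// Maps an HTTP status code to a message body.
function statusBody(code) {
  var body;
  switch (code) {
    case 404:
      body = "Error 404";
      break;            // without this, execution falls into `default`
    default:
      body = "Other";
      break;            // optional on the last case, kept for symmetry
  }
  return body;
}

// With the breaks in place:
//   statusBody(404) -> "Error 404"
//   statusBody(500) -> "Other"
```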
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Not able to connect Linux Azure VM from Windows Azure VM
I have 2 Azure VMs: the 1st is a Linux machine (Ubuntu 18.04) and the 2nd is a Windows Azure VM. I have a MySQL database on the Linux machine. I want to move the data from the Linux MySQL database to a SQL Server database hosted on the Windows Azure machine. I am creating an SSIS package to perform this operation, but I am not able to connect to the Linux machine. I have opened all the required ports (22, 3306, 1433) in the inbound and outbound rules on both VMs, but I am still not able to create a successful connection.
Both the servers are in same vnet and load balancer is not applicable.
A:
Sharing the answer as per the comment by the original poster.
For me the main criterion was to connect to MySQL (on the Linux machine) from SSIS (on the Windows machine). The issue was with the user we were using: we created a new user, granted it all the required privileges, and from this new user we are able to connect.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Node JS for loop inserting data twice
I am trying to import contacts into a database, but sometimes records get inserted twice. The requirement is: if the number already exists it should be updated, otherwise inserted as a new row. I am using a MySQL database.
I am using for loop with async.
var numbers = {
numbers:[
{
name:"A",
number:9876543211
},
{
name:"B",
number:7876543211
},
{
name:"C",
number:9886543211
},
{
name:"D",
number:8876543211
}
]
};
async.forEachOf(numbers, (numberObj, key, callback) => {
var createdAt = moment.utc().valueOf();
var updatedAt = moment.utc().valueOf();
gfs.checkContact(userInfo.user_id, code, numberObj.number, function(contactInfo, err){
if(err){
response.error = "sorry";
res.send(response); return false;
}else{
if (contactInfo.length > 0) {
gfs.qry("UPDATE contacts SET fullName='"+numberObj.name+"', updatedAt='"+updatedAt+"' WHERE cid='"+contactInfo[0].cid+"'").then(function (results){
}).catch(function (errorMessage){
})
}else{
gfs.qry("INSERT INTO contacts(user_id, fullName, code, mobile, createdAt, updatedAt) VALUES('"+userInfo.user_id+"', '"+numberObj.name+"', '"+code+"', '"+numberObj.number+"', '"+createdAt+"', '"+updatedAt+"')").then(function (results){
}).catch(function (errorMessage){
})
}
}
callback();
});
}, err => {
if (err){
response.error = "sorry";
res.send(response);
}else{
response.success = "success";
response.numbers = numbers;
res.send(response);
}
});
I want to insert the contact number if it does not exist in the database for the logged-in user id, or update the other fields (name, updatedAt) if the number is already in the database for that user.
A:
I have changed my code to use promises, and it's working fine now.
var numbers = [
{
name:"A",
number:9876543211
},
{
name:"B",
number:7876543211
},
{
name:"C",
number:9886543211
},
{
name:"D",
number:8876543211
}
];
async.forEachOf(numbers, (numberObj, key, callback) => {
var createdAt = moment.utc().valueOf();
var updatedAt = moment.utc().valueOf();
gfs.checkContactPromise({
user_id:userInfo.user_id,
code:code,
mobile:numberObj.number,
fullName:numberObj.name,
createdAt:createdAt,
updatedAt:updatedAt
}).then( function (addContactQry){
gfs.qry(addContactQry).then(function (results){
userContacts.push("'"+numberObj.number+"'");
callback();
}).catch(function (errorMessage){
callback();
})
}).catch( function (errorMessage){
callback();
});
}, err => {
if (err){
response.error = "sorry";
res.send(response);
}else{
response.success = "success";
response.numbers = numbers;
res.send(response);
}
});
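As an aside (not part of the original answer): MySQL can make this insert-or-update decision atomically in a single statement with INSERT ... ON DUPLICATE KEY UPDATE, provided the contacts table has a unique key on (user_id, mobile). The helper name below is hypothetical, and real code should bind values through driver placeholders (?) rather than string concatenation, which the question's code is vulnerable to (SQL injection).

```javascript
// Single-statement upsert: inserts a new contact, or updates
// fullName/updatedAt when the (user_id, mobile) pair already exists.
// Requires: ALTER TABLE contacts ADD UNIQUE KEY uq_user_mobile (user_id, mobile);
function upsertContactSql() {
  return (
    "INSERT INTO contacts (user_id, fullName, code, mobile, createdAt, updatedAt) " +
    "VALUES (?, ?, ?, ?, ?, ?) " +
    "ON DUPLICATE KEY UPDATE fullName = VALUES(fullName), updatedAt = VALUES(updatedAt)"
  );
}

// Usage with a typical MySQL driver (values are bound, never concatenated):
// db.query(upsertContactSql(), [userId, name, code, number, now, now]);
```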
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Terraform aws_cloudwatch_metric_alarm An error occurred contacting the server
I'm creating a CloudWatch alarm with Terraform, e.g.:
resource "aws_cloudwatch_metric_alarm" "terraform_cloudwatch_metric_alarm_CPUUtilization" {
alarm_name = "terraform_cloudwatch_metric_alarm_CPUUtilization"
alarm_description = "terraform_cloudwatch_metric_alarm_CPUUtilization"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "1"
metric_name = "CPUUtilization"
namespace = "AWS/EC2"
period = "300"
statistic = "Average"
threshold = "90"
dimensions = {
"InstanceId" = "${aws_instance.terraform_instance.id}"
}
}
But I'm getting An error occurred contacting the server:
When I'm creating the same from console it works fine:
Any ideas why and how to fix it?
PS: A temporary workaround is to add a local-exec provisioner to aws_instance:
provisioner "local-exec" {
command = <<-EOF
aws cloudwatch put-metric-alarm \
--alarm-name "cloudwatch_metric_alarm_CPUUtilization" \
--alarm-description "cloudwatch_metric_alarm_CPUUtilization" \
--no-actions-enabled \
--metric-name "CPUUtilization" \
--namespace "AWS/EC2" \
--statistic "Average" \
--dimensions "Name=InstanceId,Value=${aws_instance.terraform_instance.id}" \
--period "300" \
--unit "Percent" \
--evaluation-periods "1" \
--threshold "90" \
--comparison-operator "GreaterThanOrEqualToThreshold" \
--treat-missing-data "missing"
EOF
}
A:
The problem was a SOFT HYPHEN character present in the dimension name. Both VSCode and Sublime Text 3 rendered the code as if it were normal:
Only Vim/nano showed the hidden character correctly:
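A quick way to hunt for such invisible characters (a sketch, not part of the original answer) is to scan the file's contents for anything outside printable ASCII; a SOFT HYPHEN shows up as U+00AD.

```python
# Report any non-ASCII or unexpected control characters in a string
# (e.g. the contents of a .tf file), with position and Unicode name.
import unicodedata

def find_non_ascii(text):
    hits = []
    for i, ch in enumerate(text):
        if ord(ch) > 126 or (ord(ch) < 32 and ch not in "\n\t"):
            name = unicodedata.name(ch, "UNKNOWN")
            hits.append((i, "U+%04X" % ord(ch), name))
    return hits

# A soft hyphen hidden inside a dimension name, as in the question:
sample = 'dimensions = { "Instance\u00adId" = "..." }'
print(find_non_ascii(sample))
# -> [(24, 'U+00AD', 'SOFT HYPHEN')]
```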
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I get a jQuery Mobile slider for hexadecimal numbers?
I am trying to alter the behavior of the normal jQuery Mobile slider widget to display hexadecimal numbers. However, I don't get it to work.
The slider's slidestart and slidestop events aren't appropriate, since they only trigger at the start and stop of the interaction respectively.
So, I tried to bind a change handler to the input element of the slider instead.
$("input", slider).on("change", function() {
// change the value of the input to hexadecimal...
});
Doesn't work either. Nothing happens. Is there a way to achieve this?
A:
Gajotres pointed me to a bug in my conversion code.
Binding a change handler to the input element of the slider was the right approach, though.
So for the sake of completeness (and others with the same problem) here is my final solution.
$("input", slider).on("change", function() {
var number = parseInt(this.value);
$(this).val(number.toString(16));
});
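For completeness, the conversion round trip in plain JavaScript (independent of jQuery Mobile; toHex/fromHex are illustrative names): note that once the input field holds a hex string, reading it back needs parseInt with an explicit radix of 16, since parseInt(this.value) parses decimal.

```javascript
// Decimal <-> hexadecimal round trip for a numeric input value.
function toHex(n) {
  return n.toString(16);     // e.g. 255 -> "ff"
}
function fromHex(s) {
  return parseInt(s, 16);    // e.g. "ff" -> 255; radix 16 is required
}
```

fromHex(toHex(n)) gives back n for any non-negative safe integer, so a handler can convert in both directions safely.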
|
{
"pile_set_name": "StackExchange"
}
|
Q:
vue-multiselect does not load CSS
I am using the vue-multiselect component (b15) with vue-cli (webpack template), but the component's CSS is not loaded and the component renders incorrectly. Any ideas?
My code:
<template>
<div>
<div class="select2-container select2-container-multi full-width">
<multiselect
class="form-control form-control-select textarea"
:class="{ 'has-error': showError }"
:options="localOptions"
:label="labelKey"
track-by="id"
:multiple="multiple"
:selected="value"
:searchable="true"
:placeholder="placeholder"
:loading="loading"
:custom-label="formatLabel"
:disabled="disabled"
:readonly="readonly"
@input="updateSelected"
@close="blur">
</multiselect>
</div>
</div>
</template>
<script>
import Multiselect from 'vue-multiselect'
export default {
mixins: [inputMixin],
components: {
Multiselect
}
}
</script>
Multiselect is rendered and everything works, just no style is applied.
A:
You need to import the CSS separately at the end, so your file looks like this:
<template>
<div>
<div class="select2-container select2-container-multi full-width">
<multiselect
class="form-control form-control-select textarea"
:class="{ 'has-error': showError }"
@input="updateSelected"
@close="blur">
</multiselect>
</div>
</div>
</template>
<script>
import Multiselect from 'vue-multiselect'
export default {
mixins: [inputMixin],
components: {
Multiselect
}
}
</script>
<style src="vue-multiselect/dist/vue-multiselect.min.css"></style>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
sequential strpos() faster than a function with one preg_match?
I need to test whether any of the strings 'hello', 'i am', 'dumb' exists in the longer string called $ohreally. If even one of them exists my test is over, and I know that none of the others will occur if one of them has.
Under these conditions, I am asking for your help on the most efficient way to write this search:
strpos() 3 times like this?
if (strpos ($ohreally, 'hello')){return false;}
else if (strpos ($ohreally, 'i am')){return false;}
else if (strpos ($ohreally, 'dumb')){return false;}
else {return true;}
or one preg_match?
if (preg_match('hello'||'i am'||'dumb', $ohreally)) {return false}
else {return true};
I know the preg_match code is wrong; I would really appreciate it if someone could offer the correct version.
Thank You!
Answer
Please read what cletus said and the test middaparka did below. I also did a microtime test on various strings, long and short, with these results.
IF you know the probability of the string values occurring, ORDER them from most probable to least. (I did not notice a measurable difference when reordering the regex itself, i.e. between /hello|i am|dumb/ and /i am|dumb|hello/.)
On the other hand, in sequential strpos the order makes all the difference. For example, if 'hello' occurs 90% of the time, 'i am' 7%, and 'dumb' 3%, you would organize your code to check for 'hello' first and exit the function as soon as possible.
my microtime tests show this.
For haystacks A, B, and C, in which the needle is found on the first, second, and third strpos() execution respectively, the times are as follows.
strpos:
A: 0.00450 seconds // 1 strpos()
B: 0.00911 seconds // 2 strpos()
C: 0.00833 seconds // 3 strpos()
C: 0.01180 seconds // 4 strpos() added one extra
and for preg_match:
A: 0.01919 seconds // 1 preg_match()
B: 0.02252 seconds // 1 preg_match()
C: 0.01060 seconds // 1 preg_match()
As the numbers show, strpos is faster up to the fourth execution, so I will be using it since I have only 3 substrings to check for :)
A:
The correct syntax is:
preg_match('/hello|i am|dumb/', $ohreally);
I doubt there's much in it either way, but it wouldn't surprise me if the strpos() method is faster depending on the number of strings you're searching for. The performance of strpos() will degrade as the number of search terms increases. The regex probably will too, but not as fast.
Obviously regular expressions are more powerful. For example if you wanted to match the word "dumb" but not "dumber" then that's easily done with:
preg_match('/\b(hello|i am|dumb)\b/', $ohreally);
which is a lot harder to do with strpos().
Note: technically \b is a zero-width word boundary. "Zero-width" means it doesn't consume any part of the input string and word boundary means it matches the start of the string, the end of the string, a transition from word (digits, letters or underscore) characters to non-word characters or a transition from non-word to word characters. Very useful.
Edit: it's also worth noting that your usage of strpos() is incorrect (but lots of people make this same mistake). Namely:
if (strpos ($ohreally, 'hello')) {
...
}
will not enter the condition block if the needle is at position 0 in the string. The correct usage is:
if (strpos ($ohreally, 'hello') !== false) {
...
}
because of type juggling. Otherwise 0 is converted to false.
A:
Crazy idea, but why not test both n thousand times in two separate loops, both surrounded by microtime() calls and the associated debug output?
Based on the above code (with a few corrections) for 1,000 iterations, I get something like:
strpos test: 0.003315
preg_match test: 0.014241
As such, in this instance (with the limitations outlined by others) strpos indeed seems faster, albeit by a largely meaningless amount. (The joy of pointless micro-optimisation, etc.)
Never estimate what you can measure.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
error: instance member 'tomato' cannot be used on type 'hamburger'
1.what I code
class hamburger {
var tomato: String
var patty: String
var bread: String
var number: Int
init(_ tomato: String, _ patty: String, _ bread: String, _ number: Int) {
self.tomato = tomato
self.patty = patty
self.bread = bread
self.number = number
}
init() {
self.tomato = "tomato"
self.patty = "patty"
self.bread = "bread"
self.number = 10
}
}
let sandwich = hamburger("texas" , "iii" , "iii" , 10)
print(hamburger.tomato)
2.error message
Playground execution failed:
error: dotinstall.playground:342:7: error: instance member 'tomato'
cannot be used on type 'hamburger'
print(hamburger.tomato)
^~~~~~~~~ ~~~~~~
3.The sample I followed
// Class
class User {
let name: String // property
var score: Int
init(_ name: String, _ score: Int) {
self.name = name
self.score = score
}
init() {
self.name = "bob"
self.score = 30
}
}
//let tom = User(name: "tom", score: 23)
let tom = User("tom", 23)
print(tom.name)
print(tom.score)
let bob = User()
print(bob.name)
print(bob.score)
I wrote the code in (1) by following the sample in (3), but I got the error message in (2).
What I did to try to solve it:
・followed the sample closely so the code would match
・studied the basics of class syntax, initializers, and instances online
・looked for typos
・checked the order of the properties
I don't know why it doesn't work even though I just followed the sample code.
please give me tips on the solution.
thanks
A:
You're making a common object-oriented programming mistake. With hamburger.tomato you are trying to access the property tomato on the class hamburger, not on the object, which here is sandwich. So the solution would be:
print(sandwich.tomato)
In the future, you might also want to take a look at styling your code better. Classes (Hamburger) are conventionally written starting with an uppercase letter, while the objects, or instances (sandwich), of those classes are written starting with a lowercase letter.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Reflection Class and parameters in PHP
So I attempted, and possibly failed, to write a "mixin" class of sorts. It works as expected for the most part until a class takes multiple parameters, and then the world implodes. My class is as such:
class AisisCore_Loader_Mixins {
private $_classes;
private $_class_objects = array();
private $_methods = array();
public function __construct(){
$this->init();
}
public function init(){}
public function setup($class){
if(!is_array($class)){
throw new AisisCore_Loader_LoaderException('Object passed in must be of type $class_name=>$params.');
}
$this->_classes = $class;
$this->get_class_objects();
$this->get_methods();
}
public function get_class_objects(){
foreach($this->_classes as $class_name=>$params){
$object = new ReflectionClass($class_name);
$this->_class_objects[] = $object->newInstance($params);
}
}
public function get_methods(){
foreach($this->_class_objects as $class_object){
$this->_methods = get_class_methods($class_object);
}
}
public function call_function($name, $param = null){
foreach($this->methods as $method){
$this->isParam($method, $param);
}
}
private function isParam($method, $param){
if($param != null){
call_user_func($method, $param);
}else{
call_user_func($method);
}
}
}
And is extended and used in a "bridge" class as such:
class AisisCore_Template_Helpers_Loop_LoopMixins extends AisisCore_Loader_Mixins{
private $_options;
private $_wp_query;
private $_post;
private $_components;
public function __construct($options){
parent::__construct();
global $wp_post, $post;
if(isset($options)){
$this->_options = $options;
}
if(null === $wp_query){
$this->_wp_query = $wp_query;
}
if(null === $post){
$this->_post = $post;
}
$this->_components = new AisisCore_Template_Helpers_Loop_LoopComponents($this->_options);
$this->setup(array(
'AisisCore_Template_Helpers_Loop_Helpers_Loops_Single' => array($options, $this->_components),
'AisisCore_Template_Helpers_Loop_Helpers_Loops_Query' => array($options, $this->_components),
'AisisCore_Template_Helpers_Loop_Helpers_Loops_Page' => array($options, $this->_components),
));
}
public function init(){
parent::init();
}
}
Whats the issue?
Warning: Missing argument 2 for AisisCore_Template_Helpers_Loop_Helpers_Loops_Single::__construct()
Warning: Missing argument 2 for AisisCore_Template_Helpers_Loop_Helpers_Loops_Query::__construct()
Warning: Missing argument 2 for AisisCore_Template_Helpers_Loop_Helpers_Loops_Page::__construct()
I thought doing something like:
array($options, $this->_components)
takes the parameters for that class, wraps them in an array, and then newInstanceArgs expands that array, passing both parameters into the class. In other words, I thought I was passing two arguments?
A:
The error message is telling you exactly what's going wrong:
Warning: Missing argument 2 for BlahBlahBlah::__construct()
So the issue is that all your parameters aren't being passed to the constructor when you're instantiating an object here:
$this->_class_objects[] = $object->newInstance($params);
If you consult the relevant documentation for ReflectionClass::newInstance you'll see this:
Creates a new instance of the class. The given arguments are passed to
the class constructor.
So no matter how many elements you have in the $params array you're only passing Argument 1 to the constructor with your current approach. The solution is to use ReflectionClass::newInstanceArgs instead, as this will expand the array of parameters and pass them as individual arguments to the constructor:
$this->_class_objects[] = $object->newInstanceArgs($params);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
WebRTC integration in iOS With OWN Server
Is it possible to integrate WebRTC into an iOS application without using a third-party API, with our own server URL? If it is possible, tell me how to configure the connection: through a backend API, or can we configure it from the application itself?
Our web team has already integrated it for the browser (using JavaScript), and now we are trying to bring it to iOS.
A:
You can make your own signaling server to allow clients to negotiate a call with each other. How you want to do this is up to you, but we use node.js for this. Our apps connect to our node.js server through a TCP socket.
When our app makes a call it sends a create signal to our signaling server. The server then sends a voip push to the callee (when a user logs in, the app registers for voip push notifications and sends it's device id to the signaling server). The callee connects to the server and the clients can start sending the offer and answer SDP and the ICE candidates.
This is the class we use for the WebRTC part of the call <Link removed>. You can drop those files in your project and extend a viewcontroller on it and implement the delegation. For signaling you will have to design your own system, but any decent programmer should be able to create a simple signaling server.
To add the library to your project I recommend cocoapods. Then use this to add the library:
target 'your_project_here' do
pod 'libjingle_peerconnection'
post_install do |installer_representation|
installer_representation.pods_project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings['ONLY_ACTIVE_ARCH'] = 'NO'
config.build_settings['VALID_ARCHS'] = ['armv7', 'arm64']
end
end
end
end
Or you can follow the guide provided to compile the library yourself.
You can read more about using WebRTC natively here. The diagrams show the order in which you have to implement your signals. It's not that hard. When client A calls client B, you basically do:
A creates the peerconnection factory
A creates a peerconnection
A creates a local media stream
A creates an offer SDP
A sets the offer as a local SDP
A starts generating ICE candidates
A sends ICE candidates to B as they come *
A sends the offer to B *
B creates the peerconnection factory
B creates a peerconnection
B sets the offer as a remote SDP
B creates a local media stream
B creates an answer SDP
B sets the answer as a local SDP
B starts generating ICE candidates
B sends ICE candidates to A as they come *
B sends the answer to A *
A sets the answer as a remote SDP
* If you use the class I linked, you only have to worry about these points
Note that this class is just a starting point, it doesn't allow for multi-user calls (just 2 peers) and doesn't have much features.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is the relation between Aliasing and Flickering?
In class I learned that aliasing refers to the jagged edges resulting from the discrete nature of computer graphics representation.
I also know that anti-aliasing refers to a technique (mainly blurring) for removing (or camouflaging) aliasing.
But I was presented with a question about "A picture flickering in a game room when the user moves", and the answer was given as being an aliasing problem.
I did not get the relation between flickering and aliasing. Can someone clarify it for me?
A:
Flickering can be a form of temporal aliasing. It's a similar phenomenon to spatial aliasing such as jaggies, but it occurs in time instead of space.
For instance, a common cause of image flickering in graphics is when the camera or geometry is in motion, and geometric features fluctuate in pixel size as they move. For example, imagine a railing with thin vertical bars. Depending where a bar appears relative to the pixel grid, it might get rendered as 2 pixels wide, only 1 pixel wide, or it might not appear at all. And in motion, it may rapidly fluctuate between these states, creating a visually objectionable flicker.
Another common cause of image flickering is specular surfaces with a bumpy normal map and a high specular power. The specular highlights can flicker in motion, due to their alignment with pixels changing from frame to frame.
Antialiasing strategies that address only spatial aliasing will often produce an image that looks good in a static screenshot, but turns into a flickery mess as soon as things start moving. This is one reason why temporal antialiasing has become popular in games in recent years.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Ensuring Updated CSS/JavaScript on Client Side
I'm trying to ensure that visitors of my ASP.NET MVC website always have the most-current CSS and Javascript (and not some older cached version).
I tried to realize this by generating a seed value when the application domain starts, and automatically append it to the CSS and Javascript URLs (so now instead of /Content/All.js the link is /Content/All.js?549238 etc.).
Unfortunately I just found out by debugging via Firebug that this now causes a full download request every time (the new "seeded" response is no longer cached at all, whereas I only wanted the first check to download the updated version, and then cache again / only check whether anything changed).
How can I achieve my goal, is there a better way of doing this? I need the client to always request the newest version, but then cache if no change happened.
Edit: This appears to be related to the fact that my page is served over SSL. I asked a follow up question here regarding enabling clientside caching with SSL.
A:
I think the seed value you are generating is a random number. Replace this with a version number so that the file will be re-downloaded only when the version number of your application changes.
Like
/Content/All.js?appVer1.5
and when you modify the application you can change that to something like
/Content/All.js?appVer1.6
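A minimal sketch of that idea follows (Python, with a hypothetical helper name): build the asset URL from the application version so the query string only changes on a release, letting the browser cache the file in between:

```python
APP_VERSION = "1.6"  # bump this once per release

def versioned_url(path, version=None):
    """Append the app version as a cache-busting query string."""
    return f"{path}?appVer{version or APP_VERSION}"

print(versioned_url("/Content/All.js"))  # /Content/All.js?appVer1.6
print(versioned_url("/Content/All.js", "1.5"))
```

Because the URL is stable between releases, normal HTTP caching applies; after a release the URL changes and clients fetch the new file exactly once.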
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MC34063ADR overheats and burns
I am trying to get -13.5V from an MC34063.
On the first design, I used the 8-DIP package MC34063AP and had no problem getting -13.5V.
On the second design I changed the package to 8-SOIC and used an S1B-13F (100V 1A diode), which are replacements with the same characteristics.
However, after 2 seconds the IC starts to consume more and more current, gets hot, and burns.
When I increased the R22 = 1R resistor to 2.5R, it doesn't overheat but I can't get -13.5V anymore. It stays around -9V.
Any comment or help would be really useful,
Thank you for your time.
Design 1
Design 2
A:
The S1B-13F and 1N400x are inappropriate for SMPS designs: the recovery time is too long even for a pig of a chip like the 34063, and it is not even specified, so the datasheet specs can be met yet the diodes can behave very differently.
Use a Schottky diode like the datasheet shows. An ultra fast diode such as UF4004 can also be used.
Inductor saturation is another thing to look for if that part changed.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Users who are only interested in final answers
I feel annoyed when someone doesn't reply to comments asking for clarification of the question, the context of the problem, or ways to improve the question's quality. It seems they don't have the time for, or simply ignore, people who are offering some free help. This can lead to the question being misunderstood or even closed in the end. This category also includes users who refuse to put in some time learning LaTeX to improve the quality of their questions. While some of them improve over time, others get away with it because of some generous free editing.
My question
Is this behavior encouraged by the actions of some helpers who always answer or edit low-quality questions?
Is there a record of such users, so that we can vote to close repetitive low-quality questions by the same user?
A:
The best way to respond to low quality content is to downvote and leave a comment explaining why. You can't do anything to make someone who fundamentally doesn't care start caring, and trying to seems like an exercise in futility.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Ring Sandwiched between PIDs
If I have three commutative rings $R \subset S \subset T$, such that $R$ and $T$ are principal ideal domains, will this imply that $S$ itself is a principal ideal domain?
A:
For the original question, one counterexample would be $\Bbb Z \subseteq \Bbb Z[x]\subseteq \Bbb Q (x)$.
The ring in the middle is Noetherian but isn't Bezout, and thus certainly isn't a principal ideal ring.
If, as you mentioned in the comment, we add that $Frac(R)=Frac(T)$, then the picture is different. By gathering up all the denominators of fractions of $R$ lying in $S$, you have a multiplicative set $M$ such that $M^{-1}R=S$. It's elementary to show that a localization of a principal ring is principal, and the localization of a Bezout ring is Bezout, so $S$ will have either of these properties if $R$ does. In this situation, $T$ doesn't play any role.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Auxiliary arrows in forms like in Eclipse
When you open a dialog in Eclipse that contains a form layout, you can see that when you hover your mouse over some item, its label, or the space between them, an auxiliary arrow is shown. Screenshot:
My question is: is there any (simple) way to achieve the same in Java with SWT and JFace?
Regards
A:
No there is no standard way to achieve this through SWT or JFace, as it is not a built in feature. It is not that difficult to add on your own though.
Have a look at this ConfigurationBlock.java file from the PDE source. This class is the base for all option blocks in PDE preference pages. This exact same code snippet is also used by JDT but it has a different copy in OptionConfigurationBlock.java.
The method that gets called for each combo control is ConfigurationBlock#addHighlight(..), which is responsible for adding the highlight when the control is in focus or when mouse is hovering over its label.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why did the Community User accept this answer?
I'm not sure if this question should be posted here or on Meta SE. Perhaps the mods can guide me best on this.
I was going through the activity log of the Community User and noticed that it has accepted many answers. I checked on Meta SE for any info on this behaviour, and this post explains that when the OP of either the question or the accepted answer has their account deleted, then the ownership of the accept transfers to the Community User.
However, in this question both OPs are active, yet the accept belongs to the Community User (check the activity on Dec 14, 2016). Why is this? Does this have anything to do with the fact that the question was migrated from Music Practice and Theory SE?
A:
Most probably, yes, it's because of the migration.
This answer on Meta SE explains that
The accepted answer will now persist when it is migrated.
Now, that question was originally posted in Dec '16, before being migrated. However, the OP didn't have an account on this site until Sep '17 (hover over the timestamp next to "Member for"). Thus, when the question was migrated, it (and the acceptance) belonged to the Community user.
When the user finally created an account, the question then belonged to them, but not the acceptance (possibly until the OP unaccepts and reaccepts the answer).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to set SEO attributes in aurelia
Here is the main problem I'm having: I want to add social share buttons to an Aurelia application page.
In general, I have to set three meta tags objects:
head title
[property="og:image"]
[property="og:description"]
What is the best way to handle this in Aurelia? Is it possible to do this using the Route object?
A:
I got around this by just writing a service that modifies the head content directly using the DOM API. There is no way to nicely bind to the head content as a view.
Here is a gist of my implementation
https://gist.github.com/dpix/6f508727b9d03d692d0659eb1776ad85
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How main thread runs before this thread?
I have the following code:
public class Derived implements Runnable {
private int num;
public synchronized void setA(int num) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
}
System.out.println("Setting value " + Thread.currentThread().getName());
this.num = num;
}
@Override
public void run()
{
System.out.println("In run: " + Thread.currentThread().getName());
setA(20);
}
public static void main(String[] args) {
Derived obj = new Derived();
Thread t1 = new Thread(obj);
t1.start();
obj.setA(32);
}
}
This prints:
In run: Thread-0
Setting value main
Setting value Thread-0
My question is: if I started the thread 't1' first, and it entered the run method as the output shows, how come the main thread was able to call setA before 't1' if the object was locked by t1? (Or was main able to get the lock on 'obj' even before t1 tried to?) Is it just the scheduler, or am I thinking about it wrong?
A:
how come main thread was able to call setA before 't1' if the object was locked by t1?
The whole point of using multiple threads is to allow code in each thread to run independently. Thread.start() (like any method) is not instantaneous. It takes time, and while your thread is starting, you can run code in your current thread; in fact it can run to completion before your background thread even starts.
Is it just the scheduler
That is part of it. But it's also the fact that starting a Thread isn't free and takes a non-trivial amount of time.
public class Test {
public static void main(String[] args) {
long start = System.nanoTime();
new Thread(() -> System.out.println("Thread took " +
(System.nanoTime() - start) / 1e6 + " ms to start"))
.start();
}
}
I have a fast machine, but when I run this program, starting the first thread takes a while.
Thread took 44.695419 ms to start
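The same measurement can be sketched in Python; the absolute timing will differ by machine and this is illustrative rather than a rigorous benchmark, but it shows the same non-zero startup cost:

```python
import threading
import time

start = time.perf_counter()
started = []

def report():
    # Record how long it took from start() until this code actually ran.
    started.append(time.perf_counter() - start)

t = threading.Thread(target=report)
t.start()
t.join()
print(f"Thread took {started[0] * 1e3:.3f} ms to start")
```

While that startup cost is being paid, the main thread keeps executing, which is exactly why `setA` can run on the main thread first.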
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Facebook Application Tab not showing content
I've started off developing a Facebook application. One of the requirements is to have a 'profile tab'. Our web application is hosted on my local machine (for now); my /etc/hosts file has webapp.xyz going to 127.0.0.1 (i.e. I can use webapp.xyz as a domain name for my web app and everything works locally).
I have set up webapp.xyz/facebook/ as the Canvas URL and tab as the Tab URL. My webapp handles this correctly and print a simple hello world output. If I go to http://apps.facebook.com/MYAPPNAME/tab, I see my hello world output. I can also see the access logs on my local machine.
I have added the application to a page, and added the profile tab. I can see the tab there, but when I click on it, there is nothing, just an empty page. I see the 'throbber' flashing for a second then an empty page. I see no access logs on my local machine. Firebug tells me there is no iframe in the middle (the big empty white space). What's going on?
A:
I found the solution to this. You can't have 'local' applications that work as Facebook tabs. Facebook's servers POST to your URL and parse and clean the data there, so your web application needs to be globally reachable.
I solved this by setting up SSH tunnels to a host I controlled and changing the Facebook application to that URL & port.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
using the variable passed in constructor in prototype function in javascript with jquery
I have several divs with the same "album" class, so I wanted to create a class using a constructor and prototype. Here's what I did:
function Album(album){
this.album = album;
console.log(this.album === album)
console.log($(this.album) === $(album))
}
Album.prototype = {
init: function(){
},
loadImages: function(){
}
};
$('.album').each(function(){
var album = new Album(this);
});
I need to access the album variable that I passed in to the class Album in the init function, so I have to store it in this.album. However I don't understand that why
console.log(this.album === album) is true but
console.log($(this.album) === $(album)) is false
I need to use jquery in prototype, is there other way to do so? Thanks.
A:
$('body') === $('body') // false
Basically, you are doing this right. jQuery is screwing with you.
With objects the === operator is only true if it is the same object. In this case, jQuery makes a brand new object each time it wraps a DOM element, even if it's wrapping the same element it did a second ago.
Here's an example of why this is in plain JS, without jQuery:
var domEl = document.getElementById('whatev');
var a = { el: domEl };
var b = { el: domEl };
domEl === domEl // true
a === b // false
Here there are two objects; both hold identical data and wrap the same DOM element. But they are different objects and therefore not === to each other.
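The same identity rule holds outside JavaScript too. A rough Python sketch (the `Wrapper` class is an invented stand-in for a jQuery-style wrapper): two wrappers around the very same underlying object still compare unequal, because the default comparison is identity:

```python
class Wrapper:
    """Minimal stand-in for a jQuery-style wrapper: just holds a reference."""
    def __init__(self, el):
        self.el = el

dom_el = {"id": "whatev"}  # stand-in for a DOM element
a = Wrapper(dom_el)
b = Wrapper(dom_el)

print(a.el is b.el)  # True: both wrap the very same object
print(a is b)        # False: the wrappers themselves are distinct
print(a == b)        # False too: default == falls back to identity
```

If you want "wraps the same element" semantics, compare the wrapped element itself (in jQuery, compare the raw DOM nodes, e.g. via indexing into the jQuery object) rather than the wrappers.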
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SDK for Service Management API
Just like we have an SDK for Azure Storage [Tables, Blobs, Queues] along with the REST API:
do we have an SDK or library for handling the Service Management APIs in C#?
A:
Naveen,
Take a look at "Microsoft.WindowsAzure.ServiceManagementClient.dll". You can find this along with the Azure SDK (in the C:\Program Files\Windows Azure SDK\v1.4\bin directory). I think this is what you're looking for.
Hope this helps.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Solr 6.1 warning: Couldn't add files to classpath
I freshly configured a Solr 6.1 server (actually a migration from version 4) and get a lot of warnings like:
WARN (coreLoadExecutor-6-thread-2) [ ] o.a.s.c.SolrConfig Couldn't add files from C:\dev\solr-6.1.0\server\solr\configsets\foobar\..\..\..\dist filtered by solr-cell-\d.*\.jar to classpath: C:\dev\solr-6.1.0\server\solr\configsets\foobar\..\..\..\dist
or
WARN (coreLoadExecutor-6-thread-2) [ ] o.a.s.c.SolrConfig Couldn't add files from C:\dev\solr-6.1.0\server\solr\configsets\foobar\..\..\..\contrib\extraction\lib filtered by .*\.jar to classpath: C:\dev\solr-6.1.0\server\solr\configsets\foobar\..\..\..\contrib\extraction\lib
But: There are no *.jar files anywhere below C:\dev\solr-6.1.0\server\solr\configsets\foobar (in fact there is only a subdirectory conf with some .xml files) and the server is running fine so far, so I'm wondering what this warning is going to tell me. Can I just ignore it? Am I missing anything important?
A:
Thanks to Oyeme's comment I discovered that I had included some example lib directives in the solrconfig.xml files:
<lib dir="../../../contrib/extraction/lib" regex=".*\.jar"/>
<lib dir="../../../dist/" regex="solr-cell-\d.*\.jar"/>
<lib dir="../../../contrib/clustering/lib/" regex=".*\.jar"/>
<lib dir="../../../dist/" regex="solr-clustering-\d.*\.jar"/>
<lib dir="../../../contrib/langid/lib/" regex=".*\.jar"/>
<lib dir="../../../dist/" regex="solr-langid-\d.*\.jar"/>
<lib dir="../../../contrib/velocity/lib" regex=".*\.jar"/>
<lib dir="../../../dist/" regex="solr-velocity-\d.*\.jar"/>
Deleting these lines was the solution.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why is JIT part of Execution engine of JVM?
Following is the flow of a java program execution:
Bytecode (Javac) -> ClassLoader -> Execution Engine (JIT).
When the source code is compiled and the classloader feeds the bytecode to the execution engine to interpret and run the program, why is the Just-In-Time (JIT) compiler present in the execution engine when there is nothing left to compile?
A:
The bytecode contains abstract instructions for the Java virtual machine. The instructions are not directly executable by conventional machines. The JIT step compiles this abstract bytecode into concrete machine code that can be executed by the machine's CPU.
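The "abstract instructions" idea is easy to see in Python, which also compiles source to bytecode for a virtual machine. The standard `dis` module prints those opcodes; note CPython interprets them rather than JIT-compiling them, so this only illustrates the bytecode half of the analogy:

```python
import dis

def add(a, b):
    return a + b

# Show the abstract VM instructions for add(): opcodes, not machine code.
dis.dis(add)
instructions = [i.opname for i in dis.get_instructions(add)]
print(instructions)
```

A JIT's job is to translate instruction sequences like these into native machine code for the host CPU, which is why it sits inside the execution engine rather than in the source compiler.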
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Getting Stack Overflow Error when writing log by using log4j
I'm using log4j for writing log when running test automation.
Here is my method for writing logs:
public class Log {
private static Logger Log = Logger.getLogger(Log.class.getName());
public static void info(String message)
{
Log.info(message);
}
}
But whenever I use it, a stack overflow error is thrown, like below:
Calling method:
Log.info("Click action is performed on My Account link");
Error:
java.lang.StackOverflowError at
helpers.Log.info(Log.java:21) at
helpers.Log.info(Log.java:21) at
helpers.Log.info(Log.java:21) at
helpers.Log.info(Log.java:21)
Can anyone please help?
A:
I don't see the configuration for the log4j properties file. Try this:
import org.apache.log4j.Logger;
import org.apache.log4j.xml.DOMConfigurator;
public class Logs {
public static Logger Application_Log = Logger.getLogger(Logs.class.getName());
public Logs(){
DOMConfigurator.configure("log4j-config.xml");
}
public void info(String message){
Application_Log.info(message);
    }
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Beginner ILNumerics Plotting Sphere example
I'm failing to follow a beginner example: create an interactive sphere with ILNumerics. I added the NuGet package as a reference and dragged an ILPanel from the toolbar onto my form.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using ILNumerics;
using ILNumerics.Drawing;
using ILNumerics.Drawing.Plotting;
namespace WindowsFormsApplication1 {
public partial class Form1 : Form {
public Form1() {
InitializeComponent();
}
private void ilPanel1_Load_1(object sender, EventArgs e) {
var scene = new ILScene();
scene.Add(new ILSphere());
ilPanel1.Scene = scene;
}
}
}
It shows a sphere, but the sphere is always the full size of the window. Mouse rotation does not work either. What am I missing?
A:
Instead of
scene.Add(new ILSphere());
you can add the sphere below the standard Camera in the scene:
scene.Camera.Add(new ILSphere());
This will give you the desired result. The camera creates its own coordinate system, positions objects within its subtree, and provides all interactive options for them (rotation, zoom, pan, etc.).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Could not create target cluster during upgrade 9.5 to 9.6
I am on Ubuntu 16.04 and installed Postgres 9.5 on it.
Then I wanted to upgrade Postgres 9.5 to 9.6, and I followed the PostgreSQL official download page to install 9.6.
Then I ran apt install postgresql-9.6. After the install, I ran the following commands to upgrade:
# drop the freshly created 9.6 cluster
$ sudo pg_dropcluster 9.6 main --stop
# upgrade 9.5 to latest version
$ sudo pg_upgradecluster 9.5 main
sudo pg_upgradecluster 9.5 main
Stopping old cluster...
Notice: extra pg_ctl/postgres options given, bypassing systemctl for stop operation
Disabling connections to the old cluster during upgrade...
Restarting old cluster with restricted connections...
Redirecting start request to systemctl
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en",
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
Error: The locale requested by the environment is invalid.
Error: Could not create target cluster
Then I tried using the psql command to connect to the old cluster, but got errors:
$ psql -U postgres -h localhost
psql: FATAL: no pg_hba.conf entry for host "::1", user "postgres", database "postgres", SSL on
FATAL: no pg_hba.conf entry for host "::1", user "postgres", database "postgres", SSL off
It seems the pg_hba.conf is wrong. Then I checked pg_hba.conf at /etc/postgresql/9.5/main/pg_hba.conf, but it seems fine:
local all postgres peer
A:
The only solution I could find to fix this is to run the following command:
export LC_CTYPE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
Before proceeding with sudo pg_upgradecluster 9.5 main
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Simple Rspec test for positive number just won't pass
I'm new to Rails and trying to write my first app. I have a table with columns order_size:integer and price:decimal(8,5). The price column holds currency prices, so it needs to be really precise, in case you're wondering. I'm trying to write tests to make sure the price and order_size are positive numbers, but no matter what I do they won't pass.
Here are the Rspec tests
it "should require a positive order size" do
@attr[:order_size] = -23
@user.orders.create!(@attr).should_not be_valid
end
it "should require a positive price" do
@attr[:price] = -1.2908
@user.orders.create!(@attr).should_not be_valid
end
Here are the Order class validations
validates_presence_of :user_id
validates_numericality_of :order_size, :greater_than => 0,
:only_integer => true
validates_numericality_of :price, :greater_than => 0
Here's the test results
Failures:
1) Order validations should require a positive order size
Failure/Error: @user.orders.create!(@attr).should_not be_valid
ActiveRecord::RecordInvalid:
Validation failed: Order size must be greater than 0
# ./spec/models/order_spec.rb:39:in `block (3 levels) in <top (required)>'
2) Order validations should require a positive price
Failure/Error: @user.orders.create!(@attr).should_not be_valid
ActiveRecord::RecordInvalid:
Validation failed: Price must be greater than 0
# ./spec/models/order_spec.rb:44:in `block (3 levels) in <top (required)>'
What exactly is going on here? I even tried running the test asserting they should be_valid, but they still fail. Any help would be appreciated.
A:
Looks to me like the creation of the records is failing due to your validations (create! raises on an invalid record), and thus never getting to your assertion! As apneadiving points out, you want to do:
order = Order.new(:order_size => -23)
order.should_not be_valid
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Sublime - Python3 not printing non utf-8 characters (Spanish)
I think it's Sublime-related rather than Python. Running this snippet:
x = "Buenos días"
print(x)
will print in a terminal/command prompt but not in Sublime's build results. I already did a little bit of research but couldn't find a working solution. Changing the accented í to a plain i makes it work as expected.
A:
Generally speaking problems like this are caused by some interplay between how Python determines behind the scenes the encoding that it should use when it's generating output and how Sublime is executing the Python interpreter.
In particular, while it may determine the correct encoding when run from a terminal, the Python interpreter may get confused and pick the wrong one when Sublime invokes it.
The PYTHONIOENCODING environment variable can be used to tell the interpreter to use a specific encoding in favor of whatever it might have otherwise automatically selected.
The sublime-build file lets you specify custom environment variables to apply during a build using the env key, so you can do something like the following:
{
"shell_cmd": "python -u \"$file\"",
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python",
"env": {"PYTHONIOENCODING": "utf-8"},
"variants":
[
{
"name": "Syntax Check",
"shell_cmd": "python -m py_compile \"${file}\"",
}
]
}
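You can verify the effect of that environment variable from Python itself. A quick check (run as a subprocess so the variable is already set before the interpreter starts, which is how Sublime's build system would set it):

```python
import os
import subprocess
import sys

# Child prints its stdout encoding, then an accented string.
child = "import sys; print(sys.stdout.encoding); print('Buenos d\\u00edas')"
env = dict(os.environ, PYTHONIOENCODING="utf-8")

out = subprocess.run([sys.executable, "-c", child],
                     env=env, capture_output=True)
decoded = out.stdout.decode("utf-8")
print(decoded)
```

With `PYTHONIOENCODING=utf-8` set, the child reports `utf-8` and prints the accented text without errors, regardless of what encoding it would otherwise have guessed.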
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to apply an 'indicator' function to infinite product?
Suppose there exists an 'indicator' function whose argument is the positive integers $n$, and which yields $1$ or $0$ depending on whether $n$ is a member of some set $S$:
$$f(n):=
\begin{cases}
1, & n\in S \\
0, & n\notin S
\end{cases}$$
I want to find the infinite product
$$\prod_{n\in S}n$$
but I want to express it in terms of all $n$ rather than pre-selecting $n\in S$, so that for some function $g$
$$\prod_{n\in S}n=\prod_{n=1}^\infty g(n)$$
Is it possible to find $g$ as a function of $f$?
If the series were a summation rather than a product it would of course be easy:
$$\sum_{n\in S}n=\sum_{n=1}^\infty f(n) n$$
But the product case appears to be much harder, and I'm wondering if I'm missing something blindingly obvious.
A:
Yes: take $g(n) = n^{f(n)}$, so that $$\prod _{n \in \mathbb N}n^{f(n)} = \prod_{n \in S} n,$$ since $n^{f(n)} = n$ when $n \in S$ and $n^{f(n)} = n^0 = 1$ otherwise.
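A quick numerical sanity check of that identity (Python; a finite range stands in for the infinite product, and S is a toy membership set):

```python
from math import prod

S = {2, 3, 5}                      # a toy membership set
f = lambda n: 1 if n in S else 0   # the indicator function

lhs = prod(n for n in range(1, 11) if n in S)  # product over n in S
rhs = prod(n ** f(n) for n in range(1, 11))    # product of g(n) = n^f(n)
print(lhs, rhs)  # 30 30
```

Every excluded term contributes a factor of n^0 = 1, so the two products agree.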
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to compute the gradient for logistic regression in Matlab?
I'm trying to minimize the function f. At first I was using fminsearch, but it takes a long time; that's why I now use fminunc. But there is one problem: I need the function's gradient for acceleration.
f = @(w) sum(log(1 + exp(-t .* (phis * w'))))/size(phis, 1) + coef * w*w';
options = optimset('Display', 'notify', 'MaxFunEvals', 2e+6, 'MaxIter', 2e+6);
w = fminunc(f, ones(1, size(phis, 2)), options);
phis size is NxN+1
t size is Nx1
coef is const
Can you please help me construct the gradient for the function f? I always get this warning:
Warning: Gradient must be provided for trust-region algorithm;
using line-search algorithm instead.
A:
The gradient should be (by chain rule)
%the gradient
%helper function
expt = @(w)(exp(-t .* (phis * w')));
%precompute -t * phis
tphis = -diag(t) * phis; %or bsxfun(@times,t,phis);
%the gradient
gradf = @(w)((sum(bsxfun(@times,expt(w) ./ (1 + expt(w)), tphis),1)'/size(phis,1)) + 2*coef * w');
It would probably be faster not to compute expt(w) twice per evaluation, so you can rewrite this in terms of another anonymous function which takes expt(w) as input.
Also, I may have goofed up the dimensions on the sum; it seems like you are using w as a row vector, which is somewhat nonstandard.
Edit: as @whuber noted, this kind of thing is easy to screw up. I hadn't actually tried the code I posted previously; the above should be correct now. To test it, I estimated the gradient numerically and compared it to the 'exact' value, as below:
%set up the problem
N = 9;
phis = rand(N,N+1);
t = rand(N,1);
coef = rand(1);
%the objective
f = @(w)((sum(log(1 + exp(-t .* (phis * w'))),1) / size(phis, 1)) + coef * w*w');
%helper function
expt = @(w)(exp(-t .* (phis * w')));
%precompute -t * phis
tphis = -diag(t) * phis; %or bsxfun(@times,t,phis);
%the gradient
gradf = @(w)((sum(bsxfun(@times,expt(w) ./ (1 + expt(w)), tphis),1)'/size(phis,1)) + 2*coef * w');
%test the code now:
%compute the approximate gradient numerically
w0 = randn(1,N+1);
fw = f(w0);
%%the numerical:
delta = 1e-6;
eyeN = eye(N+1);
gfw = nan(size(w0));
for iii=1:numel(w0)
gfw(iii) = (f(w0 + delta * eyeN(iii,:)) - fw) ./ delta;
end
%the 'exact':
truegfw = gradf(w0);
%report
fprintf('max difference between exact and numerical is %g\n',max(abs(truegfw' - gfw)));
When I run this (sorry, I should have set the rand seed), I get:
max difference between exact and numerical is 4.80006e-07
YMMV, depending on the rand seed and the value of delta used.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Pass-by-reference in C - downsides?
Most high-level languages (Python, Ruby, even Java) use pass-by-reference. Obviously, we don't have references in C, but we can imitate them using pointers. There are a few benefits to doing this. For example:
int findChar(char ch, char* in)
{
int i = 0;
for(i = 0; in[i] != '\0'; i++)
if(in[i] == ch)
return i;
return -1;
}
This is a common C paradigm: catch an abnormal or erroneous situation by returning some error value (in this case, return -1 if the character is not in the string).
The problem with this is: what if you want to support strings more than 2^31 - 1 characters long? The obvious solution is to return an unsigned int but that won't work with this error value.
The solution is something like this:
unsigned int* findChar(char ch, char* in)
{
unsigned int i = 0;
for(i = 0; in[i] != '\0'; i++)
if(in[i] == ch)
{
unsigned int* index = (unsigned int*) malloc(sizeof(unsigned int));
*index = i;
return index;
}
return NULL;
}
There are some obvious optimizations which I didn't make for simplicity's sake, but you get the idea; return NULL as your error value.
If you do this with all your functions, you should also pass your arguments in as pointers, so that you can pass the results of one function to the arguments of another.
Are there any downsides to this approach (besides memory usage) that I'm missing?
EDIT: I'd like to add (if it isn't completely obvious by my question) that I've got some experience in C++, but I'm pretty much a complete beginner at C.
A:
It is a bad idea because the caller is responsible for freeing the index; otherwise you are leaking memory. Alternatively you can use a static int and return its address every time: there will be no leaks, but the function becomes non-reentrant, which is risky (but acceptable if you bear it in mind).
Much better would be to return a pointer to the char the function finds, or NULL if it is not present. That's the way strchr() works, BTW.
Edited to reflect changes in original post.
A:
Without the malloc, the position can be still a stack variable and you can use it in an if statement:
int findChar(char ch, char* in, int* pos)
{
int i = 0;
for(i = 0; in[i] != '\0'; i++)
{
if(in[i] == ch)
{
*pos = i;
return 1;
}
}
return 0;
}
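For comparison, languages with richer return types sidestep the sentinel-vs-out-parameter dilemma entirely. A hedged Python sketch of the same search, where None plays the role of the "not found" signal and there is no index range limit:

```python
def find_char(ch, s):
    """Return the index of ch in s, or None when absent."""
    for i, c in enumerate(s):
        if c == ch:
            return i
    return None

pos = find_char("l", "hello")
if pos is not None:
    print(f"found at {pos}")  # found at 2
else:
    print("not found")
```

The out-parameter version above is the closest C idiom to this: the return value carries the success flag, and the index travels separately, so no index value has to be sacrificed as a sentinel.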
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Log onto a domain elsewhere
I am wondering if a user can log onto my domain from outside the office, or are you only able to do this locally (in the office)?
Can you be outside the town, or even the country, and log onto a company's domain? (Windows Server 2008)
A:
Sure, as long as you can access the company network, which is normally what VPNs are for.
Once a VPN is defined on the workstation, the login screen even offers the option to FIRST log into the VPN, THEN process the user login.
Standard functionality in a LOT of companies.
"Or even out of country" is idiotic, btw. This is the internet, and the internet does not care whether you are across the street or in another country, as long as the internet connection is available. The only sensible distinction is "inside the company network" and "outside the company network, requiring something like a VPN".
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Tips to beat Defense Grid Awakening level "Waste Disposal"?
I'm stuck on the level "Waste disposal". I'd like some strategies to beat this level.
The previous level (Turnaround) required a lot of build-and-then-tear-down to turn the attackers back and keep them cycling around until you kill them. This "Waste Disposal" level seems like it would permit such a tactic, but I have been unable to beat it that way.
I have gotten as far as the 25th and final wave, but with only one Core left. Pretty pathetic.
A:
This video provides an excellent way of getting a gold badge on this level. You have the right idea: you want to make them circle around, and one of the best areas to do so is right at the beginning.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Profiling web apps on iPhone
Is there a tool to profile web apps on the iPhone?
I'm looking for something like Google's Speed Tracer or the Network tab on Chrome's developer tools. I'd like to see:
which requests are made to the server
what HTTP responses are given
which items are pulled from cache, and
a timeline of all the requests.
Ideally, this would profile web pages as well as web services requests made from within native apps.
Is there any tool to do this? Does anyone have a good way to get at this information?
A:
I think weinre might be exactly what you want. It is a remote FireBug clone; you put one line into your HTML, run a server on your desktop, then work in a FireBug-like tool on your desktop; you can even run stuff from a console, to be executed on your web page that is being displayed on your iPhone.
Unfortunately, this is only for web pages (or web apps); I don't know how you could do it for the native apps.
EDIT: To see all traffic, you might consider a debugging HTTP proxy, such as Fiddler. Set up your iPhone at Settings -> WiFi -> [your access point] -> DHCP -> HTTP Proxy -> Manual, then sit back and let Fiddler count things for you.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Spring AOP @DeclareParents conditionally
Is there a way to create an aspect introduction conditionally? What I want is to extend a class using Spring AOP conditionally:
@Aspect
public class Test1Aspect {
@DeclareParents(value="com.test.testClass",defaultImpl=Test1Impl.class)
public ITest iTest;
}
@Aspect
public class Test2Aspect {
@DeclareParents(value="com.test.testClass",defaultImpl=Test2Impl.class)
public ITest iTest;
}
so testClass extends Test1Impl or Test2Impl depending on a properties file where I set that option. Is that possible? How can I exclude aspects from being applied? I tried to use aspectj-maven-plugin, but it doesn't exclude my aspects:
pom.xml
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.5</version>
<configuration>
<sources>
<source>
<basedir>src/main/java</basedir>
<excludes>
<exclude>**/*.java</exclude>
</excludes>
</source>
</sources>
</configuration>
<executions>
<execution>
<goals>
<!-- use this goal to weave all your main classes -->
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
EDIT
I removed the aspectj-maven-plugin and am now using only Spring AOP. The following is the configuration and the test aspect:
Aplication.java
@Configuration
@ComponentScan(basePackages= {
"demo"
//"demo.aspect"
})
@EnableAutoConfiguration(exclude=AopAutoConfiguration.class)
//@EnableLoadTimeWeaving(aspectjWeaving=AspectJWeaving.ENABLED)
@EnableAspectJAutoProxy
public class Application {
public static final Logger LOGGER = LogManager.getLogger(Application.class);
@Bean
public testService testService() {
return new testService();
}
@Bean
@Conditional(TestCondition.class) //CLASS THAT ONLY RETURNS TRUE OR FALSE
public TestAspect testAspect() {
LOGGER.info("TEST ASPECT BEAN");
TestAspect aspect = Aspects.aspectOf(TestAspect.class);
return aspect;
}
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
TestAspect
//@Component
//@Profile("asdasd")
//@Configurable
//@Configuration
@Aspect
public class TestAspect{
public static final Logger LOGGER = LogManager.getLogger(TestAspect.class);
@Autowired
private testService testService;
public TestAspect() {
LOGGER.info("TEST ASPECT INITIALIZED");
}
@Around("execution(* demo.testControllerEX.test(*))")
public String prevent(ProceedingJoinPoint point) throws Throwable{
LOGGER.info("ASPECT AROUND " + testService); // ALWAYS CALLED NO MATTER IF THE CONDITION IS FALSE, THE ONLY DIFFERENCE IS THAT testService IS NULL WHEN THE CONDITION IS FALSE.
String result = (String)point.proceed();
return result;
}
/*@DeclareParents(value="(demo.testControllerEX)",defaultImpl=TestControllersImpl.class)
private ITestControllerEX itestControllerEX;*/
}
A:
Finally I found the solution. The main problem was that in my Eclipse project I had enabled Spring Aspects tooling in the options menu of Spring Tools (right-click on the project), and that was somehow compiling my aspects with traditional AspectJ before Spring AOP ran. That explains why the aspect was always applied, no matter which @Conditional I used on it.
So the solution is: do not enable Spring AspectJ tooling, or if it is already enabled, right-click the project and choose AspectJ Tools -> Remove AspectJ Capability.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Create a Tiff Bitmap file from a DatagridView
I want to create a Tiff file from a DataGridView. I was able to render the DataGridView to a Tiff file; however, I just want the rows and columns and nothing else.
Is this possible without using a 3rd-party tool?
Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
DataGridView1.Rows.Add(New String() {"Value1", "Value2", "Value3"})
Dim height As Integer = DataGridView1.Height
DataGridView1.Height = DataGridView1.RowCount * DataGridView1.RowTemplate.Height
Dim bitmap As Bitmap = New Bitmap(Me.DataGridView1.Width - 1, Me.DataGridView1.Height - 1)
DataGridView1.DrawToBitmap(bitmap, New Rectangle(0, 0, Me.DataGridView1.Width - 1, Me.DataGridView1.Height - 1))
'Save the Bitmap to folder.
bitmap.Save("C:\Development\DataGridView.Tiff")
End Sub
I don't want the highlighted part
A:
There are a few things to consider:
The DataGridView Rows and Columns must be visible when the control is drawn to a Bitmap,
ScrollBars may be present,
The height of the Rows may be different, so we have to sum the height of all rows,
The same for the Columns, since each Column has its own width,
CellFormatting may be in place, so we need to refresh the DataGridView before drawing it: rows that are not visible may not have been formatted yet,
There's a limit (32,767) in the Bitmap dimensions.
Call this method as follows, specifying whether you want to include the Row or Column Headers or exclude both, passing True/False as the ColumnHeaders and RowHeaders arguments.
The dgv argument is of course the DataGridView control instance that will be drawn:
' Prints the DataGridView including the Columns' Headers only
Dim dgvBitmap = DataGridViewToBitmap(DataGridView1, True, False)
Dim imagePath = Path.Combine(AppContext.BaseDirectory, $"{NameOf(DataGridView1)}.tiff")
dgvBitmap.Save(imagePath, ImageFormat.Tiff)
' Dispose of the Bitmap or set it as the PictureBox.Image, dispose of it later.
dgvBitmap.Dispose()
Private Function DataGridViewToBitmap(dgv As DataGridView, ColumnHeaders As Boolean, RowHeaders As Boolean) As Bitmap
dgv.ClearSelection()
Dim originalSize = dgv.Size
dgv.Height = dgv.Rows.OfType(Of DataGridViewRow).Sum(Function(r) r.Height) + dgv.ColumnHeadersHeight
dgv.Width = dgv.Columns.OfType(Of DataGridViewColumn).Sum(Function(c) c.Width) + dgv.RowHeadersWidth
dgv.Refresh()
Dim dgvPosition = New Point(If(RowHeaders, 0, dgv.RowHeadersWidth), If(ColumnHeaders, 0, dgv.ColumnHeadersHeight))
Dim dgvSize = New Size(dgv.Width, dgv.Height)
If dgvSize.Height > 32760 OrElse dgvSize.Width > 32760 Then Return Nothing
Dim rect As Rectangle = New Rectangle(Point.Empty, dgvSize)
Using bmp As Bitmap = New Bitmap(dgvSize.Width, dgvSize.Height)
dgv.DrawToBitmap(bmp, rect)
If (dgv.Width > originalSize.Width) AndAlso dgv.ScrollBars.HasFlag(ScrollBars.Vertical) Then
dgvSize.Width -= SystemInformation.VerticalScrollBarWidth
End If
If (dgv.Height > originalSize.Height) AndAlso dgv.ScrollBars.HasFlag(ScrollBars.Horizontal) Then
dgvSize.Height -= SystemInformation.HorizontalScrollBarHeight
End If
dgvSize = New Size(dgvSize.Width - dgvPosition.X, dgvSize.Height - dgvPosition.Y)
dgv.Size = originalSize
Return bmp.Clone(New Rectangle(dgvPosition, dgvSize), PixelFormat.Format32bppArgb)
End Using
End Function
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Confusion in Gibbs sampling
I am self-studying Gibbs sampling from a book. The book introduces the Metropolis-Hastings algorithm to generate representative values from a posterior distribution. So we know $p(D | \theta) p(\theta)$, but not in normalized form, and therefore we generate those values.
Now when it introduces Gibbs sampling it quotes,
Gibbs sampling is especially useful when the
complete joint posterior, $p(θ_i |D)$, cannot be analytically determined
and cannot be directly sampled, but all the conditional distributions,
$ p(θ_i |\{θ_{j \ne i}\}, D)$, can be determined and directly sampled.
Suppose we have two parameters $\theta_1$ and $\theta_2$; then for determining $p(θ_1|θ_2, D)$ we need $\frac{p(θ_1, θ_2|D)}{p(θ_2|D)}$, and now I am all the more confused. First it says
when we can't analytically determine the posterior
and then it uses $\frac{p(θ_1, θ_2|D)}{p(θ_2|D)}$, where we don't yet know the posterior distribution, because that is not "analytically determined". Can anyone explain what I am understanding wrong?
A:
Since I'm not sure where you are stuck, I'll try multiple shots:
Explanation 1: The thing is that you only need the form of the unnormalized posterior, and that is why it's enough if you can get:
$$
p(\theta_1 | \theta_2, D) \propto p(\theta_1, \theta_2 | D)
$$
The normalizing constant is not interesting; this is very common in Bayesian statistics. With Gibbs sampling, Metropolis-Hastings or any other Monte Carlo method, what you are doing is drawing samples from this posterior. That is, the more density around a point, the more samples you'll get there.
Then, once you have enough samples from this posterior distribution, you know that the normalized density at some point $x$ is the proportion of samples that fell at that point.
You can even plot a histogram of the samples to see the (unnormalized) posterior.
In other words, if I give you the samples $1,3,4,5,1.....,3,4,16,1$ and I tell you these are samples from a density function, you know how to compute the probability of every value.
Explanation 2: If you observe the analytical form of your unnormalized posterior (you always know it [1]), two things can happen:
a) It has the shape of some known distribution (e.g. Gaussian): then you can get the normalize posterior since you know the normalizing constant of a gaussian distribution.
b) It has an ugly form that corresponds to no familiar distribution: then you can always sample with Metropolis-Hastings (there are others).
b.1) M-H is not the most efficient of the methods (you reject a lot of samples, usually more than 2/3). If the posterior is ugly but the conditionals of the individual variables are pretty (known distributions), then you can do Gibbs sampling by sampling one single variable at a time.
Explanation 3
If you use conjugate priors for the individual variables, the denominator of their conditional probability will be always nice and familiar, and you will know what the normalizing constant in the denominator is. This is why Gibbs sampling is so popular when the joint probability is ugly but the conditional probabilities are nice.
Maybe this thread, and specially the answer with puppies, helps you:
Why Normalizing Factor is Required in Bayes Theorem?
[1] Edit: not true, see @Xi'an's comment.
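To make the alternating-conditional mechanism concrete, here is a minimal Gibbs sampler sketch in Python for a toy target: a bivariate normal with correlation $\rho$, where both full conditionals are known univariate normals. The target distribution, names and parameter values are illustrative assumptions of mine, not something from the question:

```python
import random

# Gibbs sampling for a standard bivariate normal with correlation rho.
# Both full conditionals are univariate normals:
#   x | y ~ N(rho * y, 1 - rho^2)   and symmetrically for y | x.
def gibbs_bivariate_normal(rho, n_samples, burn_in=1000, seed=42):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = (1 - rho**2) ** 0.5
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)   # draw from p(x | y)
        y = rng.gauss(rho * x, sd)   # draw from p(y | x)
        if i >= burn_in:
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
mean_x = sum(s[0] for s in samples) / len(samples)
corr_num = sum(s[0] * s[1] for s in samples) / len(samples)
print(round(mean_x, 2), round(corr_num, 2))  # near 0 and near 0.8
```

Each sweep replaces one coordinate at a time with a draw from its full conditional; after the burn-in, the pairs are (correlated) samples from the joint target, even though the joint density was never normalized.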
Update (example)
Imagine you have:
$$
P(\theta_1, \theta_2 | D)
=
\frac{ p(D , \theta_1, \theta_2)}
{\int p(D , \theta_1, \theta_2) \,\text{d}\theta_1 \,\text{d}\theta_2}
\propto p(D , \theta_1, \theta_2)
$$
If the joint probability is complicated, then you can't know the normalization constant. Sometimes, if it does not contain things like large $\sum$ or $\prod$ that would make it painful to compute, you can even plot the posterior. In this case you would have some 2-D plot with axes $\theta_1$ and $\theta_2$. Yet, your plot is right only up to a missing constant. Sampling algorithms say "OK, I don't know what the normalization factor is, but I can draw samples from this function in such a way that, if $p(D, \theta_1=x_1, \theta_2=x_2)$ is two times $p(D, \theta_1=x_3, \theta_2=x_4)$, then I should get the sample $(x_1, x_2)$ twice as often as $(x_3, x_4)$"
Gibbs sampling does this by sampling every variable separately. Imagine $\theta_1$ is a mean $\mu$ and that its conditional probability is (forget about the $\sigma$'s, imagine we know them):
$$
p(\mu | D)
=
\frac{
\mathcal{N}(D | \mu, \sigma_d) \mathcal{N}(\mu, \sigma)
}
{\int \mathcal{N}(D | \mu, \sigma_d) \mathcal{N}(\mu, \sigma) \text{d} \mu}
$$
The product of two normals is another normal with new parameters (see conjugate priors and keep that table always at hand. Even memorize the ones you end up using the most). You do the multiplication, you drop everything that does not depend on $\mu$ into a constant $K$ and you get something that you can express as:
$$
p(\mu | D)
=
K \exp\left(-\frac{1}{a}(\mu - b)^2\right)
$$
It has the functional form of a Gaussian. Therefore since you know it is a density, $K$ must be the normalizing factor of $\mathcal{N}(b, a)$.
Thus, your posterior is a Gaussian distribution with posterior parameters (b,a).
The short version is that the product of the prior and the likelihood have the functional form of a familiar distribution (it actually has the form of the prior if you chose conjugates), then you know how to integrate it. For instance, the integral of the $\exp(...)$ element of a normal distribution, that is, a normal without its normalizing factor, is the inverse of its normalizing factor.
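The normal-normal update described above can be sanity-checked numerically: compute the closed-form posterior parameters $(b, a)$ and compare them against a brute-force normalization of prior times likelihood on a grid. All names and numbers below are illustrative assumptions, not taken from the answer:

```python
import math

# Conjugate normal-normal update: prior N(mu0, s0^2) on the mean,
# one observation x ~ N(mu, sd^2) with known noise sd.
mu0, s0 = 0.0, 2.0      # prior
x, sd = 3.0, 1.0        # observation and known noise sd

# Closed-form conjugate update (precision-weighted average).
prec = 1 / s0**2 + 1 / sd**2
a = 1 / prec                          # posterior variance
b = a * (mu0 / s0**2 + x / sd**2)     # posterior mean

def npdf(v, m, s):
    # Normal density, normalizing factor included
    return math.exp(-0.5 * ((v - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Brute force: normalize prior * likelihood on a grid, then take moments.
grid = [i / 1000 for i in range(-10000, 10001)]
w = [npdf(x, m, sd) * npdf(m, mu0, s0) for m in grid]
z = sum(w)
mean = sum(m * wi for m, wi in zip(grid, w)) / z
var = sum((m - mean) ** 2 * wi for m, wi in zip(grid, w)) / z
print(round(b, 3), round(mean, 3))   # should agree (both ~ 2.4 here)
```

The grid version never needs to know the normalizing constant in closed form, which is exactly the point: the unnormalized product already determines the posterior.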
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I resolve this error "Exception: Invalid argument: replacement"?
I have a function that makes a copy of an existing document (template) and then merges data in dynamically by matching the header names to the tags listed within the document. The function worked without any problems, but now suddenly I'm receiving an error message whenever it tries to merge. Can anyone give me some insight into what the issue might be?
Error Message:
Exception: Invalid argument: replacement
The weird thing is that it doesn't prevent the information from merging, but the error does stop the function from completing the other tasks.
Line with the error
headers.forEach(function(e){
body.replaceText("<<"+e+">>",data[e]);
return;
});
The whole code:
function documents(sheet, data){
var headers = Object.keys(data[0]);
var docsToMerge = data.map(function(e){
var name = e.location +" - "+e.employeeLastName+", "+e.employeeFirstName+" - "+e.docName+" "+Utilities.formatDate(new Date(e.effectivePayDate), "UTC-4", "M/d/yy");
var newDoc = DriveApp.getFileById(e.template).makeCopy(name, DriveApp.getFolderById(e.folderId));
e.documentLink = newDoc.getUrl();
e.documentId = newDoc.getId();
return e;
});
docsToMerge.forEach(function(e){
mergeDocuments(e, headers, signatureFolderId);
});
}
function mergeDocuments(data, headers){
var id = DocumentApp.openByUrl(data.documentLink).getId();
var doc = DocumentApp.openById(id);
var body = doc.getBody();
headers.forEach(function(e){
body.replaceText("<<"+e+">>",data[e]);
return;
});
doc.saveAndClose();
return;
}
A:
Deactivate the V8 runtime in the Run section of your script.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How prove this inequality $\frac{a+\sqrt{ab}+\sqrt[3]{abc}+\sqrt[4]{abcd}}{4}\le\sqrt[4]{\frac{a(a+b)(a+b+c)(a+b+c+d)}{24}}$
Let $a,b,c,d>0$. Show that
$$\dfrac{a+\sqrt{ab}+\sqrt[3]{abc}+\sqrt[4]{abcd}}{4}\le\sqrt[4]{\dfrac{a(a+b)(a+b+c)(a+b+c+d)}{24}}$$
This post treats the three-variable case: for $a,b,c>0$ with $a+b+c=21$, prove that $a+\sqrt{ab} +\sqrt[3]{abc} \leq 28$.
How can I prove the four-variable version? I think the AM-GM inequality can be used to solve it, but I can't make it work. Thank you.
I guess the following is also true:
let $a_{1},a_{2},\cdots,a_{n}>0$,show that
$$\dfrac{a_{1}+\sqrt{a_{1}a_{2}}+\sqrt[3]{a_{1}a_{2}a_{3}}+\sqrt[4]{a_{1}a_{2}a_{3}a_{4}}+\cdots+\sqrt[n]{a_{1}a_{2}\cdots a_{n}}}{n}\le\sqrt[n]{\dfrac{a_{1}(a_{1}+a_{2})(a_{1}+a_{2}+a_{3})\cdots(a_{1}+a_{2}+\cdots+a_{n})}{n!}}$$
Thank you
A:
\begin{align}
a& \frac{a+b}2 \frac{a+b+c}3 \frac{a+b+c+d}4 \\
&= \frac1{4^4} \left(a+a+a+a \right) \left(a+a+b+b \right) \left(a+b+\tfrac{a+b+c}3+c \right) \left(a+b+c+d \right) \\
&\ge \frac1{4^4} \left(a+a+a+a \right) \left(a+a+b+b \right) \left(a+b+\sqrt[3]{abc}+c \right) \left(a+b+c+d\right) \\
&\ge \frac1{4^4} \left(a + \sqrt{ab}+\sqrt[3]{abc}+\sqrt[4]{abcd}\right)^4 \quad \text{by Holder}
\end{align}
Not sure if one can extend the pattern to general $n$ though. For that, you may want to see http://www.jstor.org/stable/2975630
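As a sanity check (not a proof), the four-variable inequality can be tested numerically on random positive inputs. The helper names and the test range below are my own choices:

```python
import random

def lhs(a, b, c, d):
    # Left side: average of the k-th roots of the prefix products
    return (a + (a * b) ** 0.5 + (a * b * c) ** (1 / 3) + (a * b * c * d) ** 0.25) / 4

def rhs(a, b, c, d):
    # Right side: fourth root of (product of prefix sums) / 24
    return (a * (a + b) * (a + b + c) * (a + b + c + d) / 24) ** 0.25

random.seed(0)
for _ in range(10_000):
    a, b, c, d = (random.uniform(0.01, 100) for _ in range(4))
    # Small relative tolerance guards against floating-point rounding
    assert lhs(a, b, c, d) <= rhs(a, b, c, d) * (1 + 1e-9)
print("no counterexample found")
```

Equality holds when $a=b=c=d$ (e.g. both sides equal 1 at $a=b=c=d=1$), consistent with every AM-GM step above becoming an equality.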
|
{
"pile_set_name": "StackExchange"
}
|
Q:
(e)def and BibLaTeX
I'm trying to understand why the following code does not work:
\documentclass[12pt]{book}
\usepackage{polyglossia}
\setdefaultlanguage{french}
\setotherlanguage{english}
\def\col{\begin{english}:\end{english}}
\usepackage{filecontents}
\begin{filecontents*}{library.bib}
@article{Bara2006,
author = {Bara, Judith},
title = {{English Citation: entry}},
year = {2006},
journal = {Journal of Space},
pages = {20\col1--20\col12},
}
@article{Baranov2001,
author = {Jacques, Paul},
title = {{Citation française : espace}},
year = {2001},
journal = {Comptes Rendus sur les Espaces},
pages = {20:1--20:12},
}
\end{filecontents*}
\usepackage[bibencoding=auto,backend=biber,autolang=other]{biblatex}
\addbibresource{library.bib}
\DefineBibliographyStrings{french}{
inseries = {dans},
in = {dans}
}
\begin{document}
\textsc{url:} www. \\
\begin{french}
\textsc{url:} www. \\
\end{french}
\textsc{url:} www. \\
\textsc{url\col} www. \\
C'est un essai. \parencite{Bara2006,Baranov2001}
\printbibliography
\end{document}
If I comment out the line \printbibliography, everything runs as expected.
A:
You want to define \col as a character that's not influenced by the interchar token state; you also want the English citation to be typeset using English rules, which is obtained by setting the hyphenation field.
\documentclass[12pt]{book}
\usepackage{polyglossia}
\setdefaultlanguage{french}
\setotherlanguage{english}
\newcommand\col{\mbox{\XeTeXinterchartokenstate=0 :}}
\usepackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@article{Bara2006,
author = {Bara, Judith},
title = {{English Citation: entry}},
year = {2006},
journal = {Journal of Space},
pages = {20\col1--20\col12},
hyphenation={english},
}
@article{Baranov2001,
author = {Jacques, Paul},
title = {{Citation française : espace}},
year = {2001},
journal = {Comptes Rendus sur les Espaces},
pages = {20\col1--20\col12},
}
\end{filecontents*}
\usepackage[bibencoding=auto,backend=biber,autolang=other]{biblatex}
\usepackage{csquotes}
\addbibresource{\jobname.bib}
\DefineBibliographyStrings{french}{
inseries = {dans},
in = {dans}
}
\begin{document}
\textsc{url:} www. \\
\begin{french}
\textsc{url:} www. \\
\end{french}
\textsc{url:} www. \\
\textsc{url\col} www. \\
C'est un essai. \parencite{Bara2006,Baranov2001}
\printbibliography
\end{document}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Message queue in Perl
I want to implement a message queue with Perl. I get data from stdin and send it to the queue.
My message's structure is
struct message => {
mtype => '$',
buffer_size => '$',
last_message => '$',
buff => '$',
};
I have to receive data from the queue with a C program. My C program worked well before, but now when I run it to receive data from the queue it shows me something like this
age=HASH(0x1daa088) 1936942445 4000
I read chunks of data with a 4000-byte buffer that I print on stdout. But instead of age=HASH(0x1daa088) 1936942445, the program should print the size of the message received.
What happened here? Is it because the message in C is a structure and in Perl it's a hash?
My C code:
#include <stdio.h>
#include <stdlib.h>
#include <linux/ipc.h>
#include <linux/msg.h>
#include <time.h>
#include <string.h>
#define bufsize 4000
struct mymsgbuf {
long mtype; /* Message type */
int buffer_size;
char buff[bufsize];
int last_message;
} msg;
int read_message( int qid, long type, struct mymsgbuf *qbuf ) {
int result, length;
/* The length is essentially the size of the structure minus sizeof(mtype)*/
length = sizeof(struct mymsgbuf) - sizeof(long);
if ( (result = msgrcv( qid, qbuf, length, type, MSG_NOERROR)) == -1 ) {
return(-1);
}
fprintf(stderr, "\t%d\t\t%d\t\t%d \n", qbuf->buffer_size, bufsize, qbuf->last_message);
write(1,qbuf->buff,qbuf->buffer_size);
return(result);
}
int open_queue( key_t keyval ) {
int qid;
if ( (qid = msgget( keyval, 0660 )) == -1 ) {
return(-1);
}
return(qid);
}
main() {
int qid;
key_t msgkey;
msg.last_message = 0;
/* Generate our IPC key value */
msgkey = ftok("/home/joobeen/Desktop/learning", 'm');
/* Open/create the queue */
if (( qid = open_queue( msgkey)) == -1) {
perror("open_queue");
exit(1);
}
fprintf(stderr, "byte received:\tbuffer_size:\tlast_message:\n");
/* Bombs away! */
while (1) {
if ( (read_message( qid,0, &msg )) == -1 ) {
perror("receive_message");
exit(1);
}
if ( msg.last_message == 1 )
break;
}
return 0;
}
My Perl code:
use strict;
use warnings;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT S_IRUSR S_IWUSR ftok);
use IPC::Msg;
use Class::Struct;
struct message => {
mtype => '$',
buffer_size => '$',
last_message => '$',
buff => '$',
};
my $key_in = ftok( "/home/joobeen/Desktop/learning", 'm' );
my ( $buffer ) = "";
my $buf_size = 4000;
my $file = shift @ARGV;
my $ifh;
my $is_stdin = 0;
my $type_sent = 1;
my $last;
if ( defined $file ) {
open $ifh, "<", $file or die $!;
}
else {
$ifh = *STDIN;
$is_stdin++;
}
my $ipc_id = msgget( $key_in, IPC_CREAT | S_IRUSR | S_IWUSR );
my $msg = message->new(
mtype => 1,
last_message => 0
);
print "\tbyte sent\tbuffer_size\tlast_message\n";
while ( <$ifh> ) {
$last = read( $ifh, $buffer, $buf_size );
$msg->buff( $buffer );
$msg->buffer_size( $buf_size );
if ( $last < $buf_size ) {
$msg->last_message( 1 );
}
msgsnd( $ipc_id, pack( "l! a*", $type_sent, $msg ), 0 );
print "\t", $last, "\t\t", $buf_size, "\t\t", $msg->last_message, "\n";
}
close $ifh unless $is_stdin;
A:
Your code has multiple problems. I won't fix them all for you, but I can give you some guidelines on how to implement IPC in general.
Reading binary data directly into C structs is extremely fragile. You have to care about byte order, struct padding and the size of types like int or long. Depending on your platform, both of these types could be 32-bit or 64-bit and little or big endian. So first of all, you need an exact specification of the "on-the-wire protocol" of your messages. To simplify things, let's use fixed-sized messages:
mtype: 32-bit unsigned integer, little endian
buffer_size: 32-bit unsigned integer, little endian
buffer: 4000 bytes
last_message: 32-bit unsigned integer, little endian
This is just an example. You could use big endian integers as well, like most network protocols do for historical reasons. If you only want to use IPC on a single machine, you could also specify native byte order.
Now the length of a message is fixed to 4012 bytes. To decode such a message in a portable way in C, you should read it into a char array and extract each field separately. You know each field's offset and size.
Encoding such a message in Perl is easy using the pack function:
my $msg = pack('V V a4000 V', $mtype, $buffer_size, $buffer, $last);
There's no need for Class::Struct. This module does not do what you expect.
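For illustration, the same fixed 4012-byte layout ("V V a4000 V" in Perl pack terms) can be produced and consumed with Python's struct module, which plays the same role as Perl's pack/unpack. The field names and helper functions below are hypothetical, mirroring the C struct above:

```python
import struct

# Three little-endian unsigned 32-bit integers around a 4000-byte payload:
# mtype, buffer_size, buff, last_message -> 4 + 4 + 4000 + 4 = 4012 bytes.
# The "<" prefix disables native alignment/padding.
FMT = "<II4000sI"

def encode(mtype, payload, last):
    # "4000s" pads short payloads with NUL bytes automatically
    return struct.pack(FMT, mtype, len(payload), payload, last)

def decode(raw):
    mtype, size, buff, last = struct.unpack(FMT, raw)
    return mtype, buff[:size], last   # trim padding using buffer_size

msg = encode(1, b"hello queue", 0)
print(len(msg))      # 4012
print(decode(msg))   # (1, b'hello queue', 0)
```

The receiving side extracts each field from the raw bytes at its known offset, exactly as the C program should do with a char buffer instead of reading directly into a struct.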
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Stop Excel formula from changing when inserting/deleting rows
I'm trying to set up a budget workbook for my personal budget using 13 sheets, 1 for the totals and the other 12 for each month. I cannot for the life of me figure out why the formulas I have change when I insert a row into one of the monthly sheets. Here's an example of one of the formulas I have:
=SUMIF(JUN!$G$2:$G$500,"Utilities", JUN!$D$2:$D$500)
If I insert a row at the top of a sheet, it will increment the twos to threes, throwing off the calculations. Is there any way I can lock the formula from changing at all? It's incredibly frustrating.
A:
What you need to understand is that the absoluteness of absolute references, as specified by the $, is not absolutely absolute ;-)
Now that that tongue-twister is out of the way, let me explain.
The absoluteness only applies when copy-pasting or filling the formula. Inserting rows above, or columns to the left, of an absolutely referenced range will "shift" the address of the range so that the data the range points to remains the same.
In addition, inserting rows or columns in the middle of the range will expand it to encompass the new rows/columns. Thus to "add" a row of data to a range (table) you need insert it after the first data row.
The simplest way to allow adding a data row above the current data range is to always have a header row, and include the header row in the actual range. This is exactly the solution proposed by cybernetic.nomad in this comment.
But, there's still one more issue left, and that's adding a row of data after the end of the table. Just typing the new data in the row after the last row of data won't work. Nor will inserting a row before the row after the last row.
The simplest solution for this is to use a special "last" row, include that row in the data range, and always append new rows by inserting before that special row.
I typically reduce the row height and fill the cells with an appropriate colour:
For your example, the full "simplest" formula would thus be:
=SUMIF(JUN!$G$1:$G$501,"Utilities",JUN!$H$1:$H$501)
Another way to achieve the same goal is to use a dynamic formula that auto adjusts to the amount of data in the table. There are a few different variations of this, depending on the exact circumstances and precisely what is to be allowed to be done to the table.
If, as is typically the case (your example, for instance), the table starts at the top of the worksheet, has a one row header, and the data is contiguous with no gaps, a simple dynamic formula would be:
=SUMIF(INDEX(JUN!$G:$G,2):INDEX(JUN!$G:$G,COUNTA(JUN!$G:$G)),"Utilities",INDEX(JUN!$H:$H,2):INDEX(JUN!$H:$H,COUNTA(JUN!$G:$G)))
This is a better solution than using INDIRECT() as
It is non-volatile and therefore the worksheet calculates faster, and
It won't break if you insert columns to the left of the table.
The dynamic formula technique can be further improved by using it in a Named Formula.
Of course, the best solution is to convert the table to a proper Table, and use structured references.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What was the largest ancient theatre?
To be more specific, can anyone name the largest ancient Greco-Roman theater (excluding the Roman amphitheaters, such as the Colosseum in Rome, the arena in Verona, and the amphitheater in Pompeii)? For example, the ancient theaters in Ephesus and Epidaurus each hold up to 15,000 people, though the ancient theaters in Aspendos, Turkey, as well as Orange, France and Merida, Spain, appear to be larger in size. A precise or nearly precise answer would be welcomed.
A:
Short answer
It is impossible to say with any certainty, but there are a number of theatres which, according to modern estimates, could hold around 20,000, give or take a few thousand. Among these are the ones at Ephesus, Syracuse, Apamea, Smyrna and Megalopolis.
While Pliny the Elder gives substantially larger figures for both the Theatre of Pompey and a temporary theatre from around 58 AD, he was prone to exaggeration. The Greek geographer Pausanias gives no numbers but says that the one at Megalopolis was the largest in Greece.
Details
In terms of capacity, there is some dispute, much of it due to variations in how much seating space is 'allowed' per person. If Pliny is to be believed, the largest theatre would appear to be the Theatre of Pompey, completed during Pompey's consulship in 55 BC, with a capacity of 40,000. However, modern estimates put the capacity at only 10,000 but there are no details on how this number was arrived at.
Frank Sear, in 'Roman Theatres: An Architectural Study', disagrees with this estimate:
The fourth-century Regionary catalogues state that it had
15,580 feet of seating or around 11,600 seats. This figure seems
very low for a theatre of this size and it may be explained by
the condition of the 400-year-old building in the fourth century.
It is quite possible that by then parts of the auditorium were
unusable.
In a footnote, Sear adds (referring to the aforementioned Regionary catalogues):
According to the same catalogues, the Theatre of Marcellus had a greater seating capacity, although its diameter was 20 m less than that of the Theatre of Pompey.
Sear goes into much detail on seat measurements and theatre types, and provides several tables showing the capacity of different theatres. He also explains the problems in calculating capacity:
the Regionary catalogues,
which gave the seating capacity of the theatres of Rome,
specified the length of seating, rather than the actual number of
seats. That was presumably because ancient theatres did not have individual seats as do modern theatres. They had continuous
seating, which meant that capacity varied according to the
amount of space assigned to an individual seat (locus). A standard
seat width was normally between 0.36 and 0.50 metres. At
Stobi prohedroi were allocated 0.80 metres, which suggests that
normal seats were 0.40 metres. In the Theatre of Dionysus at
Athens marks indicating individual seats were 0.41 metres apart,
and only 0.36 metres apart at Corinth.
He further explains that theatres in Greece and Asia Minor had a much greater capacity than western theatres with a similar diameter. Sears provides tables with seating capacity calculations which give 19,717 for Ephesus (but another estimate he quotes is 21,500) and 18,537 for Miletus. He also quotes a figures of 20,350 for Smyrna (Izmir) and 19,700 for Megalopolis (Arkadia).
Theatre at Ephesus. Source: Livius
Another candidate is Roman Theatre at Apamea which Wikipedia describes as
one of the largest surviving theatres of the Roman world with a cavea diameter of 139 metres (456 ft) and an estimated seating capacity in excess of 20,000. The only other known theatre that is considerably larger was the Theatre of Pompey in Rome, with a cavea diameter of approximately 156.8 metres (514 ft)
The 20,000 estimate is based the work of Cynthia Finlayson who was involved in excavations in the area from 2008 to 2010.
Also, (in Sear)
Pliny gives what is clearly an exaggerated
account of the temporary theatre built by Marcus Aemilius
Scaurus, the aedile of 58 bc....The auditorium held 80,000 spectators, twice the capacity of the Theatre of Pompey....adorned with 3,000 bronze statues and
360 columns, the lowest storey of marble with columns 38 feet
high, the middle one of glass (an extravagance unparalleled
even in later times!), while the top storey was made of gilded
planks.
How much did Pliny exaggerate? For the Theatre of Pompey, it would seem his number is between 3 and 4 times the actual. If this scale of exaggeration is also true for the temporary theatre, we arrive at a figure of between 20,000 and 26,600.
Finally, the geographer Pausanias (c. 110 to 180 AD) states in Descriptions of Greece (8.32.1) that the theatre at Megalopolis was
the largest theatre in all Greece
The Ancient Theatre of Megalopolis. Source: Grecorama
He also wrote (2.27.5):
The people of Epidaurus have a theater [theātron] within the sacred
space [hieron], and it is in my opinion very much worth seeing [théā].
I say this because, while the Roman theaters are far superior to those
anywhere else in their splendor, and the Arcadian theater at
Megalopolis is unequalled for size, what architect could seriously
rival Polycleitus in symmetry [harmoniā] and beauty?
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Wordpress - Programmatically adding products not generating thumbnails
I'm creating a custom CSV importer for a client and the pictures are added, however the thumbnails aren't being generated properly. After using a plugin like Regenerate Thumbnails they do show correctly.
Here is the code in which I add the attachment and link it to the post.
$uploadDir = 'wp-content/uploads/importedproductimages/';
$siteurl = get_option('siteurl');
$thumbnail = 'importedproductimages/' . $name;
$filename = 'importedproductimages/' . $name;
$wp_filetype = wp_check_filetype($filename, null);
$attachment = array(
'post_author' => 1,
'post_date' => current_time('mysql'),
'post_date_gmt' => current_time('mysql'),
'post_mime_type' => $wp_filetype['type'],
'post_title' => $filename,
'comment_status' => 'closed',
'ping_status' => 'closed',
'post_content' => '',
'post_status' => 'inherit',
'post_modified' => current_time('mysql'),
'post_modified_gmt' => current_time('mysql'),
'post_parent' => $post_id,
'post_type' => 'attachment',
'guid' => $siteurl.'/'.$uploadDir.$name
);
$attach_id = wp_insert_attachment( $attachment, $filename, $post_id );
$attach_data = wp_generate_attachment_metadata( $attach_id, $thumbnail );
wp_update_attachment_metadata( $attach_id, $attach_data );
// add featured image to post
add_post_meta($post_id, '_thumbnail_id', $attach_id);
Why aren't the thumbnails being generated properly?
Thank you in advance.
EDIT:
I have also included image.php like so:
require_once(ABSPATH . 'wp-admin/includes/image.php');
A:
This ended up working for me:
function createnewproduct($product)
{
$new_post = array(
'post_title' => $product['Product'],
'post_content' => $product['Long_description'],
'post_status' => 'publish',
'post_type' => 'product'
);
$skuu = $product['SKU'];
$post_id = wp_insert_post($new_post);
update_post_meta($post_id, '_sku', $skuu );
update_post_meta( $post_id, '_regular_price', $product['ourPrice'] );
update_post_meta( $post_id, '_manage_stock', true );
update_post_meta( $post_id, '_stock', $product['Qty'] );
update_post_meta( $post_id, '_weight', $product['Weight'] );
if (((int)$product['Qty']) > 0) {
update_post_meta( $post_id, '_stock_status', 'instock');
}
$dir = dirname(__FILE__);
$imageFolder = $dir.'/../import/';
$imageFile = $product['ID'].'.jpg';
$imageFull = $imageFolder.$imageFile;
// only need these if performing outside of admin environment
require_once(ABSPATH . 'wp-admin/includes/media.php');
require_once(ABSPATH . 'wp-admin/includes/file.php');
require_once(ABSPATH . 'wp-admin/includes/image.php');
// example image
$image = 'http://localhost/wordpress/wp-content/import/'.$product['ID'].'.jpg';
// magic sideload image returns an HTML image, not an ID
$media = media_sideload_image($image, $post_id);
// therefore we must find it so we can set it as featured ID
if(!empty($media) && !is_wp_error($media)){
$args = array(
'post_type' => 'attachment',
'posts_per_page' => -1,
'post_status' => 'any',
'post_parent' => $post_id
);
// reference new image to set as featured
$attachments = get_posts($args);
if(isset($attachments) && is_array($attachments)){
foreach($attachments as $attachment){
// grab source of full size images (so no 300x150 nonsense in path)
$image = wp_get_attachment_image_src($attachment->ID, 'full');
// determine if in the $media image we created, the string of the URL exists
if(strpos($media, $image[0]) !== false){
// if so, we found our image. set it as thumbnail
set_post_thumbnail($post_id, $attachment->ID);
// only want one image
break;
}
}
}
}
}
Q:
Flatness over a perfectoid ring
I want to prove the following: Let $R$ be a perfectoid ring and $\varpi$ a pseudo uniformizer in $R$ which admits all $p$-th power roots, then a module over $R^\circ$ is flat if and only if it has no $\varpi$ nontrivial torsion.
(I know this is true when $R$ is a perfectoid field.)
A:
Even in the case $R$ and $S$ are perfectoid algebras over a perfectoid field, it's not the case that $S$ is $R$-flat in many situations, eg. perf $R$, take a higher rank point in $\text{Spa}(R,R^0)$, look at the completed res field $\kappa$, at the $\kappa$-normalization $\kappa^+$ of $R^0$, and finally at the map $R^0\to\kappa^+$.
If $R$ is a perfectoid algebra over a perfectoid field $K$, $K^0$-flatness conveniently comes for free for $R^0$. You want to use it to check the "derived relative perfectness" condition:
$$A_{\varphi}\otimes_{\varphi, A}^{\mathbf{L}} B\xrightarrow{\simeq} B_{\varphi}[0]$$
in $\text{D}(B)$, $A$, $B$ $\mathbf{F}_p$-algebras, $\varphi$ the abs Frobenius, $A_{\varphi}$ is $A$ as an $A$-algebra under $\varphi$, same for $B$, is met when $A = K^0/\varpi$ and $B = R^0/\varpi$. This is obvious in deg $0$, and if $K$ is a perfectoid field also in higher degs by $K^0$-flatness of $R^0$. From here it's an easy lemma that $\mathbf{L}_{(R^0/\varpi)/(K^0/\varpi)}\simeq 0$ in $\text{D}(R^0/\varpi)$.
For $K$ a perfectoid ring, you can't directly invoke derived rel perfectness and such vanishing lemma, and it's honestly not so clear why you should hunt flatness at all costs to make it available again.
Rather, you should reduce to the char $p$ case where perfect $\mathbf{F}_p$-algebras are derived relatively perfect. From this you show that if $R$ is perfectoid and $S$ is a perfectoid $R$-algebra, then the analytic cotangent complex of $R^0\to S^0$ as introduced in Gabber and Ramero's book on Almost Ring Theory, always vanishes.
Q:
Analog computing vs Numerical simulation
Does the electronic circuit equivalent of a mathematical model have any inherent advantage over numerical simulation of the same model ?
For instance, if I numerically simulate the Navier-Stokes equation and obtain a time-series of the velocity field and compare with the time-series of the voltage from an electronic circuit equivalent of the Navier-Stokes equation.
A:
Does the electronic circuit equivalent of a mathematical model have any inherent advantage over numerical simulation of the same model ?
I've bolded the most important word in your question.
Yes, there are a few advantages, but as an electrical engineer you always work with trade-offs, meaning that you gain X but lose Y; you can't have both X and Y unless you increase $. I will mention some advantages first, and then all the disadvantages.
Precision is one advantage, which Jonk mentioned briefly in his comment. Instead of dealing with the number of bits in a data type, such as a float (32 bits), a double (64 bits) or a long double (128 bits), you are dealing with Planck units, the smallest values anything can have, be it mass, length, time, charge or temperature. We are interested in time and charge, because those two relate to the only parts we care about when implementing the mathematical equation as an equivalent electronic circuit.
To put some numbers to it, the Planck time is about \$5.4\times10^{-44}\text{ s}\$, and the Planck charge is about \$1.9\times10^{-18}\text{ C}\$. These are the values that the universe is using to quantify these units, meaning that if you can represent these values then you can make a perfect prediction of the real world, iff you ignore all the noise.
A float can approximate a Planck charge with some error; this is called quantization noise (a small error), which will show up as noise, error, unwanted effects, whatever you want to call it, if you try to do it with a digital computer. A float doesn't have enough precision to usefully represent a Planck time. A double can approximate the Planck charge and Planck time with somewhat less error. So you can use a long double or some custom library, sure, go for it. But you will still never be able to represent the constants exactly, because we are using base 2 and the constants are defined by irrational numbers. So now we know that we can't even represent them, ever, with a digital computer; with an electronic circuit you can. Then add all the operations you need to perform digitally: your values contain errors from the start, and as you start calculating, all the errors accumulate, so your answer will contain the true answer plus the errors. Digitally, you will never see the exact value.
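The error-accumulation point is easy to demonstrate with ordinary double-precision floats (a quick sketch, independent of any particular physical constant):

```python
# Repeatedly adding 0.1 (which is not exactly representable in base 2)
# accumulates rounding error, so the sum never reaches exactly 1.0.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # 0.9999999999999999
print(total == 1.0)  # False

# The same thing happens with physical constants: a value like the
# Planck charge (~1.9e-18 C) is stored with rounding error, and every
# arithmetic step compounds such errors.
```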
Execution time is another advantage, you will most certainly be able to propagate through the mathematical equation much faster than you would digitally. This depends on the actual mathematical equation that is implemented though, since multiplication and division tends to be faster in the digital domain. But if you are multiplying or dividing by a constant then you can use op-amps which can outperform a digital computer also doing a multiplication by a constant.
Power consumption is maybe an advantage, since implementing the equation in a circuit should take less power than doing the same thing on a computer. This depends heavily on the architecture of whatever processor we are talking about. But in a perfect world where noise doesn't exist, the physical circuit should be more energy efficient. This is more of a gut feeling, which is why I said it is maybe an advantage.
There are also other advantages, I'm sure I've forgotten to mention some, but at some point this answer must be delivered. The major advantages are precision, speed and power efficiency. But at what cost?
As I said earlier, it's all about trade-offs. What do we lose? Or rather, what penalty must we face if we only use the analog domain? I've accidentally mentioned some disadvantages above, so there will be a few repetitions.
Noise is the major disadvantage. Noise comes in all shapes and forms, everything from people breathing onto the components to radio waves and a bazillion other sources. Since noise is the largest culprit, I will treat its forms separately below.
Inaccuracy is another disadvantage. This comes from noise in the form of components changing their properties over time, meaning that their values drift from their original values. The source of this can be temperature, aging, humidity, radio waves, the position of the moon, you name it. Even pressure, or the concentration of helium in the air. On a digital computer 5*3 will yield 15 on a sunny day or on a winter day. But in the analog domain you might get 14.99 or 15.01; again, noise.
Price is another disadvantage, implementing everything in software is technically for free, while implementing the exact same thing on a physical circuit can cost anything, from cheap to very very expensive. The more expensive you go the better the components and you can opt for less drift components and all that. There are components specifically made to drift very little as their temperature changes or as they age, but you can't eliminate the drift. This is also heavily related to the tolerances of components. In an ideal world the tolerances of all component values are 0%.
Measuring (loading your circuit) is another disadvantage, you mentioned that you wanted a voltage output which obviously is because you want to see the output in a clear way. The act of measuring will load the circuit, what I mean by this is that the voltage will change slightly just because you are measuring with an instrument that doesn't have infinite impedance, so whatever value you do see, it will not be true. This is because no circuit has 0 output impedance, so measuring will form a voltage divider + a filter of some sort, probably a band-pass filter due to other mumbo jumbo capacitance and inductance in your probe.
The quantization noise is another disadvantage, I mentioned quantization noise before, here I will name it again because you will use an ADC (Analog to Digital Converter). The loaded output that you are measuring will snap to predefined values (imagine a grid) which whatever instrument you are using can interpret digitally. Say you have a 2bit ADC with a voltage reference of 3 volt. Then the voltage 0.51 V and 1.49 V will snap to the value 1. As you can see there is almost an entire volt that returns the same value. That's quantization noise, which you can move around by using filters (high-pass for an example), but you can't eliminate it.
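The 2-bit ADC example above can be sketched numerically. This is a toy model with nearest-code rounding and the hypothetical 3 V reference from the text; real ADC transfer functions differ in the details:

```python
def adc(voltage, bits=2, vref=3.0):
    """Quantize a voltage with an idealized n-bit ADC (nearest-code rounding)."""
    levels = 2 ** bits - 1                 # 3 for a 2-bit ADC
    code = round(voltage / vref * levels)  # snap to the nearest code
    return max(0, min(levels, code))       # clamp to the valid code range

print(adc(0.51))  # 1
print(adc(1.49))  # 1 -- almost a full volt apart, same digital value
```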
non-ideal components (parasitics) is another disadvantage, a resistor isn't really a resistor, it's a resistor and an inductor and a capacitor where the inductance and capacitance has been designed to be as low as possible so it will behave as much as a resistor as humanly possible. Same thing with an inductor and a capacitor, they all contain parts of each other, which are called parasitics. That's right, capacitors leak, meaning that it will discharge on its own, it will also have an ESR (equivalent series resistance) and other unwanted parameters. I'm barely touching on the surface of all the problems due to parasitics. This means that you probably can't design your mathematical equation good enough.
Maybe being forced to use shielding is another disadvantage, if you want some circuit to receive less noise from the outside world, then you need to place a cage around your entire circuit and connect the cage to ground.
Reflection (in a transmission line) is another disadvantage, if your circuit is very fast then the signal will bounce back and forth on the wire, to get rid of this bouncing you need to terminate the signal. Literally kill the signal with a resistor. If this resistor doesn't have the correct value then a part of the reflection will keep bouncing and eventually die off through the resistor and other resistive components, such as the wire. So this is wasting power, but this is only relevant for high frequency circuits.
I'm sure there's more that I haven't touched on. But this answer can't be too large.
The bottom line is this: a computer is reliable and cheap, while an analog circuit is only as good as its noise level allows. Technically both have quantization errors, so no matter what you do, you won't get the exact value. I doubt that you have the time and energy to build a proper analog circuit just for computing the Navier-Stokes equations. I recommend using a computer. Just write good code and you'll be fine.
If the Navier-Stokes equations take too much time to compute digitally, then use Google's HPC (High-Performance Computing) offerings. There you can use custom floating-point data types with an absurd number of bits, though I bet that a 128-bit long double will suffice. Or just use several threads on your computer to parallelize the problem.
Q:
What is the polarity of this barrel plug with a double circle?
Related but doesn't answer my specific icon: How to tell polarity expected of a DC barrel jack?
I have a Dell laptop barrel plug adapter, and I can't tell from the graphic whether the tip is positive or negative. It looks like the tip has nothing, and there are two outer shells, based on the icon, but I don't think that's how I'm supposed to read it.
Can someone tell me how to interpret the polarity?
A:
The outer cylinder is negative with the inner surface being positive.
The tip connects to a Maxim One-Wire memory in the power supply that is read by the computer to obtain information such as power supply capability, serial number etc.
Be VERY careful if you probe the cable, if you accidentally short the centre pin to the inner cylinder it may destroy the One-Wire memory (on the one I investigated there was NO protection against this!).
If the memory is non-functional the computer will still be powered but it will assume it is a non-Dell low-power supply, it may reduce the speed of the computer to minimize power consumption and may refuse to charge the battery.
I accidentally damaged my power supply when I probed a Dell power supply to see what the voltage was. There is not really a way to repair the power supply if this happens.
A:
It looks like the tip has nothing, and there are two outer shells, based on the icon
That's correct. The tip is a 3rd conductor carrying some sort of signalling between the adapter and the computer.
Q:
Streaming learning OCaml
I wrote a simple online logistic regression, calibrated using gradient descent to compare the speed of an OCaml implementation vs the same Python script, executed with Pypy. It turned out that the OCaml implementation was slightly faster than the one run with Pypy (by about 10%). Now I would like to optimize my code even further.
The assumption about the data is that the values of each rows are sparse (can be considered as factors), they are encoded as integers (collisions are allowed) and stored in a large array.
maths.ml
(** Various mathematical functions*)
(** Given a list of indices v and a vector of weights *)
let dot_product indices weights =
let rec aux indices weights acc =
match indices with
| [] -> acc
| h::tail -> aux tail weights (acc +. weights.(h)) in
aux indices weights 0.
(** Evaluates {%latex: $s(x)=\frac{1}{1+\exp(-x)}$ %}*)
let sigmoid x = 1. /. (1. +. exp(0. -. x))
(** Logarithmic loss, p (the first argument) is the predicted value, y (the second argument) is the actual value*)
let log_loss p y = match y with 1. -> -. log(p) | _ -> -. log(1. -. p)
(** Evaluates {%latex: $a^b$ %} where {%latex: $a$ %} is the first argument, {%latex: $b$ %} the second argument*)
let rec pow a = function
| 0 -> 1
| 1 -> a
| n ->
let b = pow a (n / 2) in
b * b * (if n mod 2 == 0 then 1 else a)
read_tools.ml
open Str
let csv_separator = ","
let err_lists_sizes = "Incompatible lists size"
(** Streams the lines of a channel.*)
let line_stream_of_channel channel =
Stream.from (fun _ -> try Some (input_line channel) with End_of_file -> None)
(** Streams the lines of a file.*)
let read_lines file_path = line_stream_of_channel (open_in file_path)
(** Reads the first line of a file.*)
let read_first_line file_path = Stream.next (read_lines file_path)
(** Splits a line according the separator.*)
let split_line line = Str.split (Str.regexp csv_separator) line
(** Given two lists, returns a hashtable whose keys are the elements of the first list and the values are the elements of the second list. *)
let to_dict list1 list2 =
let rec aux list1 list2 my_hash = match list1,list2 with
| [],[] -> my_hash
| a,[] -> failwith err_lists_sizes
| [],a -> failwith err_lists_sizes
| h1::t1,h2::t2 -> Hashtbl.add my_hash h1 h2; aux t1 t2 my_hash in aux list1 list2 (Hashtbl.create 15)
(** Given a file path to a csv file, reads it as a stream of hashtable whose keys are the header of the file *)
let dict_reader file_path =
let line_stream = read_lines file_path in
let header = split_line (Stream.next line_stream) in
Stream.from
(fun _ ->
try Some (to_dict header (split_line (Stream.next line_stream))) with End_of_file -> None)
train.ml
(** Implements the usual framework for streaming learning *)
(** Predict the target and update the model for every line of the stream, engineered by the feature_engine *)
let train dict_stream feature_engine updater predict loss_function refresh_loss target_name =
let rec aux updater dict_stream t loss = match (try Some(Stream.next dict_stream) with _ -> None) with
| Some dict ->
let y = float_of_string (Hashtbl.find dict target_name) in
Hashtbl.remove dict target_name;
let indices = feature_engine dict in
let p = predict indices in
updater indices p y;
if ((t mod refresh_loss) == 0) && t > 0 then begin
Printf.printf "[TRA] Execution time: %fs \t encountered %n \t loss : %f" (Sys.time()) t (loss /. float_of_int(t));
print_endline " ";
end;
aux updater dict_stream (t + 1) (loss +. (loss_function p y))
| None -> () in aux updater dict_stream 0 0. ;;
log_reg.ml
open Maths
open Read_tools
open Train
(* data *)
let train_dict_stream = dict_reader "train_small.csv"
(* parameters *)
(** Number of slots to store the features*)
let n = pow 2 20
(** Vector of weights for the features *)
let weights = Array.make n 0.
(** Print progress every refresh_loss lines *)
let refresh_loss = 1000000
(** Parameter of the model *)
let alpha = 0.01
(* feature engineering *)
let _get_indices dict n = Hashtbl.fold (fun k v acc -> ((Hashtbl.hash k) lxor (Hashtbl.hash v) mod n) :: acc) dict []
let feature_engineer dict = _get_indices dict n
(* logistic regression *)
let rec _update indices weights step = match indices with
| [] -> ()
| h::tail -> weights.(h) <- (weights.(h) -. step) ; _update tail weights step
let predict indices = sigmoid (dot_product indices weights)
let update indices p y = _update indices weights ((p -. y) *. alpha)
let () = train train_dict_stream feature_engineer update predict log_loss refresh_loss "click"
A:
I would say that a 10% gain over PyPy means that the code is suboptimal; I would expect a gain factor of 10 to 100. To get 100, it would be better to use BLAS or GSL bindings. They will basically deliver the performance of C and Fortran programs. But I assume that this is a toy project, with the main purpose of learning OCaml and understanding its internals.
So let's start with the math. The dot_product is fine, except for the name. It is not a dot product, so it might confuse the readers, e.g., myself (and yourself, a few months later). The log_loss would be more readable if you used if/else instead of match. The pow function is not needed; surprisingly, OCaml already has one for floats: pow x y = x ** y.
In read_tools, it looks like more straightforward code without a stream would be more readable (and efficient). Just write a recursive function that reads a line and populates the hashtable. There is no need for intermediate data structures.
In train.ml, you pass too many parameters to the train function. If you indeed need that many, it is better to gather them into a record. But usually such a number of parameters indicates an improper choice of abstractions; maybe you should split your functions differently. The inner aux function has parameters that are loop-invariant (updater and dict_stream). It is better to convert strings to numbers when you're populating the hashtable, instead of doing this every time you look up an element. First of all, it is better style, as you enforce the invariants as soon as possible (usually you should try to catch errors as soon as possible). Second, it is more efficient if you hit the same value more than once.
Final note: you're using recursive functions too often in places where you could use the iter and fold_left functions. For example,
let rec _update indices weights step = match indices with
| [] -> ()
| h::tail -> weights.(h) <- (weights.(h) -. step) ; _update tail weights step
Can be written as
let _update idxs ws step =
List.iter (fun idx -> ws.(idx) <- ws.(idx) -. step) idxs
Q:
How to read multiple Google Drive CSV files and append then in a single one with Python?
I have multiple CSV files in a Google Drive folder and I want to concatenate them into a single one. There are a lot of them, which is why I'm querying the API for the list of file IDs and running this Python loop to read and concatenate them.
fileList = drive.ListFile({'q': "'1AtVlO3pL1OyP8yy02gWuAm9aMxOEWcnu' in parents and trashed=false"}).GetList()
df_filelist_id = pd.DataFrame(fileList)
list_ov_id = df_filelist_id['id']
df_overview = []
for i in list_ov_id:
downloaded = drive.CreateFile({'id':i})
downloaded.GetContentFile('Filename.csv')
df_ov = pd.read_csv('Filename.csv')
df_overview.append(df_ov)
df_overview = pd.DataFrame(df_overview)
But that's the result:
FileNotDownloadableError Traceback (most recent call last)
<ipython-input-65-8dbfd98f1ef9> in <module>()
8 for i in list_ov_id:
9 downloaded = drive.CreateFile({'id':i})
---> 10 downloaded.GetContentFile('Filename.csv')
11 df_ov = pd.read_csv('Filename.csv')
12 df_overview.append(df_ov)
2 frames
/usr/local/lib/python3.6/dist-packages/pydrive/files.py in FetchContent(self, mimetype, remove_bom)
263 else:
264 raise FileNotDownloadableError(
--> 265 'No downloadLink/exportLinks for mimetype found in metadata')
266
267 if mimetype == 'text/plain' and remove_bom:
FileNotDownloadableError: No downloadLink/exportLinks for mimetype found in metadata
Does anyone have any idea how to solve it? Is there another better way?
Thx!
A:
Solved!
There was a non-CSV file in the folder. By the way, I made some changes to the code. This is the final version:
listed = drive.ListFile({'q': "title contains '.csv' and 'FileOrFolderID' in parents"}).GetList()
list_id = []
list_title = []
for file in listed:
list_id.append(file['id'])
list_title.append(file['title'])
df = pd.DataFrame()
for id, title in zip(list_id, list_title):
each_file = drive.CreateFile({'id': id})
each_file.GetContentFile(title)
df_each_file = pd.read_csv(title)
df = df.append(df_each_file, ignore_index=True)
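A side note on the pandas half of this (not the Drive part): appending to a DataFrame in a loop copies data on every iteration, and newer pandas versions have removed DataFrame.append entirely. The idiomatic pattern is to collect the frames in a list and call pd.concat once. A sketch, with in-memory CSV text standing in for the downloaded files:

```python
import io

import pandas as pd

# Stand-ins for the CSV files downloaded from Drive.
csv_texts = ["a,b\n1,2\n3,4\n", "a,b\n5,6\n"]

# Read each CSV into a frame, then concatenate all of them in one call.
frames = [pd.read_csv(io.StringIO(text)) for text in csv_texts]
df = pd.concat(frames, ignore_index=True)

print(df.shape)  # (3, 2)
```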
Q:
Query HDF5 in Pandas
I have following data (18,619,211 rows) stored as a pandas dataframe object in hdf5 file:
date id2 w
id
100010 1980-03-31 10401 0.000839
100010 1980-03-31 10604 0.020140
100010 1980-03-31 12490 0.026149
100010 1980-03-31 13047 0.033560
100010 1980-03-31 13303 0.001657
where id is index and others are columns. date is np.datetime64. I need to perform query like this (the code doesn't work of course):
db=pd.HDFStore('database.h5')
data=db.select('df', where='id==id_i & date>bgdt & date<endt')
Note id_i, bgdt, endt are all variables, not actual values, and need to be passed within a loop. For example:
dates is a pandas Period index or timestamp index; either way, I can convert between them:
dates=['1990-01', '1990-04', '1990-09', ......]
id_list is a list of IDs
id_list=[100010, 100011,1000012,.......]
The loop looks like this (the reason I am looping is that the data is huge, and there are other datasets I have to query at the same time before performing some operations):
db=pd.HDFStore('database.h5')
for id_i in id_list:
for date in dates:
bgdt=date-1 (move to previous month)
endt=date-60 (previous 60 month)
data=db.select('df', where='index==id_i & date>bgdt & date<endt')
......
This problem has 2 parts:
I don't know how to query the index and columns at the same time. The docs in pandas show how to query based on index conditions OR column conditions, but no examples of how to query based on both AT THE SAME TIME.
(BTW, this is very common in the pandas documentation. The docs usually show very simple things, like how to do 'A' OR how to do 'B', but not how to do BOTH 'A' and 'B'. A good example is using query on a MultiIndex pandas dataframe: the docs show queries based on either level=0 OR level=1, but no example of how to do BOTH AT THE SAME TIME.)
I don't know how to pass the three variables id_i, bgdt, endt to the query. I know how to pass only one by using %s, but not all of them.
I am also a little confused with the datetime datatype. There seems to be quite a few of datetimes:datetime.datetime, numpy.datetime64, pandas.Period. I am mostly working on monthly data, so pandas.Period is the most useful one. But I can't easily convert a column (not index) of timestamps (the default date type of Pandas when parsed from raw data). Is there any datatype that is simply a 'date', not timestamps, not period, but just a simple DATE with only year,month and day?
A lot of troubles, but I really LOVE Python and pandas (I am trying to move my workflow from SAS to Python). Any help will be appreciated!
A:
here are the docs for querying on non-index columns.
Create the test data. It is not clear how the original frame is constructed, e.g. whether its unique data and the ranges, so I have created a sample, with 10M rows, and a multi-level date range with the id column.
In [60]: np.random.seed(1234)
In [62]: pd.set_option('display.max_rows',20)
In [63]: index = pd.MultiIndex.from_product([np.arange(10000,11000),pd.date_range('19800101',periods=10000)],names=['id','date'])
In [67]: df = DataFrame(dict(id2=np.random.randint(0,1000,size=len(index)),w=np.random.randn(len(index))),index=index).reset_index().set_index(['id','date'])
In [68]: df
Out[68]:
id2 w
id date
10000 1980-01-01 712 0.371372
1980-01-02 718 -1.255708
1980-01-03 581 -1.182727
1980-01-04 202 -0.947432
1980-01-05 493 -0.125346
1980-01-06 752 0.380210
1980-01-07 435 -0.444139
1980-01-08 128 -1.885230
1980-01-09 425 1.603619
1980-01-10 449 0.103737
... ... ...
10999 2007-05-09 8 0.624532
2007-05-10 669 0.268340
2007-05-11 918 0.134816
2007-05-12 979 -0.769406
2007-05-13 969 -0.242123
2007-05-14 950 -0.347884
2007-05-15 49 -1.284825
2007-05-16 922 -1.313928
2007-05-17 347 -0.521352
2007-05-18 353 0.189717
[10000000 rows x 2 columns]
Write the data to disk, showing how to create a data column (note that the index levels are automatically queryable; this allows id2 to be queryable as well). This single call takes care of opening and closing the store; it is de facto equivalent to opening a store, appending, and closing it yourself.
In order to query a column, it MUST BE A DATA COLUMN or an index of the frame.
In [70]: df.to_hdf('test.h5','df',mode='w',data_columns=['id2'],format='table')
In [71]: !ls -ltr test.h5
-rw-rw-r-- 1 jreback users 430540284 May 26 17:16 test.h5
Queries
In [80]: ids=[10101,10898]
In [81]: start_date='20010101'
In [82]: end_date='20010301'
You can specify dates as string (either in-line or as variables; you can also specify Timestamp like objects)
In [83]: pd.read_hdf('test.h5','df',where='date>start_date & date<end_date')
Out[83]:
id2 w
id date
10000 2001-01-02 972 -0.146107
2001-01-03 954 1.420412
2001-01-04 567 1.077633
2001-01-05 87 -0.042838
2001-01-06 79 -1.791228
2001-01-07 744 1.110478
2001-01-08 237 -0.846086
2001-01-09 998 -0.696369
2001-01-10 266 -0.595555
2001-01-11 206 -0.294633
... ... ...
10999 2001-02-19 616 -0.745068
2001-02-20 577 -1.474748
2001-02-21 990 -1.276891
2001-02-22 939 -1.369558
2001-02-23 621 -0.214365
2001-02-24 396 -0.142100
2001-02-25 492 -0.204930
2001-02-26 478 1.839291
2001-02-27 688 0.291504
2001-02-28 356 -1.987554
[58000 rows x 2 columns]
You can use in-line lists
In [84]: pd.read_hdf('test.h5','df',where='date>start_date & date<end_date & id=ids')
Out[84]:
id2 w
id date
10101 2001-01-02 722 1.620553
2001-01-03 849 -0.603468
2001-01-04 635 -1.419072
2001-01-05 331 0.521634
2001-01-06 730 0.008830
2001-01-07 706 -1.006412
2001-01-08 59 1.380005
2001-01-09 689 0.017830
2001-01-10 788 -3.090800
2001-01-11 704 -1.491824
... ... ...
10898 2001-02-19 530 -1.031167
2001-02-20 652 -0.019266
2001-02-21 472 0.638266
2001-02-22 540 -1.827251
2001-02-23 654 -1.020140
2001-02-24 328 -0.477425
2001-02-25 871 -0.892684
2001-02-26 166 0.894118
2001-02-27 806 0.648240
2001-02-28 824 -1.051539
[116 rows x 2 columns]
You can also specify boolean expressions
In [85]: pd.read_hdf('test.h5','df',where='date>start_date & date<end_date & id=ids & id2>500 & id2<600')
Out[85]:
id2 w
id date
10101 2001-01-12 534 -0.220692
2001-01-14 596 -2.225393
2001-01-16 596 0.956239
2001-01-30 513 -2.528996
2001-02-01 572 -1.877398
2001-02-13 569 -0.940748
2001-02-14 541 1.035619
2001-02-21 571 -0.116547
10898 2001-01-16 591 0.082564
2001-02-06 586 0.470872
2001-02-10 531 -0.536194
2001-02-16 586 0.949947
2001-02-19 530 -1.031167
2001-02-22 540 -1.827251
To answer your actual question I would do this (there is really not enough information, but I'll make some reasonable assumptions):
Don't loop over queries, unless you have a very small number of absolute queries.
Read the biggest chunk into memory that you can. Usually this is accomplished by selecting out the biggest ranges of data that you need, even if you select MORE data than you actually need.
Then subselect using in-memory expressions, which will generally be orders of magnitude faster.
List elements are limited to about 30 elements in total (this is currently an implementation limit on the PyTables side). It will work if you specify more, but what will happen is that you will read in a lot of data and it will then be reindexed down (in memory). So the user needs to be aware of this.
So for example, say that you have 1000 unique ids with 10000 dates each, as my example demonstrates. You want to select, say, 200 of these, with a date range of 1000.
So in this case I would simply select on the dates then do the in-memory comparison, something like this:
df = pd.read_hdf('test.h5','df',where='date>=global_start_date & date<=global_end_date')
df = df[df.index.get_level_values('id').isin(list_of_ids)]
You also might have dates that change per ids. So chunk them, this time using a list of ids.
Something like this:
output = []
for i in range(0, len(list_of_ids), 30):
    ids = list_of_ids[i:(i + 30)]
    start_date = get_start_date_for_these_ids(ids)  # global bound for this chunk
    end_date = get_end_date_for_these_ids(ids)      # global bound for this chunk
    where = 'id=ids & date>=start_date & date<=end_date'
    df = pd.read_hdf('test.h5', 'df', where=where)
    output.append(df)
final_result = pd.concat(output)
The basic idea then is to select a superset of the data using the criteria that you want, sub-selecting in memory so it fits, while limiting the number of queries you do (e.g., imagine that you end up selecting a single row with each query; if you have to run that query 18M times, that is bad).
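A footnote on the datetime sub-question: converting among Timestamp, Period, and a plain date is a one-liner each way in pandas. A quick sketch:

```python
import datetime

import pandas as pd

ts = pd.Series(pd.to_datetime(['2014-05-26', '2014-06-30']))

monthly = ts.dt.to_period('M')    # pandas Period at monthly frequency
plain = ts.dt.date                # datetime.date: just year/month/day
back = monthly.dt.to_timestamp()  # Period -> Timestamp (start of month)

print(monthly.tolist())  # [Period('2014-05', 'M'), Period('2014-06', 'M')]
print(plain.iloc[0] == datetime.date(2014, 5, 26))  # True
```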
Q:
Essayer: j'essaie, or j'essaye?
I was confused how I should conjugate the verb essayer, or any other ones like nettoyer.
I looked up in a conjugation website,
j'essaie / j'essaye
I'm guessing that means both j'essaie and j'essaye are accepted. However, my teacher said to use j'essaie.
Which one is correct? If both are accepted, is there any difference?
A:
It looks like both are in regular use today, and that this is also the case for "payer", "balayer", as well as any other verb in "-ayer". I am not aware of any rule mandating to use one rather than the other when writing vs. speaking.
The pronunciation is slightly different though: "j'essaye" would sound at the end like "pareil", whereas "j'essaie" would sound like "sait".
I am not aware that one pronunciation or the other would have this or that connotation. It is even possible for native speakers to use both in the same conversation.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
aggregate sql with a where clause
I'm having trouble with this query.
SELECT adm_Consultant, count(adm_Consultant) as num
FROM Admission
WHERE adm.adm_ReferralDate >= '01/01/2014 00:00:00' AND adm.adm_ReferralDate <= '31/12/2014 00:00:00'
AND adm.adm_PriorSurgery = 'Yes'
AND adm.adm_Consultant <> ''
GROUP BY adm_Consultant
ERROR: General error
this works though, but returns the null values as-well
SELECT adm_Consultant, count(adm_Consultant) as num
FROM Admission
GROUP BY adm_Consultant
I tried the HAVING clause instead of the WHERE clause, but still it fails.
Please help.
here was my reading material.
COUNT(expr)
Returns a count of the number of non-NULL values of expr in the rows retrieved by a SELECT statement. The result is a BIGINT value.
https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_count
A:
You are forgetting to create alias adm
SELECT adm_Consultant, count(adm_Consultant) as num
FROM Admission adm
WHERE adm.adm_ReferralDate >= '01/01/2014 00:00:00' AND
adm.adm_ReferralDate <= '31/12/2014 00:00:00'
AND adm.adm_PriorSurgery = 'Yes'
AND adm.adm_Consultant <> ''
GROUP BY adm_Consultant
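To see the fix in action, here is a minimal sketch of the same shape of query (aliased table, WHERE filtering before GROUP BY) run against SQLite from Python. The rows are invented, and only the two columns needed for the filter are modeled:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Admission (adm_Consultant TEXT, adm_PriorSurgery TEXT)")
conn.executemany(
    "INSERT INTO Admission VALUES (?, ?)",
    [("Smith", "Yes"), ("Smith", "Yes"), ("Jones", "Yes"),
     ("Smith", "No"), ("", "Yes")])

# The alias 'adm' is declared in FROM, so the 'adm.' prefixes now resolve.
rows = conn.execute("""
    SELECT adm_Consultant, COUNT(adm_Consultant) AS num
    FROM Admission adm
    WHERE adm.adm_PriorSurgery = 'Yes'
      AND adm.adm_Consultant <> ''
    GROUP BY adm_Consultant
    ORDER BY adm_Consultant
""").fetchall()
print(rows)  # [('Jones', 1), ('Smith', 2)]
```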
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is the use of <> as a not-equal operator documented somewhere?
looking at existing APEX code I've seen quite a few places where not equal is tested using "<>" instead of "!=".
I can't find an official documentation where this behaviour is described. Does anyone know if this has any potential side effects? Should that be changed to use != wherever possible?
I'd be thankful for any hints / references.
A:
I personally always use != for clarity, unless I'm looking for a number that is either greater than or less than another number. Since <> implies greater than or less than as opposed to not equals, it doesn't read well with strings - unless alpha-sorting/ranking is what's desired.
For example:
String myVar = 'apple';
system.assert(myVar <> 'orange'); // Pass
system.assert(myVar < 'orange'); // Pass
system.assert(myVar > 'orange'); // Fail
So in the spirit of creating self-describing code, != is my choice for all "not-equal" scenarios that don't involve ranking or ordering of any kind. Otherwise <> will work the same, but with incompatible types you can't perform a < OR a >. In other words:
system.assert(myVar <> null); // Pass
system.assert(myVar > null); // Error: Comparison arguments must be compatible types: String, NULL)
So whatever more accurately describes the comparison is my 2-cents.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
java sqlite - Error: function not yet implemented for SQLite
I created the following class in java to make using SQLite easier when I code.
import java.sql.*;
public class Dbm {
//We want to use the connection througout the whole class so it is
//provided as a class level private variable
private Connection c = null;
//This constructor opens or creates the database provided by the argument
//NameOfDatabase
public Dbm(String NameOfDatabase){
try {
//Database is checked for in the project folder; if it doesn't exist, it is created
c = DriverManager.getConnection("jdbc:sqlite:" + NameOfDatabase);
} catch ( Exception e ) {
System.err.println( e.getClass().getName() + ": " + e.getMessage() );
System.exit(0);
}
System.out.println("Opened database successfully");
}
public void CloseDB(){
try{
c.close();
System.out.println("Closed Database Successfull");
}
catch (Exception e){
System.out.println("Failed to close Database due to error: " + e.getMessage());
}
}
public void ExecuteNoReturnQuery(String SqlCommand){
//creates a statment to execute the query
try{
Statement stmt = null;
stmt = c.createStatement();
stmt.executeUpdate(SqlCommand);
stmt.close();
System.out.println("Sql query executed successfull");
} catch (Exception e){
System.out.println("Failed to execute query due to error: " + e.getMessage());
}
}
// this method returns a ResultSet for a query which can be iterated through
public ResultSet ExecuteSqlQueryWithReturn(String SqlCommand){
try{
Statement stmt = null;
stmt = c.createStatement();
ResultSet rs = stmt.executeQuery(SqlCommand);
return rs;
}catch (Exception e){
System.out.println("An Error has ocured while executing this query" + e.getMessage());
}
return null;
}
}
Here is the main code in the program
import java.sql.*;
public class InstaText {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
Dbm db = new Dbm("people.db");
ResultSet rs = db.ExecuteSqlQueryWithReturn("select * from people;");
try{
String name = "";
int age = 0;
String address = "";
while (rs.isLast() == false){
name = rs.getString("name");
age = rs.getInt("age");
address = rs.getString("address");
System.out.println("Name is " + name +" age is " + age + " Address is " + address);
rs.next();
}
}catch (Exception e ){
System.out.println("Error: " + e.getMessage());
}
db.CloseDB();
}
}
But when I execute it I get the following output:
Opened database successfully
Error: function not yet implemented for SQLite
Closed Database Successfull
So how do I solve the Error "Error: function not yet implemented for SQLite"?
I am running the NetBeans Ide with the latest JDBC on mac os sierra.
Edit: Here is the output after adding e.printStackTrace(); in the catch block:
Opened database successfully
Error: function not yet implemented for SQLite
java.sql.SQLException: function not yet implemented for SQLite
Closed Database Successfull
at org.sqlite.jdbc3.JDBC3ResultSet.isLast(JDBC3ResultSet.java:155)
at instatext.InstaText.main(InstaText.java:24)
A:
The problem is not your select query but the isLast() method you are using on the ResultSet instance to retrieve the result. Try the next() method, it should work :
while (rs.next()){
    name = rs.getString("name");
    age = rs.getInt("age");
    address = rs.getString("address");
    System.out.println("Name is " + name +" age is " + age + " Address is " + address);
}
You can read here :
https://github.com/bonitasoft/bonita-connector-database/issues/1
that with SQLLite, you may have some limitations with the isLast() method :
According to JDBC documentation
(http://docs.oracle.com/javase/7/docs/api/java/sql/ResultSet.html)
calls to isLast() and first() methods are forbidden if the result set
type is TYPE_FORWARD_ONLY (e.g SQLite).
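The same advance-and-check pattern, shown with Python's sqlite3 module for comparison (the table and rows here are made up): rather than peeking ahead with an isLast()-style call, you advance the cursor and stop when nothing comes back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER, address TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [("Ann", 30, "1 Main St"), ("Bob", 41, "2 Oak Ave")])

cur = conn.execute("SELECT name, age, address FROM people")
seen = []
while True:
    row = cur.fetchone()  # the analogue of rs.next(): advance, or get None
    if row is None:       # no isLast()-style peeking is needed
        break
    seen.append(row)
print(seen)
```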
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Get array objects with specific ObjectId in nested array
I'm trying to get a specific array of objects depending on ObjectId they have.
Here is my MongoDB database:
{
"_id" : ObjectId("59edb571593904117884b721"),
"userids" : [
ObjectId("59edb459593904117884b71f")
],
"macaddress" : "MACADDRESS",
"devices" : [ ],
"projectorbrand" : "",
}
{
"_id" : ObjectId("59edb584593904117884b722"),
"userids" : [
ObjectId("59edb459593904117884b71f"),
ObjectId("59e4809159390431d44a9438")
],
"macaddress" : "MACADDRESS2",
"devices" : [ ],
"projectorbrand" : "",
}
The command in MongoDB is:
db.getCollection('co4b').find( {
userids: { $all: [ ObjectId("59edb459593904117884b71f") ] }
} )
This will work and will return an array filtered correctly.
I would like to translate this query in Golang.
Here is my code:
pipe := bson.M{"userids": bson.M{"$all": objectId}}
var objects[]models.Objects
if err := uc.session.DB("API").C("objects").Pipe(pipe).All(&objects); err != nil {
SendError(w, "error", 500, err.Error())
} else {
for i := 0; i < len(objects); i++ {
objects[i].Actions = nil
}
uj, _ := json.MarshalIndent(objects, "", " ")
SendSuccessJson(w, uj)
}
I'm getting error like wrong type for field (pipeline) 3 != 4. I saw that $all needs string array but how to filter by ObjectId instead of string?
Thanks for help
A:
You are attempting to use the aggregation framework in your mgo solution, yet the query you try to implement does not use one (and does not need one).
The query:
db.getCollection('co4b').find({
userids: {$all: [ObjectId("59edb459593904117884b71f")] }
})
Can simply be transformed to mgo like this:
c := uc.session.DB("API").C("objects")
var objects []models.Objects
err := c.Find(bson.M{"userids": bson.M{
"$all": []interface{}{bson.ObjectIdHex("59edb459593904117884b71f")},
}}).All(&objects)
Also note that if you're using $all with a single element, you can also implement that query using $elemMatch, which in MongoDB console would like this:
db.getCollection('co4b').find({
userids: {$elemMatch: {$eq: ObjectId("59edb459593904117884b71f")}}
})
Which looks like this in mgo:
err := c.Find(bson.M{"userids": bson.M{
"$elemMatch": bson.M{"$eq": bson.ObjectIdHex("59edb459593904117884b71f")},
}}).All(&objects)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Display duplicate events after update firestore data, but the data itself in firestore isn't duplicated
I have a web application(Angular) and mobile application(Ionic). Both of them share the same Firestore data.
When I use the web application to update existing data, the Ionic app shows duplicate items (the duplicates are gone after restarting the mobile app). I checked the item data itself in Firestore; it was updated and unique. Does anyone have any clue about this?
This issue only occurs on the mobile app other than the web app, both of them use "angularfire2": "^5.0.0-rc.4",
import { AngularFirestore, AngularFirestoreCollection } from 'angularfire2/firestore';
this.posts$ = this.db.getRecentPosts().snapshotChanges().pipe(
map(arr => arr.map(doc => {
return { id: doc.payload.doc.id, ...doc.payload.doc.data() }
}
))
);
Did research and it seems like(not 100% sure) an angularfire2 issue:
AngularFirestoreCollection sometimes returns duplicate of records after inserting a new record
A:
Since the duplicates are gone after a restart, and other people report this issue as well, it feels to me that the problem is within AngularFirestore itself. As a workaround you could try the following:
import { AngularFirestore, AngularFirestoreCollection } from 'angularfire2/firestore';
this.posts$ = this.db.getRecentPosts().snapshotChanges().pipe(
  map(arr => arr.reduce((acc, doc) => { // map -> reduce
    const id = doc.payload.doc.id;
    if (!(id in acc)) { // remove this guard to keep the last duplicate instead
      acc[id] = { id, ...doc.payload.doc.data() };
    }
    return acc;
  }, {})), // reduce seed
  // you can also Object.values(arr.reduce(...)), but this I find a bit more readable
  map(coll => Object.values(coll))
);
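The dedupe-by-id idea is independent of Angular; here is the same reduction as a small Python sketch, with invented dicts standing in for the snapshot payloads:

```python
def dedupe_by_id(docs, keep_first=True):
    """Collapse a list of {'id': ...} dicts to one entry per id."""
    acc = {}
    for doc in docs:
        if keep_first and doc["id"] in acc:
            continue  # first occurrence wins; flip keep_first to keep the last
        acc[doc["id"]] = doc
    return list(acc.values())

docs = [{"id": "a", "title": "post 1"},
        {"id": "b", "title": "post 2"},
        {"id": "a", "title": "post 1 (duplicate event)"}]
print(dedupe_by_id(docs))
```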
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Count files occurrences linux
I want to count the number of times each word in column 3 occurs. Below is the input
IN A three
US B one
LK C two
US B three
US A one
IN A one
US B three
LK C three
US B two
US A two
IN A two
US B two
Output should look like:
IN A three 4
US B one 3
LK C two 5
US B three 4
US A one 3
IN A one 3
US B three 4
LK C three 4
US B two 5
US A two 5
IN A two 5
US B two 5
A:
This is one way:
$ awk 'FNR==NR{++a[$3]; next} {print $0, a[$3]}' file file
IN A three 4
US B one 3
LK C two 5
US B three 4
US A one 3
IN A one 3
US B three 4
LK C three 4
US B two 5
US A two 5
IN A two 5
US B two 5
Explanation
It loops through the file twice: firstly to fetch data, secondly to print it.
FNR==NR{++a[$3]; next} when looping for the first time, keep track of how many times the 3rd value appears.
{print $0, a[$3]} when looping for the second time, print the line plus the counter value.
To have a nicer output you can also use printf to print a tab after the 3rd column:
{printf "%s\t%s\n", $0, a[$3]}
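For comparison, the same two-pass approach in Python, with the file's lines held in a list:

```python
from collections import Counter

lines = ["IN A three", "US B one", "LK C two", "US B three",
         "US A one", "IN A one", "US B three", "LK C three",
         "US B two", "US A two", "IN A two", "US B two"]

# First pass: count how many times each 3rd-column value appears.
counts = Counter(line.split()[2] for line in lines)

# Second pass: emit each line with its count appended.
annotated = ["%s %d" % (line, counts[line.split()[2]]) for line in lines]
print("\n".join(annotated))
```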
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Override template shell on linux system in Active Directory domain?
Is there an easy way to override the Samba "template shell = /bin/bash" setting on a per-user basis?
This is for Linux systems joined to an Active Directory domain. Some users want /bin/bash. Others including myself want /bin/zsh. Is there some AD attribute I can set?
Anything I've found via googling seems hackish at best (writing a script to replace /bin/sh -- maintenance hassle).
A similar serverfault question Override LDAP shell seems OpenLDAP-oriented (but if someone knows how to get it working with AD, please say so).
A:
If you're using winbind, you can do the following:
Install IDMU as Christoph suggested. If you have 2003 R2 or later, the necessary RFC 2307 schema is already installed, so you can skip this step.
Add the following to smb.conf, per the Samba wiki:
winbind nss info = rfc2307
Again, this is only going to work if you're using winbind. Restart it once you've made the change.
Set users' loginShell attribute in Active Directory. winbind will honor that setting on its next refresh.
A:
If you have a large number of Linux systems using AD, this may not be efficient, but for a small number of systems, the simplest way of doing this is to run the following on the Linux machine:
getent passwd ADUSER >> /etc/passwd
Then edit the corresponding line in /etc/passwd to reflect the preferred shell (or better yet, use sed to change the shell entry on the fly before appending to /etc/passwd). IDMU, as others have suggested, is probably the most elegant solution for multiple hosts, but the above example gets the job done if you're just looking to do this on a few systems.
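The "change the shell entry on the fly" step amounts to rewriting the 7th colon-separated field of a passwd(5)-format line. A small sketch of that transformation (the input line here is invented; in practice it would come from getent passwd ADUSER):

```python
def replace_shell(passwd_line, new_shell):
    """Swap the login-shell field (the 7th) of a passwd(5)-format line."""
    fields = passwd_line.split(":")
    fields[6] = new_shell
    return ":".join(fields)

line = "aduser:x:10001:10001:AD User:/home/aduser:/bin/bash"
print(replace_shell(line, "/bin/zsh"))
# aduser:x:10001:10001:AD User:/home/aduser:/bin/zsh
```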
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how to restrict unwanted image storage in wordpress media upload
I am using WordPress, and when I upload a new image it tends to be cropped/resized into different sizes (thumbnail (150*150), medium (300*200), large) and stored. For a single image it generates around 4 different images which are generally not used. I need to restrict it to just the thumbnail and the original image. How do I do this?
A:
You just need to set the width and height of the medium and large image sizes to 0 in the Media settings in WordPress.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
RegEx to match a number
I'm looking to write a grep function to find which lines of a text have a number of ANY format in it.
[exs of formats: (156),(1.67),(1,467),($1,654.00), (one thousand two hundred and sixty), (Two Hundred Six), roman numerals such as MCCXXXIV. ]
** I am assuming that if "I" is by itself it is the english word and not the Roman Numeral**
A:
Here's a working solution, based off this answer, which is the one I could most easily get to work with word boundaries.
I'll leave it to you to decide how high you want the number words to go... note that, as written, I use word boundaries with the number words to prevent matching words like "none" or "bitten", which contain number words. The downside is that while it will match "twenty one" and "twenty-one", it will not match "twentyone".
I filled out the examples a little bit to illustrate.
detect_arabic_numerals = function(x) grepl("[0-9]", x)
detect_roman_numerals = function(x) {
x = gsub("\\bI\\b", "", x, ignore.case = TRUE) # Prevent lone I matches
grepl("\\b(M{1,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})|M{0,4}(CM|C?D|D?C{1,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})|M{0,4}(CM|CD|D?C{0,3})(XC|X?L|L?X{1,3})(IX|IV|V?I{0,3})|M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|I?V|V?I{1,3}))\\b", x, ignore.case = TRUE)
}
detect_number_words = function(x) {
number_words = c(
"one",
"two",
"three",
"four",
"five",
"six",
"seven",
"eight",
"nine",
"ten",
"eleven",
"twelve",
"thirteen",
"fourteen",
"fifteen",
"sixteen",
"seventeen",
"eighteen",
"nineteen",
"twenty",
"thirty",
"forty",
"fifty",
"sixty",
"seventy",
"eighty",
"ninety",
"hundred",
"thousand",
"million"
)
grepl(paste("\\b", number_words, "\\b", collapse = "|", sep = ""), x, ignore.case = TRUE)
}
detect_numbers = function(x) {
detect_arabic_numerals(x) | detect_number_words(x) | detect_roman_numerals(x)
}
stuff<-c("Examples of numbers are one and two, 3, 1,284 and fifty nine.",
"Do you have any lucky numbers?",
"Roman numerals such as XIII and viii are my favorites.",
"I also like colors such as blue, green and yellow.",
"This ice pop costs $1.48.",
"Extra case none match",
"But please match this one",
"Even hyphenated forty-five",
"Wish to match fortyfive")
stuff[detect_numbers(stuff)]
# [1] "Examples of numbers are one and two, 3, 1,284 and fifty nine."
# [2] "Roman numerals such as XIII and viii are my favorites."
# [3] "This ice pop costs $1.48."
# [4] "But please match this one"
# [5] "Even hyphenated forty-five"
It's not perfect---the problem I just noticed is that, because punctuation is counted as a word-boundary, contractions where the suffix is a valid Roman numeral like "I'll" or "We'd" will match as Roman numerals. You could potentially remove punctuation as a pre-process step inside detect_roman_numerals, much like I already pre-process to remove the lone "I"s.
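For readers working outside R, here is a rough Python port of the same three detectors (digits, number words, Roman numerals). It is a sketch, not a faithful translation: the number-word list is truncated, and Roman-numeral validation is done per word with an anchored pattern, excluding a lone "I" as the pronoun.

```python
import re

NUMBER_WORDS = ["one", "two", "three", "four", "five", "six", "seven",
                "eight", "nine", "ten", "twenty", "thirty", "forty",
                "fifty", "hundred", "thousand", "million"]
WORD_RE = re.compile(r"\b(" + "|".join(NUMBER_WORDS) + r")\b", re.I)
# Strict Roman-numeral grammar, applied to whole words only.
ROMAN_RE = re.compile(
    r"M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$", re.I)

def is_roman(word):
    return bool(word) and ROMAN_RE.match(word) is not None

def has_number(text):
    words = re.findall(r"[A-Za-z]+", text)
    romans = [w for w in words if w.upper() != "I"]  # a lone "I" is the pronoun
    return bool(re.search(r"[0-9]", text)
                or WORD_RE.search(text)
                or any(is_roman(w) for w in romans))

stuff = ["Examples of numbers are one and two, 3, 1,284 and fifty nine.",
         "Do you have any lucky numbers?",
         "Roman numerals such as XIII and viii are my favorites.",
         "This ice pop costs $1.48."]
print([s for s in stuff if has_number(s)])
```

As with the R version, contraction suffixes that happen to be valid numerals (the "d" in "I'd", for example) will still trigger the Roman detector, so stripping punctuation first remains a sensible pre-processing step.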
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Xcode 4: editing color of highlighted line
In Xcode 4, how do I change the color of a line highlighted by error correction?
In this example, the first highlighted line correctly uses the color set by the "Selection" option in "Fonts&Colors", while the other one is highlighted clicking on its warning sign. How do I change that color?
A:
I've made an Xcode Plugin that allows you to customize the colors of the inline error and warning messages. See here.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Contextual ActionBar copy text from custom ListView
I have a custom ListView adapter which contains 2 Textviews for each item , On long click on the item I want to have the option to copy the text from 1 of these Textviews to the clipboard, the question is how can i get this Textview's text?
public boolean onActionItemClicked(ActionMode mode, MenuItem item) {
switch (item.getItemId()) {
case R.id.copy:
Toast.makeText(getActivity(),"Text copied to clipboard.", Toast.LENGTH_SHORT).show();
//HOW CAN I GET THE TEXT?
mode.finish();
return true;
case R.id.share:
return false;
default:
return false;
}
}
A:
ListView list = (ListView) findViewById(R.id.yourList);
list.setOnItemLongClickListener(new OnItemLongClickListener() {
    public boolean onItemLongClick(AdapterView<?> a, View v, int position, long id) {
        TextView yourFirstTextView = (TextView) v.findViewById(R.id.yourFirstTextViewId);
        TextView yourSecondTextView = (TextView) v.findViewById(R.id.yourSecondTextViewId);
        copyTextToClipboard(yourFirstTextView);//if you want to copy your first textview
        copyTextToClipboard(yourSecondTextView);//if you want to copy your second textview
        return true; // the listener must return a boolean; true consumes the long click
    }
});
public void copyTextToClipboard(TextView txtView){
int sdk = android.os.Build.VERSION.SDK_INT;
if(sdk < android.os.Build.VERSION_CODES.HONEYCOMB) {
android.text.ClipboardManager clipboard = (android.text.ClipboardManager) getSystemService(Context.CLIPBOARD_SERVICE);
clipboard.setText(txtView.getText().toString());
} else {
android.content.ClipboardManager clipboard = (android.content.ClipboardManager) getSystemService(Context.CLIPBOARD_SERVICE);
android.content.ClipData clip = android.content.ClipData.newPlainText("text label",txtView.getText().toString());
clipboard.setPrimaryClip(clip);
}
}
I haven't tested this code, but it should work.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Stored procedure expects parameter when no IN parameters are defined
I have this very simple stored procedure in Oracle that executes a sequence and gives the next sequence number as output.
create or replace PROCEDURE NEXT_NUMBER
(SEQUENCE_OUT OUT NUMBER)
IS
BEGIN
EXECUTE IMMEDIATE 'SELECT TEST_SEQUENCE.NEXTVAL FROM DUAL' INTO sequence_out;
END;
As you can see, there are no IN parameters to this procedure so I'm puzzled when I execute this procedure like this: execute CRS_NEXT_CRC_NUMBER;
and I get the following error:
Error starting at line : 1 in command -
execute NEXT_NUMBER
Error report -
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'CRS_NEXT_CRC_NUMBER'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
Any idea why this could be happening? I can call the NEXTVAL function on the SEQUENCE outside of the procedure without a problem.
A:
create or replace PROCEDURE NEXT_NUMBER
(SEQUENCE_OUT OUT NUMBER)
IS
BEGIN
SELECT TEST_SEQUENCE.NEXTVAL INTO sequence_out FROM DUAL;
-- or simply (in newer Oracle releases)
sequence_out := TEST_SEQUENCE.NEXTVAL;
END;
In SQLPLUS:
> var ID NUMBER
> exec NEXT_NUMBER(:ID);
> print ID
|
{
"pile_set_name": "StackExchange"
}
|
Q:
git command not recognisable in android studio terminal
I have just downloaded Android Studio, as well as git.exe. I have updated the path in Version Control --> Git with the appropriate git.exe path, tested it, and it was successful. Now, when I type git clone http://my projectlink from the terminal, it says
"'git' is not recognized as an internal or external command, operable program or batch file."
Am I missing anything else?
A:
I found the simple solution here.
Go to File--> Settings-->Terminal-->Shell Path --> Browse--> path of git bash.exe-->Apply-->OK
Now Restart the Android studio to effect changes.
OR
You could also just start a new session
That's it. Now you can use terminal as git bash..
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to use slugs in routes
Is there a way to use slugs in my routes, so that instead of domain/technicalInformation it would be domain/technical-information? Thank you!
PagesController.php
class PagesController extends Controller
{
public function technicalInformation(){
return view('pages.technical-information');
}
}
web.php
Route::get('/technicalInformation','ConsumerController@technicalInformation')->name('technical-information');
A:
Well you just change to this and it should work.
Route::get('/technical-information','ConsumerController@technicalInformation')->name('technical-information');
|
{
"pile_set_name": "StackExchange"
}
|
Q:
PHP CLI won't log errors
PHP currently will not log errors produced from the command line.
I have :
log_errors = On
error_log = /var/log/php_errors.log
in /etc/php5/cli/php.ini
Am I missing a further setting to get this working?
A:
Please check that the user account running PHP CLI has write access to /var/log/php_errors.log.
Additionally, you can verify that you are using the correct php.ini file like this:
php -a -c /etc/php5/cli/php.ini
A:
This question and answer thread was very helpful to me while setting up PHP CLI logging on an Ubuntu 12.04 environment, so I wanted to post an answer that distills what I learned. In addition to the great info provided by David Chan as well as George Cummins, I have created a logrotate.d script to ensure the PHP CLI error log doesn't grow out of control, and set this up so multiple users are able to log errors to the common PHP CLI error log.
First, the default behavior of the PHP CLI is to log error messages to standard output; logging to a file is not default behavior. This usually means logging to the same command-line terminal session that is running the PHP CLI command. While the PHP ini file does have accommodations for a specified error_log, additional accommodations need to be made to truly make it work.
First, I had to create an initial php_errors.log file:
sudo touch /var/log/php_errors.log
Since the server in question is used by web developers working on various projects, I have set up a common group for them called www-users. And in this case, since I want the php_errors.log to be readable and writable by www-users, I change the ownership of the file like this:
sudo chown root:www-users /var/log/php_errors.log
And then change the permissions of the file to this:
sudo chmod 664 /var/log/php_errors.log
Yes, from a security standpoint having a log file readable and writable by anyone in www-users is not so great. But this is a controlled shared work environment. So I trust the users to respect things like this. And besides, when PHP is run from the CLI, any user who can do that will need write access to the logs anyway to even get a log written.
Next, go into /etc/php5/cli/php.ini to adjust the default Ubuntu 12.04 settings to match this new log file:
sudo nano /etc/php5/cli/php.ini
Happily log_errors is enabled by default in Ubuntu 12.04:
log_errors = On
But to allow logging to a file we need to change the error_log to match the new file like this:
error_log = /var/log/php_errors.log
Setup a logrotate.d script.
Now that should be it, but since I don’t want logs to run out of control I set a logrotate.d for the php_errors.log. Create a file called php-cli in /etc/logrotate.d/ like this:
sudo nano /etc/logrotate.d/php-cli
And place the contents of this log rotate daemon script in there:
/var/log/php_errors.log {
weekly
missingok
rotate 13
compress
delaycompress
copytruncate
notifempty
create 664 root www-users
sharedscripts
}
Testing the setup.
With that done, let’s test the setup using David Chan’s tip above:
php -r "error_log('This is an error test that we hope works.');"
If that ran correctly, you should just be bounced back to an empty command prompt since PHP CLI errors are no longer being sent to standard output. So check the actual php_errors.log for the test error message like this:
tail -n 10 /var/log/php_errors.log
And there should be a timestamped error line in there that looks something like this:
[23-Jul-2014 16:04:56 UTC] This is an error test that we hope works.
A:
As a diagnostic, you can attempt to force a write to the error log this way:
php -c /etc/php5/cli/php.ini -r " error_log('test 123'); "
you should now see test 123 in your log
tail /var/log/php_errors.log
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Differential of a matrix function with Taylor development
I'm a beginner and I have a question. I'm sorry for my bad english.
Let $A$ be an invertible n by n matrix, and let $F$ be a function defined on $M_n(C)$ by $F(X) = X^2 - A$.
I would like to know how we can calculate $DF(X)(H)$, the differential of $F$ at the point $X \in M_n(C)$ for an increment $H \in M_n(C)$, using a Taylor development at order 1.
Could someone help me ? Thank you in advance.
A:
In this case, the best way to calculate the directional derivative is straight by definition.
$\begin{align}DF(X)H&=\lim_{t\to0}\frac{F(X+tH)-F(X)}{t}\\&=\lim_{t\to0}\frac{(X+tH)^2-A-(X^2-A)}{t}\\&=\lim_{t\to0}\frac{tHX+tXH+t^2H^2}{t}\\&=HX+XH.\end{align}$
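As a quick numerical sanity check of $DF(X)H = HX + XH$, the limit can be approximated by a finite difference on a concrete 2 by 2 example. The matrices below are arbitrary, and the arithmetic is hand-rolled so nothing beyond the standard library is needed.

```python
def matmul(A, B):
    """2x2 matrix product over plain nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B, s=1.0):
    """Elementwise A + s*B."""
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

def F(X):
    # F(X) = X^2 - A; the constant A cancels in F(X+tH) - F(X),
    # so it is omitted here.
    return matmul(X, X)

X = [[1.0, 2.0], [3.0, 4.0]]
H = [[0.5, -1.0], [2.0, 0.25]]
t = 1e-6

# Finite-difference approximation of DF(X)H ...
diff = madd(F(madd(X, H, t)), F(X), -1.0)
fd = [[diff[i][j] / t for j in range(2)] for i in range(2)]
# ... compared against the closed form HX + XH.
exact = madd(matmul(H, X), matmul(X, H))
err = max(abs(fd[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(err < 1e-4)  # True
```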
|
{
"pile_set_name": "StackExchange"
}
|
Q:
sbt new not a valid command
I am going through a coursera course and as explained, I am trying to create a new sbt project using the below command:
sbt new scala/hello-world.g8
In a Windows machine. I have sbt 0.13.8 installed. When executing the command it is giving the below error.
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[info] Set current project to workspaces (in build file:/D:/software%20materials%20workspaces/WorkSpaces/)
[error] Not a valid command: new (similar: set)
[error] Not a valid project ID: new
[error] Expected ':' (if selecting a configuration)
[error] Not a valid key: new (similar: name, run, runner)
[error] new
[error] ^
I am not able to understand what the problem is. Help me out in understanding and solving this problem
A:
I agree with Seth. It looks like the first version to include new is 0.13.13. I just came across this in creating new apps with more recent versions of the Scala Play framework (ex: 2.5). The github templating command line, giter8, is called via sbt new like this:
sbt new scala/scala-seed.g8
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Trying to replace an element of a string array but only able to input a new string in that position. C++
Hello I am attempting to create a very simplified inventory system and I am having a slight issue. When trying to change an element in my string array, I am instead putting a new string value into the existing array and pushing everything else forwards.
// ConsoleApplication2.cpp : Defines the entry point for the console application.
#include "stdafx.h"
#include <iostream>
#include <string>
int main()
{
const int maxItems = 10; //Maximum number of items inventory can hold
int numbItems = 0; //number of current items
std::string inventory[maxItems]; //inventory
//Items in inventory
inventory[++numbItems] = "Sword";
inventory[++numbItems] = "Cloak";
inventory[++numbItems] = "Boots";
//Show player items in inventory
for (int i = 0; i <= numbItems; ++i)
{
std::cout << inventory[i] << "\n";
}
inventory[0] = "Axe"; //Replace sword with axe
//Show player items in inventory
for (int i = 0; i <= numbItems; ++i)
{
std::cout << inventory[i] << "\n";
}
//keep window open
std::string barn;
std::cin >> barn;
return 0;
}
This code outputs; "axe, sword, cloak and boots" when the desired result is "axe, cloak and boots".
Thank you in advance.
A:
As far as I can see from your code, the numbItems for sword is 1 (you use pre-increment). Probably using post-increment will fix your problem.
Try using the code below (live IdeOne code):
// ConsoleApplication2.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#include <string>
int main()
{
const int maxItems = 10; //Maximum number of items inventory can hold
int numbItems = 0; //number of current items
std::string inventory[maxItems]; //inventory
//Items in inventory
inventory[numbItems++] = "Sword";
inventory[numbItems++] = "Cloak";
inventory[numbItems++] = "Boots";
//Show player items in inventory
for (int i = 0; i < numbItems; ++i)
{
std::cout << inventory[i] << "\n";
}
inventory[0] = "Axe"; //Replace sword with axe
//Show player items in inventory
for (int i = 0; i < numbItems; ++i)
{
std::cout << inventory[i] << "\n";
}
//keep window open
std::string barn;
std::cin >> barn;
return 0;
}
On my computer, this works (the output is Sword,Cloak,Boots and Axe,Cloak,Boots).
A:
You use the preincrement-operator to fill your array.
inventory[++numbItems] = "Sword";
As numbItems starts at 0, you insert your first element at 1.
Just use the post-increment, and it will work fine.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
object oriented design question for gui application
Guys, I am programming a GUI for an application (a CD container into which CDs can be inserted), and currently I am not very clear on the design, so I think I need some help to clarify my understanding of object-oriented design.
so, first, I use observer pattern to build abstract Model and View classes and also the concrete models(cd container) and concrete views(cd container view). Then I start to use the wxwidget framework to design and graphical appearance or layout (CDContainerWidget, from wxPanel) for the cd container and other gui controls MainFrame(from wxFrame), etc..
so now I have three classes: CDContainerModel (cd container), CDContainerView (the class for observer pattern), and CDContainerWidget (the gui controls).
then I become not that clear about what I should do with the CDContainerView and CDContainerWidget?
I think CDContainerWidget and CDContainerView both need CDContainerModel. I have thought about four approaches, but do not know which one is appropriate:
1). associate CDContainerWidget into CDContainerView as a member variable, then put the CDContainerView into the Main Frame as a member variable.
class CDContainerView:
def __init__:
self.gui=CDContainerWidget
class MainFrame:
def __init__:
CDContainerView
2). CDContainerView subclass CDContainerWidget:
class CDContainerView(CDContainerWidget):
class MainFrame:
def __init__:
CDContainerView
3). CDContainerWidget subclass CDContainerView:
class CDContainerWidget(CDContainerView):
class MainFrame:
def __init__:
CDContainerWidget
4). instead of using CDContainerWidget and CDContainerView, use only a single class CDContainerBig which subclass the abstract class View and wxPanel
class CDContainerBig(View, wxPanel)
My question is: what is the right solution? I have read the wiki page on the MVC pattern, but I do not really understand its description, and I wonder whether and how it is appropriate to apply it to my problem.
Well, some additional comments: originally, when I started to design the program, I did not think much and just chose approach 2). But now I think 3) is good, since it is reasonable to put a widget in a widget (CDContainerWidget into MainFrame). But I am not really sure. Also, it seems that with the observer pattern the three classes are twisted and awkward. And sometimes it appears to me that these 4 are maybe all the same, differing only in who includes whom, or who sends messages to whom. Well, I think I really need clarification on this point.
Also, I am in favour of 3) because of a practical point.The CDContainerWidget actually contains several subwidget components (button, input box, etc.) and if we change something like set new values via a subcomponent widget, then for 1), we need CDContainerWidget to be aware of CDContainerView, to let CDContainerView to notify other views. for 2) even worse, CDContainerWidget has to be aware of its childen CDContainerView. for 3) CDContainerWidget itself is CDContainerView, so quite reasonable. for 4) well, easy but no logic separation. this is my own thought, do not know if it is correct.
Thanks!!
A:
What might make this a bit easier for you to shed the coupling between classes would be implementing a signal slot pattern with something like Spiff Signal, or one of the other signal/slot modules available.
By decoupling the communication logic you can free yourself entirely of the need for modules to talk directly but rather use message passing with callbacks.
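To make the decoupling concrete, here is a minimal, hypothetical signal/slot sketch in plain Python (no wx or third-party module required; the `Signal` and `CDContainerModel` names and methods are invented for illustration, not taken from Spiff Signal):

```python
class Signal:
    """Minimal signal: holds a list of callbacks and calls them on emit."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)


class CDContainerModel:
    """The model knows nothing about views; it only emits a signal."""
    def __init__(self):
        self.changed = Signal()
        self._cds = []

    def add_cd(self, title):
        self._cds.append(title)
        # Notify every connected observer with the new state.
        self.changed.emit(list(self._cds))
```

A view or widget then just does `model.changed.connect(self.refresh)`; the model never needs a reference back to any view class, which sidesteps the whole "who contains whom" question.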
Q:
How to copy bitmap to clipboard using the win32 API?
How do I copy a buffer that would save to a ".BMP" file to the clipboard using the win32 API? I.e., I have a raw buffer of a Windows V3 Bitmap (including the header) that I can literally write() to a file and will result in a valid .BMP file, but I want to copy it to the clipboard instead.
On OS X, in plain C, the code would look something like this (which works as intended):
#include <ApplicationServices/ApplicationServices.h>
int copyBitmapToClipboard(char *bitmapBuffer, size_t buflen)
{
PasteboardRef clipboard;
CFDataRef data;
if (PasteboardCreate(kPasteboardClipboard, &clipboard) != noErr) {
return PASTE_OPEN_ERROR;
}
if (PasteboardClear(clipboard) != noErr) return PASTE_CLEAR_ERROR;
data = CFDataCreateWithBytesNoCopy(kCFAllocatorDefault, bitmapBuffer, buflen,
kCFAllocatorNull);
if (data == NULL) {
CFRelease(clipboard);
return PASTE_DATA_ERROR;
}
if (PasteboardPutItemFlavor(clipboard, 42, kUTTypeBMP, data, 0) != noErr) {
CFRelease(data);
CFRelease(clipboard);
return PASTE_PASTE_ERROR;
}
CFRelease(data);
CFRelease(clipboard);
return PASTE_WE_DID_IT_YAY;
}
I am unsure how to accomplish this with the win32 API. This is as far as I've gotten, but it seems to silently fail (that is, the function returns with a successful error code, but when attempting to paste, the menu item is disabled).
#include <windows.h>
int copyBitmapToClipboard(char *bitmapBuffer, size_t buflen)
{
if (!OpenClipboard(NULL)) return PASTE_OPEN_ERROR;
if (!EmptyClipboard()) return PASTE_CLEAR_ERROR;
if (SetClipboardData(CF_DSPBITMAP, bitmapBuffer) == NULL) {
CloseClipboard();
return PASTE_PASTE_ERROR;
}
CloseClipboard();
return PASTE_WE_DID_IT_YAY;
}
Could anyone provide some insight as to how to fix this?
Edit
Per Aaron and martinr's suggestions, I've now modified the code to the following:
#include <windows.h>
int copyBitmapToClipboard(char *bitmapBuffer, size_t buflen)
{
HGLOBAL hResult;
if (!OpenClipboard(NULL)) return PASTE_OPEN_ERROR;
if (!EmptyClipboard()) return PASTE_CLEAR_ERROR;
hResult = GlobalAlloc(GMEM_MOVEABLE, buflen);
if (hResult == NULL) return PASTE_DATA_ERROR;
memcpy(GlobalLock(hResult), bitmapBuffer, buflen);
GlobalUnlock(hResult);
if (SetClipboardData(CF_DSPBITMAP, hResult) == NULL) {
CloseClipboard();
return PASTE_PASTE_ERROR;
}
CloseClipboard();
return PASTE_WE_DID_IT_YAY;
}
But it still has the same result. What am I doing wrong?
Final Edit
The working code:
#include <windows.h>
int copyBitmapToClipboard(char *bitmapBuffer, size_t buflen)
{
HGLOBAL hResult;
if (!OpenClipboard(NULL)) return PASTE_OPEN_ERROR;
if (!EmptyClipboard()) return PASTE_CLEAR_ERROR;
buflen -= sizeof(BITMAPFILEHEADER);
hResult = GlobalAlloc(GMEM_MOVEABLE, buflen);
if (hResult == NULL) return PASTE_DATA_ERROR;
memcpy(GlobalLock(hResult), bitmapBuffer + sizeof(BITMAPFILEHEADER), buflen);
GlobalUnlock(hResult);
if (SetClipboardData(CF_DIB, hResult) == NULL) {
CloseClipboard();
return PASTE_PASTE_ERROR;
}
CloseClipboard();
// note: do NOT GlobalFree(hResult) here; once SetClipboardData
// succeeds, the system owns the memory
return PASTE_WE_DID_IT_YAY;
}
Thanks, martinr!
A:
I think the hMem needs to be a return value from LocalAlloc, i.e. a handle (HLOCAL) rather than a raw pointer.
EDIT
Sorry yes, GlobalAlloc with GMEM_MOVEABLE is required, not LocalAlloc.
EDIT
I suggest you use CF_DIB clipboard data format type.
DIB is the same as BMP except it is without the BITMAPFILEHEADER, so copy the source bytes except for the first sizeof(BITMAPFILEHEADER) bytes.
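To make the layout concrete: a .BMP buffer is a 14-byte BITMAPFILEHEADER followed by exactly the bytes that CF_DIB expects. A platform-independent sketch of that byte arithmetic (Python here, purely for illustration; in the Win32 program itself this is just the offset and length adjustment done in C):

```python
BMP_FILE_HEADER_SIZE = 14  # sizeof(BITMAPFILEHEADER) with standard packing

def bmp_to_dib(bmp: bytes) -> bytes:
    """Strip the BITMAPFILEHEADER, leaving the DIB bytes CF_DIB expects."""
    # Every valid BMP starts with the 'BM' signature.
    if len(bmp) <= BMP_FILE_HEADER_SIZE or bmp[:2] != b"BM":
        raise ValueError("not a valid BMP buffer")
    return bmp[BMP_FILE_HEADER_SIZE:]
```

The returned slice starts at the BITMAPINFOHEADER, which is exactly where the clipboard wants the CF_DIB data to begin.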
EDIT
From OpenClipboard() documentation (http://msdn.microsoft.com/en-us/library/ms649048(VS.85).aspx):
"If an application calls OpenClipboard with hwnd set to NULL, EmptyClipboard sets the clipboard owner to NULL; this causes SetClipboardData to fail."
You need to set up a window; even if you're not doing WM_RENDERFORMAT type stuff.
I have found this a lot with Windows APIs. I haven't used the clipboard APIs per se, but with other APIs I usually found that creating a hidden window and passing that handle to the relevant API was enough to keep it quiet. There are usually some notes on issues to do with this if you're creating a window from a DLL rather than an EXE; read whatever is the latest Microsoft guidance about DLLs, message loops and window creation.
As regards BITMAPINFO: that is not the start of the stream the clipboard wants to see. The buffer you give to SetClipboardData should start right where the BITMAPFILEHEADER ends.
A:
You need to pass a HANDLE to SetClipboardData() (that is, memory allocated with GlobalAlloc()) rather than passing a straight pointer to your bitmap.
Q:
Magento: edit quantity and update price of bundle option in quote item
I am working with Magento bundle products that contain hidden bundle options that need to have the quantity updated programmatically. A problem arises when the bundle quantity is edited from the shopping cart. I have a button set up to submit the bundle quote item to an updateLineItemAction() method in my own CartController to handle the update of the hidden bundle option.
The updateLineItemAction() method locates the hidden bundle option and assigns the updated quantity to the "selection_qty_X" and "product_qty_X" properties, where X is the ID of the bundle option. Values in the "info_buyRequest" are also updated. After saving the quote item, there's a redirect to the cart to show the updated cart values.
The updated quantity is displayed correctly in the cart, and the hidden bundle option has the correct quantity assigned. The problem is that the bundle item price has not updated to reflect the updated quantity on the hidden bundle option. I did something similar to this in Magento 1.1.x and it worked fine. Doing this now in 1.4.1.2, the price is not automatically updated when the quote item is saved. I've tried saving the quote and the cart again after updating the item, but that doesn't seem to have any effect.
What is the proper way to recalculate the price for a quote item when subitems have had the quantity changed? Is there a better way to change the quantity of a quote item bundle option so that the price of the bundle item is updated correctly?
A:
The solution for my case was to update the quantity values of the selections as noted in my original post, and also to adjust the quantity assigned to the bundle item associated with the hidden bundle option. This last step was not necessary in versions prior to Magento 1.4.
Q:
Creating a new ABRecord
I am working with ABAddressBook. I have checked the API docs but could not find any API for creating a new ABRecord. ABAddressBook does provide an ABAddressBookAddRecord method, but I didn't find any API for creating the record itself. Is there any way to do this?
Best Regards,
Mohammed Sadiq.
A:
// create new address book person record
ABRecordRef aRecord = ABPersonCreate();
CFErrorRef anError = NULL;
// adjust record firstname
ABRecordSetValue(aRecord, kABPersonFirstNameProperty,
CFSTR("Jijo"), &anError);
// adjust record lastname
ABRecordSetValue(aRecord, kABPersonLastNameProperty,
CFSTR("Pulikkottil"), &anError);
if (anError != NULL) {
NSLog(@"error while creating..");
}
CFStringRef firstName, lastName;
firstName = ABRecordCopyValue(aRecord, kABPersonFirstNameProperty);
lastName = ABRecordCopyValue(aRecord, kABPersonLastNameProperty);
ABAddressBookRef addressBook;
CFErrorRef error = NULL;
addressBook = ABAddressBookCreate();
// try to add new record in the address book
BOOL isAdded = ABAddressBookAddRecord ( addressBook,
aRecord,
&error
);
// check result flag
if(isAdded){
NSLog(@"added..");
}
// check error flag
if (error != NULL) {
NSLog(@"ABAddressBookAddRecord %@", error);
}
error = NULL;
// save changes made in address book
BOOL isSaved = ABAddressBookSave (
addressBook,
&error
);
// check saved flag
if(isSaved){
NSLog(@"saved..");
}
// check error flag
if (error != NULL) {
NSLog(@"ABAddressBookSave %@", error);
}
CFRelease(aRecord);
CFRelease(firstName);
CFRelease(lastName);
CFRelease(addressBook);
Q:
BigInteger -> byte[] -> BigInteger. Looks equal but if statement fails
I am playing around with an idea I have for storing a public key for myself. For this I would need to transform the BigInteger in some sort of a variable and then recreate the BigInteger from that value.
I have searched through Stack Overflow and found that the best way to do this is with byte[].
This is my code in Eclipse:
import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;
public class Vaja2 {
public static void main(String[] args){
try {
// Create RSA Keypair (to obtain a BigInteger)
KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
kpg.initialize(1024);
KeyPair keypair = kpg.generateKeyPair();
// Extract the public key
RSAPublicKey publicKey = (RSAPublicKey) keypair.getPublic();
// Get the public Exponent from the public key, I have BigInteger now.
BigInteger pkPublicExBI = publicKey.getPublicExponent();
//Try this: BigInteger -> byte-> BigInteger
byte[] pkModByte = pkPublicExBI.toByteArray();
BigInteger pkPublicExBIrecreated = new BigInteger(pkModByte);
// Show Results
System.out.println("Original BigInteger: " + pkPublicExBI);
System.out.println("Recreated BigInteger: " + pkPublicExBIrecreated);
if (pkPublicExBI == pkPublicExBIrecreated) {
System.out.println("\nThey are equal");
}
else {
System.out.println("\nThey are NOT equal");
}
} catch (Exception e) {
// Nothing happens
}
}
}
And this is the result in shown in the Eclipse console.
Original BigInteger: 65537
Recreated BigInteger: 65537
They are NOT equal
The if statement tells me, that the two BigIntegers are not equal. Yet in the console, I see them as both being equal to 65537.
My questions:
Why does the "if" statement fail?
How would you recommend I change the code? Assume the program will require the public key to be stored via Notepad or a similar text editor (ASCII will be used).
A:
Use .equals() as opposed to == when comparing objects. == will compare the objects' references, whereas .equals() will check to see if they have the same values. Since two objects will very rarely have the same reference you should never use == except for comparing primitive types (int, char, but String is not a primitive type!) where it doesn't matter.
So you want:
if (pkPublicExBI.equals(pkPublicExBIrecreated)) {
Instead of
if (pkPublicExBI == pkPublicExBIrecreated) {
Q:
How do chess engines manage time when they play real games?
I was just watching a YouTube video that analysed a game between two chess engines. Chess engines clearly find good moves, but how do they manage their time when playing a game? Is there an algorithm that tells them how much longer to keep thinking before making a move, taking into account the time they have left in the game?
A:
Possibilities:
Number of nodes
Fixed depth
Fixed time
Divide a fixed percentage of remaining time
By the complexity of the position
Let's take a quick look at Stockfish. Briefly, the file timeman.cpp calculates the minimum time allowed for a move. The default minimum time allowed is about 20 seconds.
Later, during the search it does:
// Stop the search if only one legal move is available, or if all
// of the available time has been used, or if we matched an easyMove
// from the previous search and just did a fast verification.
...
if ( rootMoves.size() == 1
|| Time.elapsed() > Time.optimum() * unstablePvFactor * improvingFactor / 628
|| (mainThread->easyMovePlayed = doEasyMove, doEasyMove))
If the position requires longer thinking (an unstable position, tracked by unstablePvFactor), the engine extends the search; how long it extends depends on calibrated parameters.
If there is an obvious move (doEasyMove), Stockfish plays it immediately.
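A toy version of the "divide a fixed percentage of remaining time" idea from the list above can be sketched in a few lines. The function name and constants here are invented for illustration; real engines such as Stockfish tune their time heuristics heavily:

```python
def allocate_time(remaining_ms, increment_ms=0, moves_to_go=None):
    """Very rough per-move time budget, in milliseconds."""
    # If no move-count horizon is given, assume roughly 30 moves remain.
    horizon = moves_to_go if moves_to_go else 30
    # Base share of the clock plus most of the per-move increment.
    budget = remaining_ms / horizon + increment_ms * 0.8
    # Never plan to spend more than half of the remaining clock on one move.
    return min(budget, remaining_ms / 2)
```

On top of a base budget like this, an engine then extends or cuts the search depending on position stability and obvious moves, as the Stockfish snippet shows.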
Q:
Use of << and >>> in a hash function
I have a project in C where I need to create a suitable hash function for void pointers, which could point to alphanumeric chars, ints or just plain old chars.
I need to use a polynomial hash function where, instead of multiplying by a constant, I should use a cyclic shift of partial sums by a fixed number of bits.
On the page here, there's this Java code (I assume it is Java because of the use of String):
static int hashCode(String s) {
int h = 0;
for (int i = 0; i < s.length(); i++) {
h = (h << 5) | (h >>> 27); // 5-bit cyclic shift of the running sum
h += (int) s.charAt(i); // add in next character
}
return h;
}
What exactly is this line, below, doing?
h = (h << 5) | (h >>> 27); // 5-bit cyclic shift of the running sum
Yes, the comment says 5-bit cyclic shift, but how do the <<, | and >>> operators work in this regard? I've never seen or used any of them before.
A:
As it says, it's a 5-bit cyclic left shift. This means that all the bits are shifted left, with the bit "shifted off" added to the right side, five times.
The code replaces the value of h with the value of two bit patterns ORed together. The first bit pattern is the original value shifted left 5 bits. The second value is the original value shifted right 27 bits.
The left shift of 5 bits puts all the bits but the leftmost five in their final position. The leftmost 5 bits get "shifted out" by that shift and replaced with zeroes as the rightmost bits of the output. The right shift of 27 bits put the leftmost five bits in their final position as the rightmost bits, shifting in zeroes for the leftmost 27 bits. ORing them together produces the desired output.
The >>> is Java's unsigned (logical) right-shift operator. In C or C++, you'd just use >> on an unsigned type.
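To see the mechanics explicitly, here is an illustrative Python port. Python integers are unbounded, so the 32-bit wraparound that Java's int gives for free has to be emulated with masking; the result matches Java's hashCode interpreted as an unsigned 32-bit value:

```python
MASK32 = 0xFFFFFFFF  # emulate a 32-bit register

def rotl32(h, bits):
    """Cyclic left shift of a 32-bit value by `bits` positions."""
    h &= MASK32
    # The left shift moves most bits into place; the right shift brings
    # the bits that "fell off" back in on the right (Java's h >>> (32-bits)).
    return ((h << bits) | (h >> (32 - bits))) & MASK32

def hash_code(s):
    h = 0
    for ch in s:
        h = rotl32(h, 5)            # 5-bit cyclic shift of the running sum
        h = (h + ord(ch)) & MASK32  # add in next character
    return h
```

Rotating left by 5 and then by 27 is a full 32-bit rotation, which is why the two shifts ORed together reconstruct every bit of the original value.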
Q:
Adjusting string format in fprintf depending on numeric values
I want to export a numerical array as a .csv file. A simplified version looks like this:
fid = fopen('output.csv','wt')
toprint = [1.0, 1.1, 1.2, 1.3];
fprintf(fid, '%f, %f, %f, %f\n', toprint);
fclose(fid)
In this case there is no problem. I use %f in string format to maintain precision. However, sometimes, or rather usually, there are zeros in the array like this:
toprint = [1.0, 0, 0, 1.1];
In such situation, I want to adjust the string format to:
'%f, %d, %d, %f\n' % where "%f" were replaced by "%d" at the positions of the zeros
to reduce the output file size, since I do not need the precision of zero values. The original solution I applied was to scan through the array and, whenever a zero was detected, concatenate '%d' onto the format string. But that seemed very inefficient.
What I am looking for is an efficient method to adjust the format string depending on the input data. Is there any way to achieve this?
A:
Two approaches:
You can use "%g" to simplify floating-point output when possible. This also shortens other whole numbers like 1.0 or 2.0, which may or may not be what you want
Dynamically construct the format string based on the values
>> fprintf('%g %g %g %g\n', [1.0, 1.1, 1.2, 1.3])
1 1.1 1.2 1.3
>> fprintf('%g %g %g %g\n', [1.0, 1.1, 0, 1.3])
1 1.1 0 1.3
>> fprintf('%g %g %g %g\n', [1.0, 1, 0, 1.3])
1 1 0 1.3
Approach 2:
>> a = [1.1 1.2 0 1.3]
a =
1.1000 1.2000 0 1.3000
>> tokens = {'%f', '%d'}
tokens =
'%f' '%d'
>> strformat = strcat(strjoin(tokens((a==0)+1), ', '), '\n')
strformat =
%f, %f, %d, %f\n
>> fprintf(strformat, a)
1.100000, 1.200000, 0, 1.300000
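For comparison, the dynamic-format idea of Approach 2 can be sketched outside MATLAB as well; this Python equivalent is illustrative only (the function name is invented):

```python
def make_format(values):
    """Use %d for exact zeros, %f otherwise (mirrors the MATLAB tokens)."""
    tokens = ["%d" if v == 0 else "%f" for v in values]
    # Join per-element tokens with ", " and terminate the row with a newline.
    return ", ".join(tokens) + "\n"
```

As in the MATLAB version, the format is built once per row from the data, then handed to the formatted-print call.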