Q:
Python -- import the package in a module that is inside the same package
I have a project structure something like this...
/some_app
build/
README
out.py
some_app/
__init__.py
mod1.py
mod2.py
Now I want to import the some_app package in mod2, without messing with sys.path trickery. What I simply did is...
# mod2.py
import some_app
Now when I run the mod2.py from the command line
some_app $ python mod2.py
it throws error ImportError: No module named some_app
BUT, inside the out.py file, when I do
# out.py
import some_app.mod2
and then do
some_app $ python out.py
it runs perfectly.
Hence, what is happening is this. I load a package in a module that is within the same package, and then run that module as the __main__ file -- and it doesn't work. Next, I load the same module (the one that I ran as __main__) inside another module, and then run that other module as __main__ -- and it works.
Can someone please elaborate on what's going on here?
UPDATE
I understand that there is no straightforward reason for doing this -- because I could have directly imported any modules inside the some_app package. The reason I am trying this is because, in the Django project, this is what they're doing. See this file for example
In every module, all the non-standard imports start with the django. prefix, so I wondered why and how they are doing that.
UPDATE 2
Relevant links
How to do relative imports in Python?
Python: import the containing package
A:
mod2.py is part of some_app. As such, it makes no sense to import the package, since you're already inside it.
You can still import mod1. I'm assuming you need some_app/__init__.py to run. Not sure that's possible.
EDIT:
Looks like from . import some_module will do what you're after.
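For instance, a minimal sketch of that (file names taken from the question's layout; the describe helper is only for illustration):
# some_app/mod2.py
# Relative import: pull mod1 in from the package that contains this module,
# instead of importing the some_app package by its absolute name.
from . import mod1

def describe():
    # __package__ is only set when this file is loaded as part of the package
    return "mod2 running inside package: %r" % __package__

if __name__ == "__main__":
    print(describe())
Run it from the directory above some_app as python -m some_app.mod2. Running python mod2.py directly executes the file as a top-level script instead of as part of the package, which is why the absolute import in the question failed.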
| {
"pile_set_name": "StackExchange"
} |
Q:
Unable to set accessibilityIdentifier of UISegmentedControl's segments
I found out that even though I could set accessibilityLabel of UISegmentedControl's segment (see: How do I set the accesibility label for a particular segment of a UISegmentedControl?), I couldn't set accessibilityIdentifier, which was equally important for my project. I need to target a segment irrespective of its text and accessibilityLabel for automation purposes.
For example, the code:
NSString *name = [NSString stringWithFormat:@"Item %li", (long)idx];
segment.accessibilityIdentifier = name;
NSLog(@"ID: %@", segment.accessibilityIdentifier);
results in:
ID: (null)
No exceptions are thrown.
Does anybody have insight into why accessibilityLabel is implemented, but not accessibilityIdentifier?
A:
I got around this issue by writing a Swift extension for XCUIElement that added a new method tap(at: UInt). This method gets the buttons query of the element and sorts the results based on their x position. This allows us to specify which segment of the UISegmentedControl we want to tap rather than relying on the button text.
extension XCUIElement {
func tap(at index: UInt) {
guard buttons.count > 0 else { return }
var segments = (0..<buttons.count).map { buttons.element(boundBy: $0) }
segments.sort { $0.frame.origin.x < $1.frame.origin.x }
segments[Int(index)].tap()
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
MySQL Workbench 6.0 ce will not start on Windows 8 Pro
I just installed Windows 8 Pro and installed XAMPP for MySQL and PHP, then installed MySQL Workbench 6.0.
The install went fine, but the application does not start.
I tried installing MySQL Workbench 5.0 instead; still the same problem.
I also tried the zip version; still not working.
A:
After several hours of searching I noted that you should check for the following.
To be able to run MySQL Workbench 5.2 your system needs to have the libraries listed below installed:
Microsoft .NET Framework 4 Client Profile
Microsoft Visual C++ 2010 Redistributable Package
OpenGL
Note that even though Windows 8 comes with the Microsoft Visual C++ 2010 Redistributable Package and the Microsoft .NET Framework 4 Client Profile preinstalled,
you should make sure you have the 32-bit versions of these applications.
| {
"pile_set_name": "StackExchange"
} |
Q:
Can multi-classing prerequisite ability score(s) be met through a magic item?
The multiclassing rules (PHB, p. 163) state:
To qualify for a new class, you must meet the ability score prerequisites for both your current class and your new one [...] Without the full training that a beginning character receives, you must be a quick study in your new class, having a natural aptitude that is reflected by higher-than-average ability scores.
Can the ability score prerequisite(s) for multiclassing be met by (long-term) use of a magic item(s) such as the Headband of Intellect (INT 19), Ioun Stones (+2 to various ability scores), etc.?
The phrase "natural aptitude" suggests nonmagical, but a character could have been using a magic item for years which seems virtually indistinguishable.
A:
In this unofficial tweet, Jeremy Crawford states that the prerequisite ability scores for multiclassing are intended to be met by your base score and not a temporary score. As such, we can infer that magic items that don't permanently increase your ability scores wouldn't work.
Keith
@mikemearls @JeremyECrawford would a temporary stat bump fulfill a multi-class pre-req or does the base score have to meet the #
Jeremy
The intent is that your base score, not a temporary score, has to meet a multiclassing prerequisite.
As of 2016, this has also been reiterated in the official Sage Advice Compendium:
Would a temporary stat bump fulfill a multiclass prerequisite, or does the base score have to meet the requirement? Your base score, not a temporary score, has to meet a multiclassing prerequisite.
A:
There is nothing in the PHB or DMG that I can find that indicates that this is not possible.
However, because of this fact, this is a conversation you need to have with your DM.
It's possible that in the campaign they envision there would be severe complications; it's possible that there won't be. But only they can determine whether or not this is a good idea for your game.
A:
There are two pieces of evidence in the citation which indicate that the answer is no.
natural aptitude that is reflected by higher-than-average ability scores
The character needs not just aptitude but natural aptitude. In D&D natural typically contrasts with magical, and this looks like a pretty clear statement that enhancements of ability scores through magical effects do not enable a character to qualify.
Aptitude is reflected in ability scores, implying scores themselves are evidence of that aptitude. Items which raise ability scores therefore do not necessarily affect aptitude.
The passage is worded in a way which strongly suggests the author did not envision a scenario in which magical enhancement would give rise to multiclassing opportunities. Given this is a system in which magical enhancement of ability scores is commonplace (for PCs at least) it would be implausible to think it's a special case the authors overlooked.
We can also look at the consequences of such an interpretation to determine if it would be wise to house-rule magical bonuses from items as an acceptable source of natural aptitude:
This would make ability-enhancing magical items more powerful. Typically, items in the game affect what your character can do or how well they can do it moment to moment, but they don't directly influence overall character development.
My gut feeling is that players are more likely to metagame or become stubborn when dividing up loot if not getting an item today means they can't multiclass tomorrow. When your player and your character are both looking at items in terms of their use value then conflict is more likely to stay in-character and have an acceptable resolution.
Does the character really wear the magical item all the time, even when at rest? What if the attribute was improved not by a headband but a helmet which needed to be taken off when eating, sleeping, washing etc.? If that changes your answer, do you think it's already factored into the in-game value of different magical items? What would happen if the character loses the item part way through training?
What if the item providing the bonus is thematically inconsistent with the new class?
Could similar effects be achieved with ability-enhancing spells cast regularly?
Conversely, do temporary effects cause you to fail to multiclass? What if at any point an ability score is lowered temporarily through magical or other means?
What about other character development options which have ability scores as prerequisites - could they also be met by the same means?
Perhaps you feel you can come up with a consistent and satisfactory answer to these questions. But it is inviting a level of metagaming and rules lawyering that is likely to harm the playing experience. So a DM's answer to a request to permit it by house rule, unless they have a policy of being indulgent and happy to manage the resulting fallout, will probably also be no. And even if they were tempted, it might be simpler to ignore ability score prerequisites entirely rather than create a loophole.
I cannot see any reason you would conclude from the rules that ability enhancements granted by worn items can be used to meet class prerequisites, or any game objective it would serve to do so; allowing it would seem contrary to the intention of the rules, and is likely to cause undesirable effects and distractions from gameplay.
| {
"pile_set_name": "StackExchange"
} |
Q:
ASP.NET synchronous commands handlers
A question about DDD, for validating a Proof Of Concept.
Let's say we have a webpage that triggers a Domain Event. For instance, updating the status of a client after an interaction occurred on that page. From a user perspective, we want the event to be handled immediately, and the page to be refreshed in the server response, because the status has a lot of weight on the information that is displayed on the page.
Domain Events are meant to lead to "Eventual Consistency". How do you handle synchronous events in an ASP.NET application?
Thanks
A:
Got a response. I was looking for some kind of "Request/Respond" pattern implementation, which I found.
[Edit for Svick] For instance, the web application should emit a "DoSomething" command and then wait for a "SomethingDone" notification (or a combo of notifications), introducing a timeout. For sagas and more complex workflows, the point is to know what significant (and minimal) notification(s) should be handled/aggregated by the web application to let the user know his request has been handled and is well on its way to completion.
| {
"pile_set_name": "StackExchange"
} |
Q:
template class inheritance problem
Can you please tell me what I am missing?
template <class T> struct Base
{
T data;
Base(const T &_data):data(_data) { }
};
template <class T> struct Derived : Base<T>
{
Derived():Base(T()) {} //error: class 'Derived<T>' does not have any field named 'Base'
};
A:
Base is a dependent base class here, so you have to refer to it with its template argument in the constructor's initializer list:
template <class T> struct Derived : Base<T>
{
Derived():Base<T>(T()) {}
};
| {
"pile_set_name": "StackExchange"
} |
Q:
Filling a dropdownlist in a grid view with manual data
I am looking to fill a dropdownlist within a gridview with manual data. Is there any way to do this through the markup code? I am just trying to fill it with days: 1 day(s), 2 day(s), 3 day(s), etc. Here is the code for my drop down
<asp:TemplateField HeaderText="No. Days">
<ItemTemplate>
<asp:DropDownList ID="txtFinishDate" runat="server"></asp:DropDownList>
</ItemTemplate>
</asp:TemplateField>
I've tried filling it manually on the page load but it doesn't seem to work and I think it is because of the grid view. Sorry if the question seems vague and thanks in advance!
A:
Use the markup you have, but you have to add the items to the list:
<asp:TemplateField HeaderText="custom">
<ItemTemplate>
<asp:DropDownList ID="DropDownList1" runat="server">
<asp:ListItem Text="[ select an item ]" Value="0"></asp:ListItem>
<asp:ListItem Text="yes?" Value="1"></asp:ListItem>
<asp:ListItem Text="no?" Value="2"></asp:ListItem>
<asp:ListItem Text="umm..." Value="3"></asp:ListItem>
</asp:DropDownList>
</ItemTemplate>
</asp:TemplateField>
| {
"pile_set_name": "StackExchange"
} |
Q:
import .sql file into mysql from mac command line. tried mysql -u root -p db_name > path/to/dbfile.sql
I have tried doing it from the command line
mysql -u root -p db_name > ~/Documents/db_name.sql
I have tried doing it from mysqlimport
mysqlimport -u root -p db_name ~/Documents/db_name.sql
I have tried both while being in the correct directory using just the file name.
I have tried entering mysql using
mysql -u root -p
use db_name;
source ~/Documents/db_name.sql;
(nothing happens - no response)
(tried with absolute path - no response)
\. ~/Documents/db_name.sql
(nothing happens)
I feel like I'm missing something. This seems like a trivial operation according to the last 30 minutes of googling and attempts.
Ultimately I had to copy and paste the entire .sql file into the mysql shell while using the correct db.
I feel like a caveman. Please help.
Edit: SQL file contents
-- phpMyAdmin SQL Dump
-- version 4.4.15.5
-- http://www.phpmyadmin.net
--
-- Host: 127.0.0.1:8889
-- Generation Time: May 09, 2017 at 09:27 PM
-- Server version: 5.6.34-log
-- PHP Version: 7.0.13
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
SET time_zone = "+00:00";
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;
--
-- Database: `movie-buff`
--
-- --------------------------------------------------------
--
-- Table structure for table `directors`
--
CREATE TABLE IF NOT EXISTS `directors` (
`director_id` int(11) NOT NULL,
`first` varchar(60) DEFAULT NULL,
`last` varchar(60) DEFAULT NULL,
`country` varchar(100) DEFAULT NULL
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=latin1;
--
-- Dumping data for table `directors`
--
INSERT INTO `directors` (`director_id`, `first`, `last`, `country`) VALUES
(1, 'Jean-Pierre', 'Jeunet', 'France'),
(2, 'Jean', 'Renoir', 'France'),
(3, 'Akira', 'Kurosawa', 'Japan'),
(4, 'Jane', 'Campion', 'New Zealand'),
(5, 'Sally', 'Potter', 'UK'),
(6, 'Kasi', 'Lemmons', 'USA'),
(7, 'Ava', 'DuVernay', 'USA'),
(8, 'Todd', 'Haynes', 'USA'),
(9, 'Marleen', 'Gorris', 'Netherlands');
-- --------------------------------------------------------
--
-- Table structure for table `movies`
--
CREATE TABLE IF NOT EXISTS `movies` (
`movie_id` int(11) NOT NULL,
`title` varchar(130) DEFAULT NULL,
`year` int(11) DEFAULT NULL,
`director_id` int(11) NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=23 DEFAULT CHARSET=latin1;
--
-- Dumping data for table `movies`
--
INSERT INTO `movies` (`movie_id`, `title`, `year`, `director_id`) VALUES
(1, 'The City of Lost Children', 1995, 1),
(2, 'Amelie', 2001, 1),
(3, 'The Rules of the Game', 1939, 2),
(4, 'La Grande Illusion', 1937, 2),
(5, 'The Lower Depths', 1936, 2),
(6, 'Alien: Resurrection', 1997, 1),
(7, 'Ran', 1985, 3),
(8, 'Seven Samurai', 1954, 3),
(9, 'Throne of Blood', 1957, 3),
(10, 'An Angel at My Table', 1990, 4),
(11, 'The Piano', 1993, 4),
(12, 'Orlando', 1992, 5),
(13, 'The Tango Lesson', 1997, 5),
(14, 'Talk to Me', 2007, 6),
(15, 'Eve''s Bayou', 1997, 6),
(16, 'Selma', 2014, 7),
(18, 'Far From Heaven', 2002, 8),
(19, 'I''m Not There', 2007, 8),
(20, 'Carol', 2015, 8),
(21, 'Antonia''s Line', 1995, 9),
(22, 'Mrs. Dalloway', 1997, 9);
-- --------------------------------------------------------
--
-- Table structure for table `viewers`
--
CREATE TABLE IF NOT EXISTS `viewers` (
`viewer_id` int(11) NOT NULL,
`first` varchar(60) DEFAULT NULL,
`last` varchar(60) DEFAULT NULL,
`email` varchar(80) DEFAULT NULL
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=latin1;
--
-- Dumping data for table `viewers`
--
INSERT INTO `viewers` (`viewer_id`, `first`, `last`, `email`) VALUES
(1, 'Tim', 'Labonne', '[email protected]'),
(2, 'Alicen', 'Brightley', '[email protected]'),
(3, 'Renard', 'Sartor', '[email protected]'),
(4, 'Luigi', 'Greco', '[email protected]'),
(5, 'Jackie', 'Linwood', '[email protected]'),
(6, 'Caroline', 'Smith', '[email protected]');
-- --------------------------------------------------------
--
-- Table structure for table `viewings`
--
CREATE TABLE IF NOT EXISTS `viewings` (
`viewing_id` int(11) NOT NULL,
`viewer_id` int(11) NOT NULL,
`movie_id` int(11) NOT NULL,
`date_viewed` date DEFAULT NULL
) ENGINE=InnoDB AUTO_INCREMENT=34 DEFAULT CHARSET=latin1;
--
-- Dumping data for table `viewings`
--
INSERT INTO `viewings` (`viewing_id`, `viewer_id`, `movie_id`, `date_viewed`) VALUES
(1, 1, 4, '2008-10-07'),
(2, 1, 2, '2009-12-18'),
(3, 1, 1, '2010-02-27'),
(4, 1, 21, '2010-03-14'),
(5, 2, 21, '2015-04-15'),
(6, 2, 22, '2015-10-04'),
(7, 2, 7, '2015-11-30'),
(8, 2, 9, '2016-01-05'),
(9, 2, 12, '2016-04-14'),
(10, 2, 16, '2017-01-23'),
(11, 3, 8, '2016-02-14'),
(12, 3, 18, '2016-03-20'),
(13, 3, 22, '2016-04-07'),
(14, 4, 20, '2017-01-03'),
(15, 4, 18, '2017-01-14'),
(16, 4, 15, '2017-02-08'),
(17, 4, 10, '2007-09-23'),
(18, 4, 2, '2017-03-05'),
(19, 4, 4, '2017-04-13'),
(20, 4, 12, '2017-04-30'),
(21, 4, 14, '2017-05-02'),
(22, 4, 21, '2017-05-08'),
(23, 5, 2, '2013-08-25'),
(24, 5, 3, '2013-12-16'),
(25, 5, 7, '2014-03-18'),
(26, 6, 11, '2013-11-30'),
(27, 6, 2, '2013-12-18'),
(28, 6, 14, '2014-04-29'),
(29, 6, 5, '2016-12-03'),
(30, 6, 13, '2017-01-09'),
(31, 6, 18, '2017-02-13'),
(32, 6, 21, '2017-03-14'),
(33, 6, 15, '2017-04-15');
--
-- Indexes for dumped tables
--
--
-- Indexes for table `directors`
--
ALTER TABLE `directors`
ADD PRIMARY KEY (`director_id`);
--
-- Indexes for table `movies`
--
ALTER TABLE `movies`
ADD PRIMARY KEY (`movie_id`),
ADD KEY `director_id` (`director_id`);
--
-- Indexes for table `viewers`
--
ALTER TABLE `viewers`
ADD PRIMARY KEY (`viewer_id`);
--
-- Indexes for table `viewings`
--
ALTER TABLE `viewings`
ADD PRIMARY KEY (`viewing_id`),
ADD KEY `viewer_id` (`viewer_id`),
ADD KEY `movie_id` (`movie_id`);
--
-- AUTO_INCREMENT for dumped tables
--
--
-- AUTO_INCREMENT for table `directors`
--
ALTER TABLE `directors`
MODIFY `director_id` int(11) NOT NULL AUTO_INCREMENT,AUTO_INCREMENT=10;
--
-- AUTO_INCREMENT for table `movies`
--
ALTER TABLE `movies`
MODIFY `movie_id` int(11) NOT NULL AUTO_INCREMENT,AUTO_INCREMENT=23;
--
-- AUTO_INCREMENT for table `viewers`
--
ALTER TABLE `viewers`
MODIFY `viewer_id` int(11) NOT NULL AUTO_INCREMENT,AUTO_INCREMENT=7;
--
-- AUTO_INCREMENT for table `viewings`
--
ALTER TABLE `viewings`
MODIFY `viewing_id` int(11) NOT NULL AUTO_INCREMENT,AUTO_INCREMENT=34;
--
-- Constraints for dumped tables
--
--
-- Constraints for table `movies`
--
ALTER TABLE `movies`
ADD CONSTRAINT `movies_ibfk_1` FOREIGN KEY (`director_id`) REFERENCES `directors` (`director_id`);
--
-- Constraints for table `viewings`
--
ALTER TABLE `viewings`
ADD CONSTRAINT `viewings_ibfk_1` FOREIGN KEY (`viewer_id`) REFERENCES `viewers` (`viewer_id`),
ADD CONSTRAINT `viewings_ibfk_2` FOREIGN KEY (`movie_id`) REFERENCES `movies` (`movie_id`);
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
A:
You should use the mysql command in order to import a mysqldump sql file:
mysql -u root -p db_name < ~/Documents/db_name.sql
The mysqlimport utility is used to insert data from text files into the database; it is a wrapper around the LOAD DATA INFILE SQL statement. From the mysqlimport documentation:
The mysqlimport client provides a command-line interface to the LOAD DATA INFILE SQL statement. Most options to mysqlimport correspond directly to clauses of LOAD DATA INFILE syntax. See Section 13.2.6, “LOAD DATA INFILE Syntax”.
| {
"pile_set_name": "StackExchange"
} |
Q:
Diameter of a quotient of the infinite dimensional sphere
Suppose a group $\Gamma$ acts by isometries on the Hilbert space $\mathbb{H}^\infty$ and it fixes the origin. So $\Gamma$ acts on the unit sphere $\mathbb{S}^\infty$ as well.
Assume that the action $\Gamma$ on $\mathbb{S}^\infty$ has no dense orbits. Is there a universal constant $\varepsilon >0$ such that there are two orbits of $\Gamma$ at distance at least $\varepsilon$ from each other?
In other words, is there a constant $\varepsilon>0$ such that $$\mathrm{diam}\, (\mathbb{S}^\infty/\Gamma) >0
\quad\Longrightarrow\quad
\mathrm{diam}\, (\mathbb{S}^\infty/\Gamma) > \varepsilon\ ?$$
Comments. The statement is related to the following results in finite dimension. It was proved in [Gre] that, on the $n$-sphere for $n\ge 2$, there is such a lower bound $\varepsilon_n>0$ (choose it as optimal). It is expected (see the introduction of this arXiv paper by Gorodski and Lytchak) that $\inf_{n\ge 2}\varepsilon_n>0$.
This is announced to hold by Claudio Gorodski, Christian Lange, Alexander Lytchak and Ricardo Mendes (it is not published yet). Namely, for some universal constant $\varepsilon>0$, for any $n\ge 2$ and for any isometric group action of any compact group on the unit $n$-sphere, the orbit space of the action is either a point or has diameter $\ge\varepsilon$.
[Gre] S. J. Greenwald, Diameters of spherical Alexandrov spaces and curvature one orbifolds,
Indiana Univ. Math. J. 49 (2000), no. 4, 1449–1479.
A:
There is no such universal constant $\epsilon > 0$. Work with the complex Hilbert space $L^2[0,1]$ (which of course is also a real Hilbert space). Fix $n \in \mathbb{N}$.
Let $\Gamma_0$ be the set of continuous piecewise linear increasing bijections from $[0,1]$ to itself. [1] It is a group with composition as product. [2] It acts by isometries of $L^2[0,1]$ by the map $f \mapsto \sqrt{\phi'}\cdot (f\circ\phi)$ for $f \in L^2[0,1]$ and $\phi \in \Gamma_0$. Also let $\Gamma_1 \subset L^\infty[0,1]$ consist of the measurable functions from $[0,1]$ to $\mathbb{T}_n = \{e^{2\pi i k/n}: 0 \leq k < n\}$, identifying functions which differ on a null set. This is a group under pointwise product and [3] it acts isometrically by multiplication on $L^2[0,1]$. Let $\Gamma$ be the group of isometries of $L^2[0,1]$ generated by $\Gamma_0$ and $\Gamma_1$ under these actions. ([4] This is a semidirect product of $\Gamma_0$ and $\Gamma_1$.)
[5] The $\Gamma_0$ action takes the unit vector $1_{[0,1]}$ to any piecewise constant strictly positive unit vector $f \in L^2[0,1]$. (If $f$ takes the value $c$ on an interval $I$, let $\phi$ have slope $c^2$ on this interval.) [6] These functions are dense in the positive part of the unit sphere. [7] Applying the action of $\Gamma_1$ then gets us to arbitrarily close to any unit vector in $L^2[0,1]$ whose argument lies almost everywhere in $\mathbb{T}_n$. [8] It follows that the distance from $1_{[0,1]}$ to any other orbit is at most $\alpha = |1 - e^{\pi i/n}|$ ($\approx \frac{\pi}{n}$ for large $n$). [9] It follows straightforwardly that the same is true for any positive unit vector in place of $1_{[0,1]}$, and then [10] that the distance between any two orbits is at most $2\alpha$.
[11] On the other hand, the distance from the orbit of $1_{[0,1]}$ to the vector $e^{\pi i/n}\cdot 1_{[0,1]}$ is at least the distance from $(1,0) \in \mathbb{R}^2$ to the line through the origin of slope $\frac{\pi}{n}$ (again approximately $\frac{\pi}{n}$ for large $n$), so this orbit is not dense and since the action is isometric no orbit is dense.
Edit: maybe people want more details.
[1] The composition of two continuous functions is continuous, of two PL functions is PL, of two increasing functions is increasing, of two bijections is a bijection. The inverse of a continuous PL increasing bijection is a continuous PL increasing bijection.
[2] $\|\sqrt{\phi'}\cdot (f\circ \phi)\|_2^2 = \int_0^1 \phi'|f\circ\phi|^2\, dt = \int_0^1 |f|^2\, dt = \|f\|_2^2$.
[3] If $h \in \Gamma_1$ then $\|hf\|_2^2 = \int_0^1 |hf|^2\, dt = \int_0^1 |f|^2\, dt = \|f\|_2^2$ since $|h| = 1$ a.e.
[4] This isn't needed, but anyway if $\phi \in \Gamma_0$ and $h \in \Gamma_1$ then $\sqrt{\phi'}\cdot (hf\circ \phi) = (h\circ \phi)\cdot \sqrt{\phi'}\cdot(f\circ\phi)$.
[5] Let $f$ be a piecewise constant strictly positive unit vector in $L^2[0,1]$. Then $f = a_0\cdot 1_{[0,t_1)} + \cdots + a_k\cdot 1_{[t_k,1)}$ a.e. for some $0 < t_1 < \cdots < t_k < 1$ and some $a_0, \ldots, a_k > 0$. The unit norm condition means that $\sum_{i=1}^k a_i^2(t_{i+1} - t_i) = 1$. Now define $\phi: [0,1] \to \mathbb{R}$ so that $\phi(0) = 0$, $\phi$ is continuous, and $\phi$ is linear with slope $a_i^2$ on $[t_{i-1},t_i]$. The unit norm condition just detailed shows that $\phi(1) = 1$, i.e., $\phi$ is a continuous PL increasing bijection. We have $\sqrt{\phi'}\cdot (1_{[0,1]}\circ \phi) = \sqrt{\phi'}$, which takes the value $a_i$ constantly on $(t_{i-1},t_i)$. So $1_{[0,1]}$ is taken to $f$.
[6] First, positive piecewise constant functions can uniformly approximate any continuous function on $[0,1]$, and since the positive continuous functions are dense for the $L^2$ norm in the positive part of $L^2[0,1]$, this shows that positive piecewise constant functions are dense in the positive part of $L^2[0,1]$. Given a positive $f \in L^2[0,1]$ with unit norm, find a sequence $(f_k)$ of positive piecewise constant functions with $f_k \to f$ in $L^2[0,1]$. Then $\|f_k\|_2 \to 1$ so $\frac{1}{\|f_k\|_2}f_k \to f$. Thus, any positive unit vector is approximated by positive piecewise constant unit vectors.
[7] Since multiplying by $h \in \Gamma_1$ is an isometry, if $f = h|f| \in L^2[0,1]$ is a unit vector whose argument $h$ lies in $\mathbb{T}_n$ a.e. and $g$ is a positive piecewise constant unit vector which is close to $|f|$, then $hg$ will be equally close to $f$.
[8] Given any unit vector $f = h|f| \in L^2[0,1]$, we can find $\tilde{h} \in \Gamma_1$ such that $|h(t) - \tilde{h}(t)| \leq \alpha$ a.e. As we just saw that the orbit of $1_{[0,1]}$ comes arbitrarily close to $\tilde{h}|f|$, it follows that the distance from $f$ to this orbit is at most $\|f - \tilde{h}|f|\|_2 = \|(h - \tilde{h})|f|\|_2 \leq \alpha \|f\|_2 = \alpha$.
[9] We saw already that any positive unit vector $f$ is approximated by elements in the orbit of $1_{[0,1]}$. Since the action is isometric, this means that $1_{[0,1]}$ is approximated by elements in the orbit of $f$. Again by isometric action, since we can take $1_{[0,1]}$ to within $\alpha'$ of any unit vector, for any $\alpha' > \alpha$, the same is then true of $f$.
[10] Any two unit vectors lie within $\alpha'$ of the orbit of $1_{[0,1]}$, for any $\alpha' > \alpha$. So (by isometric action again) one lies within $2\alpha'$ of the orbit of the other.
[11] The argument of any vector $f$ in the orbit of $1_{[0,1]}$ lies pointwise a.e. in $\mathbb{T}_n$. So $|f(t) - e^{\pi i/n}| \geq \beta$ pointwise, where $\beta$ is the distance from $(1,0) \in \mathbb{R}^2$ to the line through the origin of slope $\frac{\pi}{n}$ (= the distance from $e^{\pi i/n} \in \mathbb{C}$ to the union of the lines through the origin of slopes $\frac{2k\pi}{n}$). Thus $\|f - e^{\pi i/n}\cdot 1_{[0,1]}\|_2 \geq \beta$.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get weekdays array from system in Swift?
How can I get an array with the days of the week from the system (from NSDate, I think)?
So far, I can only get the current day, but I'd like to be able to get all the weekdays in an array.
If the first day of week is set to Monday, my array would look like:
[ Mon, Tue, Wed... ]
If the first day of week is Sunday, my array would look like:
[Sun, Mon, Tue... ]
Code:
let dateNow = NSDate()
let calendar = NSCalendar.currentCalendar()
let components = calendar.components(.CalendarUnitHour | .CalendarUnitMinute | .CalendarUnitSecond | .CalendarUnitYear , fromDate: dateNow)
/*This is the way how i take system time */
let format = NSDateFormatter()
format.dateFormat = "EEE"
stringDay = format.stringFromDate(dateNow)
A:
Try these properties:
let fmt = NSDateFormatter()
fmt.weekdaySymbols // -> ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
fmt.shortWeekdaySymbols // -> ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
fmt.veryShortWeekdaySymbols // -> ["S", "M", "T", "W", "T", "F", "S"]
fmt.standaloneWeekdaySymbols // -> ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
fmt.shortStandaloneWeekdaySymbols // -> ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
fmt.veryShortStandaloneWeekdaySymbols // -> ["S", "M", "T", "W", "T", "F", "S"]
It seems they always return the Sun ... Sat array regardless of the .firstWeekday property of the .calendar. So, you have to rotate it manually.
let firstWeekday = 2 // -> Monday
var symbols = fmt.shortWeekdaySymbols
symbols = Array(symbols[firstWeekday-1..<symbols.count]) + symbols[0..<firstWeekday-1]
// -> ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
| {
"pile_set_name": "StackExchange"
} |
Q:
Multiply column values per group in r
I have a data frame like the following:
id weight value
231 50 0.6
231 50 0.43
420 30 0.86
420 30 0.12
How can I multiply all values for each id and weight to get the following table:
id weight value
231 50 0.6*0.43
420 30 0.86*0.12
A:
Use prod() to multiply all the values in its arguments.
library(dplyr)
df %>%
group_by(id, weight) %>%
summarise(prod = prod(value))
# # A tibble: 2 x 3
# # Groups: id [2]
# id weight prod
# <int> <int> <dbl>
# 1 231 50 0.258
# 2 420 30 0.103
Or a base R way:
aggregate(value ~ id + weight, df, prod)
| {
"pile_set_name": "StackExchange"
} |
Q:
Assign memory to structure using structure
Suppose I have the following code:
typedef struct _SingleList {
struct _SingleList *link;
int data;
} SingleList;
SingleList *prepend(SingleList *list, int data) {
SingleList a;
SingleList *newNode = &a; // Note, here I assign memory using structure not malloc.
newNode->data = data;
newNode->link = list;
return newNode;
}
As you can see in the prepend function above, I assign memory using the address of a instead of malloc, and it works perfectly when I call the prepend function. Here's a sample:
int main(void) {
SingleList *list = NULL;
list = prepend(list, 10);
printf("%d", list->data);
list = prepend(list, 20);
printf("\n%d", list->link->data);
list = prepend(list, 30);
printf("\n%d", list->link->link->data);
}
So, how does it work, and if it works for others too, why do we use malloc() for linked lists instead of simply taking the address of a structure?
A:
In the function prepend, the return value newNode is a pointer that points to a local variable a. When the function ends, it's undefined behavior to access it. It just happens to work on your machine in this simple code; you can't rely on it.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why is Ruby's Array#shift used in this task?
I'm going through this annoying tutorial on Rubymonk and it asks me to do the following:
Write for me three methods - calculate, add and subtract. The tests should all pass. Take a look at the hint if you have trouble! And as a little extra hint: remember that you can use something.is_a?(Hash) or another_thing.is_a?(String) to check an object's type.
I couldn't even understand what they ask me to do, so I just decided to see the solution and work my way into a decent understanding of the task.
Here is the solution:
def add(*numbers)
numbers.inject(0) { |sum, number| sum + number }
end
def subtract(*numbers)
current_result = numbers.shift
numbers.inject(current_result) { |current_result, number| current_result - number }
end
def calculate(*arguments)
# if the last argument is a Hash, extract it
# otherwise create an empty Hash
options = arguments[-1].is_a?(Hash) ? arguments.pop : {}
options[:add] = true if options.empty?
return add(*arguments) if options[:add]
return subtract(*arguments) if options[:subtract]
end
I don't understand many things, but the one thing that baffles me is the shift method: current_result = numbers.shift. Why is it there? I mean, I understand what it does, but what's its job in this particular piece of code?
Btw, if someone goes to the trouble to break this code down for me I would be endlessly and eternally thankful.
The task is at the bottom of the following page:
https://rubymonk.com/learning/books/1-ruby-primer/chapters/19-ruby-methods/lessons/69-new-lesson#solution3899
A:
add(*numbers)
Let's start by invoking:
def add(*numbers)
numbers.inject(0) { |sum, number| sum + number }
end
like this:
add(1,2,3) #=> 6
or like this:
add(*[1,2,3]) #=> 6
The two are equivalent. The latter shows you what the operator "splat" does.
This results in:
numbers #=> [1,2,3]
so Ruby sends Enumerable#inject (aka reduce) to numbers:
[1,2,3].inject(0) { |sum, number| sum + number }
inject first initializes the "memo" sum to inject's argument (if, as here, it has one) and then passes the first element of the "receiver" [1,2,3] into the block and assigns it to the block variable number:
sum #=> 0
number #=> 1
Ruby then computes:
sum + number #=> 0 + 1 => 1
which becomes the new value of the memo. Next, inject passes 2 into the block and computes:
sum #=> 1
number #=> 2
sum + number #=> 3
so (the memo) sum is now 3.
Lastly,
sum #=> 3
number #=> 3
sum + number #=> 6
As all elements of the receiver have been passed into the block, inject returns the value of the memo:
sum #=> 6
If you examine the doc for inject you'll see that if the method has no argument Ruby assigns the first element of the receiver (here 1) to the memo (sum) and then carries on as above starting with the second element of the receiver (2). As expected, this produces the same answer:
def add(*numbers)
numbers.inject { |sum, number| sum + number }
end
add(1,2,3) #=> 6
So why include the argument zero? Often we will want add() (i.e., add(*[])) to return zero. I will leave it to you to investigate what happens here with each of the two forms of inject. What conclusion can you draw?
As @Stefan points out in his answer, you can simplify this to:
def add(*numbers)
numbers.inject :+
end
which is how you'd normally see it written.
If, however, numbers may be an empty array, you'd want to provide an initial value of zero for the memo:
def add(*numbers)
numbers.inject 0, :+
end
add(*[]) #=> 0
subtract(*numbers)
def subtract(*numbers)
current_result = numbers.shift
numbers.inject(current_result) { |current_result, number|
current_result - number }
end
This is similar to the method add, with a small twist. We need the first value of the memo (here current_result) to be the first element of the receiver. There are two ways we could do that.
The first way is like this:
def subtract(*numbers)
numbers[1..-1].inject(numbers.first) { |current_result, number|
current_result - number }
end
numbers = [6,2,3]
subtract(*numbers) #=> 1
Let
first_number = numbers.first #=> 6
all_but_first = numbers[1..-1] #=> [2,3]
then:
numbers[1..-1].inject(numbers.first) { ... }
is:
all_but_first.inject(first_number) { ... }
#=> [2,3].inject(6) { ... }
Instead, the author chose to write:
first_number = numbers.shift #=> 6
numbers #=> [2,3]
numbers.inject(first_number) { ... }
#=> [2,3].inject(6) { ... }
which may be a bit niftier, but the choice is yours.
The second way is to use inject without an argument:
def subtract(*numbers)
numbers.inject { |current_result, number| current_result - number }
end
numbers = [6,2,3]
subtract(*numbers) #=> 1
You can see why this works by reviewing the doc for inject.
Moreover, similar to :add, you can write:
def subtract(*numbers)
numbers.inject :-
end
Lastly, subtract requires numbers to have at least one element, so we might write:
def subtract(*numbers)
raise ArgumentError, "argument cannot be an empty array" if numbers.empty?
numbers.inject :-
end
calculate(*arguments)
We see that calculate expects to be invoked in one of the following ways:
calculate(6,2,3,{ :add=>true }) #=> 11
calculate(6,2,3,{ :add=>7 }) #=> 11
calculate(6,2,3,{ :subtract=>true }) #=> 1
calculate(6,2,3,{ :subtract=>7 }) #=> 1
calculate(6,2,3) #=> 11
If the hash has a key :add with a "truthy" value (anything other than false or nil), we are to add; if the hash has a key :subtract with a "truthy" value (anything other than false or nil), we are to subtract. If the last element is not a hash (calculate(6,2,3)), add is assumed.
Note:
calculate(6,2,3,{ :add=>false }) #=> nil
calculate(6,2,3,{ :subtract=>nil }) #=> nil
Let's write the method like this:
def calculate(*arguments)
options =
if arguments.last.is_a?(Hash) # or if arguments.last.class==Hash
arguments.pop
else
{}
end
if (options.empty? || options[:add])
add *arguments
elsif options[:subtract]
subtract *arguments
else
nil
end
end
calculate(6,2,3,{ :add=>true }) #=> 11
calculate(6,2,3,{ :add=>7 }) #=> 11
calculate(6,2,3,{ :subtract=>true }) #=> 1
calculate(6,2,3,{ :subtract=>7 }) #=> 1
calculate(6,2,3) #=> 11
calculate(6,2,3,{ :add=>false }) #=> nil
calculate(6,2,3,{ :subtract=>nil }) #=> nil
Note the return keyword is not needed (nor is it needed in the original code). It seems very odd that a hash would be used to signify the type of operation to be performed. It would make more sense to invoke the method:
calculate(6,2,3,:add) #=> 11
calculate(6,2,3) #=> 11
calculate(6,2,3,:subtract) #=> 1
We can implement that as follows:
def calculate(*arguments)
operation =
case arguments.last
when :add
arguments.pop
:add
when :subtract
arguments.pop
:subtract
else
:add
end
case operation
when :add
add *arguments
else
subtract *arguments
end
end
Better (note that an optional positional parameter cannot follow the splat, so we make op a keyword argument):
def calculate(*arguments, op: :add)
case op
when :subtract
subtract *arguments
else
add *arguments
end
end
calculate(6,2,3, op: :add) #=> 11
calculate(6,2,3) #=> 11
calculate(6,2,3, op: :subtract) #=> 1
I am overwhelmed by your offer to be "endlessly and eternally thankful", but if you appreciate my efforts for a few minutes, that is sufficient.
A:
current_result = numbers.shift. Why is it there? I mean, I understand what it does, but what's its job in this particular piece of code?
The line removes the first element from the numbers array and assigns it to current_result. Afterwards, current_result - number is calculated for each number in the remaining numbers array.
Example with logging:
numbers = [20, 3, 2]
current_result = numbers.shift
current_result #=> 20
numbers #=> [3, 2]
numbers.inject(current_result) do |current_result, number|
(current_result - number).tap do |result|
puts "#{current_result} - #{number} = #{result}"
end
end
Output:
20 - 3 = 17
17 - 2 = 15
15 would be the method's return value.
However, removing the first element is not necessary; inject does this by default:
If you do not explicitly specify an initial value for memo, then the first element of collection is used as the initial value of memo.
Thus, the subtract method can be simplified:
def subtract(*numbers)
numbers.inject { |current_result, number| current_result - number }
end
subtract(20, 3, 2) #=> 15
You can also provide a method symbol:
def subtract(*numbers)
numbers.inject(:-)
end
Same works for add:
def add(*numbers)
numbers.inject(:+)
end
| {
"pile_set_name": "StackExchange"
} |
Q:
ASP.Net listview layout behaving oddly
I'm trying to use a ListView in an ASP.Net page and failing to get the results I was expecting. My page looks like this:
<table>
<tr>
<td><label class="subHeading">Contacts</label></td>
</tr>
<tr>
<asp:ListView runat="server" id="lvwContacts">
<LayoutTemplate>
<div class="tableWrapper">
<div class="tableScroll">
<table>
<tr>
<th><label>Full Name</label></th>
<th><label>Job Title</label></th>
<th><label>Direct Line</label></th>
<th><label>Mobile Phone</label></th>
<th><label>Email</label></th>
</tr>
<tr id="itemPlaceHolder" runat="server"></tr>
</table>
</div>
</div>
</LayoutTemplate>
<ItemTemplate>
<tr>
... etc
but when I look at the output the table is not appearing inside the divs:
<div class="tableWrapper">
<div class="tableScroll"></div>
</div>
<table>
<tbody>
<tr>
<td><label class="subHeading">Contacts</label></td>
</tr>
<tr></tr>
</tbody>
</table>
<table>
<tbody>
<tr>
<th><label>Full Name</label></th>
<th><label>Job Title</label></th>
<th><label>Direct Line</label></th>
<th><label>Mobile Phone</label></th>
<th><label>Email</label></th>
</tr>
... etc
I've tried putting the divs around the whole listview with much the same result. What on earth is going on here? Have I done something stupid or do ListViews really behave like this?
Thanks
John
A:
You must make sure you have valid HTML markup. Currently one of your <tr>'s has a <div> as a child, not a <td> or <th>.
See this demo:
/* style used to illustrate problem */
.tableWrapper {
padding: 10px;
background: red;
}
<label>Invalid markup</label>
<table>
<tr>
<td><label class="subHeading">Contacts</label></td>
</tr>
<tr> <!-- Invalid. child is a div not a td or th -->
<div class="tableWrapper">
<div class="tableScroll">
<table>
<tr>
<th><label>Full Name</label></th>
<th><label>Job Title</label></th>
<th><label>Direct Line</label></th>
<th><label>Mobile Phone</label></th>
<th><label>Email</label></th>
</tr>
</table>
</div>
</div>
</tr>
</table>
<hr>
<label>Valid markup</label>
<table>
<tr>
<td><label class="subHeading">Contacts</label></td>
</tr>
<tr>
<td> <!-- This is required! -->
<div class="tableWrapper">
<div class="tableScroll">
<table>
<tr>
<th><label>Full Name</label></th>
<th><label>Job Title</label></th>
<th><label>Direct Line</label></th>
<th><label>Mobile Phone</label></th>
<th><label>Email</label></th>
</tr>
</table>
</div>
</div>
</td>
</tr>
</table>
Inspect the rendered output of both tables... you will see what happens when the markup is not valid (what you are experiencing): the browser removes the <div> from the table. The second table has correct markup, so it renders as-is.
| {
"pile_set_name": "StackExchange"
} |
Q:
Static member inheritance in C#
I have a number of classes that reflect tables in a database. I would like to have a base class that has some basic functionality (say, it would have a "isDirty" flag), and a static array of strings with the column names as they appear in the database. The following code doesn't work but illustrates what I would like to do:
public class BaseRecord {
public bool isDirty;
public object [] itemArray;
public static string [] columnNames;
}
public class PeopleRec : BaseRecord {
}
public class OrderRec : BaseRecord {
}
public static void Main() {
PeopleRec.columnNames = new string[2];
PeopleRec.columnNames[0]="FIRST_NAME";
PeopleRec.columnNames[1]="LAST_NAME";
OrderRec.columnNames = new string[4];
OrderRec.columnNames[0] = "ORDER_ID";
OrderRec.columnNames[1] = "LINE";
OrderRec.columnNames[2] = "PART_NO";
OrderRec.columnNames[3] = "QTY";
}
public class DoWork<T> where T : BaseRecord {
public void DisplayColumnNames() {
foreach(string s in T.columnNames)
Console.Write("{0}", s);
}
public void DisplayItem(T t) {
for (int i=0; i<itemValues.Length; i++) {
Console.Write("{0}: {1}",t.columnNames[i],t.itemValues[i])
}
}
}
I would like each derived class to have its own static array of strings of database column names, and I would like the generic class to access this static member without the need for an instance.
But it doesn't work:
(A) columnNames is the identical array in BaseRec, PeopleRec and OrderRec. I cannot have columnNames be different. BaseRec.columnNames.Length would be 3 because the columnNames in OrderRec is initialized last.
(B) The notation T.columnNames does not compile.
Any ideas on how to fix this?
A:
The issue is that you want to associate some data with the types, not with instances of the types. I'm not sure that there's a neat way of doing this in C#, but one possibility is using a static Dictionary<Type, string[]> on BaseRecord. An example is below; you could neaten this up by adding some generic static members on BaseRecord for initializing/accessing the record names (and add some error checking...):
using System;
using System.Collections.Generic;
namespace Records
{
public class BaseRecord
{
public bool isDirty;
public object[] itemArray;
public static Dictionary<Type, string[]> columnNames = new Dictionary<Type, string[]>();
}
public class PeopleRec : BaseRecord
{
static PeopleRec()
{
string[] names = new string[2];
names[0] = "FIRST_NAME";
names[1] = "LAST_NAME";
BaseRecord.columnNames[typeof(PeopleRec)] = names;
}
}
public class DoWork<T> where T : BaseRecord
{
public void DisplayColumnNames()
{
foreach (string s in BaseRecord.columnNames[typeof(T)])
Console.WriteLine("{0}", s);
}
public void DisplayItem(T t)
{
for (int i = 0; i < t.itemArray.Length; i++)
{
Console.WriteLine("{0}: {1}", BaseRecord.columnNames[typeof(T)][i], t.itemArray[i]);
}
}
}
class Program
{
public static void Main()
{
PeopleRec p = new PeopleRec
{
itemArray = new object[] { "Joe", "Random" }
};
DoWork<PeopleRec> w = new DoWork<PeopleRec>();
w.DisplayColumnNames();
w.DisplayItem(p);
}
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
mysql JSON_SET or JSON_INSERT function to insert object into nested object if it exists
I have a json type column config I want to add a key value pair into, ultimately looking like:
{
"features": {
"divisions": true,
"utilities": {
"water": true,
"gas": true,
"electric": true
}
}
}
The issue I'm running into is that when I want to insert the utilities object inside of features, I either overwrite the divisions key value or I get NULL back and the utilities object isn't inserted.
Furthermore, the config column may be NULL or initially just be an empty {}.
This query will check for NULL or empty {} and also whether the features key exists but results in overwriting features if it already exists:
UPDATE entities SET config = JSON_SET(COALESCE(config, '{}'),
COALESCE("$.features", "features"), JSON_OBJECT("utilities",
JSON_OBJECT("water", TRUE, "gas", TRUE, "electric", TRUE)))
WHERE id = 123725082;
This works fine unless the column already contains something like:
{
"features": {
"divisions": true,
}
}
in which it overwrites divisions with the utilities object.
So I'm trying a JSON_INSERT query; from what I've gathered from the mysql json functions documentation it should work, but it's returning null and I can't understand why:
UPDATE entities SET config = JSON_INSERT(COALESCE(config, '{}'),
COALESCE("$.features", "features"), JSON_OBJECT("utilities",
JSON_OBJECT("water", TRUE, "gas", TRUE, "electric", TRUE)))
WHERE id = 123725082;
A:
The JSON_MERGE function can be useful in this case.
Modify the UPDATE as needed:
UPDATE `entities`
SET `config` = COALESCE(
JSON_MERGE(
`config`,
JSON_OBJECT('features',
JSON_OBJECT('utilities',
JSON_OBJECT('water', TRUE, 'gas', TRUE, 'electric', TRUE)
)
)
),
JSON_INSERT(
JSON_OBJECT(),
'$.features',
JSON_OBJECT('utilities',
JSON_OBJECT('water', TRUE, 'gas', TRUE, 'electric', TRUE)
)
)
);
See db-fiddle.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I write blob datas to zip and download it in C#?
Assume that I got some blobs from the database.
Then I put them in Byte arrays. For example:
Byte[] lol1=(Byte[])reader["data1"];
Byte[] lol2=(Byte[])reader["data2"];
Now how can I write these byte arrays as files into a zip and download it as a file from the browser in C#?
// Edit for clarity
Relevant codes in "Manager.cs" file like:
public Byte[] FileDownload(string userName)
{
try
{
MySqlDataReader reader = new MySqlCommand("SELECT veri FROM veriler WHERE kullanici_id = (SELECT id FROM uyeler WHERE kullanici_adi='" + userName + "')", con).ExecuteReader();
MemoryStream ms = new MemoryStream();
GZipStream gzs = new GZipStream(ms, CompressionMode.Compress);
while (reader.Read())
gzs.Write((Byte[])reader["veri"], 0, ((Byte[])reader["veri"]).Length);
return ms.ToArray();
}
catch (Exception)
{
return Encoding.UTF8.GetBytes(string.Empty);
}
}
Relevant codes in "DataDown.aspx.cs" file like:
protected void download_Click(object sender, EventArgs e)
{
Response.AddHeader("Content-type", ContentType);
Response.AddHeader("Content-Disposition", "attachment; filename=Archive.zip");
Response.BinaryWrite(new Manager().FileDownload(Session["user"].ToString()));
Response.Flush();
Response.End();
}
It returns a .zip file with only one file in it. It should contain two files. Moreover, this one file is corrupted.
A:
To do it cleanly, you'll need System.IO.Compression, which is only available from .NET 4.5 onward.
string blobName = "data1";
string zipName = "database.zip";
Byte[] blob = (Byte[])reader[blobName];
using(MemoryStream zs = new MemoryStream())
{
// Build the archive
using(System.IO.Compression.ZipArchive zipArchive = new ZipArchive(zs, ZipArchiveMode.Create, true))
{
System.IO.Compression.ZipArchiveEntry archiveEntry = zipArchive.CreateEntry(blobName);
using(Stream entryStream = archiveEntry.Open())
{
entryStream.Write(blob, 0/* offset */, blob.Length);
}
}
//Rewind the stream for reading to output.
zs.Seek(0,SeekOrigin.Begin);
// Write to output.
Response.Clear();
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}", zipName));
Response.BinaryWrite(zs.ToArray());
Response.End();
}
If your data provider supports opening the blob as a stream, you can probably avoid reading the entry into a buffer, and instead use Stream.CopyTo()
| {
"pile_set_name": "StackExchange"
} |
Q:
Mac OS X, install another keyboard layout?
I'm a developer and I'm just so used to using Spanish (Argentina) as my keyboard layout on my PC both at home and at work. Now I want to develop on my Macbook Pro too, but the only Spanish layouts available are "regular" and ISO, both of which are basically the same as the one I use, except I have to press the alt key to input characters that are very common for me, like {}[].
How can I set my keyboard layout to Spanish (Argentina)? Thanks a lot!
A:
Would the Windows Latin American layout mentioned in this article meet your needs?
http://m10lmac.blogspot.com/2007/02/more-ways-to-type-spanish.html
| {
"pile_set_name": "StackExchange"
} |
Q:
DataTable to Excel range in one shot?
I've got some data in a System.Data.DataTable instance and want to put it in an Excel sheet range, directly, in one shot.
I'm working with VS 2008 and my project is a C# Excel 2007 Workbook project.
Thank you
A:
You will first have to convert the DataTable to a two-dimensional array, and then assign the 2D array to the range.
| {
"pile_set_name": "StackExchange"
} |
Q:
Can't get rounded corners on HTML form submit button?
I have a submit button for a form in an HTML doc like so:
<form id = "submit" method="get" action="">
<input type="submit" name="action" value="Download" id="dlbutton"/>
</form>
I tried to get rounded corners on it with CSS using the border-radius property, but they still remain sharp:
#dlbutton{
background:#223445;
color:#FFFFFF ;
border: 1px solid #223445;
border-radius: 18px
-moz-border-radius: 5px
display:inline-block;
width: 20em;
height: 5em;
-webkit-font-smoothing: antialiased;
}
I have another button on another page that is styled exactly the same and has rounded corners, except in the html the button is like:
<p><button class="clickable" id="clickable">Click me</button></p>
I'm using the latest Firefox. Thanks
A:
You forgot semicolons (;). Change:
border-radius: 18px
-moz-border-radius: 5px
to:
border-radius: 18px;
-moz-border-radius: 5px;
Also, I would recommend adding:
-webkit-border-radius: 5px;
for better cross-browser compatibility.
| {
"pile_set_name": "StackExchange"
} |
Q:
NameError: global name 'MAGIC_PINK' is not defined
I have a project which looks like so:
PyBlob
|- __init__
|- Actor
|- Blob
|- Bullet
|- main
|- Player
|- Scene
|- utils
|- Zombie
__init__.py
import sys, pygame, cmath
from Actor import Actor
from Blob import Blob
from Bullet import Bullet
from Player import Player
from Scene import Scene
from utils import *
from Zombie import Zombie
utils.py
MAGIC_PINK = (255, 0, 255)
# plus a small handful of utility functions
Blob.py
from PyBlob import *
class Blob:
def __init__(self, radius, body_colour=(0,0,0), face_colour=(255,255,0)):
self.body = pygame.Surface((2*radius, 2*radius))
self.face = pygame.Surface((2*radius, 2*radius))
self.body.set_colorkey(MAGIC_PINK)
#rest of module omitted for brevity
This results in the error:
NameError: global name 'MAGIC_PINK' is not defined
Importing the classes seems to work fine so clearly I am doing something wrong with this MAGIC_PINK variable.
A:
You have a circular import. Python has to import PyBlob.Blob to import PyBlob, and it has to from PyBlob import * to import PyBlob.Blob. This is a problem.
When Python tries to run from PyBlob import *, it finds that PyBlob is already in the middle of the import process. It can't wait for PyBlob to be ready, because PyBlob won't be ready until Blob is ready, and Blob needs PyBlob. Thus, it assumes that PyBlob is "ready enough", and uses it in its current state. Unfortunately, PyBlob is still missing most of the stuff it's supposed to have, so from PyBlob import * doesn't pick up most of the stuff it was supposed to pick up.
To fix this problem, reorganize your code to stop using circular imports, and try to avoid import *.
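For example, here is a minimal sketch of one such reorganization (module names are taken from the question; the rest of utils.py and Blob.py is assumed unchanged). Blob.py imports what it needs directly instead of going through the package's __init__:
# Blob.py
import pygame                   # Blob needs pygame itself, not the whole package
from utils import MAGIC_PINK    # import the sibling module directly (Python 2 style);
                                # in Python 3 this would be: from .utils import MAGIC_PINK

class Blob:
    def __init__(self, radius, body_colour=(0, 0, 0), face_colour=(255, 255, 0)):
        self.body = pygame.Surface((2 * radius, 2 * radius))
        self.face = pygame.Surface((2 * radius, 2 * radius))
        self.body.set_colorkey(MAGIC_PINK)
The package's __init__.py can keep its from Blob import Blob line; the cycle disappears because Blob no longer imports PyBlob back while PyBlob is still being imported.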
| {
"pile_set_name": "StackExchange"
} |
Q:
Zeroth-homology of a complex of $n$ connected components
I am new to algebraic topology and am reading Basic Concepts of Algebraic Topology by Croom.
I have a question. In Theorem 2.4/page 25, it states that if $K$ is a complex with $n$ connected components, then $H_0(K)$ is isomorphic to $\mathbb{Z}^n$.
Anyway, in the proof, Croom showed that "Applying this result to each connected component $K_1,..., K_n$ of $K$, there is a vertex $a_i$ of $K_i$ such that any $0$-cycle on $K$ is homologous to a $0$-chain of the form $\sum h_i \langle a_i \rangle$ where $h_i$ is an integer and $\langle a_i \rangle$ denotes the $0$-cycle that maps $(a_i)$ to $1$ and the other $0$-simplices to $0$.
Hence it suffices to show that the representation here is unique, which means that if we have $\sum (g_i- h_i) \langle a_i \rangle=\partial (c)$, then $g_i=h_i$. This part is clearly trivial to Croom, but I do not understand it.
Can you clarify this part for me? Thank you.
A:
The boundary of a $1$-chain is a linear combination of boundaries of $1$-simplices. A $1$-simplex in $K$ has endpoints in the same component of $K$, so in the same $K_i$. If $\partial(c)=\sum_v r_v \langle v\rangle$ then the sum of the $r_v$ over the vertices $v$ in the same $K_i$ is zero. In your example this sum is $g_i-h_i$.
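Concretely, the key computation is the boundary of a single oriented $1$-simplex (a one-line sketch, written with the usual sign convention):
$$\partial \langle v_0, v_1 \rangle = \langle v_1 \rangle - \langle v_0 \rangle.$$
Each $1$-simplex therefore contributes a $+1$ and a $-1$ to vertices lying in the same component $K_i$, so for any boundary the coefficients sum to zero over each component.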
| {
"pile_set_name": "StackExchange"
} |
Q:
Database design under Django
I have a probably quite basic question: I am currently setting up a database for students and their marks in my courses. I currently have two main classes in my models.py: Student (containing their name, id, email address etc) and Course (containing an id, the year it is running in and the assessment information - for example "Essay" "40%" "Presentation" "10%" "Exam" "50%"). And, of course, Student has a ManyToMany field so that I can assign students to courses and vice versa. I have to be able to add and modify these things.
Now, obviously, I would like to be able to add the marks for the students in the different assignments (which are different from course to course). As I am very inexperienced in database programming, I was hoping one of you could give me a tip on how to set this up within my models.
Thanks,
Tobi
A:
Perhaps the way to go about it is to have a separate class for assignment, something like this.
class Assignment(models.Model):
ASSIGNMENT_TYPES = (
('essay', "Essay"),
...
)
ASSIGNMENT_GRADES = (
('a+', "A+"),
('a', "A"),
...
)
student = models.ForeignKey("Student")
course = models.ForeignKey("Course")
assignment_type = models.CharField(choices=ASSIGNMENT_TYPES, max_length=15, default='essay')
progress = models.IntegerField()
grade = models.CharField(choices=ASSIGNMENT_GRADES, max_length=3, default="a+")
This way you have one assignment connected to one student and one course. It can be modified relatively easily if you have multiple students per assignment, by adding another class (for example StudentGroup) and including it in the model.
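As a rough sketch of how the other two models could sit around this (the field names for Student and Course below are assumptions based on the question, not code from it):
from django.db import models

class Student(models.Model):
    name = models.CharField(max_length=100)
    email = models.EmailField()
    courses = models.ManyToManyField("Course")   # assign students to courses and vice versa

class Course(models.Model):
    course_id = models.CharField(max_length=20)
    year = models.IntegerField()

# A student's essay marks for one course then become a single query, e.g.
# Assignment.objects.filter(student=some_student, course=some_course, assignment_type='essay')
Adding or changing a mark then means editing the matching Assignment row rather than touching Student or Course.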
Hope that this helps :)
| {
"pile_set_name": "StackExchange"
} |
Q:
Alternate for NetworkInterface.getHardwareAddress()?
I have used this API to get the MAC address of the device,
NetworkInterface.getHardwareAddress()
But this is for API level 9 and later; what should I use to get the MAC address on API level 8 (a Froyo device)?
Maybe this is a very simple thing, but I tried googling and couldn't find the answer.
A:
WifiInfo.getMacAddress() has been available since API level 1.
WifiManager wifiMan = (WifiManager) this.getSystemService(Context.WIFI_SERVICE);
WifiInfo wifiInf = wifiMan.getConnectionInfo();
String macAddr = wifiInf.getMacAddress();
You'll need to add:
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
To your manifest
| {
"pile_set_name": "StackExchange"
} |
Q:
Cant get SVG to scale properly
Trying to add an svg file and have it scale from one end of the page to the other overlapping two divs (please see mock up image below)
Here is the page JSFIDDLE
Here is the code in particular:
HTML:
<!-- Sectional Blue Background -->
<div id="blueSection">
</div>
<!-- /Sectional Blue Background -->
CSS
/* Blue Sectional 3 Steps */
#blueSection {
position: relative;
transform:scale(3, 4);
-webkit-transform:scale(3, 4);
-ms-transform: scale(3, 4);
right: 80em;
}
#blueSection::before {
content: '';
width: 100%;
height: 100px;
display: block;
position: absolute;
top: -80px;
left: 0;
background: url('http://convio.cancer.ca/mIFE/svg/blue.svg') no-repeat;
background-size: 100%;
overflow: hidden;
}
#blueSection::after {
content: '';
width: 100%;
height: 100px;
display: block;
position: absolute;
top: -63px;
background: url('http://convio.cancer.ca/mIFE/svg/blue.svg') no-repeat;
background-size: 100%;
overflow: hidden;
-ms-transform: rotate(5deg); /* IE 9 */
-webkit-transform: rotate(5deg); /* Safari */
transform: rotate(5deg);
}
So what's happening right now is that in full-screen desktop mode it's right where I'd like it to be, however it extends all the way past the margins of the page. When you resize the screen it doesn't look at all the same, it's a completely different size. On mobile in Chrome it doesn't appear at all (haven't tried other browsers). And in IE on desktop it's as if no edits were done to the blue section at all.
I'm really not sure what I'm doing wrong, if I need to use javascript/jquery or just css I don't mind going either route, I just want to learn how to fix this. I've looked at various articles and I can't see what I'm doing wrong.
All suggestions are greatly appreciated! Thank you for your time!
A:
Try background-size: cover for both before and after elements.
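For example (a minimal, untested adjustment to your existing rules):
#blueSection::before,
#blueSection::after {
    background-size: cover; /* instead of background-size: 100% */
}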
| {
"pile_set_name": "StackExchange"
} |
Q:
Git staging and snapshots
I recently forgot to pull the most recent version of my remote to my local before I made a commit. No big deal I'll just stash my changes then pull from remote. This resulted in a conflict with one of the files, and I also noticed that many other files were staged. So I resolved the conflict and then removed all the files from the staging area (I didn't know anything about them) except the file that I resolved the conflict on, lastly I committed and pushed what I thought was that single file.
Turns out my understanding of how git works is flawed, and I pushed the snapshot of my local workspace (without the many other files from the remote that I was trying to fetch and merge). This effectively removed those files from the remote branch's new head (my commit).
My guess is these questions have been asked before but I wasn't able to find them so here goes: Is staging files required? If git just commits a snapshot of your whole local workspace anyways, why index anything? Lastly is there a command (I could create an alias) that wraps pull and commit, so that I don't forget to pull before committing again, so it would fetch, merge, commit? Or is there a reason the command doesn't exist?
thanks for your time
A:
When you do a git commit you record a snapshot of the whole project, but not necessarily the exact state of your working directory.
That is you can have a lot of changes in a lot of files and only include in a commit a few of these changes. That does not mean that those other files are not part of the commit: technically all unchanged files are part of that commit, they are just not part of the commit diff.
The index, or stage, is useful to prepare the state of the project you are going to commit. If you do not need that and just want to commit anything changed, you can just do git commit -a. That will not add new files, only changes to already-tracked ones. If you also want to include new files, git add them first (for example with git add -A) and then commit.
About your previous mess. That is actually pretty easy once you know the basic rule. The basic rule for git is: when in doubt, commit.
How would have worked in your case? Let's see:
You forget to pull and work a little.
Do git commit! To your local master branch, of course.
Do git push and it fails because the remote branch has advanced.
Do git fetch to get the latest changes.
Do git merge or git rebase to mix your changes with those from origin.
If there are conflicts, solve them and git merge/rebase --continue.
Do git push and profit!
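In command form, the sequence above looks roughly like this (assuming the remote branch is origin/master):
git commit -am "my local work"
git fetch origin
git rebase origin/master      # or: git merge origin/master
# resolve any conflicts, then: git rebase --continue
git push origin master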
The only chance of breaking things is when solving conflicts. If that happens, a git merge/rebase --abort restores you to the last good commit.
About a command that does everything... beware! git pull is already a mix of git fetch plus git merge (or git rebase). It may save you some time, but if you are learning git I very much recommend not to use git pull but do the other two commands separately.
Doing a git commit and a git pull and a git push all at the same time would probably be quite a bad idea.
| {
"pile_set_name": "StackExchange"
} |
Q:
Termux give user write access to usb pen
I have an Android box that has Termux and I would like to add permission for my user to write to a plugged-in USB pen drive.
From what I understand I only have to add that user to the sdcard_r group, but I'm not really sure if this is the right way.
How can I achieve this? I can't find the useradd command.
I also have tried using the termux-setup-storage command but this command doesn't seem to create a folder for my usb pen drive.
My android version is 5.1.1
A:
I have followed the tutorial in http://www.cnx-software.com/2012/08/26/how-to-allow-apps-to-write-files-to-usb-mass-storage-devices-in-android/ and now I can write to my usb :)
| {
"pile_set_name": "StackExchange"
} |
Q:
how to loop through ALL the attributes and return the values as objects
I need to write a function that takes in an argument which is a list of elements. The elements are all labels and inputs. The function should create an object where the text of the labels are the keys of the object and the values of the inputs are the values belonging to those keys. My function must return this object.
let elements = document.querySelectorAll('form *') //Selects all children of the form element
formObj(elements)
//I need it to return: {name: "Harry", email: "[email protected]", age: 22}
<form>
<label for='name'>name<label/>
<input type='text' id='name' value="Harry"/>
<label for='email'>email<label/>
<input type='email' id='email' value="[email protected]"/>
<label for='age'>age<label/>
<input type='text' id='age' value="22"/>
<form/>
I have tried to loop through ALL the attributes but am unsure how to go about doing it
Please help out
A:
You can loop through all the elements in elements using a for...of loop. You can also create an empty object called form_details, which will hold your result. For each element, you can get its id and value. Then, using an if statement, you can check whether the id and value are defined, and if they are, add them as a key-value pair to your object.
See example below:
function formObj(elems) {
const form_details = {};
for(let elem of elems) {
const key = elem.id;
const val = elem.value;
if(key && val)
form_details[key] = val;
}
return form_details;
}
const elements = document.querySelectorAll('form *') //Selects all children of the form element
const details = formObj(elements);
console.log(details);
<form>
<label for='name'>name</label>
<input type='text' id='name' value="Harry"/>
<label for='email'>email</label>
<input type='email' id='email' value="[email protected]"/>
<label for='age'>age</label>
<input type='text' id='age' value="22"/>
</form>
An easier way would be to change your selector to get the form element, and add the name attribute to each input which you want to retrieve. You can then use the FormData Web API to retrieve the entries from your FormData, which you can then turn into an object using Object.fromEntries():
function formObj(elem) {
const form = new FormData(elem);
return Object.fromEntries([...form.entries()]);
}
const elements = document.querySelector('form') //Selects all children of the form element
const details = formObj(elements);
console.log(details);
<form>
<label for='name'>name</label>
<input type='text' name='name' id='name' value="Harry"/>
<label for='email'>email</label>
<input type='email' name='email' id='email' value="[email protected]"/>
<label for='age'>age</label>
<input type='text' name='age' id='age' value="22"/>
</form>
| {
"pile_set_name": "StackExchange"
} |
Q:
Sending/Receiving multi-recipient SMS - Twilio API
I am writing an app that will facilitate the sending and receiving of SMS messages via a web application. I would like to allow for multiple recipients (not bulk, just a few recipients at most).
I understand that in order to send to multiple recipients, I have to make multiple API calls, and that is fine. The problem I am having is receiving text messages via the Webhook callback. If the SMS was sent to multiple recipients, I cannot see the other recipients in the callback, just myself as the recipient.
Because of this, I have no idea whether this message was intended for just me, or for other recipients as well. This is a problem, because I would like to show threaded conversations similar to Google Hangouts, or the SMS applications on Android and iPhone.
I cannot figure out a way to track conversations if I can't tell whether a received message was sent to just me or to a group of recipients. Any suggestions? I do not yet use Twilio on a production server, so if this is not possible to do using Twilio, but is possible using another service, that would be an option for me as well.
A:
Twilio developer evangelist here.
Twilio doesn't fully support group messaging the way that you are used to it when using a phone. That actually relies on MMS under the hood to keep the members of the group chat synced up.
Where you make multiple API calls to send messages to each user, that is manifested as just a single message with no group attached. Thus, any reply to that message comes solely from that person you sent the message to. There is no group at all at this point.
The link that Alex shared in the comments is the closest way you can get group messaging to work. It relies on everyone messaging one Twilio number and the application behind it fanning the messages out to all the recipients. The blog post also comes with some handy subscribe/unsubscribe administration for the group.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I run a satisfying and successful seeking in the context of a chronicle?
In Mage: the Ascension, in order to raise one's Arete, a mage must undergo a seeking, something between a vision quest and a personally tailored one-on-one adventure. At the end of this side mission, the player rolls to see whether they are permitted to raise this (pretty crucial) stat. Unlike other experience expenditures, this one requires a great deal of Storyteller adjudication, and has the potential to disappoint a player who doesn't 'make the grade,' however the grade is calculated.
Seekings are hard. How does one run a good one? Issues that should be addressed:
How do you balance the feeling that "Ascension should be hard, a trial for the character as well as the player" with the idea that XP are there to be spent and players have the ability to shape their characters?
What should failing a seeking look like? (I've heard of one Storyteller who made his player "ante up" the XP; those points were gone regardless of success or failure.) Not that I expect the player/character to fail, but in the case it happens, what's the best method?
The "netrunner problem." We don't have a lot of time as working adults to do solo gaming, so the other players will be present for the seeking. I'm planning on having them play as "aspects" of the Awakened soul, sort of like the people Dorothy meets on the road to Oz, but is there another way for them to contribute that would keep them engaged?
…as well as other aspects of the seeking I might be forgetting. A quality answer will come from someone who has run Mage: the Ascension and led a player through a Seeking.
A:
I have run Mage the Ascension in a variety of games, Mage exclusively or in crossover with other WoD games. I've run a couple of Seekings in the past, and recently three more, therefore I believe I'm qualified to give an answer. Unfortunately there is no short version of it.
Success and/or failure
It is a viable idea to ask the player to wager their XP on a roll - however I would say that when the roll is unsuccessful, the player should still be allowed to spend the XP in question without limitation on Spheres, Willpower and Ability increases. Losing XP - and in this case probably the majority of the pool - is frustrating. Avoid it.
Players need to shape their characters. I am opposed to the idea of failing a whole Seeking because of an unlucky roll.
The preparation
The most successful of my Seekings were preceded by some core discussion between myself and the player about the future of the character. The most important issue, and one that is often forgotten, is that a Seeking changes the character fundamentally and irrevocably. This opening statement requires that the player agrees to substantially change his character's psyche and style, sometimes even rewrite the core concept. Have your player realise that they will have to choose (or have chosen for them) new Backgrounds for the character. They don't have to be completely disjoint from their previous ones, but different enough to show lateral character growth.
When creating a new character I always make the player write down what holds the character back in terms of magickal development. Now is the time to use it.
Take a look at their Nature - the primary strength and weakness of the chosen archetype is listed in the book. Note both of them. Do the same for Demeanor. You will challenge those during the Seeking.
Consider the character's Style. Do you see any limitation? E.g. one of my players used to derive her power from achieving a euphoric state. That limited her ability to do magic during moments of clarity and purposeful contemplation.
Examine the foci and how the character uses them. The same character from above used dance to achieve euphoria. The Seeking showed that it was not the enjoyment that did the magick, but instead her momentary detachment from reality. The Focus can remain the same but its usage change, or the other way around.
Are there any other go-to strategies that the player uses? Tactics that your player instinctively does e.g. always run for cover in a fight, examine any unknown piece of technology before using it. Any preconceptions or strong beliefs? You might even go as far as to challenge your conservative friend to consider liberalism as a viable system.
Consider the player's Avatar very carefully. Both its rating and its Essence will be extremely important, as a Seeking is the Avatar's attempt to change the character.
The trial
The overarching flavour of the endeavour should be determined by the Avatar's Essence. Seekings in their truest form are delivered by Questing Avatars. Dynamic ones often tease the mage with randomness and unpredictability, while a Pattern one could delight in a series of puzzles or structured challenges. Primordial I find the hardest, but you could go wild, retelling the creation of the Tellurian or something similarly grandiose.
Try to make the Seeking plot relevant to the character's weaknesses. The journey should demonstrate that the character can overcome those flaws and make the player abandon some concepts within the character. I recommend a series of challenges where playing within the scope of the character's current style seems to be the most natural and intuitive approach but ends in failure. Now, don't be afraid to involve lateral thinking or paradoxical logic. This is Mage the Ascension! The player is aware that a Seeking is not about completing a quest, it's about doing it in a novel way, and if he's not, drop clues as to what would be sufficient, in the form of visions or companions urging the character to "let go of his limitations".
Put the player in situations where his usual tactics appear to be appropriate, but allow him to progress only if he chooses to do the opposite. E.g. if your character has the Bravo Nature with the Anger weakness, put him in a trash compactor. Springing to action (his default playstyle) is what the situation calls for (superficially). However, only if the character lets go of his urge to act heroically can he progress (the trash compaction stops because it reacts to movement, or a sleeping monster's shell stops the compactor).
Every time such a challenge is completed, get your player to choose a new related character archetype. This will serve to show the transition and let the player retain control over the process. Once all of his limitations are overcome - he has a new Nature and Demeanor, the player's go-to tactic has been abandoned and a new magickal style has emerged - go to the finale.
Finish with a scene that enables the realisation that the character's understanding of Magick was incomplete. Allow the player to come up with a new, improved version. Real-life example - one of my players used to do Magick by "hacking the server of the universe". The Seeking proved him wrong, and he decided that his new paradigm allows him to directly rewrite the source code - going white hat.
Involving other players
I think you are right on the money. Talk to the other players and plot with them against the Seeker. They should know what the Avatar wants from the Seeker and have a role to play - as enemy, friend, or someone trying to lead him astray. They can even play as projections of their own characters (if they befriended the Seeker) or command more than one character. This should be their opportunity to play as something different. However, don't let them in on all the challenges; let them figure it out with the main player, but ask them to play double agents if they have better ideas than him.
| {
"pile_set_name": "StackExchange"
} |
Q:
Database design question, using inheritance
I'm using Symfony2 and Doctrine2 to create a blog engine. I will have three types of contents, so I created this, using inheritance :
Next, I want to have "blocks" of text that can be inserted after any "content", so I do :
Now my problem is : How can I store the information that some blocks should be included by default with certain types of contents ?
Example : say I have a "social" block (includes the facebook button, a tweet button, etc.). I want it to be linked by default in any new "blog" content.
A:
If the default block, like the "facebook" button, won't differ between contents, you may consider generating the block from code instead of the database, for better performance.
Otherwise, if users can customize these buttons after new content is inserted, just insert the default blocks in code right after inserting a new content. The relational data model can't help much if the default data is complicated.
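A rough sketch of the second option; the entity and method names here (Blog, Block, setType(), addBlock()) are hypothetical and depend on your actual mapping:
// wherever you create a new blog content
$blog = new Blog();
// ... set the blog fields ...

// attach the default "social" block
$social = new Block();
$social->setType('social');
$blog->addBlock($social); // assumes a Content<->Block association with an addBlock() helper

$em->persist($blog);
$em->flush();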
| {
"pile_set_name": "StackExchange"
} |
Q:
Assign property in XAML with a shorthand syntax without converters and markup extensions
I have a XAML like this
<ml:Visualizer Smooth="True" />
Recently we have added different types of preprocessing, like e.g. Smoothing, Bluring, Sharpening, etc.
Now we write it like this
<ml:Visualizer>
<ml:Visualizer.Effect>
<thirdParty:Smoothing/>
</ml:Visualizer.Effect>
</ml:Visualizer>
Is it possible to assign a property as an XML attribute, but without writing custom type converters or MarkupExtensions? The goal is to have a short syntax for assigning the property; however, the actual effects can later be provided by a third party as a DLL and we need to reference them in XAML.
<ml:Visualizer Effect="{thirdParty:Smoothing}" /> <!-- BUT WITH NO CUSTOM MARKUP EXTENSION -->
And if it is possible indeed, then the next level would be to set properties of effects (even if there is only a default constructor available).
<ml:Visualizer Effect="{thirdParty:Smoothing Factor=5}" /> <!-- BUT WITH NO CUSTOM MARKUP EXTENSION -->
I know it looks like a markup extension, but it would be too tedious to write a separate markup extension for each effect introduced. It looks like too basic a thing to have no solution :)
Any suggestions?
Thanks in advance!
A:
It's possible to do it without any markup extensions at all.
The syntax SomeProp="{YourClass}" or SomeProp="{YourClass Prop=Value}" assigns SomeProp an instance of YourClass.
An example (verified on .Net 4.0):
<!-- Works with no markup extensions HardDriveExtension in the code! -->
<Computer Storage="{HardDrive Gigabytes=2.5}"
xmlns="clr-namespace:XamlExperiments;assembly=XamlExperiments"
/>
using System.IO;
using System.Xaml;
using System;
namespace XamlExperiments
{
class Program
{
static void Main(string[] args)
{
var xamlText =
@"<?xml version=""1.0"" encoding=""utf-8""?>
<Computer Storage=""{HardDrive Gigabytes=2.5}""
xmlns=""clr-namespace:XamlExperiments;assembly=XamlExperiments""
/>
";
var computer = (Computer)XamlServices.Load(new StringReader(xamlText));
computer.Process();
}
}
public class Computer
{
public IStorage Storage { get; set; }
public void Process() { Storage.Store();}
}
public interface IStorage
{
void Store();
}
public class HardDrive : IStorage
{
public double Gigabytes { get; set; }
public void Store() {Console.WriteLine("Stored 1GB on HardDrive");}
}
}
Output is:
Stored 1GB on HardDrive
Note that without any markup extension you can instantiate an instance of HardDrive with just simple {HardDrive Gigabytes=2.5}.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there a R code set to use PubMed ID or DOI to get data files for that article, please?
I am trying to get the data file names from NCBI or PubMed that are related or attached to hundreds of unique DOIs or PMIDs, using R. For example, I have PMID 19122651 and I want to get the names of the three GSEs connected to it, which are: GSE12781, GSE12782, and GSE12783.
I have searched various sources and packages to no avail.
Appreciate your assistance.
A:
You can do this using the rentrez package.
The required function is entrez_link.
Example:
library(rentrez)
results <- entrez_link(dbfrom = 'pubmed', id = 19122651, db = 'gds')
results$links$pubmed_gds
[1] "200012783" "200012782" "200012781"
The 3 results are the IDs for the associated GEO Dataset records. You can convert them to GSE accessions using entrez_summary.
Here's a somewhat ugly sapply that may serve as the basis for a function:
sapply(results$links$pubmed_gds, function (id) entrez_summary("gds", id)$accession,
USE.NAMES = FALSE)
[1] "GSE12783" "GSE12782" "GSE12781"
| {
"pile_set_name": "StackExchange"
} |
Q:
Fire jQuery scroll event only once
I want to fire a jQuery scroll event only once. It works, but when I have a new scroll event it doesn't work; hope someone can help me.
$(".content").scroll(function(){
if(var == 1) {
$(".content").off("scroll");
console.log("it works only once");
}
}
$(".content").on("scroll", function() {
console.log("This should work always, but it don't work");
});
Thanks!
A:
As I see it, you should use a namespaced event:
$(".content").on("scroll.custom", function(){
if(someVariableCheck == 1) {
$(".content").off("scroll.custom");
console.log("it works only once");
}
});
$(".content").on("scroll", function() {
console.log("This should work always, but it don't work");
});
You could use one to bind a handler that is fired only once, but that would be regardless of the variable check.
| {
"pile_set_name": "StackExchange"
} |
Q:
Distributed system simulator
I have coded a very simple distributed system simulator in Python. It uses multiprocessing to assign tasks, and queues to communicate between processes.
The code is shown below.
from functions import *
import multiprocessing
import time
try:
with open("config.txt") as f:
lines = f.readlines()
max_instances = int(lines[0].split(' ')[1])
except Exception, e:
print "Exception while opening config.txt :", e
print "Please make sure that\n1) The File is present in the current folder"
print "2) It contains the value of MAX_NUMBER_OF_INSTANCES, space delimited"
print "Download the file again if problem persists"
exit(1)
class machine():
'Class for the instance of a machine'
q = [multiprocessing.Queue() for i in range(max_instances + 1)]
# q[0] is unused
count = multiprocessing.Value('i', 1)
def __init__(self):
self.mac_id = machine.count.value
machine.count.value += 1
def execute_func(self, func_name, *args):
comm_str = str(func_name) + ' = multiprocessing.Process(name = "' + str(func_name) + '", target = ' + str(func_name) + ', args = ('
comm_str += 'self,'
for arg in args:
if(type(arg) is str):
comm_str += '"' + str(arg) + '",'
else:
comm_str += str(arg) + ','
comm_str += '))'
try:
# create the new process
exec(comm_str)
# start the new process
comm_str = str(func_name) + '.start()'
exec(comm_str)
except Exception, e:
print "Exception in execute_func() of", self.get_machine_id(), ":", e
print self.get_machine_id(), "was not able to run the function ", func_name
print "Check your function name and parameters passed to execute_func() for", self.get_machine_id()
def send(self, destination_id, message):
# send message to the machine with machine_id destination_id
mac_id = int(destination_id[8:])
if(mac_id >= machine.count.value or mac_id <= 0):
return -1
# message is of the format "hello|2". Meaning message is "hello" from machine with id 2
# However, the message received is processed and then returned back to the user
message += '|' + str(self.get_id())
machine.q[mac_id].put(message)
return 1
def recv(self):
mac_id = self.get_id()
if(mac_id >= machine.count.value or mac_id <= 0):
return -1, -1
message = machine.q[mac_id].get().split('|')
# message received is returned with the format "hello" message from "machine_2"
return message[0], 'machine_' + message[1]
def get_id(self):
return self.mac_id
def get_machine_id(self):
return "machine_" + str(self.get_id())
You can assign tasks to each machine instance that you would create. These tasks are to be given in the form of a function. These functions are to be kept in a file in the same folder with name functions.py
Suppose I want 2 machine instances. One would send the other machine 10 numbers and the other one will return the sum. In this case, the functions would look something like this.
def machine1(id_var):
print "machine instance started with id:", id_var.get_machine_id()
# id_var.get_machine_id() is used to get the machine id
for i in range(10):
id_var.send("machine_2", str(i))
message, sender = id_var.recv()
print id_var.get_machine_id(), " got sum =", message, " from", sender
def machine2(id_var):
print "machine instance started with id:", id_var.get_machine_id()
# id_var.get_machine_id() is used to get the machine id
total = 0
for i in range(10):
message, sender = id_var.recv()
total += int(message)
id_var.send("machine_1", str(total))
Now to run this, you need to create a machine instance and assign the proper function to it. Like
from dss import *
m1 = machine()
m1.execute_func("machine1")
m2 = machine()
m2.execute_func("machine2")
This all works fine. I am already using this library to implement some pretty complex distributed load balancing algorithms.
I'm looking for a review as to this being a good enough solution, or new features that should be added.
For more information, you can see the github page.
A:
I would do these things as improvements:
Follow PEP-8 and PEP-257 for writing code and docstrings in Python.
Use the configparser module in Python to get config parameters
Use logging module instead of print
Make the code compatible with both Python 2 and Python 3; that means changing every try ... except Exception, e and all the print statements.
Also, I saw that your config.txt file is duplicated in two paths in your GitHub project.
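As a rough illustration of points 2 and 3, assuming you convert config.txt to an INI-style file (the section and option names here are made up):
import logging
from ConfigParser import ConfigParser  # 'configparser' in Python 3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

config = ConfigParser()
config.read('config.ini')
max_instances = config.getint('dss', 'max_number_of_instances')
log.info("max_instances = %d", max_instances)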
| {
"pile_set_name": "StackExchange"
} |
Q:
32 bit applications on 64 bit OS ( windows )
I need some help understanding how 32 bit applications use memory on a 64 bit OS.
A 32 bit application can use 2 gb of memory on 64 bit OS, correct?
Does this mean that 3 32-bit applications running in parallel could address 6 GB of memory...
Or do the 3 32 bit applications have to share the 2-4 gb of 32 bit memory that the os has?
Likewise, if I have a webservice that is compiled as 32-bit, running under IIS on a 64-bit machine: as long as a single request to that webservice always stays under 2 GB of memory usage, is there any point in recompiling to 64-bit? My theory is that IIS creates a new process for each request, so the whole pool of processes will be able to make use of all the memory the 64-bit machine has, 8 or 15 or 20 gig or whatever.
Let me know your thoughts, thanks
A:
Yes, the total usage of all the 32-bit programs can exceed 2 GB. So yes you can have a bunch of 32-bit processes using all the memory in a 64-bit machine.
Actually, there's a linker option (/LARGEADDRESSAWARE) that lets 32-bit programs use up to 3 GB on 32-bit Windows (with the /3GB boot option), and up to 4 GB when running on 64-bit Windows.
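For example, if the program is known to be safe with pointers above 2 GB, you can flip that flag on an existing binary with the Visual C++ editbin tool (myapp.exe is a placeholder):
editbin /LARGEADDRESSAWARE myapp.exe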
If performance isn't important, then there isn't much of a reason to use 64-bit.
| {
"pile_set_name": "StackExchange"
} |
Q:
Debug Partial Mock in JMockit
Using JMockit 0.999.4 and JDK6, is it possible to debug into a partially mocked class?
Consider the following test:
@Test
public void testClass() {
SampleClass cls = new SampleClass();
System.out.println(cls.getStaticInt());
cls.setVal(25);
System.out.println(cls.getVal());
}
static class SampleClass {
static int staticInt = 5;
private int val;
{
staticInt = 10;
}
public int getStaticInt() {
System.out.println("Returning static int and adding a line for debugging");
return staticInt;
}
public void setVal(int num) {
System.out.println("Setting val and adding a line for debugging");
this.val = num;
}
public int getVal() {
System.out.println("Returning val and adding a line for debugging");
return this.val;
}
}
Placing a breakpoint on each of the sysout lines in SampleClass and debugging ("Step Over") in Eclipse will enter the SampleClass methods.
Consider the following which will prevent the static initializer from setting staticInt to a value of 10.
@Test
public void testClass(@Mocked(methods = "$clinit") SampleClass cls) {
System.out.println(cls.getStaticInt());
cls.setVal(25);
System.out.println(cls.getVal());
}
static class SampleClass {
static int staticInt = 5;
private int val;
{
staticInt = 10;
}
public int getStaticInt() {
System.out.println("Returning static int and adding a line for debugging");
return staticInt;
}
public void setVal(int num) {
System.out.println("Setting val and adding a line for debugging");
this.val = num;
}
public int getVal() {
System.out.println("Returning val and adding a line for debugging");
return this.val;
}
}
However, this code will not debug into the methods in SampleClass.
Yes, I have tried the -javaagent property.
A:
Answered by Rogerio in the JMockit Google's discussion group.
The JVM discards the breakpoints set on a class after it is redefined
(which JMockit does whenever a class is mocked).
To reset the breakpoints, stop the debugger at the test method, just
before it enters the code under test. That is, set a breakpoint in the
test method, on the line which calls into "SampleClass" in this
example.
| {
"pile_set_name": "StackExchange"
} |
Q:
Unable to insert record in table using hibernate
I am using Microsoft SQL Server Database.
In My Hibernate Mapping file, I have declared Primary key as
<id name="item_Group_Sid" type="int" column="ITEM_GROUP_SID" >
<generator class="native"/>
</id>.
I am trying to insert a record in the table.
But I am getting error like
DEFAULT or NULL are not allowed as explicit identity values.
My hibernate.cfg.xml is
<hibernate-configuration>
<session-factory>
<property name="hibernate.connection.driver_class">com.microsoft.sqlserver.jdbc.SQLServerDriver</property>
<property name="hibernate.connection.url">**</property>
<property name="hibernate.connection.username">**</property>
<property name="hibernate.connection.password">**</property>
<property name="hibernate.dialect">org.hibernate.dialect.DB2Dialect</property>
<property name="hibernate.show_sql">true</property>
<property name="hibernate.format_sql">true</property>
<mapping resource="ItemGroup.hbm.xml"/>
</session-factory>
</hibernate-configuration>
A:
Change the hibernate.dialect to
<property name="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</property>
| {
"pile_set_name": "StackExchange"
} |
Q:
17.10 cisco anyconnect vpn client
I recently upgraded from Ubuntu 17.04 to 17.10. Unfortunately it affected quite a few configurations of the OS and of applications too.
I used a customized VPN client by Cisco Systems called 'AnyConnect'. You download it personalized and preconfigured from the university's website. After upgrading the OS it does not start at all. Any hints or ideas?
Since other configurations are affected as well, reversing the upgrade is also an option.
A:
I had the same problem.
sudo apt install libpangox-1.0-0
Fixed it for me.
For more information please have a look at my blog.
| {
"pile_set_name": "StackExchange"
} |
Q:
Slider with Progress bar
I'm developing a Windows Phone app that plays audio from the web, and I have a (normal) Slider that shows the progress of the audio position.
I want to add something like a progress indicator for the audio buffering process.
Can I make the right (gray) part of the slider invisible?
Is there any built-in control?
A:
You can use the ProgressBar class to add a progress bar to your
application. However, if you plan on adding an indeterminate progress
bar, using ProgressBar can decrease the performance of your
application. This is because the current implementation of ProgressBar
runs an indeterminate progress bar on the UI thread, rather than the
compositor thread. Instead, you can use the
CustomIndeterminateProgressBar sample to add an indeterminate progress
bar that runs on the compositor thread for better performance. This
topic describes how the sample works, and how you can use it to add an
indeterminate progress bar to your application.
Please see this article.
To hide a part of the control, you can set its Visibility to either Visible or Collapsed.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to do HTTP Basic Auth using angular
I'm building an Ionic app with a Django REST framework backend, and I can't do simple HTTP basic auth.
backend view:
class GetActive(APIView):
permission_classes = (permissions.IsAuthenticated,)
def get(self, request):
settings = Setting.objects.filter(active=True)
for setting in reversed(settings):
headers = {'Access-Control-Allow-Origin': '*'}
return Response({
'youtube_link': setting.youtube_link,
'text': setting.text}, headers=headers)
return HttpResponse('not found')
frontend api.ts:
@Injectable()
export class ApiProvider {
url;
constructor(public http: Http) {
this.url = 'http://127.0.0.1:8000/get_active/';
}
getSettings() {
var auth = window.btoa("foo:bar"),
headers = {"Authorization": "Basic " + auth};
return this.http.get(this.url, {headers: headers}).map(res => res.json());
}
}
I'm getting this error:
403 forbidden No 'Access-Control-Allow-Origin' header is present on the requested resource.
However, if I remove the IsAuthenticated permission on the backend and remove the headers from the frontend request, then it's working.
To be confident that it is indeed working when IsAuthenticated is on, I made this Python script:
import requests
from requests.auth import HTTPBasicAuth
theurl = 'http://localhost:8000/get_active'
username = 'foo'
password = 'bar'
r = requests.get(theurl, auth=HTTPBasicAuth(username, password))
print (r.text)
And it is working fine, so I just need the JS analog.
A:
Add CORS to the server.
pip install django-cors-headers and enable it in your Django settings so the Access-Control-Allow-Origin header is added to responses.
Maybe this helps.
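A minimal sketch of the settings changes (this is the classic setup; exact setting names can vary between django-cors-headers versions):
# settings.py
INSTALLED_APPS = [
    # ...
    'corsheaders',
]

MIDDLEWARE = [  # MIDDLEWARE_CLASSES on older Django versions
    'corsheaders.middleware.CorsMiddleware',  # place it as high as possible
    # ... the rest of your middleware ...
]

CORS_ORIGIN_ALLOW_ALL = True  # or CORS_ORIGIN_WHITELIST for specific origins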
| {
"pile_set_name": "StackExchange"
} |
Q:
Loading object from JSON file into javascript
How do I load an object into javascript if it is available in a json file?
I have the following script in my html:
<script src='scene.json'></script>
<script>
var x = scene.x;
</script>
And this is the file scene.json, located in the same folder:
{"scene": {
"x": 0,
"y": 0,
"w": 11000,
"h": 3500,
}}
But the json file is not loaded properly (unexpected token ':') and the scene.x reference is also probably not the way it should be done. Is it possible to refer to the data directly? Or does it need to be loaded by some http request?
A:
Modify this to javascript:
var scene = {
"x": 0,
"y": 0,
"w": 11000,
"h": 3500
};
Or use jQuery api and function getJSON
<script>
var scene={};
$.getJSON('scene.json', function(data) {
scene=data;
});
</script>
A:
{"scene": {
"x": 0,
"y": 0,
"w": 11000,
"h": 3500
}}
This is invalid javascript (because it's treated as a block), you probably just want a javascript file:
var scene = {
"x": 0,
"y": 0,
"w": 11000,
"h": 3500
};
If you want to keep the file as JSON, you cannot reference it from a script element and have it work while remaining valid JSON. You would need to use an ajax request to fetch the file and parse the JSON.
A:
assign your JSON data to a variable, like
data = {"scene": {
"x": 0,
"y": 0,
"w": 11000,
"h": 3500
}
}
then access it as
data.scene.x //it will give 0
| {
"pile_set_name": "StackExchange"
} |
Q:
JAVA - how to modify SWT UI while looping in thread
I'm trying to implement a desktop app that loops through a function and sets a text field on the UI every 1 sec.
But I either get
org.eclipse.swt.SWTException: Invalid thread access
when I don't use the display
or the UI is really sluggish when I do
display.asyncExec(new Runnable() {
My code looks like this:
public void open() {
Display display = Display.getDefault();
shell = new Shell();
shell.setSize(486, 322);
shell.setText("SWT Application");
Button btnStartLoop = new Button(shell, SWT.NONE);
btnStartLoop.addMouseListener(new MouseAdapter() {
@Override
public void mouseDown(MouseEvent e) {
SwingUtilities.invokeLater(new Runnable() {
public void run() // updates displayArea
{
while (true) {
try {
text.setText("Text has been set");
Thread.sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
});
}
});
btnStartLoop.setBounds(35, 30, 75, 25);
btnStartLoop.setText("Start Loop");
text = new Text(shell, SWT.BORDER);
text.setText("0");
text.setBounds(116, 32, 177, 21);
shell.open();
shell.layout();
while (!shell.isDisposed()) {
if (!display.readAndDispatch()) {
display.sleep();
}
}
}
Is there any way this can be overcome?
A:
You must never sleep in the UI thread. You must use a new Thread for the background activity. You call Display.asyncExec from within the thread to run the UI update code in the UI thread.
btnStartLoop.addMouseListener(new MouseAdapter() {
@Override
public void mouseDown(final MouseEvent e) {
final Thread background = new Thread(new Runnable() {
public void run()
{
while (true) {
try {
Display.getDefault().asyncExec(() -> text.setText("Text has been set"));
Thread.sleep(1000);
} catch (final InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
});
background.start();
}
});
Note: SwingUtilities is for Swing applications, do not use it in SWT apps.
You can also use the timerExec method of Display to run code after a specific delay which avoids the need for a background thread.
btnStartLoop.addMouseListener(new MouseAdapter() {
@Override
public void mouseDown(final MouseEvent e) {
final Runnable update = new Runnable() {
public void run()
{
text.setText("Text has been set");
Display.getDefault().timerExec(1000, this);
}
};
Display.getDefault().timerExec(0, update);
}
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Verifying password_hash() in PDO prepared statements
I'm trying to use the bcrypt algorithm for hashing the passwords but I've ran into a couple of problems. First of all, I can't find the appropriate spot to check whether password_verify() returns true.
$admin = $_POST['admin-user'];
$pass = $_POST['admin-pass'];
$password_hash = password_hash($pass, PASSWORD_BCRYPT);
if (isset($admin)&&isset($pass)&&!empty($admin)&&!empty($pass)) {
$admin_select = $link->prepare("SELECT `id` FROM `admins` WHERE `username` = :admin");
$admin_passwd = $link->prepare("SELECT `password` FROM `admins` WHERE `username` = :admin_pw");
$admin_passwd->execute(array(':admin_pw' => $admin));
$admin_pwd = $admin_passwd->fetch(PDO::FETCH_ASSOC);
if (password_verify($pass, $admin_pwd)){
if ($admin_select->execute(array(':admin' => $admin))) {
$res = $link->query('SELECT COUNT(*) FROM requests');
$query_num_rowz = $res->fetchColumn();
if ($query_num_rowz == 0) {
echo 'No records found';
} else if ($query_num_rowz > 0) {
$query = $link->prepare("SELECT id FROM admins WHERE username = :admin");
$query->execute(array(':admin' => $admin));
$admin_id = $query->fetch(PDO::FETCH_ASSOC);
$_SESSION['admin_id'] = $admin_id;
header('Location: index.php');
}
}
}
}
Second of all, I'm not sure this is the right way to select the user's password.
$admin_passwd = $link->prepare("SELECT `password` FROM `admins` WHERE `username` = :admin_pw");
$admin_passwd->execute(array(':admin_pw' => $admin));
$admin_pwd = $admin_passwd->fetch(PDO::FETCH_ASSOC);
A:
Since you didn't put ->fetch in a loop, the single invocation will return a single row as an associative array. You must access the proper index first (in this case password). Then compare that row value (assuming it is already hashed) with the user input inside password_verify. Rough example:
if(!empty($_POST['admin-user']) && !empty($_POST['admin-pass'])) {
$admin = $_POST['admin-user'];
$pass = $_POST['admin-pass'];
$admin_info = $link->prepare("SELECT `password` FROM `admins` WHERE `username` = :admin_user");
$admin_info->execute(array(':admin_user' => $admin));
$row = $admin_info->fetch(PDO::FETCH_ASSOC);
if(!empty($row)) {
// check if the hashed row password
if(password_verify($pass, $row['password'])) {
// okay
}
} else {
// not found
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
UIAlertView is Deprecated how can i fix this code as UIAlertController? I can't figure out how to use switch statement in UIAlertController
I've tried several ways to fix this deprecated code but nothing helped me. I'm new to Objective-C and iOS, please help me fix this... It's working fine in iOS 8 but not in iOS 9.
- (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex
{
[super alertView:alertView clickedButtonAtIndex:buttonIndex];
if (buttonIndex)
{
switch (alertView.tag)
{
case kAccessAddressBook:
{
[self displayFindFriendView:[NSNumber numberWithInteger: CS_CONTACTS ]];
}
break;
case kFindFriendEmail:
{
}
break;
case kLogout:
{
// Hit Logout API
[self userLogout];
}
break;
case kClearSearchHistory:
{
// Clear Search History Data base.
[[CSCoreDataHandler sharedInstance] deleteManagedObjectsInModel:@"CSRecentSearch"];
}
break;
default:
break;
}
}
}
A:
UIAlertView is deprecated in iOS 8, so we need to use UIAlertController.
UIAlertController * alert = [UIAlertController
alertControllerWithTitle:@"Title"
message:@"Message"
preferredStyle:UIAlertControllerStyleAlert];
UIAlertAction *actionAccessAddressbook = [UIAlertAction
actionWithTitle:@"Access"
style:UIAlertActionStyleDefault
handler:^(UIAlertAction * action) {
[self displayFindFriendView:[NSNumber numberWithInteger: CS_CONTACTS ]];
}];
UIAlertAction *actionFindFriendEmail = [UIAlertAction
actionWithTitle:@"Find Friend Email"
style:UIAlertActionStyleDefault
handler:^(UIAlertAction * action) {
//...Do your stuff here
}];
UIAlertAction *actionLogout = [UIAlertAction
actionWithTitle:@"Logout"
style:UIAlertActionStyleDefault
handler:^(UIAlertAction * action) {
[self userLogout];
}];
UIAlertAction *actionClearSearchHistory = [UIAlertAction
actionWithTitle:@"ClearSearchHistory"
style:UIAlertActionStyleDefault
handler:^(UIAlertAction * action) {
[[CSCoreDataHandler sharedInstance] deleteManagedObjectsInModel:@"CSRecentSearch"];
}];
[alert addAction:actionAccessAddressbook];
[alert addAction:actionFindFriendEmail];
[alert addAction:actionLogout];
[alert addAction:actionClearSearchHistory];
[self presentViewController:alert animated:YES completion:nil];
| {
"pile_set_name": "StackExchange"
} |
Q:
Where to declare class constants?
I'm using class members to hold constants. E.g.:
function Foo() {
}
Foo.CONSTANT1 = 1;
Foo.CONSTANT2 = 2;
This works fine, except that it seems a bit unorganized, with all the code that is specific to Foo lying around in global scope. So I thought about moving the constant declaration to inside the Foo() declaration, but then wouldn't that code execute every time Foo is constructed?
I'm coming from Java where everything is enclosed in a class body, so I'm thinking JavaScript might have something similar to that or some work around that mimics it.
A:
All you're doing in your code is adding properties named CONSTANT1 and CONSTANT2, with the values 1 and 2, to the Function object named Foo.
I'm not too familiar with other languages, but I don't believe javascript is able to do what you seem to be attempting.
None of the properties you're adding to Foo will ever execute. They're just stored in that namespace.
Maybe you wanted to prototype some property onto Foo?
function Foo() {
}
Foo.prototype.CONSTANT1 = 1;
Foo.prototype.CONSTANT2 = 2;
Not quite what you're after though.
A:
You must make your constants like you said :
function Foo() {
}
Foo.CONSTANT1 = 1;
Foo.CONSTANT2 = 2;
And you access like that :
Foo.CONSTANT1;
or
anInstanceOfFoo.__proto__.constructor.CONSTANT1;
All other solutions allocate another chunk of memory when you create another object, so it's not a constant. You should not do that:
Foo.prototype.CONSTANT1 = 1;
A:
IF the constants are to be used inside of the object only:
function Foo() {
var CONSTANT1 = 1,CONSTANT2 = 2;
}
If not, do it like this:
function Foo(){
this.CONSTANT1=1;
this.CONSTANT2=2;
}
It's much more readable and easier to work out what the function does.
| {
"pile_set_name": "StackExchange"
} |
Q:
Move files into year/month folders based on file name timestamp powershell
I have thousands of files spanning 5 years which I would like to move into year/month folders. The file names all end with
_yyyy_mm_dd_wxyz.dat
I'm looking for ideas on how I can generate such file folders and move the files into the appropriate folders yyyy/mm using the windows command shell.
A:
You'll need a Regular Expression with (capture groups) to extract year/month from the filename.
Assuming the year/month folder should be placed directly in files parent location.
untested with -Version 2
## Q:\Test\2018\07\23\SO_51485727.ps1
Push-Location 'x:\folder\to\start'
Get-ChildItem *_*_*_*_*.dat |
Where-Object {$_.BaseName -match '_(\d{4})_(\d{2})_\d{2}_[a-z]+$'} | ForEach-Object {
$TargetDir = "{0}\{1}" -f $Matches[1],$Matches[2]
if (!(Test-Path $TargetDir)){MD $TargetDir | Out-Null}
$_ | Move -Destination $TargetDir
}
Sample tree /f after running the script on my ramdrive:
PS A:\> tree /F
A:.
├───2017
│ └───07
│ test_2017_07_24_xyz.dat
└───2018
└───07
test_2018_07_24_xyz.dat
| {
"pile_set_name": "StackExchange"
} |
Q:
Numpy array - stack multiple columns into one using reshape
For a 2D array like this:
table = np.array([[11,12,13],[21,22,23],[31,32,33],[41,42,43]])
Is it possible to use np.reshape on table to get an array single_column where each column of table is stacked vertically? This can be accomplished by splitting table and combining with vstack.
single_column = np.vstack(np.hsplit(table, table.shape[1]))
Reshape can combine all the rows into a single row, I'm wondering if it can combine the columns as well to make the code cleaner and possibly faster.
single_row = table.reshape(-1)
A:
You can transpose first, then reshape:
table.T.reshape(-1, 1)
array([[11],
[21],
[31],
[41],
[12],
[22],
[32],
[42],
[13],
[23],
[33],
[43]])
| {
"pile_set_name": "StackExchange"
} |
Q:
Reflecting on Java
I just messed up an opportunity, failing to answer 2 questions. I still do not know the answers, so looking for them:
[1] You have a Java class with private variables and no getter/setter methods. How do you modify such variables?
My answer: You cannot do it, private variables cannot be accessed from outside.
Interviewer: Correct answer is "Using reflection".
[2] Which reflection methods do you use to do the above?
My answer: I am not sure.
Interviewer: Good bye.
From my experience, I'd (1) Check if the class exists (2) Create an instance (3) Check if a method exists (4) Invoke the method (5) Carry on with the instance of class. Of course, I'd catch Exceptions like ClassNotFound and MethodInvocation. But is there a trick to modify private variables? And do people do this? TIA.
A:
Given this:
You have a Java class with private variables and no getter/setter
methods. How do you modify such variables?
my response would be that you don't need specific setters/getters and you'd just modify them in other non-specific methods. Setters/getters can be viewed in many cases as exposing the implementation.
In order to make a field accessible, you have to call Field.setAccessible().
It's the sort of topic I don't know off the top of my head, and have to look up if/when I use it (I can't remember when I last used it). For an interviewer to be so hung up on this seems a little unusual.
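As an illustration, here is a minimal sketch of the reflection calls involved (the class and field names are made up):
import java.lang.reflect.Field;

class Secret {
    private int value = 42; // no getter/setter
}

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        Secret s = new Secret();
        Field f = Secret.class.getDeclaredField("value");
        f.setAccessible(true);           // bypass the private modifier
        f.setInt(s, 7);                  // modify the private field
        System.out.println(f.getInt(s)); // prints 7
    }
}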
| {
"pile_set_name": "StackExchange"
} |
Q:
Path of Diffeomorphisms Fixing the Boundary
Could you please let me know if the following is true. The problem came up while constructing a solution of a PDE. I have browsed through the net for an answer. While I came across some articles regarding the identity component of the diffeomorphism group, with my poor geometry and topology I could not really figure out what really is happening. Any help in this direction is appreciated.
QUESTION:
Let $n\geqslant 2$, $k\geqslant 1$, $\Omega\subset\mathbb{R}^n$ be open, bounded, smooth, simply connected with $\partial\Omega$ connected. Let $u:\overline{\Omega}\to\overline{\Omega}$ be a diffeomorphism of class $C^k$ satisfying
$$\det(\nabla u)>0\text{ in }\overline{\Omega}\text{ and }u(x)=x\text{ on }\partial\Omega.$$
Does there exist $H\in C^k\left([0,1]\times\overline{\Omega};\mathbb{R}^n\right)$ satisfying
$H(0,\cdot)=\text{Id}$ in $\overline{\Omega}$.
$H(1,\cdot)=u$ in $\overline{\Omega}$.
$\det(\nabla H(t,\cdot))>0\text{ in }\overline{\Omega}$, for all $t$.
$H(t,x)=x\text{ on }\partial\Omega$, for all $t$.
Note that, 3 and 4 imply that $H(t,\cdot)$ is a diffeomorphism of $\overline{\Omega}$.
If the result is negative in general, it will be great to have an explicit counterexample. It will also be good to know some cases, if any, when the result is positive.
A:
A diffeomorphism for which there exists such an H is called an isotopy (relative to the boundary).
This has been much studied in the case where Omega is the unit ball in R^n. Your question is well-known as: is pi_0 Diff(D^n, \partial D^n) = 0?
The answer is positive for the unit balls in R^2 (Smale 1958) and R^3 (Cerf's famous "Gamma4=0" in 1969). This answers your question positively for n=2 and 3, since in these dimensions, every simply connected compact domain with smooth connected boundary is diffeomorphic to the n-ball.
For the compact unit ball in R^4, your question is a big open question.
The answer to your question is widely negative in large dimensions.
The answer to your question is negative for the unit ball in R^6, as discovered by John Milnor in 1959. It is linked to the existence of "exotic spheres".
There are many more counterexamples in large dimensions; see for example Hatcher's survey "A 50-Year View of Diffeomorphism Groups". He writes "\pi_0 of Diff(D^n, \partial D^n) is not zero for most n ≥ 5. However it is zero for n = 5, 11, 60." It seems that no exception other than 2, 3, 5, 11, 60 is known.
| {
"pile_set_name": "StackExchange"
} |
Q:
Border goes over two rowspans
I will show you my problem with an example; here I use one column with a rowspan:
<table border="1" style="width:300px">
<tr>
<td rowspan="2">Familie</td>
<td id="jill">Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td id="eve">Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>
Somehow when I add the CSS:
border-left: 1px solid red;
To #jill the two rows get a red border: http://jsfiddle.net/hPBds/16/
When I add this CSS to #eve it works how it should; only one border gets this color: http://jsfiddle.net/hPBds/17/
Can somebody tell me why this occurs and how I can fix it? Thanks
A:
It's the table's border-collapse property. http://www.w3schools.com/cssref/pr_border-collapse.asp
It's set to collapse, which is merging the borders for #jill and the Familie td.
Set the table's border-collapse CSS to separate and that should solve the problem. Though now you'll have borders on everything else (visibly, borders twice as thick).
<table border="1" style="width: 300px; border-collapse: separate;">
Here's a jsfiddle.
| {
"pile_set_name": "StackExchange"
} |
Q:
Pray hurriedly as soon as time comes for mincha/maariv arrives or wait but pray with more kavanah
Suppose one works in a non-Jewish office environment with rooms that can be used for prayer.
Should one wait till one arrives at home and pray Mincha there at leisure or do it at work earlier when the preferred time arrives but hurriedly?
Also, one may not arrive home before sunset, and in those cases, end up missing Mincha entirely.
A:
I think you are asking two questions
Is it better to pray at a less than ideal time or after the limit?
Is it better to pray with less kavana but at the ideal time or with more kavana at a "less than ideal" time?
The answer to the first question is that, clearly, it is better to pray before the limit (most often understood as the shekia).
The Gmara in Brakoht 29b says explicitly that one shouldn't delay minha up to the last minute for fear of missing the time (see Rashi there).
Rav David Brofsky cites the Mishna Berura as preferring that one prays individually without a minyan rather than praying with a minyan after shekia. Rav Ovadya Yosef (Yechavveh Da'at 5:22) and others (see Piskei Teshuvot 233:6) disagree but that doesn't apply to your case. See there for many relevant sources to the time of minha.
The answer to your second question is less clear to me. Kavana is very important in prayer, and one finds sources saying it is better to pray alone with more kavana than in a minyan with less kavana (Yabia Omer OC 4:9). So delaying beyond the additional time, provided you don't run the risk of missing the time limit, might be better if it allows you to pray with more kavana.
But as usual with individual halakhic questions, best is to ask a rav that knows you and the halakha well.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to display an UIView above NavigationController and UITabbar?
I am trying to create a custom Actionsheet controller (as I want it to be similar to action sheet even on the iPad). I am using XIB's in my project to load views.
I have a View Controller A on which I want to show a custom Table view from bottom (Just to mimic the actionSheet behavior). But when I do so by adding it to custom View in my View Controller A, the view stays below the NavigationController and UITababar of the that View controller.
let vc = CustomActionTableViewController(nibName: "CustomActionTableViewController", bundle: nil)
self.addChildViewController(vc)
self.actionView.addSubview(vc.view)
vc.didMove(toParentViewController: self)
let horzConstraints = NSLayoutConstraint.constraints(withVisualFormat: "H:|[childView]|", options: [], metrics: nil, views: ["childView": vc.view])
let vertConstraints = NSLayoutConstraint.constraints(withVisualFormat: "V:|[childView]|", options: [], metrics: nil, views: ["childView": vc.view])
view.addConstraints(horzConstraints)
view.addConstraints(vertConstraints)
vc.view.translatesAutoresizingMaskIntoConstraints = false
I want to add this to a view so that I can achieve transparent background even over Navigation controller and UITabbar.
How can I achieve this? And furthermore, how is it possible to create card-like bottom views such as the one used for AirPods connectivity? Can this kind of view be mimicked using Apple's UIKit or any library?
A:
You should present this ViewController with modalPresentationStyle as .overFullScreen
let vc = CustomActionTableViewController(nibName: "CustomActionTableViewController", bundle: nil)
vc.modalPresentationStyle = .overFullScreen
self.present(vc, animated: true, completion: nil)
You can set the background color of CustomActionTableViewController's view as black and change alpha value to set the transparency.
| {
"pile_set_name": "StackExchange"
} |
Q:
Converting Char "1" to hex "4'h0001"
I am trying to convert a string to the same value in hex. Eg
If string="abc123" it's hex value should also be 24'habc123.
The length of string is 64 and so I convert each char to hex,I'll get 64*4=256 bits of hex value
eg if text_o_cplus.getc(i)=a i.e 97,I should assign it a hex value a i'e 4'ha;
But I am not getting proper way to do that. That's what I tried
int j=255;
for(int i=text_o_cplus.len();i>0;i--)
begin
while(j>=0)
begin
case(text_o_cplus.getc(i))
48: begin
rev_result[j]=4'b0000;
j=j-4;
break;
end
and so on.
rev_result is defined as
bit [64][4] rev_result;
I tried to define it as
bit [255:0] rev_result
also but it didn't work.
Can someone please suggest how I can achieve this?
A:
Use the built-in function:
bit [255:0] rev_result = text_o_cplus.atohex();
| {
"pile_set_name": "StackExchange"
} |
Q:
Continuous Build Integration with SourceSafe and Batch Files
I want to create a continuous build integration system for .NET using just Windows batch files and Visual Source Safe.
I've come up with the following batch file so far -
set ssdir=\\xxxx\vss
cd d:\mydir
"C:\Program Files\Microsoft Visual SourceSafe\ss.exe" diff "$/sourcedir" -R -Q > diffout.txt
This will spit out a file containing lines like "SourceSafe files different from local files" when a change has been made.
My challenge is to figure out if those lines are in the file, then do a get and kick off MSBuild if they are. I'd then schedule the batch file to run every 10 minutes or so.
Anyone got any thoughts on how to do that? Or any other ways of doing continuous build integration without downloading a complicated build automation system?
Update: Happy to use cscript or powershell too, though not really familiar with those environments. My main aim is to avoid installing 3rd party software
A:
Hudson is not a very complicated thing to get running. Even I managed to get it working in a short amount of time.
And while you're at it, replace sourcesafe...
A:
cmd.exe is a dinosaur. Here's a PowerShell version.
Set-Alias ss 'C:\Program Files\Microsoft Visual SourceSafe\ss.exe'
Set-Alias msbuild 'C:\Windows\Microsoft.Net\Framework64\v3.5\msbuild.exe'
cd d:\mydir
$diffs = ss diff '$/sourcedir' -R -Q
if ($diffs -match 'SourceSafe files different') {
msbuild blah
}
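If you do want to stay with plain batch files as originally asked, findstr can do the check for you. A rough, untested sketch (mysolution.sln is a placeholder for your real build target):
cd /d d:\mydir
"C:\Program Files\Microsoft Visual SourceSafe\ss.exe" diff "$/sourcedir" -R -Q > diffout.txt
findstr /C:"SourceSafe files different" diffout.txt >nul
if %errorlevel%==0 (
    rem Differences found: get the latest sources and build
    "C:\Program Files\Microsoft Visual SourceSafe\ss.exe" get "$/sourcedir" -R -Q
    "C:\Windows\Microsoft.Net\Framework64\v3.5\msbuild.exe" mysolution.sln
)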
| {
"pile_set_name": "StackExchange"
} |
Q:
Match pattern in file and use found strings to extract lines containing the strings in another file, in GNU/Linux
I am a newbie to the awesome world of shell scripting.
File b.txt contains error codes in comma separated text format. The error codes can be matched with this pattern - '[A-Z]\{2\}-[A-Z0-9]\{4\}'. Example of b.txt
LO-5645,SE-DH68,MY-2255,MI-9878,SY-FC25,
ER-55R8,LO-5645,
EU-1C07,ER-9871,EY-5523,MM-2564,
FO-D389,XU-2659,EU-1568,
etc etc....
File a.txt contains in each line a error code and a description of the error code. Example of a.txt:
EU-1568: system not initializing
ER-55R8: fatal error on platform xx22
MM-2564: Driver not initialized
LO-24DE: Lot failed
SY-FC25: System error on domain
etc etc.....
I want to combine the info in these two files to create a file c.txt that contains the comma-separated errors from b.txt along with the description of each error code taken from a.txt.
Example of intended result in c.txt
LO-5645,SE-DH68,MY-2255,MI-9878,SY-FC25: System error on domain,
ER-55R8: fatal error on platform xx22,LO-5645,
EU-1C07,ER-9871,EY-5523,MM-2564: Driver not initialized,
FO-D389,XU-2659,EU-1568: system not initializing,
etc etc...
My idea to approach this problem: I was trying to use a while loop to read b.txt line by line, and use grep -o to match exactly the pattern of the error codes into an array variable. Then, using an inner for loop, I try to read one element of this array at a time and match lines containing the error code in a.txt.
It would be great if I can get some of your ideas on how I can approach this solution in a better way.
Awk, Sed, grep, perl, cut are all welcome.
A:
Looks for exact match
awk -F'[,:]' -v OFS=',' '
FNR==NR{error[$1]=$NF;next}
{
for(i=1; i<=NF;i++)if($i in error)$i=$i":"error[$i]
}1' a.txt b.txt >c.txt
Explanation
awk -F'[,:]' -v OFS=',' ' # Call awk, set input field sep
# , and : awk supports multiple field sep
# and output field sep as comma
# Here we read file a.txt
FNR==NR{ # this is true when awk reads first file
# When awk reads from the multiple input file,
# NR variable will give the total number
# of records relative to all the input file.
# FNR will give you number of records
# for each input file.
error[$1]=$NF; # populate array named error
# such that array index is col1
# and array value is last field of record
# NF gives no of fields in current record
next # The next statement forces awk to immediately
# stop processing the current record and
# go on to the next record
}
# Here we read file b.txt
{
      # NF gives the number of fields in the current record,
# start loop from first field/column to last field/column( NF )
# increment by 1
for(i=1; i<=NF;i++)
# check if column value exists in array error
if($i in error)
# if above if statement is true, then we
# have error description so
# modify current column
# current column = current column : and your description
# which exists in error array
$i=$i":"error[$i]
}1 # 1 at the end does the default operation: print $0 (print the current row/record)
' a.txt b.txt >c.txt
Input
$ cat a.txt
EU-1568: system not initializing
ER-55R8: fatal error on platform xx22
MM-2564: Driver not initialized
LO-24DE: Lot failed
SY-FC25: System error on domain
etc etc.....
$ cat b.txt
LO-5645,SE-DH68,MY-2255,MI-9878,SY-FC25,
ER-55R8,LO-5645,
EU-1C07,ER-9871,EY-5523,MM-2564,
FO-D389,XU-2659,EU-1568,
etc etc....
Output
$ awk -F'[,:]' -v OFS=',' '
FNR==NR{error[$1]=$NF;next}
{
for(i=1; i<=NF;i++)if($i in error)$i=$i":"error[$i]
}1' a.txt b.txt
LO-5645,SE-DH68,MY-2255,MI-9878,SY-FC25: System error on domain,
ER-55R8: fatal error on platform xx22,LO-5645,
EU-1C07,ER-9871,EY-5523,MM-2564: Driver not initialized,
FO-D389,XU-2659,EU-1568: system not initializing,
etc etc....
| {
"pile_set_name": "StackExchange"
} |
Q:
Eager loading child and child-of-child collections in NHibernate
I've got a problem with NHibernate trying to load a small hierarchy of data. My domain model looks like:
class GrandParent
{
int ID{get;set;}
IList<Parent> Parents {get; set;}
}
class Parent
{
IList<Child> Children {get; set;}
}
class Child
{
}
and I would like to eager load all parents and children for a given GrandParent. This Linq-to-NH query creates the correct SQL and loads the GrandParent as expected: (the example assumes the grandparent has 2 parents who each have 2 child objects - so 4 child objects in total).
var linq = session.Linq<GrandParent>();
linq.Expand("Parents");
linq.Expand("Parents.Children");
linq.QueryOptions.RegisterCustomAction(c =>
c.SetResultTransformer(new DistinctRootEntityResultTransformer()));
var grandparent = (from g in linq
                   where g.ID == 1
                   select g).ToList();
Assert(grandparent.Count == 1); //Works
Assert(grandparent[0].Parents.Count == 2); //Fails - count = 4!
The grandparent.Parents collection contains 4 items, 2 of which are duplicates. It seems the DistinctRootEntityResultTransformer only works on collections 1 level deep, so the Parents collection is duplicated depending on how many Child objects each parent has.
Is it possible to get NH to only include the distinct Parent objects?
Thanks very much.
A:
If your mapping is set to FetchType.Join, try changing it to FetchType.Select.
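For example, with Fluent NHibernate the collection mapping might look roughly like this (just a sketch; class and property names are taken from the question, the rest depends on how your mappings are actually defined):
public class GrandParentMap : ClassMap<GrandParent>
{
    public GrandParentMap()
    {
        Id(x => x.ID);
        // Fetch the collection with a separate SELECT instead of a JOIN,
        // so the root rows are not multiplied by the number of children
        HasMany(x => x.Parents).Fetch.Select();
    }
}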
| {
"pile_set_name": "StackExchange"
} |
Q:
Conditional parameter in Cruisecontrol.net
Is there any way to mix up the 'AND' and 'OR' operators in Cruisecontrol.net 1.6? My if condition goes like this:
if ((A="a" && a="a") || (B="b" && b="b"))
{
//Task to be done
}
Same thing when written in CC (The OR part):
<conditional>
<conditions>
<orCondition>
<conditions>
<compareCondition value1="A" evaluation="equal" value2="a" />
<compareCondition value1="B" evaluation="equal" value2="b" />
</conditions>
</orCondition>
</conditions>
<tasks>
<!--Task to be done-->
</tasks>
</conditional>
and when written in CC with the AND part:
<conditional>
<conditions>
<andCondition>
<conditions>
<compareCondition value1="a" evaluation="equal" value2="a" />
<compareCondition value1="b" evaluation="equal" value2="b" />
</conditions>
</andCondition>
</conditions>
<tasks>
<!--Task to be done-->
</tasks>
</conditional>
I want to write both of these as a single conditional operation. Is it possible?
A:
Well, I figured it out myself... :)
<conditional>
<conditions>
<orCondition>
<conditions>
<andCondition>
<conditions>
<compareCondition value1="A" evaluation="equal" value2="a" />
<compareCondition value1="a" evaluation="equal" value2="a" />
</conditions>
</andCondition>
<andCondition>
<conditions>
<compareCondition value1="B" evaluation="equal" value2="b" />
<compareCondition value1="b" evaluation="equal" value2="b" />
</conditions>
</andCondition>
</conditions>
</orCondition>
</conditions>
<tasks>
<!--Task to be done-->
</tasks>
</conditional>
| {
"pile_set_name": "StackExchange"
} |
Q:
Looking for a better design: A readonly in-memory cache mechanism
I have a Category entity (class), which has zero or one parent Category and many child Categories -- it's a tree structure. The Category data is stored in a RDBMS, so for better performance, I want to load all categories and cache them in memory while launching the applicaiton.
Our system can have plugins, and we allow the plugin authors to access the Category Tree, but they should not modify the cached items and the tree (I think a non-readonly design might cause some subtle bugs in this scenario); only the system knows when and how to refresh the tree.
Here are some demo codes:
public interface ITreeNode<T>
where T : ITreeNode<T>
{
// No setter
T Parent { get; }
IEnumerable<T> ChildNodes { get; }
}
// This class is generated by O/R Mapping tool (e.g. Entity Framework)
public class Category : EntityObject
{
public string Name { get; set; }
}
// Because Category is not stateless, so I create a cleaner view class for Category.
// And this class is the Node Type of the Category Tree
public class CategoryView : ITreeNode<CategoryView>
{
public string Name { get; private set; }
#region ITreeNode Memebers
public CategoryView Parent { get; private set; }
private List<CategoryView> _childNodes;
    public IEnumerable<CategoryView> ChildNodes {
        get { return _childNodes; }
    }
#endregion
public static CategoryView CreateFrom(Category category) {
// here I can set the CategoryView.Name property
}
}
So far so good. However, I want to make ITreeNode interface reuseable, and for some other types, the tree should not be readonly. We are not able to do this with the above readonly ITreeNode, so I want the ITreeNode to be like this:
public interface ITreeNode<T> {
// has setter
T Parent { get; set; }
// use ICollection<T> instead of IEnumerable<T>
ICollection<T> ChildNodes { get; }
}
But if we make the ITreeNode writable, then we cannot make the Category Tree readonly, it's not good.
So I think if we can do like this:
public interface ITreeNode<T> {
T Parent { get; }
IEnumerable<T> ChildNodes { get; }
}
public interface IWritableTreeNode<T> : ITreeNode<T> {
new T Parent { get; set; }
new ICollection<T> ChildNodes { get; }
}
Is this good or bad? Are there some better designs? Thanks a lot! :)
A:
One thing you can try is to use List<T> for your IEnumerable items that you want to be read only. Then when you populate your tree structure you can internally call the AsReadOnly() method on your list, which will return a ReadOnlyCollection<T>, and the consumers of your data would not be able to modify the contents of the collection.
This approach isn't ReadOnly from the interface's point of view, but an attempt to call a method like Add on the collection would fail and throw an exception.
To protect other members, you can add a private read-only flag to the class implementing ITreeNode, then set that flag to read-only on the cached items.
Something like this...
public class TreeNode : ITreeNode
{
private bool _isReadOnly;
private List<ITreeNode> _childNodes = new List<ITreeNode>();
public TreeNode Parent { get; private set; }
public IEnumerable<ITreeNode> ChildNodes
{
get
{
return _isReadOnly ? _childNodes.AsReadOnly() : _childNodes;
}
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
ImportError: cannot import name 'normalize_data_format'
I have read an article Here and it's nice enough to understand, given its implementation on GitHub. When I try to train on my own using the given code, it gives me an ImportError in this file at line 117, as shown below. I am using the Google Colab environment. After searching about the error, I found that the import line is supposed to be compatible with keras version 2.2.2. I also installed that version, yet the error is not solved. Please help me get past it. The keras version installed by default in Colab is 2.2.4.
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-47-f8ce7e15cf87> in <module>()
9 from keras.layers.merge import Add
10 from keras.utils import conv_utils
---> 11 from keras.utils.conv_utils import normalize_data_format
12
13 from keras.layers.core import Dropout
ImportError: cannot import name 'normalize_data_format'
---------------------------------------------------------------------------
A:
https://github.com/keras-team/keras/blob/master/keras/utils/conv_utils.py
The master branch's conv_utils doesn't have normalize_data_format;
some of the other branches do have it, such as the tf-keras branch.
It is a trivial function; here is its implementation:
import keras.backend as K
def normalize_data_format(value):
if value is None:
value = K.image_data_format()
data_format = value.lower()
if data_format not in {'channels_first', 'channels_last'}:
raise ValueError('The `data_format` argument must be one of '
'"channels_first", "channels_last". Received: ' +
str(value))
return data_format
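A quick check that the helper behaves like the original import (assuming the function above is defined in scope and keras is installed):
print(normalize_data_format(None))              # e.g. 'channels_last', taken from K.image_data_format()
print(normalize_data_format('channels_first'))  # 'channels_first'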
| {
"pile_set_name": "StackExchange"
} |
Q:
AJAX and PHP: how to stop script being cached?
Problem:
How do I include a token? My show_aht.php is being cached. I have to refresh manually my show_aht.php in order to get new data in my map.php when I get on the aht_button to make the AJAX call. Its very frustrating.
When aht_button is clicked it returns data, but if I refresh the page and/or I reclick the button it will still show me the old data or do nothing at all. I have to manually refresh my "show_aht.php" on my browser and then click on "aht_button" so I can display the new data being retrieve from "show_aht.php".
I did not want to post my PHP code because its a lot of stuff.. maybe someone can find the problem because I have no clue. Not sure if we can reload a PHP script by itself? I've put only the important stuff.
thanks in advance!
map.php JS:
<div id="aht">
<button id="aht_button">AHT</button>
</div>
<script type="text/javascript">
$(document).ready(function() {
$('#aht').click(function(){
$.ajax({
type:"GET",
url : "show_aht.php", //use a token here??
data:{ } ,
dataType: 'json',
success : function(data){
//get the MIN value from the array
var min = data.reduce(function(prev, curr) {
return isNaN(+curr['aht_value']) || prev < +curr['aht_value'] ? prev : +curr['aht_value'];
}, 1000000);
alert("min:" + min);
//get the MAX value from the array
var max = data.reduce(function(prev, curr) {
return isNaN(+curr['aht_value']) || prev > +curr['aht_value'] ? prev : +curr['aht_value'];
}, -1000000);
alert("max:" + max);
//function for calculation of background color depending on aht_value
function conv(x){
return Math.floor((x - min) / (max - min) * 255);
}
//function for background color, if NA then show white background, either show from green to red
function colorMe(v){
return v == 'NA' ? "#FFF" : "rgb(" + conv(v) + "," + (255-conv(v)) + ",0)";
}
//going through all DIVs only once with this loop
for(var i = 0; i < data.length; i++) { // loop over results
var divForResult = $('#desk_' + data[i]['station']); // look for div for this object
if(divForResult.length) { // if a div was found
divForResult.html(data[i]['aht_value']).css("background-color", colorMe(data[i]['aht_value']));
}//end if
}//end for
}//end success
});//end ajax
});//end click
});//end rdy
</script>
show_aht.php:
include 'db_conn_retca2003.php';
include 'db_conn_retca2001.php';
header('Content-type: application/json');
/****************************************************
matching USER array and MEMO array
for matching username values
/****************************************************/
$result = array();
foreach ($memo as $username => $memodata) {
if (in_array($username, array_keys($user))) {
// Match username against the keys of $user (the usernames)
$userdata = $user[$username];
//if AHT is null give N/A as value
if (is_null($memodata['aht_value'])) {
$result[] = array( 'username' => $userdata['username'],
'aht_value' => 'NA',
'station' => $userdata['station']
);
}//end inner if
//else give the actual value of AHT without the decimals
else {
$result[] = array( 'username' => $userdata['username'],
'aht_value' => substr($memodata['aht_value'],0,-3),
'station' => $userdata['station']
);
}//end else
}//end outer if
}//end for
echo json_encode($result);
?>
A:
There's multiple ways you can stop browser caching. One is to send headers that indicate no caching. From How to control web page caching, across all browsers?:
The correct minimum set of headers that works across all mentioned
browsers:
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Using PHP:
header('Cache-Control: no-cache, no-store, must-revalidate'); // HTTP 1.1.
header('Pragma: no-cache'); // HTTP 1.0.
header('Expires: 0'); // Proxies.
Using HTML:
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />
Note that this method depends on browsers respecting the no cache headers. This is not a guarantee.
You can also add a query string variable that contains the current time stamp when linking to those files you do not wish to be cached. Since you're going to a different URL every time, the browser will not cache.
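With jQuery specifically, you can let it do this for you via the cache option, or append the timestamp yourself. For example (a sketch based on the code in the question):
$.ajax({
    type: "GET",
    url: "show_aht.php",
    cache: false,        // jQuery appends a _=<timestamp> parameter so each request has a unique URL
    dataType: "json",
    success: function (data) { /* ... */ }
});

// or manually:
var url = "show_aht.php?ts=" + new Date().getTime();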
| {
"pile_set_name": "StackExchange"
} |
Q:
Invalid Security certificate error message after reissuing the SSL Certificate
On my server I have an SSL certificate which is valid from 06/09/2009 through 06/09/2011. The client is getting an invalid certificate error. I reissued the certificate and installed it on the server machine, but the client is still getting the same error. Is there a problem with the browser? Can anyone help with this issue? If I open the page on a different machine, I can go to the site without any invalid certificate error message.
A:
I have had similar problems with self-signed certs when switching to legit certs. The browser seems to cache the cert (though it shouldn't be browser specific).
I have noticed the problem more in Chrome than other browsers. You can try dumping the cache in the browser. You can also run
certmgr.msc
and see if the expired cert shows up, if it does you should be able to delete it and hit the site again to get the new cert.
My related question on serverfault: https://serverfault.com/questions/279984/clearning-chrome-ssl-cache
| {
"pile_set_name": "StackExchange"
} |
Q:
Mail repeatedly sent by JavaMail ends up in Spam folder
I'm sending a mail using JavaMail from inside a JSP page as follows:
String from= request.getParameter("from");
String to= request.getParameter("to");
String thanks= request.getParameter("thanks");
String subject= request.getParameter("subject");
try{
SmtpClient client = new SmtpClient("smtp.example.com");
client.from(from);
client.to(to);
PrintStream message = client.startMessage();
message.println("From: " + from);
message.println("To: " + to);
message.println("Subject: " + subject);
message.println();
Enumeration paramNames = request.getParameterNames();
while(paramNames.hasMoreElements()) {
String paramName = (String) paramNames.nextElement();
String paramValue = request.getParameter(paramName);
if (request.getParameter(paramName) != null &&
request.getParameter(paramName) != "") {
message.println(paramName + ": " + paramValue);
message.println();
}
}
client.closeServer();
}
catch (IOException e){
System.out.println("ERROR IN DELIVERING THE FORM:"+e);
}
This was working fine at first and sent the data to my Inbox, but after many trials and insignificant changes, the post now goes to my Spam folder.
I would appreciate it if anyone could tell me where the problem is and what causes this.
A:
What causes this? Your spam filter!
Depending on what you/your mail provider uses as spam filter, you might learn something from the mail headers - I recall spamassassin giving some information about what filter scored how high, and the resulting spam score. Others might do that as well.
You might also be able to train your spam filter to recognize this mail as non-spam (ham) if you remove it from the spamfolder.
| {
"pile_set_name": "StackExchange"
} |
Q:
Getting value by name from OrderedDictionary
I am looking for something like PHP's associative arrays that supports nesting too. For instance, I am creating a Dictionary Object like following:
System.Collections.Specialized.OrderedDictionary userRoles = new System.Collections.Specialized.OrderedDictionary();
userRoles["UserId"] = "120202";
userRoles["UserName"] = "Jhon Doe";
// 2D array Like
userRoles["UserRoles"][0] = "CAN_EDIT_LIST";
userRoles["UserRoles"][1] = "CAN_EDIT_PAGE";
Then I would like to access them by KeyNames instead of index values. Is it possible?
A:
OrderedDictionary uses objects for both keys and values.
To achieve nesting, just set the value to be another dictionary:
userRoles["UserRoles"] = new Dictionary();
Then you can use:
((Dictionary<string, object>)userRoles["UserRoles"])["MyKey"] = "My Value";
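Reading the values back out by key name then looks something like this (a sketch reusing the keys from the question):
var roles = (Dictionary<string, object>)userRoles["UserRoles"];
Console.WriteLine(roles["MyKey"]);        // "My Value"
Console.WriteLine(userRoles["UserName"]); // "Jhon Doe"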
| {
"pile_set_name": "StackExchange"
} |
Q:
clean up CSS automatically with Dreamweaver or another tool
It's not really a coding question, but I don't know where to ask it elsewhere.
I'm looking for a tool to clean up unused css selectors.
I know the tool Dust-Me Selectors, but I want the cleanup to happen automatically.
Can anyone help me with this?
A:
Depending on the complexity of your site, I don't think it's a good idea to clean up CSS automatically. I've used those tools myself (DustMe-Selectors mostly) but as soon as it comes to dynamic pages (and sites), all of the tools lack the ability to really find out what is used and what not.
Consider a site using selectors like "item-selected", "item-soldout", "item-bargain", etc. If the site will apply selectors dynamically to e.g. items in a shop, tools may not find those selectors in your markup because they are not used at the moment but maybe used as soon as the shop-configuration changes.
So I'd suggest to go with one (or more) of the tools suggested here and carefully evaluate the suggestions for unused selectors, but rather not use something to clean my code automatically.
| {
"pile_set_name": "StackExchange"
} |
Q:
Large datasets for a project
Not sure if Stack Overflow is the right site for it, but since there are many DW developers here...
I'm going to build a data warehouse for a graduation project, and to do so I need a good dataset, and by good I mean bad :) I need a dataset which requires a lot of transformations, is contained in many files (with various or weird formatting if possible). It should also have a lot of columns so a moderately large cube can be built on it. Most of the datasets available on the internet are too simple for this. Can anyone recommend something?
A:
Perhaps you could use US Census Data? There's lots of different kinds of data available. Maybe focus on a specific state? Your cube could allow roll ups across various political or geographic areas, or by various demographics.
http://www.census.gov/population/www/cen2010/glance/
It doesn't appear that all the data's available yet, so you can always use the 2000 census instead.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to apply a bevel modifier, so that it affects only specific edges?
I am trying to apply a bevel modifier to a box with some additional buttons, however, I am struggling to apply the bevel modifier only to the top face and the edges that are connected to that face. In a nutshell, I am trying to bevel the top of the box without affecting the bottom and the buttons.
Here is an image of what I get upon applying the bevel modifier:
A:
You can use bevel weights, which let you specify a value between 0 (no bevel) and 1 (full bevel) per edge: in Edit Mode, select the edges you want beveled and set their Edge Bevel Weight, then set the Bevel modifier's Limit Method to Weight so only those edges are affected.
| {
"pile_set_name": "StackExchange"
} |
Q:
IEnumerable not enumerating in foreach
I'm encountering a problem with one of my IEnumerable's that I haven't seen before.
I have a collection:
IEnumerable<IDependency> dependencies;
that's being used in a foreach loop.
foreach (var dependency in dependencies)
For some reason this foreach doesn't iterate over my IEnumerable and simply skips to the end.
If I change my foreach to loop through a a list however it seems to work fine:
foreach (var dependency in dependencies.ToList())
What could I be doing that's causing this behaviour? I haven't experienced this with IEnumerable before.
Update:
Here's the entire code of my foreach that's running in my method GenerateDotString:
foreach (var dependency in dependencies)
{
var dependentResource = dependency.Resource;
var lineColor = (dependency.Type == DependencyTypeEnum.DependencyType.Hard) ? "blue" : "red";
output += labelFormat.FormatWith(dependentResource.Name.MakeDotsafeString(), dependentResource.Name, dependentResource.ResourceType);
output += relationshipFormat.FormatWith(dependentResource.Name.MakeDotsafeString(), currentName, lineColor);
if (dependentResource.DependentResources != null)
{
output += GenerateDotString(dependentResource, dependentResource.DependentResources, searchDirection);
}
}
return output;
Update 2:
Here's the signature of the method containing this foreach (incase it helps).
private static string GenerateDotString(IResource resource, IEnumerable<IDependency> dependencies, SearchEnums.SearchDirection searchDirection)
Update 3:
Here's the method GetAllRelatedResourcesByParentGuidWithoutCacheCheck:
private IEnumerable<IDependency> GetAllRelatedResourcesByParentGuidWithoutCacheCheck(Guid parentCiGuid, Func<Guid, IEnumerable<IDependency>> getResources)
{
if (!_itemsCheckedForRelations.Contains(parentCiGuid)) // Have we already got related resources for this CI?;
{
var relatedResources = getResources(parentCiGuid);
_itemsCheckedForRelations.Add(parentCiGuid);
if (relatedResources.Count() > 0)
{
foreach (var relatedResource in relatedResources)
{
relatedResource.Resource.DependentResources = GetAllRelatedResourcesByParentGuidWithoutCacheCheck(relatedResource.Resource.Id, getResources);
yield return relatedResource;
}
}
}
}
Update 4:
I'm adding the methods in the chain here to be clear on how we're getting the collection of dependencies.
The above method GetAllRelatedResourcesByParentGuidWithoutCacheCheck accepts a delegate which in this case is:
private IEnumerable<IDependency> GetAllSupportsResources(Guid resourceId)
{
var hardDependents = GetSupportsHardByParentGuid(resourceId);
var softDependents = GetSupportsSoftByParentGuid(resourceId);
var allresources = hardDependents.Union(softDependents);
return allresources;
}
which is calling:
private IEnumerable<IDependency> GetSupportsHardByParentGuid(Guid parentCiGuid)
{
XmlNode ciXmlNode = _reportManagementService.RunReportWithParameters(Res.SupportsHardReportGuid, Res.DependentCiReportCiParamName + "=" + parentCiGuid);
return GetResourcesFromXmlNode(ciXmlNode, DependencyTypeEnum.DependencyType.Hard);
}
and returns:
private IEnumerable<IDependency> GetResourcesFromXmlNode(XmlNode ciXmlNode, DependencyTypeEnum.DependencyType dependencyType)
{
var allResources = GetAllResources();
foreach (var nodeItem in ciXmlNode.SelectNodes(Res.WebServiceXmlRootNode).Cast<XmlNode>())
{
Guid resourceGuid;
var isValidGuid = Guid.TryParse(nodeItem.SelectSingleNode("ResourceGuid").InnerText, out resourceGuid);
var copyOfResource = allResources.Where(m => m.Id == resourceGuid).SingleOrDefault();
if (isValidGuid && copyOfResource != null)
{
yield return new Dependency
{
Resource = copyOfResource,
Type = dependencyType
};
}
}
}
which is where the concrete type is returned.
A:
So it looks like the problem was to do with the dependencies collection infinitely depending on itself.
It seems from my debugging that iterating the IEnumerable causes a timeout and so the foreach simply skips execution of its contents, whereas ToList() returns as much as it can before timing out.
I may not be correct about that but it's what seems to be the case as far as I can tell.
To give a bit of background as to how this all came about I'll explain the code changes I made yesterday.
The first thing the application does is build up a collection of all resources which are filtered by resource type. These are being brought in from our CMDB via a web service call.
What I was then doing is for each resource that was selected (via autocomplete in this case) I'd make a web service call and get the dependents for the resource based on its Guid. (recursively)
I changed this yesterday so that we didn't need to obtain the full resource information in this second web service call, rather, simply obtain a list of Guids in the web service call and grab the resources from our resources collection.
What I forgot was that the web service call for dependents wasn't filtered by type and so it was returning results that didn't exist in the original resources collection.
I need to look a bit further but it seems that at this point, the new collection of dependent resources was becoming dependent on itself and thus, causing the IEnumerable<IDependents> collection later on to timeout.
This is where I've got to today, if I find anything else I'll be sure to note it here.
To summarise this:
If infinite recursion occurs in an IEnumerable it'll simply timeout
when attempting to enumerate in a foreach.
Using ToList() on the IEnumerable seems to return as much data as it
can before timing out.
| {
"pile_set_name": "StackExchange"
} |
Q:
dynamically populating div in asp.net
I am working on a webpage that will display questions and answers (maybe 5 at one time, maybe 7 at another time) returned from a database table. The questions will each be displayed in a div and the related answers displayed in another div. The question will have an icon "Show Answer / Hide Answer"
How can I go about creating a div and then populating it with values from a table?
Thanks
A:
I would use repeater for that.
1.Create data source pulling data from your database
<asp:sqlDataSource Id="sqldsQuestionsAnswers" ... />
2.Create repeater linking to that data source:
<asp:repeater DataSourceId="sqldsQuestionsAnswers" runat="server">
<itemTemplate>
<div>
<%# Eval("question") %>
<hr/>
<%# Eval("answer") %>
</div>
</itemTemplate>
</asp:repeater>
The repeater will display anything whats in <itemTemplate> tag for every row returned by your query.
So if your query returns 2 questions like that:
Question-------------Answer
-----------------------------------
question1?----------answer1
question2?----------answer2
The output would be:
<div>
question1?
<hr/>
answer1
</div>
<div>
question2?
<hr/>
answer2
</div>
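For the "Show Answer / Hide Answer" part of the question, you could wrap the answer in its own element inside the itemTemplate and toggle it with a little plain JavaScript. A rough sketch (not tied to any particular library):
<itemTemplate>
    <div>
        <%# Eval("question") %>
        <a href="#" onclick="var a = this.nextElementSibling; a.style.display = (a.style.display == 'none') ? 'block' : 'none'; return false;">Show Answer / Hide Answer</a>
        <div style="display:none"><%# Eval("answer") %></div>
    </div>
</itemTemplate>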
I hope it helps...
| {
"pile_set_name": "StackExchange"
} |
Q:
Independance of Sigma Algebras
A book I'm reading gives the following defintion for independance.
Write $J \subset_f I$ if $J$ is a finite subset of $I$. A family $(S_i)_{i\in I}$ of $\sigma$-sub-algebras of $A$ is called independent, if for every $J \subset_f I$ and every choice $A_j \in S_j$ we have $P[\cap_{j\in J} A_j] = \prod_{j\in J} P[A_j]$. A family of sets $(A_i)_{i\in I}$ is called independent, if the $\sigma$-algebras $S_j = \{\emptyset, A_j, A^C_j, \Omega\}$ are independent.
Then they provide the following example:
Let $\Omega = \{1,2,3,4\}$ and consider the two $\sigma$-algebras $A=\{\emptyset,\{1,3\},\{2,4\},\Omega\}$ and $B=\{\emptyset,\{1,2\},\{3,4\},\Omega\}$. $A$ and $B$ are independent.
I don't see how the sigma algebras are independent. In particular, how are they making this assertion without a probability measure on $\Omega$. Do they mean that they are independent for any choice of probability measure? If so, how do I see that?
thanks!
A:
I guess it should say (maybe it does somewhere) that $P$ is defined in such a way that $P(\{n\})=\frac14$ for $n\in\{1,2,3,4\}$ (or, put another way, $P(A)=\frac{\#A}4$).
So you have to prove that for every pair $(E_A,E_B)\in A\times B$ (*), you have
$$P(E_A\cap E_B)=P(E_A)\cdot P(E_B).$$
If one of $E_A$ and $E_B$ or both are empty sets, then both sides equal $0$. If $E_A=\Omega$ then both equal $P(E_B)$, and vice versa.
Finally, in other case, you have $\#E_A=\#E_B=2$ and $\#(E_A\cap E_B)=1$, so the left side equals $\frac14$ and the right side equals $\frac12\cdot\frac12=\frac14$.
If you choose a $P$ such that, for instance, $p_1=\frac12$, $p_2=\frac14$ and $p_3=p_4=\frac18$ (where $p_i=P(\{i\})$, $1\le i \le 4$), then it is easy to see that, for instance, $P(\{1,3\}\cap\{1,2\})\neq P(\{1,3\})\cdot P(\{1,2\})$.
(*) Going back to the definition, this would actually be the case $J=I$, and here $\#I=2$. Technically there are three other cases which are all fairly trivial: two of them correspond to $\#J=1$ —that is, taking only events in $A$ and taking only events in $B$, which makes the equality evident since it is of the form $P(E)=P(E)$— and the other one is for $J=\emptyset$ which involves an empty intersection and an empty product which conventionally are interpreted as $\Omega$ and $1$, respectively. Of course, you never need to check these trivial cases, but it is interesting to see that they are somehow included in the definition.
| {
"pile_set_name": "StackExchange"
} |
Q:
SQL Server 2008 R2 : LDF file size not increasing
I am using SQL Server 2008 R2. I have a table statDB of 100 GB in primary filegroup.
I have created a secondary filegroup in same database (Lab1) and created a table copyStatDB.
Now I start copy table data from primary file group to secondary file group.
I have noticed that none of my TempDB size change and not my .LDF file size change.
I am surprised to see that because as per my understanding when we execute a Insert statement it should increase .LDF file size increase first then copy data to my .NDF file.
A:
I wouldn't expect a copy operation to increase file sizes unless the file size was too small to begin with. SQL Server will first use unallocated space within data files before growing the file. Similarly, if the log file is large enough for the operation, that file shouldn't grow either.
Also regarding log space usage, some operations can be minimally logged to reduce logging requirements. Whether or not your INSERT...SELECT is fully logged depends on the database recovery model and indexes on the target table.
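If you want to see this for yourself, you can check how much free space the files already contain, for example with these standard commands run against the database in question:
-- Log space used per database
DBCC SQLPERF(LOGSPACE);

-- Allocated size vs. used space for the current database's files (size is in 8 KB pages)
SELECT name,
       size / 128.0 AS size_mb,
       size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS free_mb
FROM sys.database_files;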
| {
"pile_set_name": "StackExchange"
} |
Q:
how to call parameterized function in config .js file at feature file using karate framework?
I need to pass response value from feature file to javascript function which is defined at config.js file for some computation purposes .
Please help on how to call function which is present at config.js file?
A:
First refer to this: https://github.com/intuit/karate#javascript-functions
Just keep the JS as a *.js file and re-use it from any feature or the karate-config.js. Note that karate.call() is possible from JS, including karate-config.js and you can even call a feature file, not just JS.
Maybe you should also look at the example for karate.callSingle() which is new in 0.7.0. As of the time of this post, version 0.7.0.RC5 is available to test.
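As a very rough sketch (the file name, function body and field name below are made up for illustration), the JS file could contain a single function:
// compute.js -- kept on the classpath next to karate-config.js
function(value) {
  // whatever computation you need on the value passed in from the feature
  return value * 2;
}
and the feature could then load and call it:
* def compute = read('classpath:compute.js')
* def result = compute(response.someField)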
| {
"pile_set_name": "StackExchange"
} |
Q:
Can someone find my EC2 information?
I have an AWS EC2 instance and I run a nodejs server for a website. You can easily find out the IP and the service provider (in my case Amazon), but my question is: can you find out more information (i.e. name, country) about the person who owns the VM?
Thank you very much!
A:
Unless legal actions are taken against you, Amazon won't give any information about you.
On the security part, it's pretty good. https://aws.amazon.com/security/
But I would say the weakest part is how you configure your instance and what kind of data you write on it.
And of course the same thing applies to your AWS account. Make sure no one can access it by choosing a very strong password and using 2 factor auth.
| {
"pile_set_name": "StackExchange"
} |
Q:
Test for a continuous function
Let $f$ be a function defined in $[0, 6]$, continuous in $[0, 6]$
and it is provided of a third derivative in $]0, 6[.$ Which of the following assertions is false?
$$\fbox{A}\quad f \text{ has no asymptotes; }$$
$$\fbox{B}\quad f \text{ may have no critical points; }$$
$$\fbox{C}\quad f \text{ has a relative maximum or has a minimum
relative; }$$
$$\fbox{D}\quad f'' \text{ is continuous in } ]0; 6[;$$
$$\fbox{E}\quad \text{If } f'(5) = f''(5) = 0 \text{ and } f'''(5) = 7, \text{then } f \text{ has an inflection point with
a horizontal tangent at } x = 5$$
Below is the original question in Italian; above is the translation.
My attempt at finding the correct answer: $\fbox{A}$ is true since $f$ is continuous on $[0,6]$. $\fbox{B}$ is true by Weierstrass' theorem: note that $[0,6]$ is a closed set. If I think of a polynomial with $\deg(p(x))=6$, then $\fbox{C}$ seems true to me. For $\fbox{D}$ I thought that, since $f$ has a third derivative in $]0,6[$, at least $f''$ must be continuous in $]0,6[$. I'd say $\fbox{E}$ is false, but I can't justify it.
I am asking whether my reasoning is correct or whether there are inconsistencies.
A:
For me, C is false if you understand a relative extremum (or local extremum) to be an extremum on a neighbourhood of a point in the interior of $[0,6]$. Indeed, here is a counterexample satisfying all the hypotheses, which has neither a local maximum nor a local minimum on $[0,6]$, although it does have a (global) maximum and minimum:
$$f(x)=\frac 76(x-5)^3.$$
On the other hand, E is true, because if $f'''(5)=7$, it is positive in a small neighbourhood of $5$, say $I=(5-ε, 5+ε)$ (derivatives satisfy the intermediate value property), so that $f''$ is increasing on this interval. Therefore , if $f''(5)=0$, we have $f''(x)<0$ on $(5-ε,5)$ and $f''(x)>0$ on $(5, 5+ε)$, so that $f'$ has a local minimum on $I$, which corresponds to the definition of an inflection point.
| {
"pile_set_name": "StackExchange"
} |
Q:
Howto overcome Unit Test Regression Problems...?
I was looking for some kind of a solution for software development teams which spend too much time handling unit test regression problems (about 30% of the time in my case!!!), i.e., dealing with unit tests which fails on a day to day basis.
Following is one solution I'm familiar with, which analyzes which of the latest code changes caused a certain unit test to fail:
Unit Test Regression Analysis Tool
I wanted to know if anyone knows similar tools so I can benchmark them.
As well, I'd welcome recommendations for another approach to handle this annoying problem.
Thanks in advance.
A:
You have our sympathy. It sounds like you have brittle test syndrome. Ideally, a single change should only break a single test-- and it should be a real problem. Like I said, "ideally". But this type of behavior is common and treatable.
I would recommend spending some time with the team doing some root cause analysis of why all these tests are breaking. Yep, there are some fancy tools that keep track of which tests fail most often, and which ones fail together. Some continuous integration servers have this built in. That's great. But I suspect if you just ask each other, you'll know. I've been though this and the team always just knows from their experience.
Anywho, a few other things I've seen that cause this:
Unit tests generally shouldn't depend on more than the class and method they are testing. Look for dependencies that have crept in. Make sure you're using dependency injection to make testing easier.
Are these truly unique tests? Or are they testing the same thing over and over? If they are always going to fail together, why not just remove all but one?
Many people favor integration over unit tests, since they get more coverage for their buck. But with these, a single change can break lots of tests. Maybe you're writing integration tests?
Perhaps they are all running through some common set-up code for lots of tests, causing them to break in unison. Maybe this can be mocked out to isolate behaviors.
| {
"pile_set_name": "StackExchange"
} |
Q:
Kotlin extensions for Android: How to use bundleOf
Documentation says:
fun bundleOf(vararg pairs: Pair<String, Any?>): Bundle
Returns a new Bundle with the given key/value pairs as elements.
I tried:
val bundle = bundleOf {
Pair("KEY_PRICE", 50.0)
Pair("KEY_IS_FROZEN", false)
}
But it is showing error.
A:
If it takes a vararg, you have to supply your arguments as parameters, not a lambda. Try this:
val bundle = bundleOf(
Pair("KEY_PRICE", 50.0),
Pair("KEY_IS_FROZEN", false)
)
Essentially, change the { and } brackets you have to ( and ) and add a comma between them.
Another approach would be to use Kotlin's to function, which combines its left and right side into a Pair. That makes the code even more succinct:
val bundle = bundleOf(
"KEY_PRICE" to 50.0,
"KEY_IS_FROZEN" to false
)
A:
How about this?
val bundle = bundleOf (
"KEY_PRICE" to 50.0,
"KEY_IS_FROZEN" to false
)
to is a great way to create Pair objects. The beauty of infix function with awesome readability.
A:
Just to complete the other answers:
First, to use bundleOf, you need to add implementation 'androidx.core:core-ktx:1.0.0' to the build.gradle, then:
var bundle = bundleOf("KEY_PRICE" to 50.0, "KEY_IS_FROZEN" to false)
| {
"pile_set_name": "StackExchange"
} |
Q:
Understanding 'Yield' keyword in javascript?
I came across the yield keyword in JavaScript today, and I am aware that currently it's not supported in browsers that have not been upgraded to ECMAScript 6. Meanwhile, in Firefox, how can I rewrite the following code without yield?
if (currentNode) {
yield currentNode;
currentNode = null;
}
A:
There is no direct equivalent. However, one can fake it by returning a "generator" object. Basically, the continuation code is moved into the next() of the generator.
Consider this fib-generator example on MDN:
function fib() {
var i = 0, j = 1;
while (true) {
yield i;
var t = i;
i = j;
j += t;
}
}
var g = fib();
for (var i = 0; i < 10; i++) {
console.log(g.next());
}
And re-written using a fake generator:
function fib() {
var i = 0, j = 1;
return {
'next': function () {
var yieldRet = i;
// These haven't occurred before the `yield` in the above generator,
// but it makes it easier to do it in the same order here.
// Just make sure there are no OBSERVABLE side-effects.
var t = i;
i = j;
j += t;
return yieldRet;
}
};
}
var g = fib();
for (var i = 0; i < 10; i++) {
console.log(g.next());
}
Now, this does become a bit trickier with the addition of observable mutable state; the given example could still be expressed as a state machine. Note that each next can "advance" the state.
var currentNode;
function yield1 () {
var y = { next: st0 };
return y;
function st0 () {
if (currentNode) {
y.next = st1;
return currentNode;
} else {
y.next = stZ;
}
}
function st1 () {
y.next = stZ;
currentNode = null; // observable side-effect!
}
function stZ () {
}
}
var g = yield1();
currentNode = "x";
console.log(g.next()); // "x"
console.log(currentNode); // still "x"
g.next();
console.log(currentNode); // null
| {
"pile_set_name": "StackExchange"
} |
Q:
Publishing custom artifact built from task in gradle
I am having some issues trying to create a task that builds a special file which is then uploaded to Artifactory.
Here's a simplified version:
apply plugin: 'maven-publish'
task myTask {
ext.destination = file('myfile')
doLast {
// Here it should build the file
}
}
publishing {
repositories {
maven {
name 'ArtifactoryDevDirectory'
url 'http://artifactory/artifactory/repo-dev'
credentials {
username 'username'
password 'password'
}
}
}
publications {
MyJar(MavenPublication) {
artifactId "test"
version "1.0"
groupId "org.example"
artifact myTask.destination
}
}
}
This works, except that gradle publish does not run myTask. I tried adding
publishMyJarPublicationToArtifactoryDevDirectoryRepository.dependsOn myTask
but i just get:
Could not find property 'publishMyJarPublicationToArtifactoryDevDirectoryRepository' on root project 'test'.
I tried messing about with the artifact, adding a custom artifact and configuration and publishing that instead but that did not work either.
Any help would be greatly appreciated.
A:
afterEvaluate {
publishMyJarPublicationToArtifactoryDevDirectoryRepository.dependsOn myTask
}
Accomplishes what I want.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why are WatchKit classes not recognised in new .swift file?
I am trying to create a new swift file in Xcode to house a class that derives from WKInterfaceObjectRepresentable. E.g.
import WatchKit
struct Bing: WKInterfaceObjectRepresentable {
}
But I get the following error:
Use of undeclared type 'WKInterfaceObjectRepresentable'
However, if I add it to one of the standard files (ContentView.swift) it picks it up correctly.
I thought it might be to do with the target membership but it is exactly the same for my new Bing.swift as it is for ContentView.swift (WatchKit Extension).
Any ideas?
A:
You need to also import SwiftUI:
import SwiftUI
| {
"pile_set_name": "StackExchange"
} |
Q:
Problem with the Apache DefaultHttpClient class
I am a newbie to servlet applications, trying to learn the subject. Along the way, I wrote a servlet class called FormWebServlet that uses the org.apache.http.impl.client.DefaultHttpClient class. However, I get the exception
java.lang.ClassNotFoundException: org.apache.http.impl.client.DefaultHttpClient
... that clearly shows that this class does not exist, although I have added the jar file to the project.
The server returns an "HTTP Status 500" error with the message that the "root cause" is this missing class:
java.lang.NoClassDefFoundError: org/apache/http/impl/client/DefaultHttpClient
testPackage.FormWebServlet.doGet(FormWebServlet.java:45)
javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
TRIES
1) I searched for the missing jar file and added it to the project (by right-clicking the project in the "Eclipse JAVA EE IDE for Web Developers, 20100917-0705" Project Explorer, selecting "Properties", selecting "Java Build Path" and clicking the [Add External JARs...] button). The added jar file is from the Apache site and is called httpclient-4.1.1.jar.
2) As I still get the same error, I extracted with 7-ZIP the DefaultHttpClient.class file and put it into the WebContent/WEB-INF/lib directory.
QUESTION
What am I doing wrong? Neither of the other two JAR files contains the class, nor is there a class with this name in the WEB-INF/lib folder.
DETAILS
Inculded JARs:
common-httpclient-3.0.1.jar
httpclient-4.1.1.jar
httpcore-4.1.jar
FormWebServlet.java:
/**
*
*/
package testPackage;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import coreServlets.ServletUtilities;
/**
*
*/
@WebServlet(description = "Gets the book's barcode with a form", urlPatterns = { "/FormWebServlet" })
public class FormWebServlet extends HttpServlet {
/** */
private static final long serialVersionUID = 6008315960327824633L;
/**
* @see HttpServlet#doGet(HttpServletRequest request,
* HttpServletResponse response)
*/
protected void doGet(final HttpServletRequest request,
final HttpServletResponse response)
throws IOException, ServletException {
final String BAR_CODE = request.getParameter("barCode");
response.setContentType("text/html");
final PrintWriter out = response.getWriter();
if (BAR_CODE != null) {
HttpClient client = new DefaultHttpClient();
final String ADDRESS = ServletUtilities.getHttpAddress(BAR_CODE);
out.println("ADDRESS = \"" + ADDRESS + '\"');
HttpGet get = new HttpGet(ADDRESS);
HttpResponse httpResponse = null;
// Removed commented code that will use these objects
}
}
}
A:
Just put the JAR files themselves into WEB-INF/lib, not the class file. That way they will be included in your deployment.
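So the deployed layout would end up looking roughly like this (names taken from the question; note that HttpClient 4.x also needs a commons-logging jar on the classpath, version shown as a placeholder):
WebContent/
    WEB-INF/
        lib/
            httpclient-4.1.1.jar
            httpcore-4.1.jar
            commons-logging-x.y.jar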
| {
"pile_set_name": "StackExchange"
} |
Q:
Swift - AVAudioPlayer Doesn't Work
I am trying to play my aiff file in iOS application the following way:
var sound = NSBundle.mainBundle().pathForResource("tropical_birds", ofType: "aiff")
var soundData = NSData(contentsOfFile: sound!)
let audioPlayer = AVAudioPlayer(data: soundData, error: nil)
audioPlayer.prepareToPlay()
audioPlayer.play()
A:
Your AVAudioPlayer is going out of scope and being deallocated. Assign it to an instance variable of your class to prolong its life.
e.g.
self.audioPlayer = AVAudioPlayer(data: soundData, error: nil)
self.audioPlayer.prepareToPlay()
self.audioPlayer.play()
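For example (a sketch; the class and method names are made up, and the player calls mirror the question):
import AVFoundation

class SoundPlayer {
    var audioPlayer: AVAudioPlayer?   // instance property keeps the player alive after the method returns

    func playBirds() {
        let sound = NSBundle.mainBundle().pathForResource("tropical_birds", ofType: "aiff")
        let soundData = NSData(contentsOfFile: sound!)
        audioPlayer = AVAudioPlayer(data: soundData, error: nil)
        audioPlayer?.prepareToPlay()
        audioPlayer?.play()
    }
}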
| {
"pile_set_name": "StackExchange"
} |
Q:
Distinguish button_press_event from drag and zoom clicks in matplotlib
I have a simple code that shows two subplots, and lets the user left click on the second subplot while recording the x,y coordinates of those clicks.
The problem is that clicks to select a region to zoom and to drag the subplot are also identified as left clicks.
Is there a way to distinguish and filter out these left clicks?
import numpy as np
import matplotlib.pyplot as plt
def onclick(event, ax):
# Only clicks inside this axis are valid.
if event.inaxes == ax:
if event.button == 1:
print(event.xdata, event.ydata)
# Draw the click just made
ax.scatter(event.xdata, event.ydata)
ax.figure.canvas.draw()
elif event.button == 2:
# Do nothing
print("scroll click")
elif event.button == 3:
# Do nothing
print("right click")
else:
pass
fig, (ax1, ax2) = plt.subplots(1, 2)
# Plot some random scatter data
ax2.scatter(np.random.uniform(0., 10., 10), np.random.uniform(0., 10., 10))
fig.canvas.mpl_connect(
'button_press_event', lambda event: onclick(event, ax2))
plt.show()
A:
You may check if the mouse button is released after the mouse has previously been moved. Since for zooming and panning, this would be the case you may call the function to draw a new point only when no previous movement has happened.
import numpy as np
import matplotlib.pyplot as plt
class Click():
def __init__(self, ax, func, button=1):
self.ax=ax
self.func=func
self.button=button
self.press=False
self.move = False
self.c1=self.ax.figure.canvas.mpl_connect('button_press_event', self.onpress)
self.c2=self.ax.figure.canvas.mpl_connect('button_release_event', self.onrelease)
self.c3=self.ax.figure.canvas.mpl_connect('motion_notify_event', self.onmove)
def onclick(self,event):
if event.inaxes == self.ax:
if event.button == self.button:
self.func(event, self.ax)
def onpress(self,event):
self.press=True
def onmove(self,event):
if self.press:
self.move=True
def onrelease(self,event):
if self.press and not self.move:
self.onclick(event)
self.press=False; self.move=False
def func(event, ax):
print(event.xdata, event.ydata)
ax.scatter(event.xdata, event.ydata)
ax.figure.canvas.draw()
fig, (ax1, ax2) = plt.subplots(1, 2)
# Plot some random scatter data
ax2.scatter(np.random.uniform(0., 10., 10), np.random.uniform(0., 10., 10))
click = Click(ax2, func, button=1)
plt.show()
A:
One way to distinguish between clicks and dragging/zooming (be it right click or left click) would be to measure the time between the button press and the button release and then carry out the actions on the button release, not the button press.
import numpy as np
import matplotlib.pyplot as plt
import time
MAX_CLICK_LENGTH = 0.1 # in seconds; anything longer is a drag motion
def onclick(event, ax):
ax.time_onclick = time.time()
def onrelease(event, ax):
# Only clicks inside this axis are valid.
if event.inaxes == ax:
if event.button == 1 and ((time.time() - ax.time_onclick) < MAX_CLICK_LENGTH):
print(event.xdata, event.ydata)
# Draw the click just made
ax.scatter(event.xdata, event.ydata)
ax.figure.canvas.draw()
elif event.button == 2:
print("scroll click")
elif event.button == 3:
print("right click")
else:
pass
fig, (ax1, ax2) = plt.subplots(1, 2)
# Plot some random scatter data
ax2.scatter(np.random.uniform(0., 10., 10), np.random.uniform(0., 10., 10))
fig.canvas.mpl_connect('button_press_event', lambda event: onclick(event, ax2))
fig.canvas.mpl_connect('button_release_event', lambda event: onrelease(event, ax2))
plt.show()
| {
"pile_set_name": "StackExchange"
} |
Q:
How to switch element orders in Bootstrap 3
I've looked many examples, but for some reason I can't get it to work myself.
I have 2 elements and I want them to be in 3 different positions for mobile, tablet and desktop view.
Desktop: [A] [B] -
Tablet: [A] - [B]
Mobile: - [B] -
- [A] -
So for desktop I want the element A to float to left and B in the middle.
For Tablet I want A to float left and B to right.
And for Mobile I want to switch the order of A and B and be on top of each other.
Is this possible with just Bootstrap?
A:
col-push-* could solve your problem:
<div class="row">
<div class="col-sm-5 col-sm-push-5 well">
Content B
</div>
<div class="col-sm-5 col-sm-pull-5 well">
Content A
</div>
</div>
Working demo
| {
"pile_set_name": "StackExchange"
} |
Q:
Can antonyms of the form „Richtigkeit“ and „Unrichtigkeit“ be abbreviated when listing them?
It is well known that words sharing a common root and differing only in their prefix can (should?) be abbreviated with a hyphen when listed.
Can something similar be done when listing antonyms of the form
X und Präfix+X
?
For example, would
Richtigkeit und Un-
be understandable?
A:
In general this is not recommended. In colloquial German it is common to write something like
Un-/Richtigkeit or
(Un-) Richtigkeit
but this form is very unusual in lists and not advisable.
The same applies to suffixes forming antonyms; for example,
Verständnis/-losigkeit
is not uncommon in informal style, but again not in list form.
So for antonyms one should avoid such a list form, both with prefixes and with suffixes.
Update: If you only want to split off the shared suffix, you can of course do that just as with any other German word. E.g., here dropping only the "-keit":
Richtig- und Unrichtigkeit
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I make an elevation model from a 3d polygon?
I have a number of polygons in 3d from a geojson file, and I would like to make an elevation model. This means that I want a raster, where every pixel is the height of the polygon in this position.
I tried looking at gdal_rasterize, but the description says
As of now, only points and lines are drawn in 3D.
gdal_rasterize
A:
I ended up using the scipy.interpolate function called griddata. This uses a meshgrid to get the coordinates in the grid, and I had to tile it up because of the memory restrictions of meshgrid.
import scipy.interpolate as il #for griddata
# meshgrid of coords in this tile
gridX, gridY = np.meshgrid(xi[c*tcols:(c+1)*tcols], yi[r*trows:(r+1)*trows][::-1])
## Creating the DEM in this tile
zi = il.griddata((coordsT[0], coordsT[1]), coordsT[2], (gridX, gridY),method='linear',fill_value = nodata) # fill_value to prevent NaN at polygon outline
The linear interpolation seems to do exactly what I want. See description at https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
| {
"pile_set_name": "StackExchange"
} |
Q:
Attach properties to functions within their definition
I want to achieve the same as this...
var x = function(){ return "abc"}
x.y = 123
x() // "abc"
x.y // 123
but to define the properties inside the function definition like this...
var x = function(){
// declare `this.y` here somehow...
// this.y = 123
return "abc"
}
A:
You can give functions a name even when instantiating them as part of an expression:
var x = function x() {
x.y = 123;
return "abc";
}
Unfortunately, some browsers have weird quirks in the implementation of that highly useful feature, so it's not really completely safe to use it.
Note that in the above, there are two separate "x" symbols. The function name "x" is bound inside the function, and it hides the variable "x" in the outer scope.
| {
"pile_set_name": "StackExchange"
} |
Q:
Can't get input containing spaces in C++
I have the following C++ code
#include<iostream>
#include<string>
using namespace std;
struct data
{
char name1[20];
string name2[20];
string name3;
};
void main() {
data *d = new data;
cout << "Enter name1 : ";
cin >> d->name1; // this does not capture spaces in between
cout << "Enter name2 : ";
cin.getline(d->name2,20);
//compiler: cannot convert argument 1 from 'std::string [20]' to 'char *'
cin.getline(d->name2,sizeof(d->name2)); // same as above
getline(cin,d->name2);
// error C2665: 'std::getline' : none of the 2 overloads could convert all arguments type
cout << "Enter name3 : " ;
cin >> noskipws >> d->name3;
// does not even wait for input, execution resumes without my input
cout << "name1=" << d->name1 << endl;
cout << "name2=" << d->name2 << endl;
cout << "name3=" << d->name3 << endl;
}
If I run the above program (commenting out the lines that make the compiler complain), I get something like this (my input italicized):
Enter name1: ahmad mutawa
Enter name2: any name
Enter name3:
name1=ahmad
name2=
name3=ahmad
What am I doing wrong? How can I get a string containing spaces into a string variable?
I am using Microsoft Visual C++ compiler cl from command line tools.
Edit
I rewrote the program as recommended from the comments/answers:
I re-declared variables all as strings, without specifying length.
struct data
{
string name1,name2,name3;
};
...
cout << "Enter name1 : ";
getline(cin, d->name1);
cout << "Enter name2 : ";
getline(cin, d->name2);
cout << "Enter name3 : ";
getline(cin, d->name3);
...
The program allowed me to input a full name at each getline, but the output I got contained only the last names; none of them had the first name or the spaces in between.
A:
The problem is that the declaration
string name2[20];
declares an array of 20 strings, not a single string. Declare it simply as
string name2;
And
getline(cin, d->name2);
will work perfectly fine
working_code
A:
First, you should know what your original struct means:
struct data
{
char name1[20]; // an array of 20 single characters
string name2[20]; // an array of 20 dynamic strings
string name3; // a single dynamic string
};
Look at the name2 member variable. It is an array of 20 std::string. That means you can have 20 separate dynamic strings, and each string is accessed by name2[0], name2[1], name2[2], up until name2[19].
Given that, since you want to simply enter 3 strings, then the struct above does not reflect what you are trying to accomplish. The correction should be:
struct data
{
string name1;
string name2;
string name3;
};
Having done this, then to fill each string from std::cin, and include spaces within each string, use std::getline 3 times:
data d;
getline(cin, d.name1);
getline(cin, d.name2);
getline(cin, d.name3);
Here is a Live Example.
| {
"pile_set_name": "StackExchange"
} |
Q:
Switching between Twitter Bootstrap Themes?
I'm working on a project for a client built on Twitter Bootstrap. He wants to have different colour schemes that the user can select from. For example have a Red Colour Scheme and a Blue Colour Scheme that the user can change through a menu up the top.
Is there any plugins for jQuery (or anything else for that matter) that will do this? All it really has to do is load a different CSS file I suppose, how would you go about doing this?
A:
Use Kickstrap. You can install themes from basically anywhere, make your own, includes themes from Bootswatch and it uses Less.js client-side to easily recompile your changes each time.
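If you only need the plain stylesheet swap the question describes, a minimal jQuery sketch could look like this (the file names, the #theme-css id and the data-css attribute are assumptions for illustration):
<link id="theme-css" rel="stylesheet" href="css/theme-red.css">
$('.theme-menu a').on('click', function (e) {
    e.preventDefault();
    // each menu link carries its stylesheet, e.g. <a href="#" data-css="css/theme-blue.css">Blue</a>
    $('#theme-css').attr('href', $(this).data('css'));
});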
| {
"pile_set_name": "StackExchange"
} |
Q:
PLY differentiate between grammars
so I'm doing this project where you order a forklift to do certain things with natural language, and I'm using Python Lex-Yacc (PLY). I wanted to know if there is a way to differentiate between permuted grammar rules that are the same length. The documentation only states that you can do it with len(p), but that only helps if they differ in length. Sample code:
def p_moveitemfromto(p):
'''moveitemfromto : MOVEITEM fulltype item rack rack side
| MOVEITEM rack rack side fulltype item
| rack MOVEITEM fulltype item rack side
| rack MOVEITEM rack side fulltype item'''
I need the output to always be in a unified order (so I stick with the 1st form as the default). What I'm doing now is adding different marker letters at the lower levels so I can reorder things correctly and then delete the letters, but that doesn't seem like the right way to go. I could also split it into different defs, but that again doesn't seem like the way to go. Or maybe I've got the whole concept wrong, since it's my first time.
A:
If different productions (even for the same non-terminal) have different actions, then implement them in different functions. That is a normal and totally reasonable way to use Ply. Combining different production actions so that you have to figure out which production was reduced is false economy.
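For example (a minimal sketch showing two of the permutations; the tuple layout in p[0] is just one possible canonical order, not taken from the question):
def p_moveitemfromto_1(p):
    '''moveitemfromto : MOVEITEM fulltype item rack rack side'''
    # already in the canonical order
    p[0] = ('move', p[2], p[3], p[4], p[5], p[6])
def p_moveitemfromto_2(p):
    '''moveitemfromto : MOVEITEM rack rack side fulltype item'''
    # reorder into the same canonical tuple
    p[0] = ('move', p[5], p[6], p[2], p[3], p[4])
Both functions reduce the same non-terminal, so the rest of the grammar is unaffected; only the action differs per production.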
| {
"pile_set_name": "StackExchange"
} |
Q:
How to sort an array of objects based on date
I have an array of objects which I am attempting to sort based on the createDate.
[{
"TestName": "com.DPProgram",
"Test": {
"createDate": "2018-02-15T17:17:10.000+0530",
"effectiveStartDate": "1900-01-01T00:00:00.000+0530",
"effectiveEndDate": "2200-01-01T00:00:00.000+0530"
}
}, {
"TestName": "com.DPProgram",
"Test": {
"createDate": "2018-02-22T15:00:11.000+0530",
"effectiveStartDate": "2017-12-22T00:00:00.000+0530",
"effectiveEndDate": "2018-12-23T00:00:00.000+0530"
}
}];
data = data.sort(function(a, b) {
return (a[data.createDate] > b[data.createDate])
});
However it's not sorting
https://jsfiddle.net/o2gxgz9r/51545/
A:
You are comparing the dates as strings. You need to instead convert them to Date objects before making the comparison.
Also your syntax of a[data.bean.createDate] is broken given the example. You need to access obj.Test.createDate instead.
Finally, don't use alert() for debugging, and especially not for any datatype more complex than a string. console.log() is more accurate as it doesn't coerce data types, and lets you traverse through the levels or objects/arrays.
With all that said, try this:
var data = [{
"TestName": "com.DPProgram",
"Test": {
"createDate": "2018-02-15T17:17:10.000+0530",
"effectiveStartDate": "1900-01-01T00:00:00.000+0530",
"effectiveEndDate": "2200-01-01T00:00:00.000+0530"
}
}, {
"TestName": "com.callidus.quotaDP.Tests.DPProgram",
"Test": {
"createDate": "2018-02-22T15:00:11.000+0530",
"effectiveStartDate": "2017-12-22T00:00:00.000+0530",
"effectiveEndDate": "2018-12-23T00:00:00.000+0530"
}
}, {
"TestName": "com.Foo",
"Test": {
"createDate": "2018-02-07T15:00:11.000+0530",
"effectiveStartDate": "2017-12-22T00:00:00.000+0530",
"effectiveEndDate": "2018-12-23T00:00:00.000+0530"
}
}];
data = data.sort(function(a, b) {
var aDate = new Date(a.Test.createDate),
bDate = new Date(b.Test.createDate);
return aDate > bDate ? 1 : aDate < bDate ? -1 : 0;
});
console.log(data);
| {
"pile_set_name": "StackExchange"
} |
Q:
React Native ListView dataSource allways null
I have this piece of code
componentWillMount() {
return fetch("http://10.0.3.2:8080/all", {
method: "GET",
headers: {
Accept: "application/json",
"Content-Type": "application/json"
}
})
.then(response => response.json())
.then(responseData => {
this.ds = new ListView.DataSource({
rowHasChanged: (r1, r2) => r1 !== r2
});
this.setState({
dataSource: this.ds.cloneWithRows(responseData)
});
})
.catch(error => {
console.log("error : " + error);
});
};
It brings me the data from my API. I already tested it with console.log(this.state.dataSource) and the result was
console.log result
But when i add the code
<ListView
dataSource={this.state.dataSource}
renderRow={(rowData, navi) => this.renderEvent(rowData, this.props.nav)}
/>
I receive this error
Error
A:
Is this issue fixed by adding a simple constructor to your main component? I'd imagine you are getting this issue because you haven't received the data yet at the time of your initial render. Something like
class Test extends Component {
  constructor(props) {
    super(props);
    this.state = {
      // start with an empty ListView.DataSource so the first render has valid data
      dataSource: new ListView.DataSource({
        rowHasChanged: (r1, r2) => r1 !== r2
      }).cloneWithRows([])
    };
  }
}
So when your ListView renders, it at least has something valid to pull from initially.
Hope this helps, but if not, have a look at this article, which explains this concept further in depth.
| {
"pile_set_name": "StackExchange"
} |
Q:
Best Linux distro for Cherokee?
I want to use Cherokee for my PHP-centered site... what distro would be best for that?
A:
The definitive answer: whichever distro you're most comfortable with.
Case closed.
| {
"pile_set_name": "StackExchange"
} |
Q:
matlab encoding integers to matrix of vectors of 0's and 1's
I have a 1118x1 vector of values from 0 to 10 as such:
5
5
3
4
7
4
1
.
.
I need to encode each value into an 11x1118 matrix of zeros where the (k+1)th value of the corresponding column is a 1.
For example, the first value is 5, so the 5+1=6th value in the first column will be 1:
0
0
0
0
0
1
0
0
0
0
0
I need to do this for all values up to 1118.
I assume I just need a for loop but am completely lost as to how to do it
A:
You can use for example sub2ind. Try the following code:
x = [4;3;1;1;4;7];
y = zeros(11,numel(x));
y(sub2ind(size(y),x+1,(1:numel(x))')) = 1
y =
0 0 0 0 0 0
0 0 1 1 0 0
0 0 0 0 0 0
0 1 0 0 0 0
1 0 0 0 1 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 1
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
| {
"pile_set_name": "StackExchange"
} |
Q:
How to put parameters in RESTful POST with JerseyFramework
Let's assume that I have simple class:
public class Test
{
@Path("/test")
@POST
@Produces(APPLICATION_JSON)
@Consumes(APPLICATION_JSON)
public TestResponse post(TestResponse request, @HeaderParam("text") String text)
{
return new TestResponse(request.getData());
}
}
and I want to test this class. How can I pass the header param in code like this:
Entity<TestRequest> requestEntity = Entity.entity(request, MediaType.APPLICATION_JSON);
final TestResponse response = target("test").request().post(requestEntity, TestResponse.class);
A:
target("test").request().header("text", "value").post(...);
When you call request(), you get back an Invocation.Builder. You can take a look at the other methods; for the most part they all return the same Invocation.Builder, so you can just chain the calls.
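Putting it together with the entity from the question (the "some text" header value is just a placeholder):
Entity<TestRequest> requestEntity = Entity.entity(request, MediaType.APPLICATION_JSON);
final TestResponse response = target("test")
        .request()
        .header("text", "some text")
        .post(requestEntity, TestResponse.class);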
| {
"pile_set_name": "StackExchange"
} |
Q:
Irreducible representation with trace zero in positive characteristic
Is there an example of an irreducible representation $\rho:G\rightarrow GL_n(F)$ where $char(F)>0$ such that $trace(\rho(g))=0$ for all $g\in G$?
Of course we have to consider $F$ and $G$ (finite group) with $F[G]$ non-semi-simple.
A:
Just so this has an answer (I'll make the post community wiki since it's not my answer):
This answer on MathOverflow says that I.M. Isaacs, Character Theory of Finite Groups, Corollary 9.22, Dover, p. 155, states that every irreducible representation of a finite group over any field has a non-zero character.
| {
"pile_set_name": "StackExchange"
} |
Q:
Call to undefined relationship [cities] on model [App\Models\Municipal_district]
City.php
public function municipal_districts()
{
return $this->hasMany('App\Models\Municipal_district');
}
Municipal_district.php
public function province()
{
return $this->belongsTo('App\Models\City', 'city_id');
}
Where can be wrong?
A:
City Model
public function municipal_districts()
{
return $this->hasMany('App\Models\Municipal_district');
}
Municipal_district Model
public function city()
{
return $this->belongsTo('App\Models\City', 'city_id');
}
There is no cities() relationship defined anywhere here!
City has many districts and a district belongs to one city, so the error means some calling code is still asking Municipal_district for a cities() relationship that does not exist.
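A minimal sketch of what typically triggers this error and how to call the relationships that are actually defined (the with(...) calls are assumptions about the calling code, not taken from the question):
// this raises "Call to undefined relationship [cities]" because no such method exists:
$districts = Municipal_district::with('cities')->get();
// use the relationship names that are actually defined:
$districts = Municipal_district::with('city')->get();
$cities = City::with('municipal_districts')->get();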
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does Firefox have an empty heap on Linux? Where does malloc go in memory?
The start_brk and brk feild of mm_struct have same value for Firefox, which means the heap is empty in Firefox. Does anyone know: Why does Firefox have an empty heap on Linux? Where does malloc go in memory?
A:
Firefox uses a custom memory allocator, jemalloc. Unless the --enable-dss option is specified during configuration, this allocator uses only mmap(), otherwise it uses both sbrk() and mmap(). Needless to say, only the brk() system call will modify the start_brk and brk fields of the struct in question.
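A rough way to see this on a running Firefox (the pgrep usage is just for illustration): the classic [heap] segment is missing or empty, while jemalloc's allocations show up as anonymous mmap'd regions:
grep heap /proc/$(pgrep -o firefox)/maps           # usually prints nothing
grep -c ' 00:00 0' /proc/$(pgrep -o firefox)/maps  # anonymous mappings (jemalloc's, among others)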
| {
"pile_set_name": "StackExchange"
} |
Q:
Add dates ranges to a table for individual values using a cursor
I have a calendar table called CalendarInformation that gives me a list of dates from 2015 to 2025. This table has a column called BusinessDay that shows what dates are weekends or holidays. I have another table called OpenProblemtimeDiffTable with a column called number for my problem number and a date for when the problem was opened called ProblemNew and another date for the current column called Now. What I want to do is for each problem number grab its date ranges and find the dates between and then sum them up to give me the number of business days. Then I want to insert these values in another table with the problem number associated with the business day.
Thanks in advance and I hope I was clear.
TRUNCATE TABLE ProblemsMoreThan7BusinessDays
DECLARE @date AS date
DECLARE @businessday AS INT
DECLARE @Startdate as DATE, @EndDate as DATE
DECLARE CONTACT_CURSOR CURSOR FOR
SELECT date, businessday
FROM CalendarInformation
OPEN contact_cursor
FETCH NEXT FROM Contact_cursor INTO @date, @businessday
WHILE (@@FETCH_STATUS=0)
BEGIN
SELECT @enddate= now FROM OpenProblemtimeDiffTable
SELECT @Startdate= problemnew FROM OpenProblemtimeDiffTable
SET @Date=@Startdate
PRINT @enddate
PRINT @startdate
SELECT @businessday= SUM (businessday) FROM CalendarInformation WHERE date > @startdate AND date <= @Enddate
INSERT INTO ProblemsMoreThan7BusinessDays (businessdays, number)
SELECT @businessday, number
FROM OpenProblemtimeDiffTable
FETCH NEXT FROM CONTACT_CURSOR INTO @date, @businessday
END
CLOSE CONTACT_CURSOR
DEALLOCATE CONTACT_CURSOR
I tried this code using a cursor and I'm close, but I cannot get the date ranges to change for each row.
So if I have a problemnumber with date ranges between 02-07-2018 and 05-20-2019, I would want in my new table the sum of business days from the calendar along with the problem number. So my output would be column number PROB0421 businessdays (with the correct sum). Then the next problem PRB0422 with date ranges of 11-6-18 to 5-20-19. So my output would be PROB0422 with the correct sum of business days.
A:
Rather than doing this in with a cursor, you should approach this in a set based manner. That you already have a calendar table makes this a lot easier. The basic approach is to select from your data table and join into your calendar table to return all the rows in the calendar table that sit within your date range. From here you can then aggregate as you require.
This would look something like the below, though apply it to your situation and adjust as required:
select p.ProblemNow
,p.Now
,sum(c.BusinessDay) as BusinessDays
from dbo.Problems as p
join dbo.calendar as c
on c.CalendarDate between p.ProblemNow and p.Now
and c.BusinessDay = 1
group by p.ProblemNow
,p.Now
| {
"pile_set_name": "StackExchange"
} |
Q:
specify the postion of a text & and an image in a botton [android]
I asked a question about how to create a button with text & an image:
Button contains text & image from the Internet [android]
it was very helpful.But, the problem is :
I want a button that contains the image with the text under it (not as background).
I'm looking for how to add this image together with the text and specify their positions.
Thanks
A:
You can try creating LayerDrawable with two layers: transparent (size of the button) and your image on top and set it as background. Then add text as a caption and use gravity = bottom to position it.
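A rough sketch of that idea (imageDrawable stands for the image already downloaded from the Internet; the names are placeholders):
Drawable[] layers = new Drawable[] {
    new ColorDrawable(Color.TRANSPARENT),  // transparent base layer, sized by the button
    imageDrawable                          // the downloaded image drawn on top
};
button.setBackgroundDrawable(new LayerDrawable(layers));
button.setText("caption");
// put the caption at the bottom, under the image area
button.setGravity(Gravity.BOTTOM | Gravity.CENTER_HORIZONTAL);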
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I make delegate class non-public when delegating methods of an interface in another package?
In my library, I'm generating implementations of client-provided interfaces (annotated with custom directives from the library). I use MethodDelegation to intercept interface methods and forward them to an instance of a delegate class defined in a library package:
package library.pkg;
class ImplBase { }
public class ImplDelegate {
final ImplContext context;
ImplDelegate(ImplContext ctx) {
this.context = ctx;
}
public void impl(
@CustomName String name,
@CustomTags String[] tags,
@AllArguments Object[] args) {
// do things here
}
}
static <T> T implClient(Class<T> clientType) {
MethodDelegation delegation = MethodDelegation
.to(new ImplDelegate(new ImplContext(clientType)))
.filter(not(isDeclaredBy(Object.class)))
.appendParameterBinder(ParameterBinders.CustomTags.binder)
.appendParameterBinder(ParameterBinders.CustomName.binder);
Class<? extends ImplBase> implClass =
new ByteBuddy()
.subclass(ImplBase.class)
.name(String.format("%s$Impl$%d", clientType.getName(), id++))
.implement(clientType)
.method(isDeclaredBy(clientType).and(isVirtual()).and(returns(VOID)))
.intercept(delegation)
.make()
.load(clientType.getClassLoader(), ClassLoadingStrategy.Default.WRAPPER)
.getLoaded();
return clientType.cast(implClass.newInstance());
}
// In client code, get an instance of the interface and use it.
package client.pkg;
interface Client {
void operationA(String p1, long p2);
void operationB(String... p1);
}
Client client = implClient(Client.class);
client.operationA("A", 1);
This works, but it exposes ImplDelegate as a public type from the library; I'd rather have it stay package-private. One way of doing this would be to generate a public subclass of ImplDelegate in the library package at runtime that proxies all package-private methods with public bridge methods and use that as the delegate. I've looked at TypeProxy but I'm not familiar enough with ByteBuddy yet to see if the auxiliary type mechanism is a good fit for this.
Is there a way to generate runtime proxies that implement bridge methods in a way so I can hide delegate implementations?
A:
The delegate type needs to be visible to the class that is invoking it. You have only two possibilities:
Create a type in the same package as the interceptor. Make sure you are injecting the generated class in the interceptor's class loader, a package-private type is only visible to classes of the same package in the same class loader. This way, you can however only implement public interfaces.
At runtime, subclass your interceptor and make sure all interceptor methods are public. Byte Buddy, by default, generates a public subclass:
Object delegate = new ByteBuddy()
.subclass(ImplDelegate.class)
.make()
.load(ImplDelegate.class.getClassLoader())
.getLoaded()
.newInstance();
The above type will be public such that you can now delegate to this instance, even if ImplDelegate is package-private. Note however, that this only affects compile-time visibility, at runtime, the subclass of ImplDelegate is visible to any type. (The constructor does however remain package-private, even for the subclass.)
| {
"pile_set_name": "StackExchange"
} |