qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
1,462,272 | $x^2-(2+i)x+(-1+7i)=0$
I tried to solve it and I got stuck here:
$x=(2+i)\pm\sqrt{\frac{7-24i}{2}}$ | 2015/10/03 | [
"https://math.stackexchange.com/questions/1462272",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/276466/"
] | Firstly, you correctly found the discriminant:
$$D = B^2 - 4AC = [-(2+i)]^2-4(-1+7i) = 7 - 24i.$$
Then:
$$x\_{1,2} = \frac{-B\pm \sqrt{D}}{2A}=\frac{2+i\pm\sqrt{7-24i}}{2}.\tag{1}$$
But:
$$\sqrt{7-24i}=\pm(4-3i).$$ Why?
Let $\sqrt{7-24i} = z\implies z^2 = 7-24i$.
If we let $z = a+bi \implies a^2-b^2 +2ab i = 7-24i $. Thus:
$$\left\{
\begin{array}{l}
a^2 - b^2 = 7\\
ab = -12
\end{array}
\right.
$$
Solving the above system in the reals, we get two pairs of solutions: $(a,b) = (4,-3)$ and $(a,b) = (-4,3)$. Both $z\_1 = 4-3i$ and $z\_2 = -4+3i$ satisfy the equation $z^2 = 7-24i$. No matter the choice we make for $\sqrt{7-24i}$ (either $4-3i$ or $-4+3i$), the solutions given by the quadratic formula will be the same, due to the "$\pm$ sign" in the numerator.
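A quick numeric sanity check of this (a sketch in Python; the variable names are mine, not from the answer):

```python
# The two candidate square roots of 7 - 24i found above.
z1 = 4 - 3j
z2 = -4 + 3j
assert z1 ** 2 == 7 - 24j
assert z2 ** 2 == 7 - 24j

# Either choice gives the same pair of solutions to
# x^2 - (2+i)x + (-1+7i) = 0, thanks to the +/- in the formula.
b, c = -(2 + 1j), -1 + 7j
roots = {(-b + z1) / 2, (-b - z1) / 2}
assert roots == {(-b + z2) / 2, (-b - z2) / 2}
assert all(abs(x**2 + b*x + c) < 1e-12 for x in roots)
```

The two roots come out as $3-i$ and $-1+2i$, matching the factor-method answer below.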
Apply this to $(1)$ and you will get the result. | >
> By using factor method:
>
>
>
$$x^2-(2+i)x+(-1+7i)=0$$
* Rewrite the left hand side of the equation.
$$x^2-(2+i)x+(-1+7i)=(-1+7i)+(-2-i)x+x^{2}$$
* Factor the left hand side.
$$(( -3+i)+x)((1-2i)+x)=0$$
* Solve each factor in the product separately.
$(-3+i)+x=0$ or $(1-2i)+x=0$
$$x=3-i \text{ or } x=-1+2i $$
>
> Or we can solve the quadratic equation by completing the square
>
>
>
Indeed,
$$x^2-(2+i)x+(-1+7i)=0$$
Subtract $-1+7i$ from both sides:
$$x^2-(2+i)x=1-7i$$
Take one half of the coefficient of $x$, square it, and add it to both sides. Here $\left(\dfrac{-(2+i)}{2}\right)^{2}=\dfrac{3+4i}{4}=\dfrac{3}{4}+i$, so:
Add $\dfrac{3}{4}+i$ to both sides:
$$(\dfrac{3}{4}+i)+(-2-i)x+x^{2}=\dfrac{7}{4}-6i$$
* Factor the left hand side.
Write the left hand side as a square:
$$\left(x+(-1-\dfrac{i}{2})\right)^{2}=\dfrac{7}{4}-6i$$
* Eliminate the exponent on the left hand side.
Take the square root of both sides:
$$x+(-1-\dfrac{i}{2})=\sqrt{\dfrac{7}{4}-6i} \text{ Or } x+(-1-\dfrac{i}{2})=-\sqrt{\dfrac{7}{4}-6i}$$
add $1+\dfrac{i}{2}$ to both sides:
$$x=(1+\dfrac{i}{2})+\sqrt{\dfrac{7}{4}-6i} \text{ Or } x=(1+\dfrac{i}{2})-\sqrt{\dfrac{7}{4}-6i}$$
$$x=(1+\dfrac{i}{2})+\sqrt{4-6i-\dfrac{9}{4}} \text{ Or } x=(1+\dfrac{i}{2})-\sqrt{4-6i-\dfrac{9}{4}}$$
$$x=(1+\dfrac{i}{2})+\sqrt{\dfrac{(4-3i)^{2}}{4}} \text{ Or } x=(1+\dfrac{i}{2})-\sqrt{\dfrac{(4-3i)^{2}}{4}}$$
$$x=(1+\dfrac{i}{2})+\dfrac{(4-3i)}{2} \text{ Or } x=(1+\dfrac{i}{2})-\dfrac{(4-3i)}{2}$$ |
1,462,272 | $x^2-(2+i)x+(-1+7i)=0$
I tried to solve it and I got stuck here:
$x=(2+i)\pm\sqrt{\frac{7-24i}{2}}$ | 2015/10/03 | [
"https://math.stackexchange.com/questions/1462272",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/276466/"
] | Firstly, you correctly found the discriminant:
$$D = B^2 - 4AC = [-(2+i)]^2-4(-1+7i) = 7 - 24i.$$
Then:
$$x\_{1,2} = \frac{-B\pm \sqrt{D}}{2A}=\frac{2+i\pm\sqrt{7-24i}}{2}.\tag{1}$$
But:
$$\sqrt{7-24i}=\pm(4-3i).$$ Why?
Let $\sqrt{7-24i} = z\implies z^2 = 7-24i$.
If we let $z = a+bi \implies a^2-b^2 +2ab i = 7-24i $. Thus:
$$\left\{
\begin{array}{l}
a^2 - b^2 = 7\\
ab = -12
\end{array}
\right.
$$
Solving the above system in the reals, we get two pairs of solutions: $(a,b) = (4,-3)$ and $(a,b) = (-4,3)$. Both $z\_1 = 4-3i$ and $z\_2 = -4+3i$ satisfy the equation $z^2 = 7-24i$. No matter the choice we make for $\sqrt{7-24i}$ (either $4-3i$ or $-4+3i$), the solutions given by the quadratic formula will be the same, due to the "$\pm$ sign" in the numerator.
Apply this to $(1)$ and you will get the result. | Some other answers explained how to get square roots of complex numbers, but it's also worth mentioning that there is an explicit formula you can get by following such methods in generality (leaving the variables in). The formula obtained is that the square root of $a + bi$ with nonnegative real part is given by
$$\sqrt{a + bi} = \sqrt{{a + \sqrt{a^2 + b^2} \over 2}} \pm \sqrt{{-a + \sqrt{a^2 + b^2} \over 2}}\,\,i$$
You choose the $\pm$ to ensure your square root is in the correct quadrant. So in your case
$$\sqrt{7 - 24i} = \sqrt{{7 + \sqrt{625} \over 2}} \pm \sqrt{{-7 + \sqrt{625} \over 2}}\,i$$
$$= 4 \pm 3i$$
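The explicit formula is easy to turn into code; a small sketch in Python (the helper name `complex_sqrt` is mine, not from the answer):

```python
import math

def complex_sqrt(a, b):
    """Square root of a + bi with nonnegative real part,
    via the explicit formula above."""
    r = math.hypot(a, b)            # sqrt(a^2 + b^2)
    re = math.sqrt((a + r) / 2)
    im = math.sqrt((-a + r) / 2)
    # Pick the sign of the imaginary part so the result lands in
    # the correct quadrant: (re + im*i)^2 has imaginary part
    # 2*re*im = b, so sign(im) must match sign(b).
    if b < 0:
        im = -im
    return complex(re, im)

assert complex_sqrt(7, -24) == 4 - 3j
```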
Based on $7 - 24i$'s position in the fourth quadrant, you'd choose the $-$ in the $\pm$, so you'd have
$$\sqrt{7 - 24i} = 4 - 3i$$
This is the square root with positive real part. $-4 + 3i$ is the square root with negative real part; you just multiply the other square root by $-1$. Due to the $\pm$ in the quadratic formula, it doesn't matter which square root you use when solving quadratic equations. |
4,889,998 | I have the following problem with Excel. I want to increase a variable by one without using a function, i.e. without writing an "=" before my expression. Example:
B1.c\_O2\_L\_y.Value(i)
B1.c\_O2\_L\_y.Value(1)
B1.c\_O2\_L\_y.Value(2)
B1.c\_O2\_L\_y.Value(3)
B1.c\_O2\_L\_y.Value(4)
B1.c\_O2\_L\_y.Value(5)
B1.c\_O2\_L\_y.Value(6)
B1.c\_O2\_L\_y.Value(7)
B1.c\_O2\_L\_y.Value(8)
B1.c\_O2\_L\_y.Value(9)
B1.c\_O2\_L\_y.Value(10)
B1.c\_O2\_L\_y.Value(11)
B1.c\_O2\_L\_y.Value(12)
.......
I must do that for many expressions and for i > 500, so I can't do it by hand. I would be thankful for any advice. | 2011/02/03 | [
"https://Stackoverflow.com/questions/4889998",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/483859/"
] | How do you want to combine `dat1` and `dat2`? By rows or columns? I'd take a look at the help pages for `rbind()` (row bind), `cbind()` (column bind), or `c()`, which combines arguments to form a vector. | Let me start with a comment.
In order to create a sequence of numbers one can use the following syntax:
```
x <- seq(from=, to=, by=)
```
A shorthand for, e.g., `x <- seq(from=1, to=10, by=1)` is simply `1:10`. So, your notation is a little bit weird...
On the other hand, you can combine two or more vectors using the `c()` function. Let us say, for example, that `a <- c(1, 2)` and `b <- c(3, 4)`. Then `c <- c(a, b)` is the vector `(1, 2, 3, 4)`.
There exist similar functions to combine data sets: `rbind()` and `cbind()`. |
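Stepping back to the original question (producing many numbered expressions as plain text, with no "=" prefix), a short script is the usual way out; a sketch in Python, with the expression template taken verbatim from the question:

```python
# Generate the repetitive expressions as plain text lines,
# ready to paste into a column of the spreadsheet.
template = "B1.c_O2_L_y.Value({})"
lines = [template.format(i) for i in range(1, 501)]

print(lines[0])    # B1.c_O2_L_y.Value(1)
print(lines[-1])   # B1.c_O2_L_y.Value(500)
```

Writing the lines to a file and pasting them into the sheet avoids doing anything by hand.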
67,633,031 | I currently have a table with a quantity in it.
| ID | Code | Quantity |
| --- | --- | --- |
| 1 | A | 1 |
| 2 | B | 3 |
| 3 | C | 2 |
| 4 | D | 1 |
Is there any way to get this table?
| ID | Code | Quantity |
| --- | --- | --- |
| 1 | A | 1 |
| 2 | B | 1 |
| 2 | B | 1 |
| 2 | B | 1 |
| 3 | C | 1 |
| 3 | C | 1 |
| 4 | D | 1 |
I need to break out the quantity and have that many number of rows.
Thanks!!!! | 2021/05/21 | [
"https://Stackoverflow.com/questions/67633031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15989969/"
] | **Updated**
Now we have stored the separated, collapsed values into a new column:
```
library(dplyr)
library(tidyr)
df %>%
group_by(ID) %>%
uncount(Quantity, .remove = FALSE) %>%
mutate(NewQ = 1)
# A tibble: 7 x 4
# Groups: ID [4]
ID Code Quantity NewQ
<int> <chr> <int> <dbl>
1 1 A 1 1
2 2 B 3 1
3 2 B 3 1
4 2 B 3 1
5 3 C 2 1
6 3 C 2 1
7 4 D 1 1
```
**Updated**
In case we opt not to replace the existing `Quantity` column with the collapsed values.
```
df %>%
group_by(ID) %>%
mutate(NewQ = ifelse(Quantity != 1, paste(rep(1, Quantity), collapse = ", "),
as.character(Quantity))) %>%
separate_rows(NewQ) %>%
mutate(NewQ = as.numeric(NewQ))
# A tibble: 7 x 4
# Groups: ID [4]
ID Code Quantity NewQ
<int> <chr> <int> <dbl>
1 1 A 1 1
2 2 B 3 1
3 2 B 3 1
4 2 B 3 1
5 3 C 2 1
6 3 C 2 1
7 4 D 1 1
``` | We could use `slice`
```
library(dplyr)
df %>%
group_by(ID) %>%
slice(rep(1:n(), each = Quantity)) %>%
mutate(Quantity= rep(1))
```
Output:
```
ID Code Quantity
<dbl> <chr> <dbl>
1 1 A 1
2 2 B 1
3 2 B 1
4 2 B 1
5 3 C 1
6 3 C 1
7 4 D 1
``` |
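For readers using pandas rather than dplyr, the same row expansion can be sketched with `Index.repeat` (a rough analogue of the answers above, not part of them):

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 3, 4],
    "Code": ["A", "B", "C", "D"],
    "Quantity": [1, 3, 2, 1],
})

# Repeat each row Quantity times, then reset Quantity to 1.
out = (
    df.loc[df.index.repeat(df["Quantity"])]
      .assign(Quantity=1)
      .reset_index(drop=True)
)
assert list(out["Code"]) == ["A", "B", "B", "B", "C", "C", "D"]
```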
67,633,031 | I currently have a table with a quantity in it.
| ID | Code | Quantity |
| --- | --- | --- |
| 1 | A | 1 |
| 2 | B | 3 |
| 3 | C | 2 |
| 4 | D | 1 |
Is there any way to get this table?
| ID | Code | Quantity |
| --- | --- | --- |
| 1 | A | 1 |
| 2 | B | 1 |
| 2 | B | 1 |
| 2 | B | 1 |
| 3 | C | 1 |
| 3 | C | 1 |
| 4 | D | 1 |
I need to break out the quantity and have that many number of rows.
Thanks!!!! | 2021/05/21 | [
"https://Stackoverflow.com/questions/67633031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15989969/"
] | **Updated**
Now we have stored the separated, collapsed values into a new column:
```
library(dplyr)
library(tidyr)
df %>%
group_by(ID) %>%
uncount(Quantity, .remove = FALSE) %>%
mutate(NewQ = 1)
# A tibble: 7 x 4
# Groups: ID [4]
ID Code Quantity NewQ
<int> <chr> <int> <dbl>
1 1 A 1 1
2 2 B 3 1
3 2 B 3 1
4 2 B 3 1
5 3 C 2 1
6 3 C 2 1
7 4 D 1 1
```
**Updated**
In case we opt not to replace the existing `Quantity` column with the collapsed values.
```
df %>%
group_by(ID) %>%
mutate(NewQ = ifelse(Quantity != 1, paste(rep(1, Quantity), collapse = ", "),
as.character(Quantity))) %>%
separate_rows(NewQ) %>%
mutate(NewQ = as.numeric(NewQ))
# A tibble: 7 x 4
# Groups: ID [4]
ID Code Quantity NewQ
<int> <chr> <int> <dbl>
1 1 A 1 1
2 2 B 3 1
3 2 B 3 1
4 2 B 3 1
5 3 C 2 1
6 3 C 2 1
7 4 D 1 1
``` | A base R option using `rep`
```
transform(
`row.names<-`(df[rep(1:nrow(df), df$Quantity), ], NULL),
Quantity = 1
)
```
gives
```
ID Code Quantity
1 1 A 1
2 2 B 1
3 2 B 1
4 2 B 1
5 3 C 1
6 3 C 1
7 4 D 1
``` |
46,013 | I want to set the frame title as follows:
* When the current buffer is visiting a file, show the full path name and the Emacs version.
* When the current buffer has no file, then show the buffer name and the Emacs version.
In my `init.el`, I put
```
(setq-default frame-title-format
(concat (if (buffer-file-name) "%f" "%b") " - " (substring (emacs-version) 0 15)))
```
But here is the result: [screenshot](https://i.stack.imgur.com/K8Vim.png)
Why doesn't my code print the file name with the full path? | 2018/11/15 | [
"https://emacs.stackexchange.com/questions/46013",
"https://emacs.stackexchange.com",
"https://emacs.stackexchange.com/users/16006/"
] | Because you're setting `frame-title-format` to `"%b - GNU Emacs 26.1 "`.
You can try the following instead
```
(setq frame-title-format
`((buffer-file-name "%f" "%b")
,(format " - GNU Emacs %s" emacs-version)))
```
The following does the same but it probably does some unneeded work (that is, computing the version string) repeatedly
```
(setq frame-title-format
(list '(buffer-file-name "%f" "%b")
'(:eval (format " - GNU Emacs %s" emacs-version))))
``` | Emacs is evaluating your expression at the time when you setq `frame-title-format`, whereas you want it to be evaluated dynamically. Try wrapping your code in `:eval` as explained at [Mode-Line-Data](https://www.gnu.org/software/emacs/manual/html_node/elisp/Mode-Line-Data.html#Mode-Line-Data) |
34,410,662 | I have a service tax calculation in my page.For that i have to get the current service tax.
Service Tax table is as follows
```
Date Percentage
2015-10-01 00:00:00.000 14
2015-11-15 06:12:31.687 14.5
```
Say, if the current date is `less than 2015-11-15` I will get the value of `percentage` as `14`, and if the current date is `equal to or greater than 2015-11-15` I should get the value of `percentage` as `14.5`.
How can I implement this using LINQ? | 2015/12/22 | [
"https://Stackoverflow.com/questions/34410662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/958396/"
] | You need to get all taxes whose date is not later than today, sort them in descending order, and fetch only the first:
```
Taxes
.Where(t => t.Date < DateTime.Now)
.OrderByDescending(t => t.Date)
.First()
``` | If you need to compare a date, it's better to do so on the database to prevent fetching useless data. To do this you can make use of `System.Data.Entity` namespace to access some functions:
```
db.Taxes.Where(t => DbFunctions.TruncateTime(t.Date)
< DbFunctions.TruncateTime(dateParameter)).FirstOrDefault();
```
`System.Data.Entity.DbFunctions.TruncateTime(Datetime)` trims the time part of a `DateTime` value on the database. |
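The lookup itself ("latest rate whose effective date is not after the given date") is easy to mirror outside LINQ; a sketch in Python using the standard `bisect` module, with the dates and rates taken from the question:

```python
from bisect import bisect_right
from datetime import datetime

# Effective dates must be sorted ascending for bisect to work.
changes = [
    (datetime(2015, 10, 1), 14.0),
    (datetime(2015, 11, 15, 6, 12, 31), 14.5),
]
dates = [d for d, _ in changes]

def rate_on(when):
    """Latest rate whose effective date is <= `when`."""
    i = bisect_right(dates, when)
    if i == 0:
        raise ValueError("no rate in effect yet")
    return changes[i - 1][1]

assert rate_on(datetime(2015, 11, 14)) == 14.0
assert rate_on(datetime(2015, 12, 22)) == 14.5
```

Like the LINQ `OrderByDescending(...).First()` version, this picks the most recent change that is already in effect.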
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | Have a look at this [excellent blog post by ShreevatsaR](http://shreevatsa.wordpress.com/2008/03/30/zshbash-startup-files-loading-order-bashrc-zshrc-etc/). Here's an extract, but go to the blog post, it includes an explanation for terms like "login shell", a flow chart, and a similar table for Zsh.
>
> For Bash, they work as follows. Read down the appropriate column. Executes A, then B, then C, etc. The B1, B2, B3 means it executes only the first of those files found.
>
>
>
```
+----------------+-----------+-----------+------+
| |Interactive|Interactive|Script|
| |login |non-login | |
+----------------+-----------+-----------+------+
|/etc/profile | A | | |
+----------------+-----------+-----------+------+
|/etc/bash.bashrc| | A | |
+----------------+-----------+-----------+------+
|~/.bashrc | | B | |
+----------------+-----------+-----------+------+
|~/.bash_profile | B1 | | |
+----------------+-----------+-----------+------+
|~/.bash_login | B2 | | |
+----------------+-----------+-----------+------+
|~/.profile | B3 | | |
+----------------+-----------+-----------+------+
|BASH_ENV | | | A |
+----------------+-----------+-----------+------+
| | | | |
+----------------+-----------+-----------+------+
| | | | |
+----------------+-----------+-----------+------+
|~/.bash_logout | C | | |
+----------------+-----------+-----------+------+
``` | **A BETTER COMMENT FOR THE HEAD OF /ETC/PROFILE**
Building on Flimm's great answer above, I inserted this new comment at the head of my Debian `/etc/profile` *(you might need to adjust it for your distro)*:
```
# For BASH: Read down the appropriate column. Executes A, then B, then C, etc.
# The B1, B2, B3 means it executes only the first of those files found. (A)
# or (B2) means it is normally sourced by (read by and included in) the
# primary file, in this case A or B2.
#
# +---------------------------------+-------+-----+------------+
# | | Interactive | non-Inter. |
# +---------------------------------+-------+-----+------------+
# | | login | non-login |
# +---------------------------------+-------+-----+------------+
# | | | | |
# | ALL USERS: | | | |
# +---------------------------------+-------+-----+------------+
# |BASH_ENV | | | A | not interactive or login
# | | | | |
# +---------------------------------+-------+-----+------------+
# |/etc/profile | A | | | set PATH & PS1, & call following:
# +---------------------------------+-------+-----+------------+
# |/etc/bash.bashrc | (A) | A | | Better PS1 + command-not-found
# +---------------------------------+-------+-----+------------+
# |/etc/profile.d/bash_completion.sh| (A) | | |
# +---------------------------------+-------+-----+------------+
# |/etc/profile.d/vte-2.91.sh | (A) | | | Virt. Terminal Emulator
# |/etc/profile.d/vte.sh | (A) | | |
# +---------------------------------+-------+-----+------------+
# | | | | |
# | A SPECIFIC USER: | | | |
# +---------------------------------+-------+-----+------------+
# |~/.bash_profile (bash only) | B1 | | | (doesn't currently exist)
# +---------------------------------+-------+-----+------------+
# |~/.bash_login (bash only) | B2 | | | (didn't exist) **
# +---------------------------------+-------+-----+------------+
# |~/.profile (all shells) | B3 | | | (doesn't currently exist)
# +---------------------------------+-------+-----+------------+
# |~/.bashrc (bash only) | (B2) | B | | colorizes bash: su=red, other_users=green
# +---------------------------------+-------+-----+------------+
# | | | | |
# +---------------------------------+-------+-----+------------+
# |~/.bash_logout | C | | |
# +---------------------------------+-------+-----+------------+
#
# ** (sources ~/.bashrc to colorize login, for when booting into non-gui)
```
And this note at the head of each of the other setup files to refer to it:
```
# TIP: SEE TABLE in /etc/profile of BASH SETUP FILES AND THEIR LOAD SEQUENCE
```
Worth noting I think is that Debian's `/etc/profile` by default sources (includes) `/etc/bash.bashrc`, (that's when `/etc/bash.bashrc` exists). So login scripts read both `/etc` files, while non-login reads only bash.bashrc.
Also of note is that `/etc/bash.bashrc` is set to do nothing when it's not run interactively. So these two files are only for interactive scripts. |
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | From this [short article](http://joshstaiger.org/archives/2005/07/bash_profile_vs.html)
>
> According to the bash man page,
> .bash\_profile is executed for login
> shells, while .bashrc is executed for
> interactive non-login shells.
>
>
> **What is a login or non-login shell?**
>
>
> When you login (eg: type username and
> password) via console, either
> physically sitting at the machine when
> booting, or remotely via ssh:
> .bash\_profile is executed to configure
> things before the initial command
> prompt.
>
>
> But, if you've already logged into
> your machine and open a new terminal
> window (xterm) inside Gnome or KDE,
> then .bashrc is executed before the
> window command prompt. .bashrc is also
> run when you start a new bash instance
> by typing /bin/bash in a terminal.
>
>
> | The configuration logic of bash itself is not crazy complicated and is explained in other answers on this page, on serverfault and in many blogs. The problem however is *what the Linux distributions make of bash*, I mean the complex and various ways they configure bash by default. <http://mywiki.wooledge.org/DotFiles> mentions some of these quirks briefly. Here's one sample trace on Fedora 29; it shows which files source which other file(s) and in which order for a very simple scenario: remotely connecting with ssh and then starting another subshell:
```
ssh fedora29
└─ -bash                                  # login shell
   ├─ /etc/profile
   │   ├─ /etc/profile.d/*.sh
   │   ├─ /etc/profile.d/sh.local
   │   └─ /etc/bashrc
   ├─ ~/.bash_profile
   │   ├─ ~/.bashrc
   │   └─ /etc/bashrc
   │
   │
   └─ $ bash                              # non-login shell
      └─ ~/.bashrc
         └─ /etc/bashrc
            └─ /etc/profile.d/*.sh
```
Fedora's most complex logic is in `/etc/bashrc`. As seen above `/etc/bashrc` is a file bash itself doesn't know about, I mean not directly. Fedora's `/etc/bashrc` tests whether:
* it's being sourced by a login shell,
* it's being sourced by an interactive shell,
* it has already been sourced
... and then does completely different things depending on those.
If you think you can remember the graph above, then too bad, because it's not nearly enough: this graph merely describes one scenario; slightly different things happen when running non-interactive scripts or starting a graphical session. I've omitted `~/.profile`. I've omitted `bash_completion` scripts. For backward compatibility reasons, invoking bash as `/bin/sh` instead of `/bin/bash` changes its behaviour. What about zsh and other shells? And of course different Linux distributions do things differently; for instance, *Debian and Ubuntu come with a non-standard version of bash* that has Debian-specific customizations. It notably looks for an unusual file: `/etc/bash.bashrc`. Even if you stick to a single Linux distribution, it probably evolves over time. Wait: we haven't even touched macOS, FreeBSD,... Finally, let's have a thought for users stuck with the even more creative ways their admins have configured the system they have to use.
As the never-ending stream of discussions on this topic demonstrates, it's a lost cause. As long as you just want to add new values, some "trial and error" tends to be enough. The real fun begins when you want to *modify* in one (user) file something already defined in another (in /etc). Then be prepared to spend some time engineering a solution that will never be portable.
For a last bit of fun here's the "source graph" for the same, simple scenario on Clear Linux as of June 2019:
```
ssh clearlinux
└─ -bash                                  # login shell
   ├─ /usr/share/defaults/etc/profile
   │   ├─ /usr/share/defaults/etc/profile.d/*
   │   ├─ /etc/profile.d/*
   │   └─ /etc/profile
   ├─ ~/.bash_profile
   │
   │
   └─ $ bash                              # non-login shell
      ├─ /usr/share/defaults/etc/bash.bashrc
      │   ├─ /usr/share/defaults/etc/profile
      │   │   ├─ /usr/share/defaults/etc/profile.d/*
      │   │   ├─ /etc/profile.d/*
      │   │   └─ /etc/profile
      │   └─ /etc/profile
      └─ ~/.bashrc
``` |
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | Traditionally, when you log into a Unix system, the system would start one program for you. That program is a shell, i.e., a program designed to start other programs. It's a command line shell: you start another program by typing its name. The default shell, a Bourne shell, reads commands from `~/.profile` when it is invoked as the login shell.
Bash is a Bourne-like shell. It reads commands from `~/.bash_profile` when it is invoked as the login shell, and if that file doesn't exist¹, it tries reading `~/.profile` instead.
You can invoke a shell directly at any time, for example by launching a terminal emulator inside a GUI environment. If the shell is not a login shell, it doesn't read `~/.profile`. When you start bash as an interactive shell (i.e., not to run a script), it reads `~/.bashrc` (except when invoked as a login shell: then it only reads `~/.bash_profile` or `~/.profile`).
Therefore:
* `~/.profile` is the place to put stuff that applies to your whole session, such as programs that you want to start when you log in (but not graphical programs, they go into a different file), and environment variable definitions.
* `~/.bashrc` is the place to put stuff that applies only to bash itself, such as alias and function definitions, shell options, and prompt settings. (You could also put key bindings there, but for bash they normally go into `~/.inputrc`.)
* `~/.bash_profile` can be used instead of `~/.profile`, but it is read by bash only, not by any other shell. (This is mostly a concern if you want your initialization files to work on multiple machines and your login shell isn't bash on all of them.) This is a logical place to include `~/.bashrc` if the shell is interactive. I recommend the following contents in `~/.bash_profile`:
```
if [ -r ~/.profile ]; then . ~/.profile; fi
case "$-" in *i*) if [ -r ~/.bashrc ]; then . ~/.bashrc; fi;; esac
```
On modern unices, there's an added complication related to `~/.profile`. If you log in in a graphical environment (that is, if the program where you type your password is running in graphics mode), you don't automatically get a login shell that reads `~/.profile`. Depending on the graphical login program, on the window manager or desktop environment you run afterwards, and on how your distribution configured these programs, your `~/.profile` may or may not be read. If it's not, there's usually another place where you can define environment variables and programs to launch when you log in, but there is unfortunately no standard location.
Note that you may see here and there recommendations to either put environment variable definitions in `~/.bashrc` or always launch login shells in terminals. Both are bad ideas. The most common problem with either of these ideas is that your environment variables will only be set in programs launched via the terminal, not in programs started directly with an icon or menu or keyboard shortcut.
¹ For completeness, by request: if `.bash_profile` doesn't exist, bash also tries `.bash_login` before falling back to `.profile`. Feel free to forget it exists. | From this [short article](http://joshstaiger.org/archives/2005/07/bash_profile_vs.html)
>
> According to the bash man page,
> .bash\_profile is executed for login
> shells, while .bashrc is executed for
> interactive non-login shells.
>
>
> **What is a login or non-login shell?**
>
>
> When you login (eg: type username and
> password) via console, either
> physically sitting at the machine when
> booting, or remotely via ssh:
> .bash\_profile is executed to configure
> things before the initial command
> prompt.
>
>
> But, if you've already logged into
> your machine and open a new terminal
> window (xterm) inside Gnome or KDE,
> then .bashrc is executed before the
> window command prompt. .bashrc is also
> run when you start a new bash instance
> by typing /bin/bash in a terminal.
>
>
> |
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | Traditionally, when you log into a Unix system, the system would start one program for you. That program is a shell, i.e., a program designed to start other programs. It's a command line shell: you start another program by typing its name. The default shell, a Bourne shell, reads commands from `~/.profile` when it is invoked as the login shell.
Bash is a Bourne-like shell. It reads commands from `~/.bash_profile` when it is invoked as the login shell, and if that file doesn't exist¹, it tries reading `~/.profile` instead.
You can invoke a shell directly at any time, for example by launching a terminal emulator inside a GUI environment. If the shell is not a login shell, it doesn't read `~/.profile`. When you start bash as an interactive shell (i.e., not to run a script), it reads `~/.bashrc` (except when invoked as a login shell: then it only reads `~/.bash_profile` or `~/.profile`).
Therefore:
* `~/.profile` is the place to put stuff that applies to your whole session, such as programs that you want to start when you log in (but not graphical programs, they go into a different file), and environment variable definitions.
* `~/.bashrc` is the place to put stuff that applies only to bash itself, such as alias and function definitions, shell options, and prompt settings. (You could also put key bindings there, but for bash they normally go into `~/.inputrc`.)
* `~/.bash_profile` can be used instead of `~/.profile`, but it is read by bash only, not by any other shell. (This is mostly a concern if you want your initialization files to work on multiple machines and your login shell isn't bash on all of them.) This is a logical place to include `~/.bashrc` if the shell is interactive. I recommend the following contents in `~/.bash_profile`:
```
if [ -r ~/.profile ]; then . ~/.profile; fi
case "$-" in *i*) if [ -r ~/.bashrc ]; then . ~/.bashrc; fi;; esac
```
On modern unices, there's an added complication related to `~/.profile`. If you log in in a graphical environment (that is, if the program where you type your password is running in graphics mode), you don't automatically get a login shell that reads `~/.profile`. Depending on the graphical login program, on the window manager or desktop environment you run afterwards, and on how your distribution configured these programs, your `~/.profile` may or may not be read. If it's not, there's usually another place where you can define environment variables and programs to launch when you log in, but there is unfortunately no standard location.
Note that you may see here and there recommendations to either put environment variable definitions in `~/.bashrc` or always launch login shells in terminals. Both are bad ideas. The most common problem with either of these ideas is that your environment variables will only be set in programs launched via the terminal, not in programs started directly with an icon or menu or keyboard shortcut.
¹ For completeness, by request: if `.bash_profile` doesn't exist, bash also tries `.bash_login` before falling back to `.profile`. Feel free to forget it exists. | Back in the old days, when pseudo tty's weren't pseudo and actually, well, typed, and UNIXes were accessed by modems so slow you could see each letter being printed to your screen, efficiency was paramount. To help efficiency somewhat, you had a concept of a main login window and whatever other windows you used to actually work. In your main window, you'd like notifications of any new mail, and possibly run some other programs in the background.
To support this, shells sourced the file `.profile` specifically in 'login shells'. This would do the special once-per-session setup. Bash extended this somewhat to look at .bash\_profile before .profile; this way you could put bash-only things in there (so they don't screw up the Bourne shell, etc., which also looked at .profile). Other, non-login shells would just source the rc file, .bashrc (or .kshrc, etc.).
This is a bit of an anachronism now. You don't log into a main shell as much as you log into a gui window manager. There is no main window any different than any other window.
My suggestion - don't worry about this difference, it's based on an older style of using unix. Eliminate the difference in your files. The entire contents of .bash\_profile should be:
`[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"`
And put everything you actually want to set in .bashrc
Remember that .bashrc is sourced for all shells, interactive and non-interactive. You can short circuit the sourcing for non-interactive shells by putting this code near the top of .bashrc:
`[[ $- != *i* ]] && return` |
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | Traditionally, when you log into a Unix system, the system would start one program for you. That program is a shell, i.e., a program designed to start other programs. It's a command line shell: you start another program by typing its name. The default shell, a Bourne shell, reads commands from `~/.profile` when it is invoked as the login shell.
Bash is a Bourne-like shell. It reads commands from `~/.bash_profile` when it is invoked as the login shell, and if that file doesn't exist¹, it tries reading `~/.profile` instead.
You can invoke a shell directly at any time, for example by launching a terminal emulator inside a GUI environment. If the shell is not a login shell, it doesn't read `~/.profile`. When you start bash as an interactive shell (i.e., not to run a script), it reads `~/.bashrc` (except when invoked as a login shell: then it only reads `~/.bash_profile` or `~/.profile`).
Therefore:
* `~/.profile` is the place to put stuff that applies to your whole session, such as programs that you want to start when you log in (but not graphical programs, they go into a different file), and environment variable definitions.
* `~/.bashrc` is the place to put stuff that applies only to bash itself, such as alias and function definitions, shell options, and prompt settings. (You could also put key bindings there, but for bash they normally go into `~/.inputrc`.)
* `~/.bash_profile` can be used instead of `~/.profile`, but it is read by bash only, not by any other shell. (This is mostly a concern if you want your initialization files to work on multiple machines and your login shell isn't bash on all of them.) This is a logical place to include `~/.bashrc` if the shell is interactive. I recommend the following contents in `~/.bash_profile`:
```
if [ -r ~/.profile ]; then . ~/.profile; fi
case "$-" in *i*) if [ -r ~/.bashrc ]; then . ~/.bashrc; fi;; esac
```
On modern unices, there's an added complication related to `~/.profile`. If you log in in a graphical environment (that is, if the program where you type your password is running in graphics mode), you don't automatically get a login shell that reads `~/.profile`. Depending on the graphical login program, on the window manager or desktop environment you run afterwards, and on how your distribution configured these programs, your `~/.profile` may or may not be read. If it's not, there's usually another place where you can define environment variables and programs to launch when you log in, but there is unfortunately no standard location.
Note that you may see here and there recommendations to either put environment variable definitions in `~/.bashrc` or always launch login shells in terminals. Both are bad ideas. The most common problem with either of these ideas is that your environment variables will only be set in programs launched via the terminal, not in programs started directly with an icon or menu or keyboard shortcut.
¹ For completeness, by request: if `.bash_profile` doesn't exist, bash also tries `.bash_login` before falling back to `.profile`. Feel free to forget it exists. | Have a look at this [excellent blog post by ShreevatsaR](http://shreevatsa.wordpress.com/2008/03/30/zshbash-startup-files-loading-order-bashrc-zshrc-etc/). Here's an extract, but go to the blog post: it includes an explanation for terms like "login shell", a flow chart, and a similar table for Zsh.
>
> For Bash, they work as follows. Read down the appropriate column. Executes A, then B, then C, etc. The B1, B2, B3 means it executes only the first of those files found.
>
>
>
```
+----------------+-----------+-----------+------+
| |Interactive|Interactive|Script|
| |login |non-login | |
+----------------+-----------+-----------+------+
|/etc/profile | A | | |
+----------------+-----------+-----------+------+
|/etc/bash.bashrc| | A | |
+----------------+-----------+-----------+------+
|~/.bashrc | | B | |
+----------------+-----------+-----------+------+
|~/.bash_profile | B1 | | |
+----------------+-----------+-----------+------+
|~/.bash_login | B2 | | |
+----------------+-----------+-----------+------+
|~/.profile | B3 | | |
+----------------+-----------+-----------+------+
|BASH_ENV | | | A |
+----------------+-----------+-----------+------+
| | | | |
+----------------+-----------+-----------+------+
| | | | |
+----------------+-----------+-----------+------+
|~/.bash_logout | C | | |
+----------------+-----------+-----------+------+
``` |
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | Traditionally, when you log into a Unix system, the system would start one program for you. That program is a shell, i.e., a program designed to start other programs. It's a command line shell: you start another program by typing its name. The default shell, a Bourne shell, reads commands from `~/.profile` when it is invoked as the login shell.
Bash is a Bourne-like shell. It reads commands from `~/.bash_profile` when it is invoked as the login shell, and if that file doesn't exist¹, it tries reading `~/.profile` instead.
You can invoke a shell directly at any time, for example by launching a terminal emulator inside a GUI environment. If the shell is not a login shell, it doesn't read `~/.profile`. When you start bash as an interactive shell (i.e., not to run a script), it reads `~/.bashrc` (except when invoked as a login shell, in which case it only reads `~/.bash_profile` or `~/.profile`).
Therefore:
* `~/.profile` is the place to put stuff that applies to your whole session, such as programs that you want to start when you log in (but not graphical programs, they go into a different file), and environment variable definitions.
* `~/.bashrc` is the place to put stuff that applies only to bash itself, such as alias and function definitions, shell options, and prompt settings. (You could also put key bindings there, but for bash they normally go into `~/.inputrc`.)
* `~/.bash_profile` can be used instead of `~/.profile`, but it is read by bash only, not by any other shell. (This is mostly a concern if you want your initialization files to work on multiple machines and your login shell isn't bash on all of them.) This is a logical place to include `~/.bashrc` if the shell is interactive. I recommend the following contents in `~/.bash_profile`:
```
if [ -r ~/.profile ]; then . ~/.profile; fi
case "$-" in *i*) if [ -r ~/.bashrc ]; then . ~/.bashrc; fi;; esac
```
On modern unices, there's an added complication related to `~/.profile`. If you log in in a graphical environment (that is, if the program where you type your password is running in graphics mode), you don't automatically get a login shell that reads `~/.profile`. Depending on the graphical login program, on the window manager or desktop environment you run afterwards, and on how your distribution configured these programs, your `~/.profile` may or may not be read. If it's not, there's usually another place where you can define environment variables and programs to launch when you log in, but there is unfortunately no standard location.
Note that you may see here and there recommendations to either put environment variable definitions in `~/.bashrc` or always launch login shells in terminals. Both are bad ideas. The most common problem with either of these ideas is that your environment variables will only be set in programs launched via the terminal, not in programs started directly with an icon or menu or keyboard shortcut.
¹ For completeness, by request: if `.bash_profile` doesn't exist, bash also tries `.bash_login` before falling back to `.profile`. Feel free to forget it exists. | The configuration logic of bash itself is not crazy complicated and is explained in other answers on this page, on serverfault, and in many blogs. The problem, however, is *what the Linux distributions make of bash*: the complex and various ways they configure bash by default. <http://mywiki.wooledge.org/DotFiles> mentions some of these quirks briefly. Here's one sample trace on Fedora 29; it shows which files source which other file(s), and in which order, for a very simple scenario: remotely connecting with ssh and then starting another subshell:
```
ssh fedora29
 └─ -bash                    # login shell
     ├── /etc/profile
     │    ├─ /etc/profile.d/*.sh
     │    ├─ /etc/profile.d/sh.local
     │    └─ /etc/bashrc
     ├── ~/.bash_profile
     │    └─ ~/.bashrc
     │         └─ /etc/bashrc
     │
     │
     └─ $ bash               # non-login shell
         └─ ~/.bashrc
             └─ /etc/bashrc
                 └─ /etc/profile.d/*.sh
```
Fedora's most complex logic is in `/etc/bashrc`. As seen above `/etc/bashrc` is a file bash itself doesn't know about, I mean not directly. Fedora's `/etc/bashrc` tests whether:
* it's being sourced by a login shell,
* it's being sourced by an interactive shell,
* it has already been sourced
... and then does completely different things depending on those.
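The guards themselves are short; here is a stripped-down sketch of the same pattern (illustrative only, with made-up variable names — Fedora's real `/etc/bashrc` is more involved):

```shell
# Source-once guard plus interactive/login detection, as a distro
# /etc/bashrc might do it.
if [ -z "${SKETCH_BASHRC_SOURCED-}" ]; then
  SKETCH_BASHRC_SOURCED=1            # guard: don't run twice

  case "$-" in                       # guard: interactive or not?
    *i*) SKETCH_INTERACTIVE=yes ;;
    *)   SKETCH_INTERACTIVE=no ;;
  esac

  if shopt -q login_shell 2>/dev/null; then   # guard: login shell?
    SKETCH_LOGIN=yes                 # (shopt is bash-only)
  else
    SKETCH_LOGIN=no
  fi
fi
```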
If you think you can remember the graph above, too bad: it's not nearly enough. The graph describes just one scenario; slightly different things happen when running non-interactive scripts or starting a graphical session. I've omitted `~/.profile`. I've omitted `bash_completion` scripts. For backward-compatibility reasons, invoking bash as `/bin/sh` instead of `/bin/bash` changes its behaviour. What about zsh and other shells? And of course different Linux distributions do things differently; for instance, *Debian and Ubuntu come with a non-standard version of bash* that has Debian-specific customizations and notably looks for an unusual file: `/etc/bash.bashrc`. Even if you stick to a single Linux distribution, it probably evolves over time. Wait: we haven't even touched macOS, FreeBSD,... Finally, spare a thought for users stuck with the even more creative ways their admins have configured the systems they have to use.
As the never-ending stream of discussions on this topic demonstrates, it's a lost cause. As long as you just want to add new values, some "trial and error" tends to be enough. The real fun begins when you want to *modify* in one (user) file something already defined in another (in /etc). Then be prepared to spend some time engineering a solution that will never be portable.
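If you'd rather measure than memorize: bash's `-x` trace, with `$BASH_SOURCE` embedded in `PS4`, labels every executed line with the file it came from, so you can extract the source list for your own machine (a sketch; `-l` requests a login shell, and the leading `+` repeats with nesting depth, hence the quantifier in the `sed` pattern):

```shell
# Print each startup file a login (non-interactive) bash reads,
# in order, without duplicates.
PS4='+${BASH_SOURCE}: ' bash -lxc true 2>&1 \
  | sed -n 's/^+\{1,\}\(\/[^:]*\): .*/\1/p' \
  | awk '!seen[$0]++'
```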
For a last bit of fun here's the "source graph" for the same, simple scenario on Clear Linux as of June 2019:
```
ssh clearlinux
 └─ -bash                    # login shell
     ├── /usr/share/defaults/etc/profile
     │    ├─ /usr/share/defaults/etc/profile.d/*
     │    ├─ /etc/profile.d/*
     │    └─ /etc/profile
     ├── ~/.bash_profile
     │
     │
     └─ $ bash               # non-login shell
         ├─ /usr/share/defaults/etc/bash.bashrc
         │    ├─ /usr/share/defaults/etc/profile
         │    │    ├─ /usr/share/defaults/etc/profile.d/*
         │    │    ├─ /etc/profile.d/*
         │    │    └─ /etc/profile
         │    └─ /etc/profile
         └─ ~/.bashrc
``` |
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | Traditionally, when you log into a Unix system, the system would start one program for you. That program is a shell, i.e., a program designed to start other programs. It's a command line shell: you start another program by typing its name. The default shell, a Bourne shell, reads commands from `~/.profile` when it is invoked as the login shell.
Bash is a Bourne-like shell. It reads commands from `~/.bash_profile` when it is invoked as the login shell, and if that file doesn't exist¹, it tries reading `~/.profile` instead.
You can invoke a shell directly at any time, for example by launching a terminal emulator inside a GUI environment. If the shell is not a login shell, it doesn't read `~/.profile`. When you start bash as an interactive shell (i.e., not to run a script), it reads `~/.bashrc` (except when invoked as a login shell, in which case it only reads `~/.bash_profile` or `~/.profile`).
Therefore:
* `~/.profile` is the place to put stuff that applies to your whole session, such as programs that you want to start when you log in (but not graphical programs, they go into a different file), and environment variable definitions.
* `~/.bashrc` is the place to put stuff that applies only to bash itself, such as alias and function definitions, shell options, and prompt settings. (You could also put key bindings there, but for bash they normally go into `~/.inputrc`.)
* `~/.bash_profile` can be used instead of `~/.profile`, but it is read by bash only, not by any other shell. (This is mostly a concern if you want your initialization files to work on multiple machines and your login shell isn't bash on all of them.) This is a logical place to include `~/.bashrc` if the shell is interactive. I recommend the following contents in `~/.bash_profile`:
```
if [ -r ~/.profile ]; then . ~/.profile; fi
case "$-" in *i*) if [ -r ~/.bashrc ]; then . ~/.bashrc; fi;; esac
```
On modern unices, there's an added complication related to `~/.profile`. If you log in in a graphical environment (that is, if the program where you type your password is running in graphics mode), you don't automatically get a login shell that reads `~/.profile`. Depending on the graphical login program, on the window manager or desktop environment you run afterwards, and on how your distribution configured these programs, your `~/.profile` may or may not be read. If it's not, there's usually another place where you can define environment variables and programs to launch when you log in, but there is unfortunately no standard location.
Note that you may see here and there recommendations to either put environment variable definitions in `~/.bashrc` or always launch login shells in terminals. Both are bad ideas. The most common problem with either of these ideas is that your environment variables will only be set in programs launched via the terminal, not in programs started directly with an icon or menu or keyboard shortcut.
¹ For completeness, by request: if `.bash_profile` doesn't exist, bash also tries `.bash_login` before falling back to `.profile`. Feel free to forget it exists. | **A BETTER COMMENT FOR THE HEAD OF /ETC/PROFILE**
Building on Flimm's great answer above, I inserted this new comment at the head of my Debian `/etc/profile`, *(you might need to adjust it for your distro.)*:
```
# For BASH: Read down the appropriate column. Executes A, then B, then C, etc.
# The B1, B2, B3 means it executes only the first of those files found. (A)
# or (B2) means it is normally sourced by (read by and included in) the
# primary file, in this case A or B2.
#
# +---------------------------------+-------+-----+------------+
# | | Interactive | non-Inter. |
# +---------------------------------+-------+-----+------------+
# | | login | non-login |
# +---------------------------------+-------+-----+------------+
# | | | | |
# | ALL USERS: | | | |
# +---------------------------------+-------+-----+------------+
# |BASH_ENV | | | A | not interactive or login
# | | | | |
# +---------------------------------+-------+-----+------------+
# |/etc/profile | A | | | set PATH & PS1, & call following:
# +---------------------------------+-------+-----+------------+
# |/etc/bash.bashrc | (A) | A | | Better PS1 + command-not-found
# +---------------------------------+-------+-----+------------+
# |/etc/profile.d/bash_completion.sh| (A) | | |
# +---------------------------------+-------+-----+------------+
# |/etc/profile.d/vte-2.91.sh | (A) | | | Virt. Terminal Emulator
# |/etc/profile.d/vte.sh | (A) | | |
# +---------------------------------+-------+-----+------------+
# | | | | |
# | A SPECIFIC USER: | | | |
# +---------------------------------+-------+-----+------------+
# |~/.bash_profile (bash only) | B1 | | | (doesn't currently exist)
# +---------------------------------+-------+-----+------------+
# |~/.bash_login (bash only) | B2 | | | (didn't exist) **
# +---------------------------------+-------+-----+------------+
# |~/.profile (all shells) | B3 | | | (doesn't currently exist)
# +---------------------------------+-------+-----+------------+
# |~/.bashrc (bash only) | (B2) | B | | colorizes bash: su=red, other_users=green
# +---------------------------------+-------+-----+------------+
# | | | | |
# +---------------------------------+-------+-----+------------+
# |~/.bash_logout | C | | |
# +---------------------------------+-------+-----+------------+
#
# ** (sources ~/.bashrc to colorize login, for when booting into non-gui)
```
And this note at the head of each of the other setup files to refer to it:
```
# TIP: SEE TABLE in /etc/profile of BASH SETUP FILES AND THEIR LOAD SEQUENCE
```
Worth noting I think is that Debian's `/etc/profile` by default sources (includes) `/etc/bash.bashrc`, (that's when `/etc/bash.bashrc` exists). So login scripts read both `/etc` files, while non-login reads only bash.bashrc.
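That sourcing boils down to a short conditional near the end of `/etc/profile`; a paraphrased sketch of it (not Debian's verbatim code — check your own copy):

```shell
# Only bash goes on to read the system-wide bashrc; other shells
# running /etc/profile skip it. $BASH is set by bash to its own path.
if [ -n "${BASH-}" ] && [ "${BASH-}" != "/bin/sh" ]; then
  if [ -f /etc/bash.bashrc ]; then
    . /etc/bash.bashrc
  fi
fi
```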
Also of note is that `/etc/bash.bashrc` is set to do nothing when it's not run interactively. So these two files are only for interactive scripts. |
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | Back in the old days, when pseudo tty's weren't pseudo and actually, well, typed, and UNIXes were accessed by modems so slow you could see each letter being printed to your screen, efficiency was paramount. To help efficiency somewhat you had a concept of a main login window and whatever other windows you used to actually work. In your main window, you'd like notifications to any new mail, possibly run some other programs in the background.
To support this, shells sourced a file `.profile` specifically on 'login shells'. This would do the special, once a session setup. Bash extended this somewhat to look at .bash\_profile first before .profile, this way you could put bash only things in there (so they don't screw up Bourne shell, etc, that also looked at .profile). Other shells, non-login, would just source the rc file, .bashrc (or .kshrc, etc).
This is a bit of an anachronism now. You don't log into a main shell as much as you log into a gui window manager. There is no main window any different than any other window.
My suggestion - don't worry about this difference, it's based on an older style of using unix. Eliminate the difference in your files. The entire contents of .bash\_profile should be:
`[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"`
And put everything you actually want to set in .bashrc
Remember that .bashrc is sourced for all shells, interactive and non-interactive. You can short circuit the sourcing for non-interactive shells by putting this code near the top of .bashrc:
`[[ $- != *i* ]] && return` | **A BETTER COMMENT FOR THE HEAD OF /ETC/PROFILE**
Building on Flimm's great answer above, I inserted this new comment at the head of my Debian `/etc/profile`, *(you might need to adjust it for your distro.)*:
```
# For BASH: Read down the appropriate column. Executes A, then B, then C, etc.
# The B1, B2, B3 means it executes only the first of those files found. (A)
# or (B2) means it is normally sourced by (read by and included in) the
# primary file, in this case A or B2.
#
# +---------------------------------+-------+-----+------------+
# | | Interactive | non-Inter. |
# +---------------------------------+-------+-----+------------+
# | | login | non-login |
# +---------------------------------+-------+-----+------------+
# | | | | |
# | ALL USERS: | | | |
# +---------------------------------+-------+-----+------------+
# |BASH_ENV | | | A | not interactive or login
# | | | | |
# +---------------------------------+-------+-----+------------+
# |/etc/profile | A | | | set PATH & PS1, & call following:
# +---------------------------------+-------+-----+------------+
# |/etc/bash.bashrc | (A) | A | | Better PS1 + command-not-found
# +---------------------------------+-------+-----+------------+
# |/etc/profile.d/bash_completion.sh| (A) | | |
# +---------------------------------+-------+-----+------------+
# |/etc/profile.d/vte-2.91.sh | (A) | | | Virt. Terminal Emulator
# |/etc/profile.d/vte.sh | (A) | | |
# +---------------------------------+-------+-----+------------+
# | | | | |
# | A SPECIFIC USER: | | | |
# +---------------------------------+-------+-----+------------+
# |~/.bash_profile (bash only) | B1 | | | (doesn't currently exist)
# +---------------------------------+-------+-----+------------+
# |~/.bash_login (bash only) | B2 | | | (didn't exist) **
# +---------------------------------+-------+-----+------------+
# |~/.profile (all shells) | B3 | | | (doesn't currently exist)
# +---------------------------------+-------+-----+------------+
# |~/.bashrc (bash only) | (B2) | B | | colorizes bash: su=red, other_users=green
# +---------------------------------+-------+-----+------------+
# | | | | |
# +---------------------------------+-------+-----+------------+
# |~/.bash_logout | C | | |
# +---------------------------------+-------+-----+------------+
#
# ** (sources ~/.bashrc to colorize login, for when booting into non-gui)
```
And this note at the head of each of the other setup files to refer to it:
```
# TIP: SEE TABLE in /etc/profile of BASH SETUP FILES AND THEIR LOAD SEQUENCE
```
Worth noting I think is that Debian's `/etc/profile` by default sources (includes) `/etc/bash.bashrc`, (that's when `/etc/bash.bashrc` exists). So login scripts read both `/etc` files, while non-login reads only bash.bashrc.
Also of note is that `/etc/bash.bashrc` is set to do nothing when it's not run interactively. So these two files are only for interactive scripts. |
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | From this [short article](http://joshstaiger.org/archives/2005/07/bash_profile_vs.html)
>
> According to the bash man page,
> .bash\_profile is executed for login
> shells, while .bashrc is executed for
> interactive non-login shells.
>
>
> **What is a login or non-login shell?**
>
>
> When you login (eg: type username and
> password) via console, either
> physically sitting at the machine when
> booting, or remotely via ssh:
> .bash\_profile is executed to configure
> things before the initial command
> prompt.
>
>
> But, if you've already logged into
> your machine and open a new terminal
> window (xterm) inside Gnome or KDE,
> then .bashrc is executed before the
> window command prompt. .bashrc is also
> run when you start a new bash instance
> by typing /bin/bash in a terminal.
>
>
> | Have a look at this [excellent blog post by ShreevatsaR](http://shreevatsa.wordpress.com/2008/03/30/zshbash-startup-files-loading-order-bashrc-zshrc-etc/). Here's an extract, but go to the blog post, it includes an explanation for terms like "login shell", a flow chart, and a similar table for Zsh.
>
> For Bash, they work as follows. Read down the appropriate column. Executes A, then B, then C, etc. The B1, B2, B3 means it executes only the first of those files found.
>
>
>
```
+----------------+-----------+-----------+------+
| |Interactive|Interactive|Script|
| |login |non-login | |
+----------------+-----------+-----------+------+
|/etc/profile | A | | |
+----------------+-----------+-----------+------+
|/etc/bash.bashrc| | A | |
+----------------+-----------+-----------+------+
|~/.bashrc | | B | |
+----------------+-----------+-----------+------+
|~/.bash_profile | B1 | | |
+----------------+-----------+-----------+------+
|~/.bash_login | B2 | | |
+----------------+-----------+-----------+------+
|~/.profile | B3 | | |
+----------------+-----------+-----------+------+
|BASH_ENV | | | A |
+----------------+-----------+-----------+------+
| | | | |
+----------------+-----------+-----------+------+
| | | | |
+----------------+-----------+-----------+------+
|~/.bash_logout | C | | |
+----------------+-----------+-----------+------+
``` |
183,870 | What's the difference between `.bashrc` and `.bash_profile` and which one should I use? | 2010/09/02 | [
"https://superuser.com/questions/183870",
"https://superuser.com",
"https://superuser.com/users/12461/"
] | From this [short article](http://joshstaiger.org/archives/2005/07/bash_profile_vs.html)
>
> According to the bash man page,
> .bash\_profile is executed for login
> shells, while .bashrc is executed for
> interactive non-login shells.
>
>
> **What is a login or non-login shell?**
>
>
> When you login (eg: type username and
> password) via console, either
> physically sitting at the machine when
> booting, or remotely via ssh:
> .bash\_profile is executed to configure
> things before the initial command
> prompt.
>
>
> But, if you've already logged into
> your machine and open a new terminal
> window (xterm) inside Gnome or KDE,
> then .bashrc is executed before the
> window command prompt. .bashrc is also
> run when you start a new bash instance
> by typing /bin/bash in a terminal.
>
>
> | **A BETTER COMMENT FOR THE HEAD OF /ETC/PROFILE**
Building on Flimm's great answer above, I inserted this new comment at the head of my Debian `/etc/profile`, *(you might need to adjust it for your distro.)*:
```
# For BASH: Read down the appropriate column. Executes A, then B, then C, etc.
# The B1, B2, B3 means it executes only the first of those files found. (A)
# or (B2) means it is normally sourced by (read by and included in) the
# primary file, in this case A or B2.
#
# +---------------------------------+-------+-----+------------+
# | | Interactive | non-Inter. |
# +---------------------------------+-------+-----+------------+
# | | login | non-login |
# +---------------------------------+-------+-----+------------+
# | | | | |
# | ALL USERS: | | | |
# +---------------------------------+-------+-----+------------+
# |BASH_ENV | | | A | not interactive or login
# | | | | |
# +---------------------------------+-------+-----+------------+
# |/etc/profile | A | | | set PATH & PS1, & call following:
# +---------------------------------+-------+-----+------------+
# |/etc/bash.bashrc | (A) | A | | Better PS1 + command-not-found
# +---------------------------------+-------+-----+------------+
# |/etc/profile.d/bash_completion.sh| (A) | | |
# +---------------------------------+-------+-----+------------+
# |/etc/profile.d/vte-2.91.sh | (A) | | | Virt. Terminal Emulator
# |/etc/profile.d/vte.sh | (A) | | |
# +---------------------------------+-------+-----+------------+
# | | | | |
# | A SPECIFIC USER: | | | |
# +---------------------------------+-------+-----+------------+
# |~/.bash_profile (bash only) | B1 | | | (doesn't currently exist)
# +---------------------------------+-------+-----+------------+
# |~/.bash_login (bash only) | B2 | | | (didn't exist) **
# +---------------------------------+-------+-----+------------+
# |~/.profile (all shells) | B3 | | | (doesn't currently exist)
# +---------------------------------+-------+-----+------------+
# |~/.bashrc (bash only) | (B2) | B | | colorizes bash: su=red, other_users=green
# +---------------------------------+-------+-----+------------+
# | | | | |
# +---------------------------------+-------+-----+------------+
# |~/.bash_logout | C | | |
# +---------------------------------+-------+-----+------------+
#
# ** (sources ~/.bashrc to colorize login, for when booting into non-gui)
```
And this note at the head of each of the other setup files to refer to it:
```
# TIP: SEE TABLE in /etc/profile of BASH SETUP FILES AND THEIR LOAD SEQUENCE
```
Worth noting I think is that Debian's `/etc/profile` by default sources (includes) `/etc/bash.bashrc`, (that's when `/etc/bash.bashrc` exists). So login scripts read both `/etc` files, while non-login reads only bash.bashrc.
Also of note is that `/etc/bash.bashrc` is set to do nothing when it's not run interactively. So these two files are only for interactive scripts. |
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | There is an easier way to resolve this issue. Put your JavaScript inside of a function and use the window.onload. So for instance:
```
window.onload = function any_function_name()
{
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
}
```
Now, you do not have to move your tag because that will run your code after the HTML has loaded. | I add an event listener to wait for DOM content to fully load
```
document.addEventListener("DOMContentLoaded", function() {
place_the_code_you_want_to_run_after_page_load
})
``` |
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | Assuming this code is inside the `script.js` file, this is because the javascript is running before the rest of the HTML page has loaded.
When an HTML page loads, when it comes across a linked resource such as a javascript file, it loads that resource, executes all code it can, and then continues running the page. So your code is running before the `<div>` is loaded on the page.
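You can reproduce the failure mode outside a browser with a toy stand-in for the DOM: here `getElementById` returns `null` for anything the "parser" hasn't reached yet, which is exactly the state the early-running script sees. (The stub below is illustrative only, not how browsers are implemented.)

```javascript
// A plain object stands in for the document's parsed elements.
var elements = {};
function getElementById(id) {
  return Object.prototype.hasOwnProperty.call(elements, id) ? elements[id] : null;
}

// 1. A <script> placed before the markup runs first: the div isn't there yet.
var tooEarly = getElementById("javascript");

// 2. The parser then reaches <div id="javascript"> and creates it.
elements["javascript"] = { appendChild: function (child) { /* ... */ } };

// 3. A script at the bottom of the page (or an onload handler) runs now.
var onTime = getElementById("javascript");

console.log(tooEarly);         // null -> calling .appendChild on it throws
console.log(onTime !== null);  // true -> .appendChild works
```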
Move your `<script>` tag to the bottom of the page and you should no longer have the error. Alternatively, introduce an event such as `<body onload="doSomething();">` and then make a `doSomething()` method in your javascript file which will run those statements. | I add an event listener to wait for DOM content to fully load
```
document.addEventListener("DOMContentLoaded", function() {
place_the_code_you_want_to_run_after_page_load
})
``` |
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | There is an easier way to resolve this issue. Put your JavaScript inside of a function and use the window.onload. So for instance:
```
window.onload = function any_function_name()
{
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
}
```
Now, you do not have to move your tag because that will run your code after the HTML has loaded. | Because your JavaScript file is loaded first, and that time when you write `window.document.body.appendChild(btn)`, `body` element is not loaded in html, that's why you are getting error here, you can load the js file once body element is loaded in DOM.
**index.html**
```
<html>
<head>
<script src="JavaScript.js"></script>
</head>
<body onload="init()">
<h3> button will come here</h3>
</body>
</html>
```
**JavaScript.js**
```
function init(){
var button = window.document.createElement("button");
var textNode = window.document.createTextNode("click me");
button.appendChild(textNode);
window.document.body.appendChild(button);
}
``` |
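The failure in this question is easy to reproduce outside the browser: `getElementById` returns `null` while the target element has not been parsed yet, and calling a method on `null` throws. A minimal Node sketch of just that failure mode (no DOM involved):

```javascript
// Stand-in for what document.getElementById("javascript") returns
// before the <div> has been parsed: null.
const target = null;

let message = "";
try {
  target.appendChild({}); // same failure mode as in the question
} catch (err) {
  message = err.message;
}

console.log(message); // the message names both null and 'appendChild'
```

Every fix therefore boils down to delaying the lookup until the element exists — moving the script tag, `window.onload`, or a `DOMContentLoaded` listener, as the surrounding answers show.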
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | There is an easier way to resolve this issue. Put your JavaScript inside of a function and use the window.onload. So for instance:
```
window.onload = function any_function_name()
{
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
}
```
Now, you do not have to move your `<script>` tag, because that will run your code after the HTML has loaded. | Your DOM is not loaded yet, so **getElementById** will return null; use jQuery's **$(document).ready()**:
```
$(document).ready(function(){
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
});
``` |
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | Your script is running before the elements are available.
Place your script directly before the closing `</body>` tag.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
<!-- Now it will run after the above elements have been created -->
<script src="script.js"></script>
</body>
</html>
``` | I add an event listener to wait for DOM content to fully load
```
document.addEventListener("DOMContentLoaded", function() {
place_the_code_you_want_to_run_after_page_load
})
``` |
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | Your DOM is not loaded yet, so **getElementById** will return null; use jQuery's **$(document).ready()**:
```
$(document).ready(function(){
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
});
``` | Because your JavaScript file is loaded first, the `body` element is not yet in the DOM when you call `window.document.body.appendChild(btn)`; that's why you are getting the error. You can run the JS once the `body` element has loaded into the DOM.
**index.html**
```
<html>
<head>
<script src="JavaScript.js"></script>
</head>
<body onload="init()">
<h3> button will come here</h3>
</body>
</html>
```
**JavaScript.js**
```
function init(){
var button = window.document.createElement("button");
var textNode = window.document.createTextNode("click me");
button.appendChild(textNode);
window.document.body.appendChild(button);
}
``` |
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | Assuming this code is inside the `script.js` file, this is because the javascript is running before the rest of the HTML page has loaded.
As an HTML page loads, whenever the parser comes across a linked resource such as a JavaScript file, it loads that resource, executes all the code it can, and then continues rendering the page. So your code is running before the `<div>` is loaded on the page.
Move your `<script>` tag to the bottom of the page and you should no longer have the error. Alternatively, introduce an event such as `<body onload="doSomething();">` and then make a `doSomething()` method in your JavaScript file which will run those statements. | Because your JavaScript file is loaded first, the `body` element is not yet in the DOM when you call `window.document.body.appendChild(btn)`; that's why you are getting the error. You can run the JS once the `body` element has loaded into the DOM.
**index.html**
```
<html>
<head>
<script src="JavaScript.js"></script>
</head>
<body onload="init()">
<h3> button will come here</h3>
</body>
</html>
```
**JavaScript.js**
```
function init(){
var button = window.document.createElement("button");
var textNode = window.document.createTextNode("click me");
button.appendChild(textNode);
window.document.body.appendChild(button);
}
``` |
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | Your script is running before the elements are available.
Place your script directly before the closing `</body>` tag.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
<!-- Now it will run after the above elements have been created -->
<script src="script.js"></script>
</body>
</html>
``` | Because your JavaScript file is loaded first, the `body` element is not yet in the DOM when you call `window.document.body.appendChild(btn)`; that's why you are getting the error. You can run the JS once the `body` element has loaded into the DOM.
**index.html**
```
<html>
<head>
<script src="JavaScript.js"></script>
</head>
<body onload="init()">
<h3> button will come here</h3>
</body>
</html>
```
**JavaScript.js**
```
function init(){
var button = window.document.createElement("button");
var textNode = window.document.createTextNode("click me");
button.appendChild(textNode);
window.document.body.appendChild(button);
}
``` |
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | I add an event listener to wait for DOM content to fully load
```
document.addEventListener("DOMContentLoaded", function() {
place_the_code_you_want_to_run_after_page_load
})
``` | Because your JavaScript file is loaded first, the `body` element is not yet in the DOM when you call `window.document.body.appendChild(btn)`; that's why you are getting the error. You can run the JS once the `body` element has loaded into the DOM.
**index.html**
```
<html>
<head>
<script src="JavaScript.js"></script>
</head>
<body onload="init()">
<h3> button will come here</h3>
</body>
</html>
```
**JavaScript.js**
```
function init(){
var button = window.document.createElement("button");
var textNode = window.document.createTextNode("click me");
button.appendChild(textNode);
window.document.body.appendChild(button);
}
``` |
8,670,530 | I am new to Javascript (and programming in general) and have been trying to get a basic grasp on working with the DOM. Apologies if this is a very basic mistake, but I looked around and couldn't find an answer.
I am trying to use the appendChild method to add a heading and some paragraph text into the `<div>` in the very basic HTML file below.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<script src="script.js"></script>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
</body>
</html>
```
Here is the js code:
```
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
```
Running it causes an error: "Cannot call method 'appendChild' of null"
Help? I can't figure out why this isn't working... | 2011/12/29 | [
"https://Stackoverflow.com/questions/8670530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058180/"
] | There is an easier way to resolve this issue. Put your JavaScript inside of a function and use the window.onload. So for instance:
```
window.onload = function any_function_name()
{
var newHeading = document.createElement("h1");
var newParagraph = document.createElement("p");
newHeading.innerHTML = "New Heading!";
newParagraph.innerHTML = "Some text for a paragraph.";
document.getElementById("javascript").appendChild(newHeading);
document.getElementById("javascript").appendChild(newParagraph);
}
```
Now, you do not have to move your `<script>` tag, because that will run your code after the HTML has loaded. | Your script is running before the elements are available.
Place your script directly before the closing `</body>` tag.
```
<html>
<head>
<title>JS Practice</title>
</head>
<body>
<div id = "main">
<h1>Simple HTML Page</h1>
<p>This is a very simple HTML page.</p>
<p>It's about as basic as they come. It has: </p>
<ul>
<li>An H1 Tag</li>
<li>Two paragraphs</li>
<li>An unordered list</li>
</ul>
</div>
<div id="javascript">
</div>
<!-- Now it will run after the above elements have been created -->
<script src="script.js"></script>
</body>
</html>
``` |
I want to parse something like a section entry in an \*.ini file:
```
line=' [ fdfd fdf f ] '
```
What could be the sed pattern (???) for this line to split the
```
'fdfd fdf f'
```
out?
So:
```
echo "${line}" | sed -E 's/???/\1/g'
```
How can I describe all chars except `[[:space:]]`, `[` and `]`? This doesn't work for me: `[^[[:space:]]\[]*`. | 2012/08/31 | [
"https://Stackoverflow.com/questions/12207541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1309041/"
] | When you use the `[[:space:]]` syntax, the outer brackets are normal "match one character from this list" brackets, the same as in `[aeiou]` but the inner brackets are part of `[:space:]` which is an indivisible unit.
So if you wanted to match a single character which either belongs to the `space` class or is an `x` you'd use `[[:space:]x]` or `[x[:space:]]`
When one of the characters you want to match is a `]`, it will terminate the bracketed character list unless you give it some special treatment. You've guessed that you need a backslash somewhere; a good guess but wrong. The way you include a `]` in the list is to put it first. `[ab]c]` is a bracketed list containing the 2 characters `ab`, followed by 2 literal-match characters `c]`, so it matches `"ac]"` or `"bc]"` but `[]abc]` is a bracketed list of the 4 characters `]abc` so it matches `"a"`, `"b"`, `"c"`, or `"]"`.
In a negated list the `]` comes immediately after the `^`.
So putting that all together, the way to match a single char from the set of all chars except the `[:space:]` class and the brackets is:
```
[^][:space:][]
```
The first bracket and the last bracket are a matching pair, even if you think it doesn't look like they should be. | ```
$ echo "$line" | sed "s/^.*\[[[:space:]]*\([^]]*[^][:space:]]\)[[:space:]]*\].*$/'\1'/"
```
You can split the pattern into two:
```
$ echo "$line" | sed "s/^.*\[[[:space:]]*/'/; s/[[:space:]]*\].*$/'/"
```
`awk` works too:
```
$ echo "$line" | awk -F' *[[\]] *' -vQ="'" '{print Q$2Q}'
``` |
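As a side-by-side sketch in JavaScript regex syntax (an addition for comparison, not part of the original answers): there the bracket literals are simply backslash-escaped instead of being placed first in the class, and a lazy capture trims the inner whitespace:

```javascript
const line = ' [ fdfd fdf f ] ';

// \[ and \] are escaped bracket literals; the lazy group ([^\]]*?)
// captures the section body, with \s* consuming the padding spaces.
const match = line.match(/\[\s*([^\]]*?)\s*\]/);
const section = match ? match[1] : null;

console.log(section); // 'fdfd fdf f'
```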
I want to parse something like a section entry in an \*.ini file:
```
line=' [ fdfd fdf f ] '
```
What could be the sed pattern (???) for this line to split the
```
'fdfd fdf f'
```
out?
So:
```
echo "${line}" | sed -E 's/???/\1/g'
```
How can I describe all chars except `[[:space:]]`, `[` and `]`? This doesn't work for me: `[^[[:space:]]\[]*`. | 2012/08/31 | [
"https://Stackoverflow.com/questions/12207541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1309041/"
] | ```
$ echo "$line" | sed "s/^.*\[[[:space:]]*\([^]]*[^][:space:]]\)[[:space:]]*\].*$/'\1'/"
```
You can split the pattern into two:
```
$ echo "$line" | sed "s/^.*\[[[:space:]]*/'/; s/[[:space:]]*\].*$/'/"
```
`awk` works too:
```
$ echo "$line" | awk -F' *[[\]] *' -vQ="'" '{print Q$2Q}'
``` | When you say split, do you mean *split into an array*, or do you mean *filter* out all spaces and brackets?
Assuming the value of line number 1 in file.ini is:
```
[ fdfd fdf f ]
```
If you mean array,
```
linenumber=1;
array=($(sed -n ${linenumber}p file.ini | sed 's/[][]*//g'));
```
will split line number 1 of file.ini into an array and return the values:
```
${array[0]} = fdfd
${array[1]} = fdf
${array[2]} = f
```
If you mean filter spaces and brackets,
```
linenumber=1;
sed -n ${linenumber}p file.ini | sed 's/[][ ]*//g';
```
will return:
```
fdfdfdff
```
and if *neither of those* is what you meant, **please specify the exact output** you are looking to extract from the initial value so that we can address it correctly. |
I want to parse something like a section entry in an \*.ini file:
```
line=' [ fdfd fdf f ] '
```
What could be the sed pattern (???) for this line to split the
```
'fdfd fdf f'
```
out?
So:
```
echo "${line}" | sed -E 's/???/\1/g'
```
How can I describe all chars except `[[:space:]]`, `[` and `]`? This doesn't work for me: `[^[[:space:]]\[]*`. | 2012/08/31 | [
"https://Stackoverflow.com/questions/12207541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1309041/"
] | When you use the `[[:space:]]` syntax, the outer brackets are normal "match one character from this list" brackets, the same as in `[aeiou]` but the inner brackets are part of `[:space:]` which is an indivisible unit.
So if you wanted to match a single character which either belongs to the `space` class or is an `x` you'd use `[[:space:]x]` or `[x[:space:]]`
When one of the characters you want to match is a `]`, it will terminate the bracketed character list unless you give it some special treatment. You've guessed that you need a backslash somewhere; a good guess but wrong. The way you include a `]` in the list is to put it first. `[ab]c]` is a bracketed list containing the 2 characters `ab`, followed by 2 literal-match characters `c]`, so it matches `"ac]"` or `"bc]"` but `[]abc]` is a bracketed list of the 4 characters `]abc` so it matches `"a"`, `"b"`, `"c"`, or `"]"`.
In a negated list the `]` comes immediately after the `^`.
So putting that all together, the way to match a single char from the set of all chars except the `[:space:]` class and the brackets is:
```
[^][:space:][]
```
The first bracket and the last bracket are a matching pair, even if you think it doesn't look like they should be. | When you say split, do you mean *split into an array*, or do you mean *filter* out all spaces and brackets?
Assuming the value of line number 1 in file.ini is:
```
[ fdfd fdf f ]
```
If you mean array,
```
linenumber=1;
array=($(sed -n ${linenumber}p file.ini | sed 's/[][]*//g'));
```
will split line number 1 of file.ini into an array and return the values:
```
${array[0]} = fdfd
${array[1]} = fdf
${array[2]} = f
```
If you mean filter spaces and brackets,
```
linenumber=1;
sed -n ${linenumber}p file.ini | sed 's/[][ ]*//g';
```
will return:
```
fdfdfdff
```
and if *neither of those* is what you meant, **please specify the exact output** you are looking to extract from the initial value so that we can address it correctly. |
I need to have my HTML anchor element submit a form. My problem is that a normal button element is able to use type="submit", while other elements using role="button" don't do anything with the type.
So how do I make it submit a form? If you can give me a script to do it, that would be fine too.
(I don't know javascript myself)
My current code:
```
<form action="myloc" method="post">
<div class="input-group col-lg-6">
<a type="submit" class="btn btn-default input-group-addon" role="button">
Search<span class="glyphicon glyphicon-search" aria-hidden="true"></span>
</a>
<input type="search" class="form-control" name="search" id="search" placeholder="Search">
</div>
</form>
``` | 2016/10/20 | [
"https://Stackoverflow.com/questions/40164277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5103780/"
] | If I understand you correctly you want to submit the form when you press the button using javascript?
```
<form id="test" action="myloc" method="post">
<div class="input-group col-lg-6">
<a type="submit" class="btn btn-default input-group-addon" role="button" onclick="document.getElementById('test').submit();">
Search<span class="glyphicon glyphicon-search" aria-hidden="true"></span>
</a>
<input type="search" class="form-control" name="search" id="search" placeholder="Search">
</div>
</form>
```
Notice that what I did was to set an id ("test") on the form and then added an onclick event to the anchor element. | Give your `<a>` element an id, then set a jquery onclick function with that id to run whatever query you want. |
I need to have my HTML anchor element submit a form. My problem is that a normal button element is able to use type="submit", while other elements using role="button" don't do anything with the type.
So how do I make it submit a form? If you can give me a script to do it, that would be fine too.
(I don't know javascript myself)
My current code:
```
<form action="myloc" method="post">
<div class="input-group col-lg-6">
<a type="submit" class="btn btn-default input-group-addon" role="button">
Search<span class="glyphicon glyphicon-search" aria-hidden="true"></span>
</a>
<input type="search" class="form-control" name="search" id="search" placeholder="Search">
</div>
</form>
``` | 2016/10/20 | [
"https://Stackoverflow.com/questions/40164277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5103780/"
] | Here is your solution,
```
<form action="myloc" method="post" id="myForm">
<div class="input-group col-lg-6">
<a type="submit" class="btn btn-default input-group-addon" role="button" id="sub">
Search<span class="glyphicon glyphicon-search" aria-hidden="true"></span>
</a>
<input type="search" class="form-control" name="search" id="search" placeholder="Search">
</div>
</form>
```
Javascript code
```
var button=document.getElementById('sub');
button.onclick=function(){
document.getElementById("myForm").submit();
}
``` | Give your `<a>` element an id, then set a jquery onclick function with that id to run whatever query you want. |
I need to have my HTML anchor element submit a form. My problem is that a normal button element is able to use type="submit", while other elements using role="button" don't do anything with the type.
So how do I make it submit a form? If you can give me a script to do it, that would be fine too.
(I don't know javascript myself)
My current code:
```
<form action="myloc" method="post">
<div class="input-group col-lg-6">
<a type="submit" class="btn btn-default input-group-addon" role="button">
Search<span class="glyphicon glyphicon-search" aria-hidden="true"></span>
</a>
<input type="search" class="form-control" name="search" id="search" placeholder="Search">
</div>
</form>
``` | 2016/10/20 | [
"https://Stackoverflow.com/questions/40164277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5103780/"
] | If I understand you correctly you want to submit the form when you press the button using javascript?
```
<form id="test" action="myloc" method="post">
<div class="input-group col-lg-6">
<a type="submit" class="btn btn-default input-group-addon" role="button" onclick="document.getElementById('test').submit();">
Search<span class="glyphicon glyphicon-search" aria-hidden="true"></span>
</a>
<input type="search" class="form-control" name="search" id="search" placeholder="Search">
</div>
</form>
```
Notice that what I did was to set an id ("test") on the form and then added an onclick event to the anchor element. | Here is your solution,
```
<form action="myloc" method="post" id="myForm">
<div class="input-group col-lg-6">
<a type="submit" class="btn btn-default input-group-addon" role="button" id="sub">
Search<span class="glyphicon glyphicon-search" aria-hidden="true"></span>
</a>
<input type="search" class="form-control" name="search" id="search" placeholder="Search">
</div>
</form>
```
Javascript code
```
var button=document.getElementById('sub');
button.onclick=function(){
document.getElementById("myForm").submit();
}
``` |
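The wiring these answers describe — a styled `<a>` whose click handler calls the form's `submit()` — can be sketched with plain objects standing in for the DOM (the stubs below are illustration only, not real DOM APIs):

```javascript
// Stubs playing the roles of the real form and anchor (illustration only).
let submitted = false;
const form = { submit() { submitted = true; } };
const anchor = { onclick: null };

// The pattern from the answers: clicking the styled <a> submits the form.
anchor.onclick = function () { form.submit(); };

anchor.onclick();        // simulate the user's click
console.log(submitted);  // true
```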
I'm looking for a way to change the first character of every word in a sentence from lowercase to uppercase. I already read the [following answer](https://stackoverflow.com/questions/1159343/convert-a-char-to-upper-case-using-regular-expressions-editpad-pro/1159389#1159389) but it doesn't work.
I tried to use `\U` to replace the first letter with its uppercase form, but it returns \U literally in the replacement instead of the uppercased letter. Could someone take a look at <http://regexr.com?36h59>?
Thank you in advance! | 2013/09/29 | [
"https://Stackoverflow.com/questions/19073331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2827396/"
] | javascript
----------
```
str = str.replace(/\b[a-z]/g, foo); // uppercase the first letter of each word
function foo(x){
return x.toUpperCase();
}
``` | If you're coding in PHP, check out the manual here:
<http://php.net/manual/en/function.ucwords.php> |
I'm looking for a way to change the first character of every word in a sentence from lowercase to uppercase. I already read the [following answer](https://stackoverflow.com/questions/1159343/convert-a-char-to-upper-case-using-regular-expressions-editpad-pro/1159389#1159389) but it doesn't work.
I tried to use `\U` to replace the first letter with its uppercase form, but it returns \U literally in the replacement instead of the uppercased letter. Could someone take a look at <http://regexr.com?36h59>?
Thank you in advance! | 2013/09/29 | [
"https://Stackoverflow.com/questions/19073331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2827396/"
] | Whether text transformations are possible depends on the regexp implementation. Most standard implementations are based on Perl regular expressions and do not support this.
A lot of text editors, however, do provide some way to transform matches, since they have no other means of processing a regular expression match. For example, the answer in your linked question refers to an editor called "TextPad". These transformations are often non-standard and can differ a lot depending on which tool you use. When using programming languages, however, you don't really need those features built into the regular expression syntax, as you can easily store the match and do further processing on your own. A lot of languages also allow you to supply a function which is then called to process every replacement individually.
If you tell us what language you are using, we might be able to help you further.
### Some examples
JavaScript:
```
> text = 'anleitungen gesundes wohnen';
> text.replace(/(\w+)/g, function(x) { return x[0].toUpperCase() + x.substring(1) });
'Anleitungen Gesundes Wohnen'
```
Python:
```
>>> import re
>>> text = 'anleitungen gesundes wohnen'
>>> re.sub('(\w+)', lambda x: x.group(0).capitalize(), text)
'Anleitungen Gesundes Wohnen'
``` | If you're coding in PHP, check out the manual here:
<http://php.net/manual/en/function.ucwords.php> |
I'm looking for a way to change the first character of every word in a sentence from lowercase to uppercase. I already read the [following answer](https://stackoverflow.com/questions/1159343/convert-a-char-to-upper-case-using-regular-expressions-editpad-pro/1159389#1159389) but it doesn't work.
I tried to use `\U` to replace the first letter with its uppercase form, but it returns \U literally in the replacement instead of the uppercased letter. Could someone take a look at <http://regexr.com?36h59>?
Thank you in advance! | 2013/09/29 | [
"https://Stackoverflow.com/questions/19073331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2827396/"
] | ```
$ echo 'xyzzy plugh dwarf' | sed 's/\(\<.\)\.*/\u\1/g'
Xyzzy Plugh Dwarf
``` | If you're coding in PHP, check out the manual here:
<http://php.net/manual/en/function.ucwords.php> |
I'm looking for a way to change the first character of every word in a sentence from lowercase to uppercase. I already read the [following answer](https://stackoverflow.com/questions/1159343/convert-a-char-to-upper-case-using-regular-expressions-editpad-pro/1159389#1159389) but it doesn't work.
I tried to use `\U` to replace the first letter as an uppercase letter. But it returns \U as the replacement, not the first letter. May someone take a look at <http://regexr.com?36h59>
Thank you in advance! | 2013/09/29 | [
"https://Stackoverflow.com/questions/19073331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2827396/"
] | Whether text transformations are possible depends on the regexp implementation. Most standard implementations are based on Perl regular expressions and do not support this.
A lot of text editors, however, do provide some way of applying transformations, since they have no other means of processing a regular expression match. For example, the answer in your linked question refers to an editor called “TextPad”. These transformations are often non-standard and can also differ a lot depending on what tool you use. When using programming languages, however, you don’t really need those features built into the regular expression syntax, as you can easily store the match and do some further processing on your own. A lot of languages also allow you to supply a function which is then called to process every replacement individually.
If you tell us what language you are using, we might be able to help you further.
### Some examples
JavaScript:
```
> text = 'anleitungen gesundes wohnen';
> text.replace(/(\w+)/g, function(x) { return x[0].toUpperCase() + x.substring(1) });
'Anleitungen Gesundes Wohnen'
```
Python:
```
>>> import re
>>> text = 'anleitungen gesundes wohnen'
>>> re.sub(r'(\w+)', lambda x: x.group(0).capitalize(), text)
'Anleitungen Gesundes Wohnen'
``` | javascript
----------
```
str.replace(/\b[a-z]/g, foo);

function foo(x){
    return x.toUpperCase();
}
``` |
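For reference, here is a runnable sketch of that callback idea (assuming Node or a browser console): `\b` matches a word boundary, and the `g` flag applies the callback to every word rather than just the first match.

```javascript
// Uppercase the first letter of every word with a replacement callback.
const text = 'anleitungen gesundes wohnen';
const result = text.replace(/\b[a-z]/g, function (ch) {
  return ch.toUpperCase();
});
console.log(result); // Anleitungen Gesundes Wohnen
```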
19,073,331 | I'm looking for a way to change the first character of every word in a sentence from lowercase to uppercase. I already read the [following answer](https://stackoverflow.com/questions/1159343/convert-a-char-to-upper-case-using-regular-expressions-editpad-pro/1159389#1159389) but it doesn't work.
I tried to use `\U` to replace the first letter as an uppercase letter. But it returns \U as the replacement, not the first letter. Could someone take a look at <http://regexr.com?36h59>
Thank you in advance! | 2013/09/29 | [
"https://Stackoverflow.com/questions/19073331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2827396/"
] | Whether text transformations are possible depends on the regexp implementation. Most standard implementations are based on Perl regular expressions and do not support this.
A lot of text editors, however, do provide some way of applying transformations, since they have no other means of processing a regular expression match. For example, the answer in your linked question refers to an editor called “TextPad”. These transformations are often non-standard and can also differ a lot depending on what tool you use. When using programming languages, however, you don’t really need those features built into the regular expression syntax, as you can easily store the match and do some further processing on your own. A lot of languages also allow you to supply a function which is then called to process every replacement individually.
If you tell us what language you are using, we might be able to help you further.
### Some examples
JavaScript:
```
> text = 'anleitungen gesundes wohnen';
> text.replace(/(\w+)/g, function(x) { return x[0].toUpperCase() + x.substring(1) });
'Anleitungen Gesundes Wohnen'
```
Python:
```
>>> import re
>>> text = 'anleitungen gesundes wohnen'
>>> re.sub(r'(\w+)', lambda x: x.group(0).capitalize(), text)
'Anleitungen Gesundes Wohnen'
``` | ```
$ echo 'xyzzy plugh dwarf' | sed 's/\<./\u&/g'
Xyzzy Plugh Dwarf
``` |
68,427,623 | I have a database containing tickets. Each ticket has a number that identifies it, but this number is not unique in the table. So for example ticket #1000 can appear multiple times in the table with different values in the other columns (which I have removed here for the example).
```
create table countries
(
isoalpha varchar(2),
pole varchar(50)
);
insert into countries values ('DE', 'EMEA'),('FR', 'EMEA'),('IT', 'EMEA'),('US','USCAN'),('CA', 'USCAN');
create table tickets
(
id int primary key auto_increment,
number int,
isoalpha varchar(2),
created datetime
);
insert into tickets (number, isoalpha, created) values
(1000, 'DE', '2021-01-01 00:00:00'),
(1001, 'US', '2021-01-01 00:00:00'),
(1002, 'FR', '2021-01-01 00:00:00'),
(1003, 'CA', '2021-01-01 00:00:00'),
(1000, 'DE', '2021-01-01 00:00:00'),
(1000, 'DE', '2021-01-01 00:00:00'),
(1004, 'DE', '2021-01-02 00:00:00'),
(1001, 'US', '2021-01-01 00:00:00'),
(1002, 'FR', '2021-01-01 00:00:00'),
(1005, 'IT', '2021-01-02 00:00:00'),
(1006, 'US', '2021-01-02 00:00:00'),
(1007, 'DE', '2021-01-02 00:00:00');
```
Here is an example:
<http://sqlfiddle.com/#!9/3f4ba4/6>
What I need as output is the number of newly created tickets for each day, divided into tickets from USCAN and the rest of the world.
So for this example the resulting data should be
```
Date | USCAN | Other
'2021-01-01' | 2 | 2
'2021-01-02' | 1 | 3
```
At the moment I use these two queries to fetch all new tickets and then count the rows with the same date in my application code:
```
SELECT MIN(ti.created) AS date
FROM tickets ti
LEFT JOIN countries ct ON (ct.isoalpha = ti.isoalpha)
WHERE ct.pole = 'USCAN'
GROUP BY ti.number
ORDER BY date
SELECT MIN(ti.created) AS date
FROM tickets ti
LEFT JOIN countries ct ON (ct.isoalpha = ti.isoalpha)
WHERE ct.pole <> 'USCAN'
GROUP BY ti.number
ORDER BY date
```
but that doesn't look like a very clean method. So how can I improve the query to get the needed data with less overhead?
It is recommended that it works with MySQL 5.7 | 2021/07/18 | [
"https://Stackoverflow.com/questions/68427623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/634698/"
] | You may logically combine the queries using conditional aggregation:
```sql
SELECT
MIN(CASE WHEN ct.pole = 'USCAN' THEN ti.created END) AS date_uscan,
MIN(CASE WHEN ct.pole <> 'USCAN' THEN ti.created END) AS date_other
FROM tickets ti
LEFT JOIN countries ct ON ct.isoalpha = ti.isoalpha
GROUP BY ti.number
ORDER BY date_uscan;
``` | You can create unique entries for each date/country then use that value to count USCAN and non-USCAN
```
SELECT created,
SUM(1) as total,
SUM(CASE WHEN pole = 'USCAN' THEN 1 ELSE 0 END) as uscan,
SUM(CASE WHEN pole != 'USCAN' THEN 1 ELSE 0 END) as nonuscan
FROM (
SELECT created, t.isoalpha, MIN(pole) AS pole
FROM tickets t JOIN countries c ON t.isoalpha = c.isoalpha
GROUP BY created,isoalpha
) AS uniqueTickets
GROUP BY created
```
Results:
```
created total uscan nonuscan
2021-01-01T00:00:00Z 4 2 2
2021-01-02T00:00:00Z 3 1 2
```
<http://sqlfiddle.com/#!9/3f4ba4/45/0> |
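To sanity-check the counting logic above without a MySQL server, here is a small sketch using Python's built-in `sqlite3` module (the `CASE`/`GROUP BY` constructs behave the same; the data mirrors the question, with the `created` values shortened to plain dates for brevity):

```python
# Reproduce the per-day USCAN / other ticket counts with SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE countries (isoalpha TEXT, pole TEXT);
INSERT INTO countries VALUES
 ('DE','EMEA'),('FR','EMEA'),('IT','EMEA'),('US','USCAN'),('CA','USCAN');
CREATE TABLE tickets (id INTEGER PRIMARY KEY, number INT, isoalpha TEXT, created TEXT);
INSERT INTO tickets (number, isoalpha, created) VALUES
 (1000,'DE','2021-01-01'),(1001,'US','2021-01-01'),(1002,'FR','2021-01-01'),
 (1003,'CA','2021-01-01'),(1000,'DE','2021-01-01'),(1000,'DE','2021-01-01'),
 (1004,'DE','2021-01-02'),(1001,'US','2021-01-01'),(1002,'FR','2021-01-01'),
 (1005,'IT','2021-01-02'),(1006,'US','2021-01-02'),(1007,'DE','2021-01-02');
""")

rows = conn.execute("""
SELECT day,
       SUM(CASE WHEN pole =  'USCAN' THEN 1 ELSE 0 END) AS uscan,
       SUM(CASE WHEN pole <> 'USCAN' THEN 1 ELSE 0 END) AS other
FROM (
    -- collapse duplicates: one row per unique ticket number
    SELECT MIN(t.created) AS day, MIN(c.pole) AS pole
    FROM tickets t JOIN countries c ON c.isoalpha = t.isoalpha
    GROUP BY t.number
) AS first_seen
GROUP BY day
ORDER BY day
""").fetchall()

print(rows)  # [('2021-01-01', 2, 2), ('2021-01-02', 1, 3)]
```

This matches the expected output in the question: 2 USCAN / 2 other tickets on 2021-01-01, and 1 / 3 on 2021-01-02.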
68,427,623 | I have a database containing tickets. Each ticket has a number that identifies it, but this number is not unique in the table. So for example ticket #1000 can appear multiple times in the table with different values in the other columns (which I have removed here for the example).
```
create table countries
(
isoalpha varchar(2),
pole varchar(50)
);
insert into countries values ('DE', 'EMEA'),('FR', 'EMEA'),('IT', 'EMEA'),('US','USCAN'),('CA', 'USCAN');
create table tickets
(
id int primary key auto_increment,
number int,
isoalpha varchar(2),
created datetime
);
insert into tickets (number, isoalpha, created) values
(1000, 'DE', '2021-01-01 00:00:00'),
(1001, 'US', '2021-01-01 00:00:00'),
(1002, 'FR', '2021-01-01 00:00:00'),
(1003, 'CA', '2021-01-01 00:00:00'),
(1000, 'DE', '2021-01-01 00:00:00'),
(1000, 'DE', '2021-01-01 00:00:00'),
(1004, 'DE', '2021-01-02 00:00:00'),
(1001, 'US', '2021-01-01 00:00:00'),
(1002, 'FR', '2021-01-01 00:00:00'),
(1005, 'IT', '2021-01-02 00:00:00'),
(1006, 'US', '2021-01-02 00:00:00'),
(1007, 'DE', '2021-01-02 00:00:00');
```
Here is an example:
<http://sqlfiddle.com/#!9/3f4ba4/6>
What I need as output is the number of newly created tickets for each day, divided into tickets from USCAN and the rest of the world.
So for this example the resulting data should be
```
Date | USCAN | Other
'2021-01-01' | 2 | 2
'2021-01-02' | 1 | 3
```
At the moment I use these two queries to fetch all new tickets and then count the rows with the same date in my application code:
```
SELECT MIN(ti.created) AS date
FROM tickets ti
LEFT JOIN countries ct ON (ct.isoalpha = ti.isoalpha)
WHERE ct.pole = 'USCAN'
GROUP BY ti.number
ORDER BY date
SELECT MIN(ti.created) AS date
FROM tickets ti
LEFT JOIN countries ct ON (ct.isoalpha = ti.isoalpha)
WHERE ct.pole <> 'USCAN'
GROUP BY ti.number
ORDER BY date
```
but that doesn't look like a very clean method. So how can I improve the query to get the needed data with less overhead?
It is recommended that it works with MySQL 5.7 | 2021/07/18 | [
"https://Stackoverflow.com/questions/68427623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/634698/"
] | You may logically combine the queries using conditional aggregation:
```sql
SELECT
MIN(CASE WHEN ct.pole = 'USCAN' THEN ti.created END) AS date_uscan,
MIN(CASE WHEN ct.pole <> 'USCAN' THEN ti.created END) AS date_other
FROM tickets ti
LEFT JOIN countries ct ON ct.isoalpha = ti.isoalpha
GROUP BY ti.number
ORDER BY date_uscan;
``` | Based on the answer from SQL Hacks, I found the right solution:
```
SELECT created,
SUM(1) as total,
SUM(CASE WHEN pole = 'USCAN' THEN 1 ELSE 0 END) as uscan,
SUM(CASE WHEN pole != 'USCAN' THEN 1 ELSE 0 END) as nonuscan
FROM (
SELECT created, t.isoalpha, MIN(pole) AS pole
FROM tickets t JOIN countries c ON t.isoalpha = c.isoalpha
GROUP BY t.number
) AS uniqueTickets
GROUP BY SUBSTR(created, 1, 10)
``` |
68,427,623 | I have a database containing tickets. Each ticket has a number that identifies it, but this number is not unique in the table. So for example ticket #1000 can appear multiple times in the table with different values in the other columns (which I have removed here for the example).
```
create table countries
(
isoalpha varchar(2),
pole varchar(50)
);
insert into countries values ('DE', 'EMEA'),('FR', 'EMEA'),('IT', 'EMEA'),('US','USCAN'),('CA', 'USCAN');
create table tickets
(
id int primary key auto_increment,
number int,
isoalpha varchar(2),
created datetime
);
insert into tickets (number, isoalpha, created) values
(1000, 'DE', '2021-01-01 00:00:00'),
(1001, 'US', '2021-01-01 00:00:00'),
(1002, 'FR', '2021-01-01 00:00:00'),
(1003, 'CA', '2021-01-01 00:00:00'),
(1000, 'DE', '2021-01-01 00:00:00'),
(1000, 'DE', '2021-01-01 00:00:00'),
(1004, 'DE', '2021-01-02 00:00:00'),
(1001, 'US', '2021-01-01 00:00:00'),
(1002, 'FR', '2021-01-01 00:00:00'),
(1005, 'IT', '2021-01-02 00:00:00'),
(1006, 'US', '2021-01-02 00:00:00'),
(1007, 'DE', '2021-01-02 00:00:00');
```
Here is an example:
<http://sqlfiddle.com/#!9/3f4ba4/6>
What I need as output is the number of newly created tickets for each day, divided into tickets from USCAN and the rest of the world.
So for this example the resulting data should be
```
Date | USCAN | Other
'2021-01-01' | 2 | 2
'2021-01-02' | 1 | 3
```
At the moment I use these two queries to fetch all new tickets and then count the rows with the same date in my application code:
```
SELECT MIN(ti.created) AS date
FROM tickets ti
LEFT JOIN countries ct ON (ct.isoalpha = ti.isoalpha)
WHERE ct.pole = 'USCAN'
GROUP BY ti.number
ORDER BY date
SELECT MIN(ti.created) AS date
FROM tickets ti
LEFT JOIN countries ct ON (ct.isoalpha = ti.isoalpha)
WHERE ct.pole <> 'USCAN'
GROUP BY ti.number
ORDER BY date
```
but that doesn't look like a very clean method. So how can I improve the query to get the needed data with less overhead?
It is recommended that it works with MySQL 5.7 | 2021/07/18 | [
"https://Stackoverflow.com/questions/68427623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/634698/"
] | You can create unique entries for each date/country then use that value to count USCAN and non-USCAN
```
SELECT created,
SUM(1) as total,
SUM(CASE WHEN pole = 'USCAN' THEN 1 ELSE 0 END) as uscan,
SUM(CASE WHEN pole != 'USCAN' THEN 1 ELSE 0 END) as nonuscan
FROM (
SELECT created, t.isoalpha, MIN(pole) AS pole
FROM tickets t JOIN countries c ON t.isoalpha = c.isoalpha
GROUP BY created,isoalpha
) AS uniqueTickets
GROUP BY created
```
Results:
```
created total uscan nonuscan
2021-01-01T00:00:00Z 4 2 2
2021-01-02T00:00:00Z 3 1 2
```
<http://sqlfiddle.com/#!9/3f4ba4/45/0> | Based on the answer from SQL Hacks, I found the right solution:
```
SELECT created,
SUM(1) as total,
SUM(CASE WHEN pole = 'USCAN' THEN 1 ELSE 0 END) as uscan,
SUM(CASE WHEN pole != 'USCAN' THEN 1 ELSE 0 END) as nonuscan
FROM (
SELECT created, t.isoalpha, MIN(pole) AS pole
FROM tickets t JOIN countries c ON t.isoalpha = c.isoalpha
GROUP BY t.number
) AS uniqueTickets
GROUP BY SUBSTR(created, 1, 10)
``` |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like such:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | Probably there are people who have the same problem I was having, so I'll help out.
I was trying to put **android:descendantFocusability="beforeDescendants"** in my main ScrollView as following:
```
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:descendantFocusability="beforeDescendants">
/*my linearlayout or whatever to hold the views */
```
and it wasn't working, so I had to make a RelativeLayout the parent of the ScrollView, and place the **android:descendantFocusability="beforeDescendants"** in the parent as well.
So I solved it doing the following:
```
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:descendantFocusability="blocksDescendants">
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent">
/*my linearlayout or whatever to hold the views */
``` | I added: `android:overScrollMode="never"` in my ScrollView and set the height to `wrap_content`.
My view was very complex as it was legacy code with LinearLayout inside LinearLayout inside LinearLayout.
This helped me, hope it will help someone else too! |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like such:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | You can simply add this to your LinearLayout: android:focusableInTouchMode="true". It works for me.
```
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:focusableInTouchMode="true"
android:orientation="vertical" >
``` | Adding this line to the main layout solves the problem:
```
android:descendantFocusability="blocksDescendants"
``` |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like such:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | I added: `android:overScrollMode="never"` in my ScrollView and set the height to `wrap_content`.
My view was very complex as it was legacy code with LinearLayout inside LinearLayout inside LinearLayout.
This helped me, hope it will help someone else too! | I had to use the fully qualified name for MyScrollView, otherwise I got an inflate exception.
```
<com.mypackagename.MyScrollView
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
``` |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like such:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | You can simply add this to your LinearLayout: android:focusableInTouchMode="true". It works for me.
```
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:focusableInTouchMode="true"
android:orientation="vertical" >
``` | Probably there are people who have the same problem I was having, so I'll help out.
I was trying to put **android:descendantFocusability="beforeDescendants"** in my main ScrollView as following:
```
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:descendantFocusability="beforeDescendants">
/*my linearlayout or whatever to hold the views */
```
and it wasn't working, so I had to make a RelativeLayout the parent of the ScrollView, and place the **android:descendantFocusability="beforeDescendants"** in the parent as well.
So I solved it doing the following:
```
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:descendantFocusability="blocksDescendants">
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent">
/*my linearlayout or whatever to hold the views */
``` |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like such:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | Like this:
```
<com.ya.test.view.MyScrollView
android:layout_width="match_parent"
android:layout_height="match_parent" >
<FrameLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:descendantFocusability="beforeDescendants"
android:focusable="true"
android:focusableInTouchMode="true" >
``` | I had to use the fully qualified name for MyScrollView, otherwise I got an inflate exception.
```
<com.mypackagename.MyScrollView
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
``` |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like such:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | I had the same problem; after hours of trying several ideas, what finally worked for me was simply adding the `descendantFocusability` attribute to the ScrollView's containing LinearLayout, with the value `blocksDescendants`. In your case:
```
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
android:descendantFocusability="blocksDescendants" >
```
Haven't had the problem reoccur since. | Adding this line to the main layout solves the problem:
```
android:descendantFocusability="blocksDescendants"
``` |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like this:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | I had the same problem; after hours of trying several ideas, what finally worked for me was simply adding the `descendantFocusability` attribute to the ScrollView's containing LinearLayout, with the value `blocksDescendants`. In your case:
```
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
android:descendantFocusability="blocksDescendants" >
```
Haven't had the problem reoccur since. | You should create a new class extending ScrollView, then override requestChildFocus:
```
public class MyScrollView extends ScrollView {
@Override
public void requestChildFocus(View child, View focused) {
if (focused instanceof WebView )
return;
super.requestChildFocus(child, focused);
}
}
```
Then in your XML layout, use:
```
<MyScrollView
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
```
That works for me. The ScrollView will not auto scroll to the WebView anymore. |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like this:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | Probably there are people who have the same problem I was having, so I'll help out.
I was trying to put **android:descendantFocusability="beforeDescendants"** in my main ScrollView as follows:
```
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:descendantFocusability="beforeDescendants">
/*my linearlayout or whatever to hold the views */
```
and it wasn't working, so I had to make a RelativeLayout the parent of the ScrollView, and place the **android:descendantFocusability="beforeDescendants"** in the parent as well.
So I solved it doing the following:
```
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:descendantFocusability="blocksDescendants">
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent">
/*my linearlayout or whatever to hold the views */
``` | I had to use the fully qualified name for MyScrollView, otherwise I got an inflate exception.
```
<com.mypackagename.MyScrollView
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
``` |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like this:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | You should create a new class extending ScrollView, then override requestChildFocus:
```
public class MyScrollView extends ScrollView {
@Override
public void requestChildFocus(View child, View focused) {
if (focused instanceof WebView )
return;
super.requestChildFocus(child, focused);
}
}
```
Then in your XML layout, use:
```
<MyScrollView
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
```
That works for me. The ScrollView will not auto scroll to the WebView anymore. | Adding this line in the main layout solves the problem
```
android:descendantFocusability="blocksDescendants"
``` |
9,842,494 | So I have a fascinating problem. Despite the fact that I'm not manually or programmatically scrolling my view, my WebView is being automatically scrolled to after the data inside it loads.
I've got a fragment in a viewpager. When I first load the pager, it works as expected and everything is shown. But once I "flip the page" the data loads and the WebView pops up to the top of the page, hiding the views above it, which is undesirable.
Does anyone know how to prevent this from happening?
My layout looks like this:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<TextView
android:id="@+id/article_title"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginTop="10dp"
android:layout_marginBottom="2dp"
android:text="Some Title"
android:textAppearance="?android:attr/textAppearanceLarge"
android:textColor="@color/article_title"
android:textStyle="bold" />
<LinearLayout
android:id="@+id/LL_Seperator"
android:layout_width="fill_parent"
android:layout_height="1dp"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:layout_marginBottom="5dp"
android:background="@color/text"
android:orientation="horizontal" >
</LinearLayout>
<WebView
android:id="@+id/article_content"
android:layout_width="match_parent"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/article_link"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:text="View Full Article"
android:textColor="@color/article_title"
android:textStyle="bold" />
</LinearLayout>
</ScrollView>
```
I'm also not giving focus to anything. By default, it seems to automatically scroll to the WebView after it has loaded. How do I prevent this? | 2012/03/23 | [
"https://Stackoverflow.com/questions/9842494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43790/"
] | You should create a new class extending ScrollView, then override requestChildFocus:
```
public class MyScrollView extends ScrollView {
@Override
public void requestChildFocus(View child, View focused) {
if (focused instanceof WebView )
return;
super.requestChildFocus(child, focused);
}
}
```
Then in your XML layout, use:
```
<MyScrollView
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/background" >
```
That works for me. The ScrollView will not auto scroll to the WebView anymore. | I added: `android:overScrollMode="never"` in my ScrollView and set the height to `wrap_content`.
My view was very complex as it was legacy code with LinearLayout inside LinearLayout inside LinearLayout.
This helped me, hope it will help someone else too! |
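For reference, a minimal sketch of the change described in this answer (assuming the `overScrollMode` attribute and the `wrap_content` height both go on the ScrollView itself; the child content is omitted):

```xml
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:overScrollMode="never" >
    <!-- nested LinearLayouts and other content as before -->
</ScrollView>
```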
Is there a way to obtain a pdf file without figures and tables, while maintaining the figure numbers in the body text?
Thank you very much | 2017/03/25 | [
"https://tex.stackexchange.com/questions/360331",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/105060/"
] | That picture looks like
```
\(\left(\frac{\partial U}{\partial T}\right)_V\).
``` | The `esdiff` package makes it easy to type: the `\diffp*` command accepts 3 arguments (function in numerator, variables in denominator, evaluation point). It can calculate by itself the order of derivation in crossed derivatives.
You have options to have the differential symbol upright (default) or italic, and to set horizontal spacings between the differential symbol and what follows, or between the differential symbols of variables for crossed derivatives.
```
\documentclass{article}
\usepackage{amsmath}
\usepackage[thinc]{esdiff}
\begin{document}
\begin{gather*}
\diffp*{U}{T}{V}\quad \diffp*{f(x,y)}{{x^2} y}{(x_0,y_0)} \\
\diffp{f}{xy} = \diffp{f}{yx}
\end{gather*}%
\end{document}
```
[](https://i.stack.imgur.com/VWcnk.png) |
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | A whole family:
$$ A=\begin{bmatrix}0&a\\0&a'\end{bmatrix}, \quad B=\begin{bmatrix}b&b'\\0&0\end{bmatrix}$$
with non-orthogonal vectors $(a,a')$ and $(b, b')$. | We can think of constructing such matrices by using the rank-nullity theorem. Note that at least one matrix must be singular: taking determinants on both sides of $AB=O$ gives $\det A \det B = 0$. Now choose the columns of the singular matrix $A$ arbitrarily, subject to the column space not having full dimension, i.e., choose at least one fewer linearly independent column than the order of the matrix, but at least one non-zero column. For the other matrix $B$, choose columns which lie exactly in the null space of the first matrix. Then we get $AB=O$, whereas, by virtue of there being some linearly independent vectors in $A$, we get $BA\neq O$. I think taking examples in second-order matrices and generalizing using block matrices works, as Andreas' answer above has done.
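A quick numeric sanity check of the family above, with hypothetical concrete values $a=1$, $a'=2$, $b=3$, $b'=4$ (not orthogonal, since $1\cdot3 + 2\cdot4 = 11 \neq 0$):

```python
import numpy as np

# Hypothetical member of the family: (a, a') = (1, 2), (b, b') = (3, 4).
A = np.array([[0, 1],
              [0, 2]])
B = np.array([[3, 4],
              [0, 0]])

assert (A @ B == 0).all()   # AB is the zero matrix
assert (B @ A != 0).any()   # BA is not
```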
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 & 0 \\ 1 & 0\end{bmatrix}.$ When thinking of examples try to think of the most trivial ones like $O$, $I\_n$, etc. These are also kind of trivial, right? | Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0\_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define
$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$
whose traces are
$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$
Hence
$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}\_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O\_n$$
$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}\_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O\_n$$ |
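A quick numeric check of this construction with hypothetical vectors $u=(1,1)$, $v=(1,0)$, $w=(0,1)$, so that $v^{\top}u = 1 \neq 0$ and $v^{\top}w = 0$:

```python
import numpy as np

# Hypothetical choices satisfying the conditions above.
u = np.array([1.0, 1.0])
v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])

A = np.outer(u, v)          # A = u v^T, so tr(A) = v^T u
B = np.outer(w, v)          # B = w v^T, so tr(B) = v^T w

assert np.trace(A) != 0 and np.trace(B) == 0
assert (A @ B == 0).all()   # AB = tr(B) * A = 0
assert (B @ A != 0).any()   # BA = tr(A) * B != 0
```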
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | A whole family:
$$ A=\begin{bmatrix}0&a\\0&a'\end{bmatrix}, \quad B=\begin{bmatrix}b&b'\\0&0\end{bmatrix}$$
with non-orthogonal vectors $(a,a')$ and $(b, b')$. | Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0\_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define
$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$
whose traces are
$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$
Hence
$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}\_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O\_n$$
$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}\_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O\_n$$ |
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 & 0 \\ 1 & 0\end{bmatrix}.$ When thinking of examples try to think of the most trivial ones like $O$, $I\_n$, etc. These are also kind of trivial, right? | Yet another more powerful (and maybe more "advanced"):
Let $A$ be a given matrix; then $AB-BA$ can be rewritten, with the help of matrix multiplication and vectorization, as $M\_{AR}\text{vec}(B) - M\_{AL}\text{vec}(B)$, where $M\_{AR}$ is a matrix performing multiplication from the right by $A$ and $M\_{AL}$ from the left by $A$.
We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min\_B\{\|(M\_{AR}-M\_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$
Where $C = AB-BA$ is something we can choose. |
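To make the operators $M_{AL}$ and $M_{AR}$ concrete, one standard choice (an assumption on my part, using the column-major identity $\operatorname{vec}(PXQ) = (Q^{\top}\otimes P)\operatorname{vec}(X)$) is Kronecker products:

```python
import numpy as np

# One concrete realization of the left/right multiplication operators,
# via the column-major identity vec(P X Q) = (Q^T kron P) vec(X).
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

vec = lambda M: M.flatten(order="F")   # column-major vectorization
I = np.eye(n)
M_AL = np.kron(I, A)                   # vec(A B) = M_AL @ vec(B)
M_AR = np.kron(A.T, I)                 # vec(B A) = M_AR @ vec(B)

assert np.allclose(M_AL @ vec(B), vec(A @ B))
assert np.allclose((M_AR - M_AL) @ vec(B), vec(B @ A - A @ B))
```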
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | The problem reduces to solving a system of linear equations.
Let
$$x=\left(
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right)$$
$$y=\left(
\begin{array}{cc}
e & f \\
g & h \\
\end{array}
\right)$$
Now solve the 4 linear equations resulting from $x.y=0$
The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is
$$xs=\left(
\begin{array}{cc}
a & b \\
c & \frac{b c}{a} \\
\end{array}
\right)$$
$$ys=\left(
\begin{array}{cc}
e & f \\
-\frac{a e}{b} & -\frac{a f}{b} \\
\end{array}
\right)$$
it contains 5 free parameters and the products are
$$xs.ys = \left(
\begin{array}{cc}
0 & 0 \\
0 & 0 \\
\end{array}
\right)$$
$$ys.xs = \left(
\begin{array}{cc}
a e+c f & b e+\frac{b c f}{a} \\
-\frac{e a^2}{b}-\frac{c f a}{b} & -a e-c f \\
\end{array}
\right)$$
The choice
$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$
gives
$$xx = \left(
\begin{array}{cc}
1 & 1 \\
0 & 0 \\
\end{array}
\right)$$
$$yy = \left(
\begin{array}{cc}
1 & 0 \\
-1 & 0 \\
\end{array}
\right)$$
EDIT
As to the number of free parameters: to begin with we have 8 parameters, the elements of the two matrices. The 4 equations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5. | Yet another more powerful (and maybe more "advanced"):
Let $A$ be a given matrix; then $AB-BA$ can be rewritten, with the help of matrix multiplication and vectorization, as $M\_{AR}\text{vec}(B) - M\_{AL}\text{vec}(B)$, where $M\_{AR}$ is a matrix performing multiplication from the right by $A$ and $M\_{AL}$ from the left by $A$.
We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min\_B\{\|(M\_{AR}-M\_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$
Where $C = AB-BA$ is something we can choose. |
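The concrete pair $xx$, $yy$ produced by the linear-system solution above is easy to verify numerically:

```python
import numpy as np

# The concrete choice a=1, b=1, c=0, e=1, f=0 from the answer above.
xx = np.array([[1, 1],
               [0, 0]])
yy = np.array([[1, 0],
               [-1, 0]])

assert (xx @ yy == 0).all()                             # xx.yy = O
assert (yy @ xx == np.array([[1, 1], [-1, -1]])).all()  # yy.xx != O
```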
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | A whole family:
$$ A=\begin{bmatrix}0&a\\0&a'\end{bmatrix}, \quad B=\begin{bmatrix}b&b'\\0&0\end{bmatrix}$$
with non-orthogonal vectors $(a,a')$ and $(b, b')$. | Yet another more powerful (and maybe more "advanced"):
Let $A$ be a given matrix; then $AB-BA$ can be rewritten, with the help of matrix multiplication and vectorization, as $M\_{AR}\text{vec}(B) - M\_{AL}\text{vec}(B)$, where $M\_{AR}$ is a matrix performing multiplication from the right by $A$ and $M\_{AL}$ from the left by $A$.
We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min\_B\{\|(M\_{AR}-M\_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$
Where $C = AB-BA$ is something we can choose. |
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | The problem reduces to solving a system of linear equations.
Let
$$x=\left(
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right)$$
$$y=\left(
\begin{array}{cc}
e & f \\
g & h \\
\end{array}
\right)$$
Now solve the 4 linear equations resulting from $x.y=0$
The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is
$$xs=\left(
\begin{array}{cc}
a & b \\
c & \frac{b c}{a} \\
\end{array}
\right)$$
$$ys=\left(
\begin{array}{cc}
e & f \\
-\frac{a e}{b} & -\frac{a f}{b} \\
\end{array}
\right)$$
it contains 5 free parameters and the products are
$$xs.ys = \left(
\begin{array}{cc}
0 & 0 \\
0 & 0 \\
\end{array}
\right)$$
$$ys.xs = \left(
\begin{array}{cc}
a e+c f & b e+\frac{b c f}{a} \\
-\frac{e a^2}{b}-\frac{c f a}{b} & -a e-c f \\
\end{array}
\right)$$
The choice
$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$
gives
$$xx = \left(
\begin{array}{cc}
1 & 1 \\
0 & 0 \\
\end{array}
\right)$$
$$yy = \left(
\begin{array}{cc}
1 & 0 \\
-1 & 0 \\
\end{array}
\right)$$
EDIT
As to the number of free parameters: to begin with we have 8 parameters, the elements of the two matrices. The 4 equations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5. | We can think of constructing such matrices by using the rank-nullity theorem. Note that at least one matrix must be singular: taking determinants on both sides of $AB=O$ gives $\det A \det B = 0$. Now choose the columns of the singular matrix $A$ arbitrarily, subject to the column space not having full dimension, i.e., choose at least one fewer linearly independent column than the order of the matrix, but at least one non-zero column. For the other matrix $B$, choose columns which lie exactly in the null space of the first matrix. Then we get $AB=O$, whereas, by virtue of there being some linearly independent vectors in $A$, we get $BA\neq O$. I think taking examples in second-order matrices and generalizing using block matrices works, as Andreas' answer above has done.
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 & 0 \\ 1 & 0\end{bmatrix}.$ When thinking of examples try to think of the most trivial ones like $O$, $I\_n$, etc. These are also kind of trivial, right? | We can think of constructing such matrices by using the rank-nullity theorem. Note that at least one matrix must be singular: taking determinants on both sides of $AB=O$ gives $\det A \det B = 0$. Now choose the columns of the singular matrix $A$ arbitrarily, subject to the column space not having full dimension, i.e., choose at least one fewer linearly independent column than the order of the matrix, but at least one non-zero column. For the other matrix $B$, choose columns which lie exactly in the null space of the first matrix. Then we get $AB=O$, whereas, by virtue of there being some linearly independent vectors in $A$, we get $BA\neq O$. I think taking examples in second-order matrices and generalizing using block matrices works, as Andreas' answer above has done.
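A small numeric instance of the rank-nullity recipe above (the concrete matrices are my own hypothetical choices):

```python
import numpy as np

# A is singular (rank 1), and every column of B lies in the null space of A.
A = np.array([[1, 2],
              [2, 4]])        # null space spanned by (2, -1)
B = np.array([[2, 2],
              [-1, -1]])      # both columns are multiples of (2, -1)

assert np.linalg.matrix_rank(A) == 1
assert (A @ B == 0).all()     # AB = O
assert (B @ A != 0).any()     # BA != O
```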
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | The problem reduces to solving a system of linear equations.
Let
$$x=\left(
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right)$$
$$y=\left(
\begin{array}{cc}
e & f \\
g & h \\
\end{array}
\right)$$
Now solve the 4 linear equations resulting from $x.y=0$
The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is
$$xs=\left(
\begin{array}{cc}
a & b \\
c & \frac{b c}{a} \\
\end{array}
\right)$$
$$ys=\left(
\begin{array}{cc}
e & f \\
-\frac{a e}{b} & -\frac{a f}{b} \\
\end{array}
\right)$$
it contains 5 free parameters and the products are
$$xs.ys = \left(
\begin{array}{cc}
0 & 0 \\
0 & 0 \\
\end{array}
\right)$$
$$ys.xs = \left(
\begin{array}{cc}
a e+c f & b e+\frac{b c f}{a} \\
-\frac{e a^2}{b}-\frac{c f a}{b} & -a e-c f \\
\end{array}
\right)$$
The choice
$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$
gives
$$xx = \left(
\begin{array}{cc}
1 & 1 \\
0 & 0 \\
\end{array}
\right)$$
$$yy = \left(
\begin{array}{cc}
1 & 0 \\
-1 & 0 \\
\end{array}
\right)$$
EDIT
As to the number of free parameters: to begin with we have 8 parameters, the elements of the two matrices. The 4 equations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5. | There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.
Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is *the only* inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to *not contain*$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).
There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace. |
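A minimal numeric instance of the column-permutation recipe above, for $n=2$, $k=1$ (the concrete matrices are my own choice):

```python
import numpy as np

# A projects onto U = span(e1) along W = span(e2);
# P projects onto W along U.
A = np.array([[1, 0],
              [0, 0]])
P = np.array([[0, 0],
              [0, 1]])
assert (A @ P == 0).all() and (P @ A == 0).all()   # the "obvious" choice fails

# Permute the columns of P: the image (column space) stays W = ker(A),
# but the kernel changes and no longer contains U = im(A).
B = P[:, [1, 0]]
assert (A @ B == 0).all()     # im(B) lies in ker(A), so AB = O
assert (B @ A != 0).any()     # ker(B) misses im(A), so BA != O
```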
2,080,245 | I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way | 2017/01/02 | [
"https://math.stackexchange.com/questions/2080245",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/369983/"
] | $\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn out to be useful in other circumstances.
Will start with a $2 \times 2$ example, and then generalize.
First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertible, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.
Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$.
Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors $e\_1 \in \ker(\beta)$ and $e\_2 \in \im(\beta)$, you will get a basis of the underlying vector space. With respect to this basis you have
$$ B =
\begin{bmatrix}
0 & 0\\
0 & \lambda\\
\end{bmatrix}
$$
with $\lambda \ne 0$.
If you want $A B = 0$, you need $A$ to be of the form
$$A =
\begin{bmatrix}
x & 0\\
y & 0\\
\end{bmatrix}
$$
or considering $\alpha$, you need $\alpha(e\_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute
$$
B A =
\begin{bmatrix}
0 & 0\\
\lambda y & 0\\
\end{bmatrix},
$$
to see that $B A \ne 0$ iff $y \ne 0$. So you might as well choose $\lambda = y = 1$, $x = 0$.
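With the suggested choice $\lambda = y = 1$, $x = 0$, the matrices become concrete, and both products can be checked numerically (an optional NumPy sketch, not part of the original argument):

```python
import numpy as np

# B is the projection onto span{e2} (lambda = 1);
# A kills e2 and sends e1 to e2 (x = 0, y = 1).
B = np.array([[0, 0],
              [0, 1]])
A = np.array([[0, 0],
              [1, 0]])

print(A @ B)   # [[0 0] [0 0]]  -> A B = 0
print(B @ A)   # [[0 0] [1 0]]  -> B A != 0
```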
When $\ker(\beta) = \im(\beta)$, let $e\_1 \in \ker(\beta)$ be non-zero, and choose $e\_2 \notin \ker(\beta)$ to get
$$ B =
\begin{bmatrix}
0 & \lambda\\
0 & 0\\
\end{bmatrix}
$$
for some non-zero $\lambda$. For $A B = 0$ we need
$$A =
\begin{bmatrix}
0 & x\\
0 & y\\
\end{bmatrix}
$$
or $\alpha(e\_1) = 0$. Now
$$
B A =
\begin{bmatrix}
0 & \lambda y\\
0 & 0\\
\end{bmatrix},
$$
so again choose $y \ne 0$.
If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices
$$
\begin{bmatrix}
A & 0\\
0 & 0\\
\end{bmatrix},
\begin{bmatrix}
B & 0\\
0 & 0\\
\end{bmatrix}
$$
where $A, B$ are as above, and $0$ are zero matrices of the appropriate size. | Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0\_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define
$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$
whose traces are
$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$
Hence
$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}\_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O\_n$$
$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}\_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O\_n$$ |
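The rank-one construction above is easy to instantiate. Here is an illustrative NumPy check with $\mathrm u = \mathrm v = e_1$ and $\mathrm w = e_2$ in $\mathbb R^2$ (my own choice of vectors), so that $\mathrm u \not\perp \mathrm v$ and $\mathrm v \perp \mathrm w$ as required:

```python
import numpy as np

u = np.array([[1], [0]])   # v.u = 1, so u is not orthogonal to v
v = np.array([[1], [0]])
w = np.array([[0], [1]])   # v.w = 0, so w is orthogonal to v

A = u @ v.T                # rank one, trace(A) = 1
B = w @ v.T                # rank one, trace(B) = 0

print(A @ B)   # zero matrix, since A B = trace(B) * A
print(B @ A)   # equals trace(A) * B = B, hence nonzero
```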
32,140,042 | Is there a way to detect the SIM phone number on a mobile device while using Meteor?
Moreover, what is the correct behavior to have and precautions to make to log users using their phone number (like in Whatsapp or Viber for example)?
Thank you in advance. | 2015/08/21 | [
"https://Stackoverflow.com/questions/32140042",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3104373/"
] | You have two options:
1. Install Microsoft Windows SDK
2. Copy these files to the machine you are trying to build.
take a look at these answers:
[Task could not find "AxImp.exe"](https://stackoverflow.com/questions/5923258/task-could-not-find-aximp-exe)
["Task failed because AXImp.exe was not found" when using MSBuild 12 to build a MVC 4.0 project](https://stackoverflow.com/questions/21373792/task-failed-because-aximp-exe-was-not-found-when-using-msbuild-12-to-build-a-m)
[Task failed because "AxImp.exe" was not found, or the correct Microsoft Windows SDK is not installed](https://social.msdn.microsoft.com/Forums/vstudio/en-US/e56fd9b3-dbea-4545-a5a5-f1af0e333ad7/task-failed-because-aximpexe-was-not-found-or-the-correct-microsoft-windows-sdk-is-not-installed?forum=tfsbuild) | I had the same issue when I try to build the VS2015 solution file with Jenkins on Windows 2012 server after commit to SVN. Tried to copy AxImp.exe and related binaries and settings but failed. SO I had to install the windows 8.1 SDK and that made the successful build via Jenkins CI server.
Hope this helps someone. |
32,140,042 | Is there a way to detect the SIM phone number on a mobile device while using Meteor?
Moreover, what is the correct behavior to have and precautions to make to log users using their phone number (like in Whatsapp or Viber for example)?
Thank you in advance. | 2015/08/21 | [
"https://Stackoverflow.com/questions/32140042",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3104373/"
] | I solved my problem by installing the [Windows 8.1 SDK](https://developer.microsoft.com/pt-pt/windows/downloads/windows-8-1-sdk)
In my case, I was trying to compile a **.csproj** using **MSBuild** on a CI (Continuous Integration) **Windows 2012 R2** server. Although the most relevant web discussions on this subject suggested installing the **Windows SDK**, they failed to point out EXACTLY which one.
So, at first, I guessed they meant the **Windows 10 SDK**, which is also the right version for a **Windows 2012** server. Unfortunately, the **Windows 10 SDK** does not include the missing **AxImp.exe** SDK file.
So, as you can imagine, it took me quite a while to figure out that **Windows 8.1 SDK** was the key to this riddle.
Again, hope it helps someone else ;-) | I had the same issue when I tried to build a VS2015 solution file with Jenkins on a Windows 2012 server after a commit to SVN. I tried to copy AxImp.exe and the related binaries and settings, but that failed. So I had to install the Windows 8.1 SDK, and that made the build succeed via the Jenkins CI server.
Hope this helps someone. |
8,243,134 | How do I accomplish the following in C++, and what is doing such things called?
```
template <bool S>
class NuclearPowerplantControllerFactoryProviderFactory {
// if S == true
typedef int data_t;
// if S == false
typedef unsigned int data_t;
};
``` | 2011/11/23 | [
"https://Stackoverflow.com/questions/8243134",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/254704/"
] | By **specialization**:
```
template <bool> class Foo;
template <> class Foo<true>
{
typedef int data_t;
};
template <> class Foo<false>
{
typedef unsigned int data_t;
};
```
You can choose to make one of the two cases the primary template and the other one the specialization, but I prefer this more symmetric version, given that `bool` can only have two values.
---
If this is the first time you see this, you might also like to think about *partial* specialization:
```
template <typename T> struct remove_pointer { typedef T type; };
template <typename U> struct remove_pointer<U*> { typedef U type; };
```
---
As @Nawaz says, the easiest way is probably to `#include <type_traits>` and say:
```
typedef typename std::conditional<S, int, unsigned int>::type data_t;
``` | @Kerrek has answered the question sufficiently, but that can be more generic as follows:
```
template<bool b, typename T, typename U>
struct select
{
typedef T type;
};
template<typename T, typename U>
struct select<false, T, U>
{
typedef U type;
};
```
And use as:
```
template <bool S>
class NuclearPowerplantControllerFactoryProviderFactory
{
typedef typename select<S, int, unsigned int>::type data_t;
//use data_t as data type
};
```
If `S` is true, the first type argument in `select` will be selected; otherwise the second type argument will be selected. It is generic because you specify both types in `select<>`, and based on the value of the boolean, `select<b,T,U>::type` returns either the first type or the second type. |
40,636,004 | suppose we have a table: MASTER\_X\_Y in the database.
I want the syntax to run the query :
```
INSERT INTO MASTER_VARIABLE1_VARIABLE2 VALUES (.....);
```
where VARIABLE1 and VARIABLE2 have values X and Y respectively which have been selected from another table.
Is this possible ?
( I have 38 possible combinations of X and Y and simply want to insert the selected data into the correct tables. Is there any other approach ? )
I am using oracle SQL 11g.
This is the first time I am writing a PL/SQL procedure and I am not getting any straightforward answers.
Thank you ! | 2016/11/16 | [
"https://Stackoverflow.com/questions/40636004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5800086/"
] | Still need to test for the user hitting Cancel... This worked for me when hitting Cancel, or typing in 0, 1, or 11
```
Prints:
Dim NumberOfCopies As String
NumberOfCopies = Application.InputBox("How many copies do you want to print? Must enter 0-10", Type:=2)
If NumberOfCopies = "False" Or NumberOfCopies = "0" Then
'If user hits Cancel, NumberofCopies will be "False"
Else
If NumberOfCopies >= "11" Then
MsgBox "Max to print is 10 copies"
GoTo Prints
Else
ActiveSheet.PrintOut Copies:=NumberOfCopies
End If
End If
``` | Putting numbers in quotation marks turns them into a String. Since you want the user to input a number, you should make the following changes:
```
Dim NumberOfCopies As Integer
NumberOfCopies = Application.InputBox("How many copies do you want to print Must enter 0-10", Type:=1)
If NumberofCopies >= 11 Then
...
If NumberOfCopies = 0 Or NumberOfCopies = "" Then
...
``` |
40,636,004 | suppose we have a table: MASTER\_X\_Y in the database.
I want the syntax to run the query :
```
INSERT INTO MASTER_VARIABLE1_VARIABLE2 VALUES (.....);
```
where VARIABLE1 and VARIABLE2 have values X and Y respectively which have been selected from another table.
Is this possible ?
( I have 38 possible combinations of X and Y and simply want to insert the selected data into the correct tables. Is there any other approach ? )
I am using oracle SQL 11g.
This is the first time I am writing a PL/SQL procedure and I am not getting any straightforward answers.
Thank you ! | 2016/11/16 | [
"https://Stackoverflow.com/questions/40636004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5800086/"
] | Putting numbers in quotation marks turns them into a String. Since you want the user to input a number, you should make the following changes:
```
Dim NumberOfCopies As Integer
NumberOfCopies = Application.InputBox("How many copies do you want to print Must enter 0-10", Type:=1)
If NumberofCopies >= 11 Then
...
If NumberOfCopies = 0 Or NumberOfCopies = "" Then
...
``` | `Type:=2` is actually for Text type. You are working with numbers, so you should set it as `Type:=1`. With that, if the user enters anything that is not a number, it will automatically pop up a message "Number is not valid" (no scripting effort required for that).
You should also avoid using position labels (`GoTo` targets) whenever you can. Your code could easily be written with `Do...While` loops.
As already pointed out, when working with numbers, do not use them enclosed in double quotes, or VBA will treat them as String (which is not what you want).
@Rdster brought up a good point about the user hitting the "Cancel" button. When that occurs, the value of your variable will be "False" (a String), so you should check for that as well.
With all of that being said, I believe your code would work better like this:
```
'ASKS HOW MANY COPIES TO PRINT
Dim NumberOfCopies As String
Do
NumberOfCopies = Application.InputBox("How many copies do you want to print? Must enter 0-10", Type:=1)
If NumberOfCopies = "False" Then Exit Sub ' If user clicks on Cancel button
If NumberOfCopies > 10 Then
MsgBox "Max to print is 10 copies"
End If
Loop While NumberOfCopies > 10 'although producing the same result, it is cleaner to compare with 10 than with >= 11
'Avoid leaving blank conditions. Work with what you have.
'Everything that doesn't match your conditions will be skipped anyways.
If NumberOfCopies > 0 And NumberOfCopies <= 10 Then
ActiveSheet.PrintOut copies:=NumberOfCopies
End If
``` |
40,636,004 | suppose we have a table: MASTER\_X\_Y in the database.
I want the syntax to run the query :
```
INSERT INTO MASTER_VARIABLE1_VARIABLE2 VALUES (.....);
```
where VARIABLE1 and VARIABLE2 have values X and Y respectively which have been selected from another table.
Is this possible ?
( I have 38 possible combinations of X and Y and simply want to insert the selected data into the correct tables. Is there any other approach ? )
I am using oracle SQL 11g.
This is the first time I am writing a PL/SQL procedure and I am not getting any straightforward answers.
Thank you ! | 2016/11/16 | [
"https://Stackoverflow.com/questions/40636004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5800086/"
] | Still need to test for the user hitting Cancel... This worked for me when hitting Cancel, or typing in 0, 1, or 11
```
Prints:
Dim NumberOfCopies As String
NumberOfCopies = Application.InputBox("How many copies do you want to print? Must enter 0-10", Type:=2)
If NumberOfCopies = "False" Or NumberOfCopies = "0" Then
'If user hits Cancel, NumberofCopies will be "False"
Else
If NumberOfCopies >= "11" Then
MsgBox "Max to print is 10 copies"
GoTo Prints
Else
ActiveSheet.PrintOut Copies:=NumberOfCopies
End If
End If
``` | `Type:=2` is actually for Text type. You are working with numbers, so you should set it as `Type:=1`. With that, if the user enters anything that is not a number, it will automatically pop up a message "Number is not valid" (no scripting effort required for that).
You should also avoid using position labels (`GoTo` targets) whenever you can. Your code could easily be written with `Do...While` loops.
As already pointed out, when working with numbers, do not use them enclosed in double quotes, or VBA will treat them as String (which is not what you want).
@Rdster brought up a good point about the user hitting the "Cancel" button. When that occurs, the value of your variable will be "False" (a String), so you should check for that as well.
With all of that being said, I believe your code would work better like this:
```
'ASKS HOW MANY COPIES TO PRINT
Dim NumberOfCopies As String
Do
NumberOfCopies = Application.InputBox("How many copies do you want to print? Must enter 0-10", Type:=1)
If NumberOfCopies = "False" Then Exit Sub ' If user clicks on Cancel button
If NumberOfCopies > 10 Then
MsgBox "Max to print is 10 copies"
End If
Loop While NumberOfCopies > 10 'although producing the same result, it is cleaner to compare with 10 than with >= 11
'Avoid leaving blank conditions. Work with what you have.
'Everything that doesn't match your conditions will be skipped anyways.
If NumberOfCopies > 0 And NumberOfCopies <= 10 Then
ActiveSheet.PrintOut copies:=NumberOfCopies
End If
``` |
10,005,951 | I want to load the list of the groups as well as data into two separate datatables (or one, but I don't see that possible). Then I want to apply the grouping like this:
Groups
```
A
B
Bar
C
Car
```
Data
```
Ale
Beer
Bartender
Barry
Coal
Calm
Carbon
```
The final result after grouping should be like this.
```
*A
Ale
*B
*Bar
Bartender
Barry
Beer
*C
Calm
*Car
Carbon
Coal
```
I only have a grouping list, not the levels or anything else. The items falling under a certain group are the ones that start with the same letters as the group's name. The indentation is not a must. Hopefully my example clarifies what I need but cannot name, which is why I am unable to find anything similar on Google.
The key things here are:
```
1. Grouping by a provided list of groups
2. There can be unlimited layers of grouping
``` | 2012/04/04 | [
"https://Stackoverflow.com/questions/10005951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1167953/"
] | Since every record has its children, the query should also return the parent for each record. Then there is a nice trick in the advanced grouping tab: choosing the parent's column yields as many higher-level groups as needed, recursively. I learnt about that in <http://blogs.microsoft.co.il/blogs/barbaro/archive/2008/12/01/creating-sum-for-a-group-with-recursion-in-ssrs.aspx> | I suggest reporting from a query like this:
```
select gtop.category top_category,
gsub.category sub_category,
dtab.category data_category
from groupTable gtop
join groupTable gsub on gsub.category like gtop.category + '%'
left join dataTable dtab on dtab.category like gsub.category + '%'
where len(gtop.category) = 1 and
not exists
(select null
from groupTable gchk
where gsub.category = gtop.category and
gchk.category like gsub.category + '%' and
gchk.category <> gsub.category and
dtab.category like gchk.category + '%')
```
- with report groups on top\_category and sub\_category, and headings for both groups. You will probably want to hide the sub\_category heading row when sub\_category = top\_category. |
12,169,718 | Lets say I have an XML file:
```
<locations>
<country name="Australia">
<city>Brisbane</city>
<city>Melbourne</city>
<city>Sydney</city>
</country>
<country name="England">
<city>Bristol</city>
<city>London</city>
</country>
<country name="America">
<city>New York</city>
<city>Washington</city>
</country>
</locations>
```
**I want it flattened to (this should be the final result):**
```
Australia
Brisbane
Melbourne
Sydney
England
Bristol
London
America
New York
Washington
```
I've tried this:
```
var query = XDocument.Load(@"test.xml").Descendants("country")
.Select(s => new
{
Country = (string)s.Attribute("name"),
Cities = s.Elements("city")
.Select (x => new { City = (string)x })
});
```
But this returns a nested list inside `query`. Like so:
```
{ Australia, Cities { Brisbane, Melbourne, Sydney }},
{ England, Cities { Bristol, London }},
{ America, Cities { New York, Washington }}
```
Thanks | 2012/08/29 | [
"https://Stackoverflow.com/questions/12169718",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/769083/"
] | [SelectMany](http://msdn.microsoft.com/en-us/library/bb534336.aspx) should do the trick here.
```
var result =
XDocument.Load(@"test.xml")
.Descendants("country")
.SelectMany(e =>
(new [] { (string)e.Attribute("name")})
.Concat(
e.Elements("city")
.Select(c => c.Value)
)
)
.ToList();
``` | Here's a way to do it with query syntax:
```
var query = from country in XDocument.Load(@"test.xml").Descendants("country")
let countryName = new [] {(string)country.Attribute("name")}
let cities = country.Elements("city").Select(x => (string)x)
from place in countryName.Concat(cities)
select place;
``` |
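For readers outside .NET, the same flatten-and-concatenate idea (the role `SelectMany` plays above) can be sketched in Python with the standard library's `ElementTree`. This is an illustrative translation, not part of the original answers:

```python
import xml.etree.ElementTree as ET

xml = """<locations>
  <country name="Australia">
    <city>Brisbane</city><city>Melbourne</city><city>Sydney</city>
  </country>
  <country name="England">
    <city>Bristol</city><city>London</city>
  </country>
</locations>"""

root = ET.fromstring(xml)
# The nested comprehension plays the role of SelectMany: for each country,
# yield its name followed by its cities, into one flat list.
places = [place
          for country in root.iter("country")
          for place in [country.get("name")] + [c.text for c in country.findall("city")]]

print(places)
# ['Australia', 'Brisbane', 'Melbourne', 'Sydney', 'England', 'Bristol', 'London']
```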
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `PyINCREF` and `PyDECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | To understand garbage-collection, go to a bowling alley and watch how the pinsetter removes fallen pins after the first ball has been rolled. Rather than identifying and removing individual fallen pins, the pinsetter mechanism picks up all the pins that are still standing, lifts them to safety, and then runs a sweeper bar across the lane without regard for how many pins are lying there or where they are located. Once that is done, the pins that were standing are placed back on the lane. Many garbage-collection systems work on much the same principle: they have to do a non-trivial amount of work for each live object to ensure it doesn't get destroyed, but dead objects are destroyed wholesale without even being looked at or noticed.
**Addendum**
A garbage collector that always has to act on every live item to ensure its preservation is apt to be slow when there are a lot of live items; this is why garbage collectors have, historically, gotten a bad rap. The BASIC interpreter on the Commodore 64 (which was, incidentally, written by Microsoft in the days *before* MS-DOS) would take many seconds to perform a garbage collection in a program which had an array of a few hundred strings. Performance can be improved enormously if items which survive their first garbage collection can be ignored until *many* items have survived their first garbage collection, and those which have *participated in* and survived two garbage collections (note that they won't have to participate in their second collection until many other objects have survived their first) can be ignored until many other objects have also participated and survived in their second. This concept can be partially implemented easily (even on the Commodore 64, one could force all strings that exist at a given moment to be exempt from future garbage collection, which could be useful if on startup a program created large arrays of strings that would never change) but becomes more powerful with a little extra hardware support.
If one figures that a garbage collector will try to pack the objects which are going to be kept as close to one end of memory as it can, generational support requires doing nothing more than keeping track of what (contiguous) range of memory is used by objects of each generation. All objects of every generation must be scanned to make sure all newer-generation live objects are located and preserved, but older-generation objects don't have to be moved, since the memory they occupy isn't in danger of wholesale elimination. This approach is very simple to implement, and can offer some significant performance improvements versus a non-generational GC, but even the scanning phase of a GC can be expensive if there are many live objects.
The key to speeding up "newer-generation" garbage collections is to observe that if an object "Fred" has not been written since the last garbage-collection in which it participated, it cannot possibly contain any references to any objects which have been created since that time. Consequently, none of the objects to which it holds references would be in any danger of elimination until Fred itself is eligible for elimination. Of course, if references to newer objects have been stored in Fred since the last lower-level GC, those references do need to be scanned. To accomplish this, advanced garbage collectors set up hardware traps which fire when parts of the older generation heap are written. When such a trap fires, it adds the objects in that region to a list of older generation objects which will need to be scanned, and then disables the trap associated with that region. In cases where older-generation objects frequently have references to newer objects stored in them, this extra bookkeeping can hurt performance, but in most cases it ends up being a major performance win. | @Jim has answered quite a bit of this; I will add more to it.
Firstly, what makes you think that deallocating as soon as the count is `0` (call this alternative A1) is a good alternative?
Garbage collectors not only deallocate objects but are responsible for complete memory management. Start with `fragmentation`, one of the biggest issues with garbage collectors: if it is not handled properly, it results in unnecessary page hits and cache misses. Garbage collectors are designed from the start to handle this issue, and with different generations it becomes easier to handle. With A1, a thread would periodically have to set this up and handle it.
Moreover, it turns out that clearing multiple objects at once is faster than proceeding as in A1. (Think of a room with sand spread around: it is faster to sweep it all up together than to pick up each grain individually.)
Secondly, for thread-safety in multi-threaded systems, one would have to hold a lock for every object to increase/decrease the count, which costs performance and extra memory. Plus, modern collectors have the ability to collect in parallel rather than stop the world (e.g. Java's ParallelGC); I wonder how this could happen with A1. |
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `PyINCREF` and `PyDECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | Your thoughts are generally very insightful and well considered. You're just missing some basic information.
>
> Garbage collectors deallocate an object when it is no longer in any scope
>
>
>
That is completely incorrect in general. Garbage collectors work at run-time on a representation in which the notion of scope has long since been removed. For example, inlining and applications of liveness analysis destroy scope.
Tracing garbage collectors recycle space at some point after the last reference disappears. Liveness analysis can have references in the stack frame overwritten with other references even if the variable is still in scope because liveness analysis determined that the variable is never used again and, therefore, is no longer needed.
>
> It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
>
>
>
Performance. You can reference count at the level of stack entries and registers but performance is absolutely terrible. All practical reference counting garbage collectors defer counter decrements to the end of scope in order to achieve reasonable ([but still bad](http://flyingfrogblog.blogspot.co.nz/2011/01/boosts-sharedptr-up-to-10-slower-than.html)) performance. State-of-the-art reference counting garbage collectors defer decrements in order to batch them up and can [allegedly](http://users.cecs.anu.edu.au/~steveb/downloads/pdf/rc-ismm-2012.pdf) attain competitive performance.
>
> I'm assuming that the framework keeps an integer for each object
>
>
>
Not necessarily. For example, OCaml uses a single bit.
>
> From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope.
>
>
>
From a programming point of view, it would be nice if code ran 10x faster effortlessly.
Note that destructors inhibit tail call elimination which are invaluable in functional programming.
>
> I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
>
>
>
Consider a program that solves the n-queens problem by manipulating lists of chess board coordinates. The input is a single integer. The output is a list containing a few board coordinates. The intermediate data is a huge spaghetti stack of linked list nodes. If you coded this up by pre-allocating a big enough stack of linked list nodes, manipulating them to get the answer, copy out the (small) answer and then calling `free` once on the entire stack then you'd be doing almost exactly the same thing that a generational garbage collector does. In particular, you'd only copy ~6% of your data and you'd deallocate the other ~94% with a single call to `free`.
That was a perfect happy day scenario for a generational garbage collector that adheres to the hypothesis that "most objects die young and old objects rarely refer to new object". A pathological counter example where generational garbage collectors struggle is filling a hash table with freshly allocated objects. The spine of the hash table is a big array that survives so it will be in the old generation. Every new object inserted into it is a backpointer from the old generation to the new generation. Every new object survives. So generational garbage collectors allocate quickly but then mark everything, copy everything and update pointers to everything and, therefore, run ~3x slower than a simple C or C++ solution would.
>
> Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program
>
>
>
Note that destructors and garbage collection are orthogonal concepts. For example, .NET provides destructors in the form of `IDisposable`.
FWIW, in ~15 years of using garbage collected languages I have used memory profiling maybe 3 times.
>
> why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work?
>
>
>
CPython does that, I believe. Mathematica and Erlang restrict the heap to be a DAG by design so they can use reference counting alone. GC researchers have proposed related techniques such as trial-deletion as an auxiliary algorithm to detect cycles.
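You can see that hybrid behaviour directly in CPython: acyclic garbage dies the instant its refcount hits zero, while a cycle has to wait for the sweep. A small stdlib-only demo (`gc.disable()` just keeps the automatic sweeps from firing mid-demonstration):

```python
import gc
import weakref

gc.disable()              # make the demonstration deterministic

class Node:
    pass

# Acyclic object: the refcount hits zero and it is freed immediately.
a = Node()
ra = weakref.ref(a)
del a
assert ra() is None       # gone at once, no collector pass needed

# Cyclic object: the refcount can never reach zero on its own.
b = Node()
b.self = b                # self-reference
rb = weakref.ref(b)
del b
assert rb() is not None   # still alive, as far as pure refcounting can tell
gc.collect()              # the periodic sweep finds the cycle
assert rb() is None
```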
Note also that reference counting is theoretically asymptotically faster than tracing garbage collection as its performance is independent of the size of the (live) heap. In practice, tracing garbage collection is still *much* faster even with 100GB heaps. | Where I've come across GC systems they wait until they need to run, so that the relocation of objects still in use can be done once, rather than many times.
Consider a series of objects allocated sequentially in memory:
```
Object 1
Object 2
Object 3
Object 4
Object 5
```
If Object 2 can be deallocated, and GC operates immediately, Objects 3, 4 and 5 will all need to be moved.
Now consider that Object 4 can be deallocated: GC will now move Object 5 next to Object 3, so Object 5 has been moved twice.
However, if GC waits a short while, both Objects 2 and 4 can be removed at the same time, meaning that Object 5 is moved once, and moved further.
Multiply the number of objects by, say, 100 and you can see considerable time savings from this approach.
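The saving can be demonstrated with a toy move-counting simulation (purely illustrative, Python lists standing in for heap slots):

```python
def moves_immediate(objs, dead):
    """Compact after every single deallocation; count object moves."""
    objs = list(objs)
    moves = 0
    for d in dead:                  # collect one garbage object at a time
        i = objs.index(d)
        moves += len(objs) - i - 1  # everything after it slides down
        objs.pop(i)
    return moves

def moves_batched(objs, dead):
    """Compact once after several deallocations; count object moves."""
    # One sweep: every survivor after the first dead slot moves exactly once.
    first = min(objs.index(d) for d in dead)
    return sum(1 for o in objs[first:] if o not in dead)

heap = [1, 2, 3, 4, 5]
assert moves_immediate(heap, [2, 4]) == 4  # Object 5 was moved twice
assert moves_batched(heap, {2, 4}) == 2    # each survivor moved only once
```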
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `PyINCREF` and `PyDECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | Your thoughts are generally very insightful and well considered. You're just missing some basic information.
>
> Garbage collectors deallocate an object when it is no longer in any scope
>
>
>
That is completely incorrect in general. Garbage collectors work at run-time on a representation in which the notion of scope has long since been removed. For example, inlining and applications of liveness analysis destroy scope.
Tracing garbage collectors recycle space at some point after the last reference disappears. Liveness analysis can have references in the stack frame overwritten with other references even if the variable is still in scope because liveness analysis determined that the variable is never used again and, therefore, is no longer needed.
>
> It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
>
>
>
Performance. You can reference count at the level of stack entries and registers but performance is absolutely terrible. All practical reference counting garbage collectors defer counter decrements to the end of scope in order to achieve reasonable ([but still bad](http://flyingfrogblog.blogspot.co.nz/2011/01/boosts-sharedptr-up-to-10-slower-than.html)) performance. State-of-the-art reference counting garbage collectors defer decrements in order to batch them up and can [allegedly](http://users.cecs.anu.edu.au/~steveb/downloads/pdf/rc-ismm-2012.pdf) attain competitive performance.
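A toy sketch of deferred decrements (illustrative only: the dictionary-based `DeferredRC` here is invented for exposition, not how any real runtime stores counts):

```python
class DeferredRC:
    """Sketch: increments apply at once, decrements are buffered
    and applied in a batch, so objects die later but counting is cheaper."""
    def __init__(self):
        self.counts = {}
        self.pending_decs = []

    def inc(self, obj):
        self.counts[obj] = self.counts.get(obj, 0) + 1

    def dec(self, obj):
        self.pending_decs.append(obj)   # cheap: no counter touched yet

    def process(self):
        """Apply the batched decrements; return objects that hit zero."""
        freed = []
        for obj in self.pending_decs:
            self.counts[obj] -= 1
            if self.counts[obj] == 0:
                freed.append(obj)
                del self.counts[obj]
        self.pending_decs.clear()
        return freed

rc = DeferredRC()
rc.inc("a"); rc.inc("a"); rc.inc("b")
rc.dec("a"); rc.dec("b")
assert rc.process() == ["b"]   # "b" reached zero only when the batch ran
assert rc.counts == {"a": 1}
```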
>
> I'm assuming that the framework keeps an integer for each object
>
>
>
Not necessarily. For example, OCaml uses a single bit.
>
> From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope.
>
>
>
From a programming point of view, it would be nice if code ran 10x faster effortlessly.
Note that destructors inhibit tail call elimination which are invaluable in functional programming.
>
> I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
>
>
>
Consider a program that solves the n-queens problem by manipulating lists of chess board coordinates. The input is a single integer. The output is a list containing a few board coordinates. The intermediate data is a huge spaghetti stack of linked list nodes. If you coded this up by pre-allocating a big enough stack of linked list nodes, manipulating them to get the answer, copying out the (small) answer and then calling `free` once on the entire stack, then you'd be doing almost exactly the same thing that a generational garbage collector does. In particular, you'd only copy ~6% of your data and you'd deallocate the other ~94% with a single call to `free`.
That was a perfect happy day scenario for a generational garbage collector that adheres to the hypothesis that "most objects die young and old objects rarely refer to new objects". A pathological counterexample where generational garbage collectors struggle is filling a hash table with freshly allocated objects. The spine of the hash table is a big array that survives, so it will be in the old generation. Every new object inserted into it creates a backpointer from the old generation to the new generation. Every new object survives. So generational garbage collectors allocate quickly but then mark everything, copy everything and update pointers to everything and, therefore, run ~3x slower than a simple C or C++ solution would.
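CPython's cycle collector is generational too, and you can watch the young generation fill up and get swept (the counters and thresholds are implementation details, so exact numbers vary by version):

```python
import gc

gc.disable()                      # no automatic sweeps during the demo
gc.collect()                      # start with the generations drained
gen0_before = gc.get_count()[0]
junk = [[] for _ in range(100)]   # 100 freshly allocated tracked containers
gen0_after = gc.get_count()[0]
assert gen0_after >= gen0_before + 100   # the young generation filled up
gc.collect(0)                     # a minor collection sweeps only gen 0
assert gc.get_count()[0] < gen0_after    # survivors promoted out of gen 0
```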
>
> Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program
>
>
>
Note that destructors and garbage collection are orthogonal concepts. For example, .NET provides destructors in the form of `IDisposable`.
FWIW, in ~15 years of using garbage collected languages I have used memory profiling maybe 3 times.
>
> why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work?
>
>
>
CPython does that, I believe. Mathematica and Erlang restrict the heap to be a DAG by design so they can use reference counting alone. GC researchers have proposed related techniques such as trial-deletion as an auxiliary algorithm to detect cycles.
Note also that reference counting is theoretically asymptotically faster than tracing garbage collection as its performance is independent of the size of the (live) heap. In practice, tracing garbage collection is still *much* faster even with 100GB heaps. | I think the reason in performance. If you create much objects in a loop and destroy them at the end of a loop step it would take more time to execute that code, then waiting until the program is idle and freeing the data at once. Or on low memory of cause. |
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `PyINCREF` and `PyDECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | To start with, a point of terminology: "garbage collection" means different things to different people, and some GC schemes are more sophisticated than others. Some people consider reference counting to be a form of GC, but personally I consider "true GC" to be distinct from reference counting.
With refcounts, there is an integer tracking the number of references, and you can trigger deallocation immediately when the refcount hits zero. This is how the CPython implementation works, and how most varieties of C++ smart pointers work. The CPython implementation adds a mark/sweep GC as a backup, so it's very much like the hybrid design you describe.
But refcounting is actually a pretty terrible solution, since it incurs a (relatively) expensive memory write (plus a memory barrier and/or lock, to ensure thread safety) every time a reference is passed, which happens quite a lot. In imperative languages like C++ it's possible (just difficult) to manage memory ownership through macros and coding conventions, but in functional languages like Lisp it becomes well-nigh impossible, because memory allocation usually happens implicitly due to local variable capture in a closure.
So it should come as no surprise that the first step toward a modern GC was invented for Lisp. It was called the "twospace allocator" or "twospace collector" and it worked exactly like it sounds: it divided allocatable memory (the "heap") into two spaces. Every new object was allocated out of the first space until it got too full, at which point allocation would stop and the runtime would walk the reference graph and copy only the live (still referenced) objects to the second space. After the live objects were copied, the first space would be marked empty, and allocation would resume, allocating new objects from the second space, until it got too full, at which point the live objects would be copied back to the first space and the process would start all over again.
The advantage of the twospace collector is that, instead of doing `O(N)` work, where *N* is the total number of garbage objects, it would only do `O(M)` work, where *M* is the *number of objects that were* ***not garbage***. Since in practice, most objects are allocated and then deallocated in a short period of time, this can lead to a substantial performance improvement.
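The mechanics can be sketched in miniature (a toy model where Python lists stand in for the two spaces; a real collector copies raw memory and fixes up pointers):

```python
class TwoSpace:
    """Toy semispace collector over object graphs built from Python lists."""
    def __init__(self):
        self.from_space = []   # where new objects are allocated
        self.to_space = []     # the empty half, used only during collection

    def alloc(self, obj):
        self.from_space.append(obj)
        return obj

    def collect(self, roots):
        # O(live) work: walk the reference graph from the roots and move
        # the survivors; everything left in from-space is dropped wholesale.
        seen = set()
        stack = list(roots)
        while stack:
            o = stack.pop()
            if id(o) in seen:
                continue
            seen.add(id(o))
            self.to_space.append(o)
            stack.extend(c for c in o if isinstance(c, list))
        # Flip the spaces.
        self.from_space, self.to_space = self.to_space, []

gc_heap = TwoSpace()
objs = [gc_heap.alloc([]) for _ in range(100)]
root = gc_heap.alloc([objs[0], objs[1]])   # only 3 objects stay reachable
gc_heap.collect([root])
assert len(gc_heap.from_space) == 3        # 97 dead objects freed for free
```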
Additionally, the twospace collector made it possible to simplify the allocator side as well. Most `malloc()` implementations maintain what is called a "free list": a list of which blocks are still available to be allocated. To allocate a new object, `malloc()` must scan the free list looking for an empty space that's big enough. But the twospace allocator didn't bother with that: it just allocated objects in each space like a stack, by just pushing a pointer up by the desired number of bytes.
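A bump allocator really is just a pointer increment, which is the whole point (toy sketch, not any particular runtime's allocator):

```python
class BumpAllocator:
    """Toy bump-pointer allocator: no free list, allocation is one add."""
    def __init__(self, size):
        self.space = bytearray(size)
        self.top = 0                    # next free byte

    def alloc(self, nbytes):
        if self.top + nbytes > len(self.space):
            raise MemoryError("space full: time to collect and flip")
        addr = self.top
        self.top += nbytes              # just push the pointer up
        return addr

heap = BumpAllocator(64)
assert heap.alloc(16) == 0
assert heap.alloc(16) == 16   # stack-like: each object right after the last
```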
So the twospace collector was much faster than `malloc()`, which was great because Lisp programs would do a lot more allocations than C programs would. Or, to put it another way, Lisp programs needed a way to allocate memory like a stack but with a lifetime that was not limited to the execution stack -- in other words, a stack that could grow infinitely without the program running out of memory. And, in fact, Raymond Chen argues that that's exactly how people should think about GC. I highly recommend his series of blog posts starting with [Everybody thinks about garbage collection the wrong way](http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx).
But the twospace collector had a major flaw, which is that no program could ever use more than half the available RAM: the other half was always wasted. So the history of GC techniques is the history of attempts to improve on the twospace collector, usually by using heuristics of program behavior. However, GC algorithms inevitably involve tradeoffs, usually preferring to deallocate objects in batches instead of individually, which inevitably leads to delays where objects aren't deallocated immediately.
**Edit:** To answer your follow-up question, modern GCs generally incorporate the idea of [generational garbage collection](http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Generational_GC_.28ephemeral_GC.29), where objects are grouped into different "generations" based on lifetime, and an object in one generation gets "promoted" to another generation once it's lived long enough. Sometimes a small difference in object lifetime (e.g. in a request-driven server, storing an object for longer than one request) can lead to a large difference in the amount of time it takes before the object gets deallocated, since it causes it to become more "tenured".
You correctly observe that a true GC has to operate "beneath" the level of `malloc()` and `free()`. (As a side note, it's worth learning about how `malloc()` and `free()` are implemented -- they aren't magic either!) Additionally, for an effective GC, you either need to be conservative (like the Boehm GC) and never move objects, and check things that *might* be pointers, or else you need some kind of "opaque pointer" type -- which Java and C# call "references". Opaque pointers are actually great for an allocation system, since it means you can always move objects by updating pointers to them. In a language like C where you interact directly with raw memory addresses, it's never really safe to move objects.
And there are multiple options for GC algorithms. The standard Java runtime contains no less than five collectors (Young, Serial, old CMS, new CMS, and G1, although I think I'm forgetting one) and each has a set of options that are all configurable.
However, GCs aren't magic. Most GCs are just exploiting the [time-space tradeoff](https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff) of batching work, which means that the gains in speed are usually paid for in increased memory usage (compared to manual memory management or refcounting). But the combination of increased program performance and increased programmer performance, versus the low cost of RAM these days, makes the tradeoff usually worth it.
Hopefully that helps make things clearer! | To understand garbage-collection, go to a bowling alley and watch how the pinsetter removes fallen pins after the first ball has been rolled. Rather than identifying and removing individual fallen pins, the pinsetter mechanism picks up all the pins that are still standing, lifts them to safety, and then runs a sweeper bar across the lane without regard for how many pins are lying there or where they are located. Once that is done, the pins that were standing are placed back on the lane. Many garbage-collection systems work on much the same principle: they have to do a non-trivial amount of work for each live object to ensure it doesn't get destroyed, but dead objects are destroyed wholesale without even being looked at or noticed.
**Addendum**
A garbage collector that always has to act on every live item to ensure its preservation is apt to be slow when there are a lot of live items; this is why garbage collectors have, historically, gotten a bad rap. The BASIC interpreter on the Commodore 64 (which was, incidentally, written by Microsoft in the days *before* MS-DOS) would take many seconds to perform a garbage collection in a program which had an array of a few hundred strings. Performance can be improved enormously if items which survive their first garbage collection can be ignored until *many* items have survived their first garbage collection, and those which have *participated in* and survived two garbage collections (note that they won't have to participate in their second collection until many other objects have survived their first) can be ignored until many other objects have also participated and survived in their second. This concept can be partially implemented easily (even on the Commodore 64, one could force all strings that exist at a given moment to be exempt from future garbage collection, which could be useful if on startup a program created large arrays of strings that would never change) but becomes more powerful with a little extra hardware support.
If one figures that a garbage collector will try to pack the objects which are going to be kept as close to one end of memory as it can, generational support requires doing nothing more than keeping track of what (contiguous) range of memory is used by objects of each generation. All objects of every generation must be scanned to make sure all newer-generation live objects are located and preserved, but older-generation objects don't have to be moved, since the memory they occupy isn't in danger of wholesale elimination. This approach is very simple to implement, and can offer some significant performance improvements versus a non-generational GC, but even the scanning phase of a GC can be expensive if there are many live objects.
The key to speeding up "newer-generation" garbage collections is to observe that if an object "Fred" has not been written since the last garbage-collection in which it participated, it cannot possibly contain any references to any objects which have been created since that time. Consequently, none of the objects to which it holds references would be in any danger of elimination until Fred itself is eligible for elimination. Of course, if references to newer objects have been stored in Fred since the last lower-level GC, those references do need to be scanned. To accomplish this, advanced garbage collectors set up hardware traps which fire when parts of the older-generation heap are written. When such a trap fires, it adds the objects in that region to a list of older-generation objects which will need to be scanned, and then disables the trap associated with that region. In cases where older-generation objects frequently have references to newer objects stored in them, this extra bookkeeping can hurt performance, but in most cases it ends up being a major performance win.
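A minimal sketch of that bookkeeping (the `Heap` class and its software write barrier here are invented for illustration; real collectors use page-protection traps or card tables rather than Python sets):

```python
class Obj:
    def __init__(self):
        self.refs = []

class Heap:
    def __init__(self):
        self.old, self.young = set(), set()
        self.remembered = set()   # old objects written since the last minor GC

    def store(self, holder, value):
        """Every pointer store goes through this 'write barrier'."""
        holder.refs.append(value)
        if holder in self.old and value in self.young:
            self.remembered.add(holder)   # the trap fired: old -> young store

h = Heap()
fred = Obj(); h.old.add(fred)     # "Fred" survived earlier collections
baby = Obj(); h.young.add(baby)   # freshly allocated
h.store(fred, baby)
# A minor GC now only needs to scan h.remembered, not all of h.old:
assert h.remembered == {fred}
```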
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `PyINCREF` and `PyDECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | To start with, a point of terminology: "garbage collection" means different things to different people, and some GC schemes are more sophisticated than others. Some people consider reference counting to be a form of GC, but personally I consider "true GC" to be distinct from reference counting.
With refcounts, there is an integer tracking the number of references, and you can trigger deallocation immediately when the refcount hits zero. This is how the CPython implementation works, and how most varieties of C++ smart pointers work. The CPython implementation adds a mark/sweep GC as a backup, so it's very much like the hybrid design you describe.
But refcounting is actually a pretty terrible solution, since it incurs a (relatively) expensive memory write (plus a memory barrier and/or lock, to ensure thread safety) every time a reference is passed, which happens quite a lot. In imperative languages like C++ it's possible (just difficult) to manage memory ownership through macros and coding conventions, but in functional languages like Lisp it becomes well-nigh impossible, because memory allocation usually happens implicitly due to local variable capture in a closure.
So it should come as no surprise that the first step toward a modern GC was invented for Lisp. It was called the "twospace allocator" or "twospace collector" and it worked exactly like it sounds: it divided allocatable memory (the "heap") into two spaces. Every new object was allocated out of the first space until it got too full, at which point allocation would stop and the runtime would walk the reference graph and copy only the live (still referenced) objects to the second space. After the live objects were copied, the first space would be marked empty, and allocation would resume, allocating new objects from the second space, until it got too full, at which point the live objects would be copied back to the first space and the process would start all over again.
The advantage of the twospace collector is that, instead of doing `O(N)` work, where *N* is the total number of garbage objects, it would only do `O(M)` work, where *M* is the *number of objects that were* ***not garbage***. Since in practice, most objects are allocated and then deallocated in a short period of time, this can lead to a substantial performance improvement.
Additionally, the twospace collector made it possible to simplify the allocator side as well. Most `malloc()` implementations maintain what is called a "free list": a list of which blocks are still available to be allocated. To allocate a new object, `malloc()` must scan the free list looking for an empty space that's big enough. But the twospace allocator didn't bother with that: it just allocated objects in each space like a stack, by just pushing a pointer up by the desired number of bytes.
So the twospace collector was much faster than `malloc()`, which was great because Lisp programs would do a lot more allocations than C programs would. Or, to put it another way, Lisp programs needed a way to allocate memory like a stack but with a lifetime that was not limited to the execution stack -- in other words, a stack that could grow infinitely without the program running out of memory. And, in fact, Raymond Chen argues that that's exactly how people should think about GC. I highly recommend his series of blog posts starting with [Everybody thinks about garbage collection the wrong way](http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx).
But the twospace collector had a major flaw, which is that no program could ever use more than half the available RAM: the other half was always wasted. So the history of GC techniques is the history of attempts to improve on the twospace collector, usually by using heuristics of program behavior. However, GC algorithms inevitably involve tradeoffs, usually preferring to deallocate objects in batches instead of individually, which inevitably leads to delays where objects aren't deallocated immediately.
**Edit:** To answer your follow-up question, modern GCs generally incorporate the idea of [generational garbage collection](http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Generational_GC_.28ephemeral_GC.29), where objects are grouped into different "generations" based on lifetime, and an object in one generation gets "promoted" to another generation once it's lived long enough. Sometimes a small difference in object lifetime (e.g. in a request-driven server, storing an object for longer than one request) can lead to a large difference in the amount of time it takes before the object gets deallocated, since it causes it to become more "tenured".
You correctly observe that a true GC has to operate "beneath" the level of `malloc()` and `free()`. (As a side note, it's worth learning about how `malloc()` and `free()` are implemented -- they aren't magic either!) Additionally, for an effective GC, you either need to be conservative (like the Boehm GC) and never move objects, and check things that *might* be pointers, or else you need some kind of "opaque pointer" type -- which Java and C# call "references". Opaque pointers are actually great for an allocation system, since it means you can always move objects by updating pointers to them. In a language like C where you interact directly with raw memory addresses, it's never really safe to move objects.
And there are multiple options for GC algorithms. The standard Java runtime contains no less than five collectors (Young, Serial, old CMS, new CMS, and G1, although I think I'm forgetting one) and each has a set of options that are all configurable.
However, GCs aren't magic. Most GCs are just exploiting the [time-space tradeoff](https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff) of batching work, which means that the gains in speed are usually paid for in increased memory usage (compared to manual memory management or refcounting). But the combination of increased program performance and increased programmer performance, versus the low cost of RAM these days, makes the tradeoff usually worth it.
Hopefully that helps make things clearer! | Garbage collection using reference counting is very slow, especially in a threaded environment.
I really recommend [this post by Brian Harry](http://blogs.msdn.com/b/brada/archive/2005/02/11/371015.aspx).
A code sample is provided there which more than enough to convince me (C#):
```
public interface IRefCounted : IDisposable
{
    void AddRef();
}

// Ref counted base class.
class RefCountable : IRefCounted
{
    private int m_ref;

    public RefCountable()
    {
        m_ref = 1;
    }

    public void AddRef()
    {
        Interlocked.Increment(ref m_ref);
    }

    public void Dispose()
    {
        if (Interlocked.Decrement(ref m_ref) == 0)
            OnFinalDispose();
    }

    protected virtual void OnFinalDispose()
    {
    }
}
```
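For intuition, here is a toy Python analogue of that pattern (invented names, not from Harry's post, with an explicit lock standing in for `Interlocked`). Every single add-ref/release pays for synchronization, even though the count only matters at the very end:

```python
import threading

class RefCounted:
    """Toy thread-safe reference count mirroring the C# sample."""
    def __init__(self):
        self._refs = 1
        self._lock = threading.Lock()
        self.finalized = False

    def add_ref(self):
        with self._lock:          # synchronization cost paid on EVERY copy
            self._refs += 1

    def release(self):
        with self._lock:
            self._refs -= 1
            if self._refs == 0:
                self.finalized = True   # stand-in for OnFinalDispose()

obj = RefCounted()

def worker():
    for _ in range(10_000):
        obj.add_ref()    # simulate handing out a reference...
        obj.release()    # ...and dropping it again

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

obj.release()            # drop the original reference
print(obj.finalized)     # True: the count reached zero exactly once
```

Without the lock (or an atomic instruction), concurrent increments and decrements can be lost, which is exactly why refcounting in threaded code cannot be a plain `m_ref++`.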
`Interlocked.Increment(ref m_ref)` is an atomic operation that takes hundreds of memory cycles. |
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `Py_INCREF` and `Py_DECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | To start with, a point of terminology: "garbage collection" means different things to different people, and some GC schemes are more sophisticated than others. Some people consider reference counting to be a form of GC, but personally I consider "true GC" to be distinct from reference counting.
With refcounts, there is an integer tracking the number of references, and you can trigger deallocation immediately when the refcount hits zero. This is how the CPython implementation works, and how most varieties of C++ smart pointers work. The CPython implementation adds a mark/sweep GC as a backup, so it's very much like the hybrid design you describe.
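You can watch that hybrid in action from Python itself (this demo is CPython-specific; behaviour shown is for CPython 3.4+, where the cycle collector can run `__del__` on cyclic objects):

```python
import gc

gc.disable()        # keep the automatic cycle collector out of the demo
log = []

class Node:
    def __init__(self, name):
        self.name = name
        self.ref = None
    def __del__(self):
        log.append(self.name)

# Acyclic object: freed the instant its refcount hits zero.
a = Node("a")
del a
print(log)          # ['a'] -- no collector pause needed

# Cyclic pair: the refcounts never reach zero...
b, c = Node("b"), Node("c")
b.ref, c.ref = c, b
del b, c
print(log)          # still ['a'] -- the cycle is uncollected garbage

# ...until the backup mark/sweep pass runs.
gc.collect()
print(sorted(log))  # ['a', 'b', 'c']
```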
But refcounting is actually a pretty terrible solution, since it incurs a (relatively) expensive memory write (plus a memory barrier and/or lock, to ensure thread safety) every time a reference is passed, which happens quite a lot. In imperative languages like C++ it's possible (just difficult) to manage memory ownership through macros and coding conventions, but in functional languages like Lisp it becomes well-nigh impossible, because memory allocation usually happens implicitly due to local variable capture in a closure.
So it should come as no surprise that the first step toward a modern GC was invented for Lisp. It was called the "twospace allocator" or "twospace collector" and it worked exactly like it sounds: it divided allocatable memory (the "heap") into two spaces. Every new object was allocated out of the first space until it got too full, at which point allocation would stop and the runtime would walk the reference graph and copy only the live (still referenced) objects to the second space. After the live objects were copied, the first space would be marked empty, and allocation would resume, allocating new objects from the second space, until it got too full, at which point the live objects would be copied back to the first space and the process would start all over again.
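A heavily simplified sketch of the idea (a Cheney-style copy written in Python; real collectors work on raw memory, not Python lists, and all names here are invented):

```python
class Obj:
    """A heap object: a name plus outgoing references."""
    def __init__(self, name, refs=()):
        self.name = name
        self.refs = list(refs)
        self.forward = None   # forwarding pointer, set once copied

def collect(roots, to_space):
    """Copy everything reachable from `roots` into `to_space`.
    Garbage is never touched -- it is simply left behind when the
    old space is abandoned."""
    def copy(obj):
        if obj.forward is None:
            obj.forward = Obj(obj.name, obj.refs)
            to_space.append(obj.forward)
        return obj.forward

    new_roots = [copy(r) for r in roots]
    scan = 0
    while scan < len(to_space):            # breadth-first scan of copies
        o = to_space[scan]
        o.refs = [copy(r) for r in o.refs]
        scan += 1
    return new_roots

# Tiny heap: a -> b is live, c is garbage.
b = Obj("b")
a = Obj("a", [b])
c = Obj("c")                  # unreachable from the roots
from_space = [a, b, c]

to_space = []
roots = collect([a], to_space)
print([o.name for o in to_space])   # ['a', 'b'] -- c was never visited
```

Note that the work done is proportional to the two live objects, not to the size of the heap: `c` is reclaimed without ever being looked at.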
The advantage of the twospace collector is that, instead of doing `O(N)` work, where *N* is the total number of garbage objects, it would only do `O(M)` work, where *M* is the *number of objects that were* ***not garbage***. Since in practice, most objects are allocated and then deallocated in a short period of time, this can lead to a substantial performance improvement.
Additionally, the twospace collector made it possible to simplify the allocator side as well. Most `malloc()` implementations maintain what is called a "free list": a list of which blocks are still available to be allocated. To allocate a new object, `malloc()` must scan the free list looking for an empty space that's big enough. But the twospace allocator didn't bother with that: it just allocated objects in each space like a stack, by just pushing a pointer up by the desired number of bytes.
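In sketch form, allocation in a semispace is nothing but a pointer bump (a toy model with invented names, not any real runtime's API):

```python
class BumpAllocator:
    """Stack-like allocation in a semispace: no free list, no search --
    just advance a pointer until the space fills up."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.top = 0

    def alloc(self, nbytes):
        if self.top + nbytes > self.capacity:
            raise MemoryError("space full -- time to copy-collect")
        addr = self.top
        self.top += nbytes        # the entire cost of allocation
        return addr

heap = BumpAllocator(64)
print(heap.alloc(16))   # 0
print(heap.alloc(8))    # 16 -- allocations are contiguous, O(1) each
```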
So the twospace collector was much faster than `malloc()`, which was great because Lisp programs would do a lot more allocations than C programs would. Or, to put it another way, Lisp programs needed a way to allocate memory like a stack but with a lifetime that was not limited to the execution stack -- in other words, a stack that could grow infinitely without the program running out of memory. And, in fact, Raymond Chen argues that that's exactly how people should think about GC. I highly recommend his series of blog posts starting with [Everybody thinks about garbage collection the wrong way](http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx).
But the twospace collector had a major flaw, which is that no program could ever use more than half the available RAM: the other half was always wasted. So the history of GC techniques is the history of attempts to improve on the twospace collector, usually by using heuristics of program behavior. However, GC algorithms inevitably involve tradeoffs, usually preferring to deallocate objects in batches instead of individually, which inevitably leads to delays where objects aren't deallocated immediately.
**Edit:** To answer your follow-up question, modern GCs generally incorporate the idea of [generational garbage collection](http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Generational_GC_.28ephemeral_GC.29), where objects are grouped into different "generations" based on lifetime, and an object in one generation gets "promoted" to another generation once it's lived long enough. Sometimes a small difference in object lifetime (e.g. in a request-driven server, storing an object for longer than one request) can lead to a large difference in the amount of time it takes before the object gets deallocated, since it causes it to become more "tenured".
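You can watch this promotion happen in CPython's own generational cycle collector (CPython-specific `gc` API; `get_objects(generation=...)` needs Python 3.8+, and exact generation numbering may differ across versions):

```python
import gc

gc.disable()                    # take manual control for the demo

class Box:
    pass

obj = Box()

def generation_of(o):
    # Which generation's object list contains o?
    for g in (0, 1, 2):
        if any(x is o for x in gc.get_objects(generation=g)):
            return g

print(generation_of(obj))   # 0 -- freshly allocated objects start young
gc.collect(0)               # a "minor" collection: survivors get promoted
print(generation_of(obj))   # no longer 0 -- obj has been "tenured"
```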
You correctly observe that a true GC has to operate "beneath" the level of `malloc()` and `free()`. (As a side note, it's worth learning about how `malloc()` and `free()` are implemented -- they aren't magic either!) Additionally, for an effective GC, you either need to be conservative (like the Boehm GC) and never move objects, and check things that *might* be pointers, or else you need some kind of "opaque pointer" type -- which Java and C# call "references". Opaque pointers are actually great for an allocation system, since it means you can always move objects by updating pointers to them. In a language like C where you interact directly with raw memory addresses, it's never really safe to move objects.
And there are multiple options for GC algorithms. The standard Java runtime contains no less than five collectors (Young, Serial, old CMS, new CMS, and G1, although I think I'm forgetting one) and each has a set of options that are all configurable.
However, GCs aren't magic. Most GCs are just exploiting the [time-space tradeoff](https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff) of batching work, which means that the gains in speed are usually paid for in increased memory usage (compared to manual memory management or refcounting). But the combination of increased program performance and increased programmer performance, versus the low cost of RAM these days, makes the tradeoff usually worth it.
Hopefully that helps make things clearer! | @Jim has answered quite a lot; I will add more to it.
Firstly, what makes you think that deallocating an object as soon as its count reaches `0` (call this approach `A1`) is a good alternative?
Garbage collectors do not only deallocate objects; they are responsible for complete memory management. Start with `fragmentation`, one of the biggest issues with garbage collectors: if it is not handled properly, it results in unnecessary page hits and cache misses. Garbage collectors are designed from the start to handle this issue, and with different generations it becomes easier. With `A1`, a thread would have to be set up to handle this periodically.
Moreover, it turns out that clearing multiple objects at once is faster than proceeding as in `A1`. (Think of a room with sand spread around: it is faster to sweep it all up together than to pick up each grain individually.)
Secondly, for thread safety in multi-threaded systems, one would have to hold a lock on every object to increment/decrement the count, which costs performance and extra memory. Plus, modern collectors have the ability to do the work in parallel and not stop the world (e.g. Java's ParallelGC); I wonder how that could happen with `A1`. |
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `Py_INCREF` and `Py_DECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | To start with, a point of terminology: "garbage collection" means different things to different people, and some GC schemes are more sophisticated than others. Some people consider reference counting to be a form of GC, but personally I consider "true GC" to be distinct from reference counting.
With refcounts, there is an integer tracking the number of references, and you can trigger deallocation immediately when the refcount hits zero. This is how the CPython implementation works, and how most varieties of C++ smart pointers work. The CPython implementation adds a mark/sweep GC as a backup, so it's very much like the hybrid design you describe.
But refcounting is actually a pretty terrible solution, since it incurs a (relatively) expensive memory write (plus a memory barrier and/or lock, to ensure thread safety) every time a reference is passed, which happens quite a lot. In imperative languages like C++ it's possible (just difficult) to manage memory ownership through macros and coding conventions, but in functional languages like Lisp it becomes well-nigh impossible, because memory allocation usually happens implicitly due to local variable capture in a closure.
So it should come as no surprise that the first step toward a modern GC was invented for Lisp. It was called the "twospace allocator" or "twospace collector" and it worked exactly like it sounds: it divided allocatable memory (the "heap") into two spaces. Every new object was allocated out of the first space until it got too full, at which point allocation would stop and the runtime would walk the reference graph and copy only the live (still referenced) objects to the second space. After the live objects were copied, the first space would be marked empty, and allocation would resume, allocating new objects from the second space, until it got too full, at which point the live objects would be copied back to the first space and the process would start all over again.
The advantage of the twospace collector is that, instead of doing `O(N)` work, where *N* is the total number of garbage objects, it would only do `O(M)` work, where *M* is the *number of objects that were* ***not garbage***. Since in practice, most objects are allocated and then deallocated in a short period of time, this can lead to a substantial performance improvement.
Additionally, the twospace collector made it possible to simplify the allocator side as well. Most `malloc()` implementations maintain what is called a "free list": a list of which blocks are still available to be allocated. To allocate a new object, `malloc()` must scan the free list looking for an empty space that's big enough. But the twospace allocator didn't bother with that: it just allocated objects in each space like a stack, by just pushing a pointer up by the desired number of bytes.
So the twospace collector was much faster than `malloc()`, which was great because Lisp programs would do a lot more allocations than C programs would. Or, to put it another way, Lisp programs needed a way to allocate memory like a stack but with a lifetime that was not limited to the execution stack -- in other words, a stack that could grow infinitely without the program running out of memory. And, in fact, Raymond Chen argues that that's exactly how people should think about GC. I highly recommend his series of blog posts starting with [Everybody thinks about garbage collection the wrong way](http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx).
But the twospace collector had a major flaw, which is that no program could ever use more than half the available RAM: the other half was always wasted. So the history of GC techniques is the history of attempts to improve on the twospace collector, usually by using heuristics of program behavior. However, GC algorithms inevitably involve tradeoffs, usually preferring to deallocate objects in batches instead of individually, which inevitably leads to delays where objects aren't deallocated immediately.
**Edit:** To answer your follow-up question, modern GCs generally incorporate the idea of [generational garbage collection](http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Generational_GC_.28ephemeral_GC.29), where objects are grouped into different "generations" based on lifetime, and an object in one generation gets "promoted" to another generation once it's lived long enough. Sometimes a small difference in object lifetime (e.g. in a request-driven server, storing an object for longer than one request) can lead to a large difference in the amount of time it takes before the object gets deallocated, since it causes it to become more "tenured".
You correctly observe that a true GC has to operate "beneath" the level of `malloc()` and `free()`. (As a side note, it's worth learning about how `malloc()` and `free()` are implemented -- they aren't magic either!) Additionally, for an effective GC, you either need to be conservative (like the Boehm GC) and never move objects, and check things that *might* be pointers, or else you need some kind of "opaque pointer" type -- which Java and C# call "references". Opaque pointers are actually great for an allocation system, since it means you can always move objects by updating pointers to them. In a language like C where you interact directly with raw memory addresses, it's never really safe to move objects.
And there are multiple options for GC algorithms. The standard Java runtime contains no less than five collectors (Young, Serial, old CMS, new CMS, and G1, although I think I'm forgetting one) and each has a set of options that are all configurable.
However, GCs aren't magic. Most GCs are just exploiting the [time-space tradeoff](https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff) of batching work, which means that the gains in speed are usually paid for in increased memory usage (compared to manual memory management or refcounting). But the combination of increased program performance and increased programmer performance, versus the low cost of RAM these days, makes the tradeoff usually worth it.
Hopefully that helps make things clearer! | Where I've come across GC systems they wait until they need to run, so that the relocation of objects still in use can be done once, rather than many times.
Consider a series of objects allocated sequentially in memory:
```
Object 1
Object 2
Object 3
Object 4
Object 5
```
If Object 2 can be deallocated, and the GC operates immediately, Objects 3, 4 and 5 will all need to be moved.
Now consider that Object 4 can be deallocated: the GC will now move Object 5 next to Object 3. Object 5 has been moved twice.
However, if the GC waits a short while, both Objects 2 and 4 can be removed at the same time, meaning that Object 5 is moved only once, and moved further.
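A toy Python model of those moves (invented helper names; it just counts how many element relocations each strategy performs):

```python
def eager_moves(heap, frees):
    """Compact immediately after every single deallocation."""
    heap = list(heap)
    moves = 0
    for obj in frees:
        i = heap.index(obj)
        del heap[i]
        moves += len(heap) - i      # everything after the hole slides down
    return moves

def batched_moves(heap, frees):
    """Wait, then slide all survivors down in one pass."""
    dead, moves, write = set(frees), 0, 0
    for read, obj in enumerate(heap):
        if obj in dead:
            continue
        if read != write:
            moves += 1              # each survivor relocated at most once
        write += 1
    return moves

heap = ["Object1", "Object2", "Object3", "Object4", "Object5"]
print(eager_moves(heap, ["Object2", "Object4"]))    # 4 object moves
print(batched_moves(heap, ["Object2", "Object4"]))  # 2 object moves
```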
Multiply the number of objects by, say, 100 and you can see considerable time savings from this approach. |
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `PyINCREF` and `PyDECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | To understand garbage-collection, go to a bowling alley and watch how the pinsetter removes fallen pins after the first ball has been rolled. Rather than identifying and removing individual fallen pins, the pinsetter mechanism picks up all the pins that are still standing, lifts them to safety, and then runs a sweeper bar across the lane without regard for how many pins are lying there or where they are located. Once that is done, the pins that were standing are placed back on the lane. Many garbage-collection systems work on much the same principle: they have to do a non-trivial amount of work for each live object to ensure it doesn't get destroyed, but dead objects are destroyed wholesale without even being looked at or noticed.
**Addendum**
A garbage collector that always has to act on every live item to ensure its preservation is apt to be slow when there are a lot of live items; this is why garbage collectors have, historically, gotten a bad rap. The BASIC interpreter on the Commodore 64 (which was, incidentally, written by Microsoft in the days *before* MS-DOS) would take many seconds to perform a garbage collection in a program which had an array of a few hundred strings. Performance can be improved enormously if items which survive their first garbage collection can be ignored until *many* items have survived their first garbage collection, and those which have *participated in* and survived two garbage collections (note that they won't have to participate in their second collection until many other objects have survived their first) can be ignored until many other objects have also participated and survived in their second. This concept can be partially implemented easily (even on the Commodore 64, one could force all strings that exist at a given moment to be exempt from future garbage collection, which could be useful if on startup a program created large arrays of strings that would never change) but becomes more powerful with a little extra hardware support.
If one figures that a garbage collector will try to pack the objects which are going to be kept as close to an end of memory as it can, generational support requires doing nothing more than keeping track of what (contiguous) range of memory is used by objects of each generation. All objects of every generation must be scanned to make sure all newer-generation live objects are located and preserved, but older-generation objects don't have to be moved, since the memory they occupy isn't in danger of wholesale elimination. This approach is very simple to implement, and can offer some significant performance improvements versus a non-generational GC, but even the scanning phase of a GC can be expensive if there are many live objects.
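One way to cut that scanning cost is a remembered set (or card table) maintained by a write barrier: a minor collection then scans only the old objects written since the last collection. A minimal, hypothetical Python sketch:

```python
class OldObject:
    """An old-generation object with named reference fields."""
    def __init__(self, name):
        self.name = name
        self.fields = {}

class RememberedSet:
    """Tracks old-generation objects mutated since the last minor GC,
    so a minor collection scans only those, not the whole old gen."""
    def __init__(self):
        self.dirty = set()

    def write_barrier(self, old_obj, field, value):
        old_obj.fields[field] = value
        self.dirty.add(old_obj)       # software stand-in for a hardware trap

    def minor_gc_scan(self, old_gen):
        scanned = [o for o in old_gen if o in self.dirty]
        self.dirty.clear()            # "re-arm the trap"
        return scanned

old_gen = [OldObject(f"old{i}") for i in range(1000)]
rs = RememberedSet()
rs.write_barrier(old_gen[42], "child", "some-young-object")

scanned = rs.minor_gc_scan(old_gen)
print(len(scanned))   # 1 -- only the written object is scanned, not 1000
```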
The key to speeding up "newer-generation" garbage collections is to observe that if an object "Fred" has not been written since the last garbage collection in which it participated, it cannot possibly contain any references to any objects which have been created since that time. Consequently, none of the objects to which it holds references would be in any danger of elimination until Fred itself is eligible for elimination. Of course, if references to newer objects have been stored in Fred since the last lower-level GC, those references do need to be scanned. To accomplish this, advanced garbage collectors set up hardware traps which fire when parts of the older generation heap are written. When such a trap fires, it adds the objects in that region to a list of older generation objects which will need to be scanned, and then disables the trap associated with that region. In cases where older-generation objects frequently have references to newer objects stored in them, this extra bookkeeping can hurt performance, but in most cases it ends up being a major performance win. | I think the reason is performance. If you create many objects in a loop and destroy them at the end of each loop step, it would take more time to execute that code than waiting until the program is idle and freeing the data all at once. Or when memory runs low, of course.
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `PyINCREF` and `PyDECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | To understand garbage-collection, go to a bowling alley and watch how the pinsetter removes fallen pins after the first ball has been rolled. Rather than identifying and removing individual fallen pins, the pinsetter mechanism picks up all the pins that are still standing, lifts them to safety, and then runs a sweeper bar across the lane without regard for how many pins are lying there or where they are located. Once that is done, the pins that were standing are placed back on the lane. Many garbage-collection systems work on much the same principle: they have to do a non-trivial amount of work for each live object to ensure it doesn't get destroyed, but dead objects are destroyed wholesale without even being looked at or noticed.
**Addendum**
A garbage collector that always has to act on every live item to ensure its preservation is apt to be slow when there are a lot of live items; this is why garbage collectors have, historically, gotten a bad rap. The BASIC interpreter on the Commodore 64 (which was, incidentally, written by Microsoft in the days *before* MS-DOS) would take many seconds to perform a garbage collection in a program which had an array of a few hundred strings. Performance can be improved enormously if items which survive their first garbage collection can be ignored until *many* items have survived their first garbage collection, and those which have *participated in* and survived two garbage collections (note that they won't have to participate in their second collection until many other objects have survived their first) can be ignored until many other objects have also participated and survived in their second. This concept can be partially implemented easily (even on the Commodore 64, one could force all strings that exist at a given moment to be exempt from future garbage collection, which could be useful if on startup a program created large arrays of strings that would never change) but becomes more powerful with a little extra hardware support.
If one figures that a garbage collector will try to pack the objects which are going to be kept as close to an end of memory as it can, generational support requires doing nothing more than keeping track of what (contiguous) range of memory is used by objects of each generation. All objects of every generation must be scanned to make sure all newer-generation live objects are located and preserved, but older-generation objects don't have to be moved, since the memory they occupy isn't in danger of wholesale elimination. This approach is very simple to implement, and can offer some significant performance improvements versus a non-generational GC, but even the scanning phase of a GC can be expensive if there are many live objects.
The key to speeding up "newer-generation" garbage collections is to observe that if an object "Fred" has not been written since the last garbage-collection in which it participated, it cannot possibly contain any references to any objects which have been created since that time. Consequently, none of the objects to which it holds references would be in any danger of elimination until Fred itself is eligible for elimination. Of course, if references to newer objects have been stored in Fred since the last lower-level GC, those references do need to be scanned. To accomplish this, advanced garbage collectors set up hardware traps which fire when parts of the older generation heap are written. When such a trap fires, it adds the objects in that region to a list of older generation objects which will need to be scanned, and then disables the trap associated with that region. In cases where older-generation objects frequently have references to newer objects stored in them, this extra bookkeeping can hurt performance, but in most cases it ends up being a major performance win. | Where I've come across GC systems they wait until they need to run, so that the relocation of objects still in use can be done once, rather than many times.
Consider a series of objects allocated sequentially in memory:
```
Object 1
Object 2
Object 3
Object 4
Object 5
```
If Object 2 can be deallocated, and GC operates immediately, Objects 3, 4, and 5 will all need to be moved.
Now consider that Object 4 can be deallocated: GC will now move Object 5 next to Object 3. Object 5 has now been moved twice.
However, if GC waits a short while, both Objects 2 and 4 can be removed at the same time, meaning that Object 5 is moved once, and moved further.
Multiply the number of objects by, say, 100 and you can see considerable time savings from this approach.
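The move-counting argument above can be simulated. A small Python sketch (illustrative only — the "heap" is just a list of labels, and a compaction "moves" every live object that sits after a freed slot):

```python
# Hypothetical sketch: compare object moves when compacting after every
# deallocation versus once per batch, using the five-object example.

def moves_if_compacted_now(heap):
    """Moves one compaction performs: each live object located after
    at least one freed (None) slot must be relocated."""
    moves, gaps = 0, 0
    for obj in heap:
        if obj is None:
            gaps += 1
        elif gaps:
            moves += 1
    return moves

def compact(heap):
    return [obj for obj in heap if obj is not None]

heap = ["obj1", "obj2", "obj3", "obj4", "obj5"]

# Eager: compact immediately after each free.
eager = 0
h = heap[:]
h[1] = None                          # free Object 2
eager += moves_if_compacted_now(h)   # Objects 3, 4, 5 all move
h = compact(h)
h[2] = None                          # free Object 4
eager += moves_if_compacted_now(h)   # Object 5 moves a second time
h = compact(h)

# Lazy: free both, then compact once.
h2 = heap[:]
h2[1] = None                         # free Object 2
h2[3] = None                         # free Object 4
lazy = moves_if_compacted_now(h2)    # Objects 3 and 5 each move once

print(eager, lazy)                   # 4 2
```

With just five objects the eager strategy performs twice as many moves as the batched one, matching the narrative above.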
17,646,509 | I have a "why does it work that way?" question about garbage collection (any/all implementations: Java, Python, CLR, etc.). Garbage collectors deallocate an object when it is no longer in any scope; the number of references pointing to it is zero. It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
I'm assuming that the framework keeps an integer for each object (which I think Python does, because you have to call `Py_INCREF` and `Py_DECREF` when writing extension modules for it in C; presumably these functions modify a real counter somewhere). If so, then it shouldn't take any more CPU time to eliminate the object the moment it goes out of scope. If it takes x nanoseconds per object now, then it would take x nanoseconds per object later, right?
If my assumption is wrong and there is no integer associated with each object, then I understand why garbage collection waits: it would have to walk the graph of references to determine the status of each object, and that calculation takes time. Such a method would consume less memory than the explicit reference-count method, but I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope. Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program. [Here's an example](https://stackoverflow.com/questions/7114661/why-does-java-wait-so-long-to-run-the-garbage-collector) of how much confusion this causes. To my mind, the benefits of programming in a deallocate-right-away framework are so great that there must be some good reason why all the implementations I've heard of wait before deallocating. What is that benefit?
Note: if the walk over the graph of references is only needed to identify circular references (a pure reference count can't), then why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work? | 2013/07/15 | [
"https://Stackoverflow.com/questions/17646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623645/"
] | To start with, a point of terminology: "garbage collection" means different things to different people, and some GC schemes are more sophisticated than others. Some people consider reference counting to be a form of GC, but personally I consider "true GC" to be distinct from reference counting.
With refcounts, there is an integer tracking the number of references, and you can trigger deallocation immediately when the refcount hits zero. This is how the CPython implementation works, and how most varieties of C++ smart pointers work. The CPython implementation adds a mark/sweep GC as a backup, so it's very much like the hybrid design you describe.
But refcounting is actually a pretty terrible solution, since it incurs a (relatively) expensive memory write (plus a memory barrier and/or lock, to ensure thread safety) every time a reference is passed, which happens quite a lot. In imperative languages like C++ it's possible (just difficult) to manage memory ownership through macros and coding conventions, but in functional languages like Lisp it becomes well-nigh impossible, because memory allocation usually happens implicitly due to local variable capture in a closure.
So it should come as no surprise that the first step toward a modern GC was invented for Lisp. It was called the "twospace allocator" or "twospace collector" and it worked exactly like it sounds: it divided allocatable memory (the "heap") into two spaces. Every new object was allocated out of the first space until it got too full, at which point allocation would stop and the runtime would walk the reference graph and copy only the live (still referenced) objects to the second space. After the live objects were copied, the first space would be marked empty, and allocation would resume, allocating new objects from the second space, until it got too full, at which point the live objects would be copied back to the first space and the process would start all over again.
The advantage of the twospace collector is that, instead of doing `O(N)` work, where *N* is the total number of garbage objects, it would only do `O(M)` work, where *M* is the *number of objects that were* ***not garbage***. Since in practice, most objects are allocated and then deallocated in a short period of time, this can lead to a substantial performance improvement.
Additionally, the twospace collector made it possible to simplify the allocator side as well. Most `malloc()` implementations maintain what is called a "free list": a list of which blocks are still available to be allocated. To allocate a new object, `malloc()` must scan the free list looking for an empty space that's big enough. But the twospace allocator didn't bother with that: it just allocated objects in each space like a stack, by just pushing a pointer up by the desired number of bytes.
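As an illustrative sketch (not a real runtime), the twospace scheme can be modeled in a few lines of Python: bump-pointer allocation is just an append, and collection copies only the objects reachable from the roots, leaving garbage behind in the abandoned space:

```python
# Toy model of a twospace copying collector. Objects are dicts with a
# "refs" list; collection does O(live) work by copying only reachable
# objects into to_space, then swapping the spaces.

class TwoSpaceHeap:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.from_space = []        # bump allocation: just append
        self.to_space = []
        self.roots = []             # externally reachable objects

    def alloc(self, name, refs=()):
        if len(self.from_space) >= self.capacity:
            self.collect()
        obj = {"name": name, "refs": list(refs)}
        self.from_space.append(obj)  # "push the pointer up"
        return obj

    def collect(self):
        forwarded = {}              # old object id -> copied object
        def copy(obj):
            if id(obj) in forwarded:           # already copied (handles cycles)
                return forwarded[id(obj)]
            new = {"name": obj["name"], "refs": []}
            forwarded[id(obj)] = new
            self.to_space.append(new)
            new["refs"] = [copy(r) for r in obj["refs"]]
            return new
        self.roots = [copy(r) for r in self.roots]
        # Swap spaces: whatever is left in the old from_space is garbage,
        # discarded wholesale without ever being examined.
        self.from_space, self.to_space = self.to_space, []

heap = TwoSpaceHeap(capacity=4)
a = heap.alloc("a")
b = heap.alloc("b", refs=[a])
heap.roots.append(b)
heap.alloc("garbage1")
heap.alloc("garbage2")
heap.collect()
print([o["name"] for o in heap.from_space])   # only the live objects survive
```

Note how `collect` never touches the two garbage objects at all — its cost is proportional to the live set, which is the whole point of the design.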
So the twospace collector was much faster than `malloc()`, which was great because Lisp programs would do a lot more allocations than C programs would. Or, to put it another way, Lisp programs needed a way to allocate memory like a stack but with a lifetime that was not limited to the execution stack -- in other words, a stack that could grow infinitely without the program running out of memory. And, in fact, Raymond Chen argues that that's exactly how people should think about GC. I highly recommend his series of blog posts starting with [Everybody thinks about garbage collection the wrong way](http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx).
But the twospace collector had a major flaw, which is that no program could ever use more than half the available RAM: the other half was always wasted. So the history of GC techniques is the history of attempts to improve on the twospace collector, usually by using heuristics of program behavior. However, GC algorithms inevitably involve tradeoffs, usually preferring to deallocate objects in batches instead of individually, which inevitably leads to delays where objects aren't deallocated immediately.
**Edit:** To answer your follow-up question, modern GCs generally incorporate the idea of [generational garbage collection](http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Generational_GC_.28ephemeral_GC.29), where objects are grouped into different "generations" based on lifetime, and an object in one generation gets "promoted" to another generation once it's lived long enough. Sometimes a small difference in object lifetime (e.g. in a request-driven server, storing an object for longer than one request) can lead to a large difference in the amount of time it takes before the object gets deallocated, since it causes it to become more "tenured".
You correctly observe that a true GC has to operate "beneath" the level of `malloc()` and `free()`. (As a side note, it's worth learning about how `malloc()` and `free()` are implemented -- they aren't magic either!) Additionally, for an effective GC, you either need to be conservative (like the Boehm GC) and never move objects, and check things that *might* be pointers, or else you need some kind of "opaque pointer" type -- which Java and C# call "references". Opaque pointers are actually great for an allocation system, since it means you can always move objects by updating pointers to them. In a language like C where you interact directly with raw memory addresses, it's never really safe to move objects.
And there are multiple options for GC algorithms. The standard Java runtime contains no less than five collectors (Young, Serial, old CMS, new CMS, and G1, although I think I'm forgetting one) and each has a set of options that are all configurable.
However, GCs aren't magic. Most GCs are just exploiting the [time-space tradeoff](https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff) of batching work, which means that the gains in speed are usually paid for in increased memory usage (compared to manual memory management or refcounting). But the combination of increased program performance and increased programmer performance, versus the low cost of RAM these days, makes the tradeoff usually worth it.
Hopefully that helps make things clearer! | Your thoughts are generally very insightful and well considered. You're just missing some basic information.
>
> Garbage collectors deallocate an object when it is no longer in any scope
>
>
>
That is completely incorrect in general. Garbage collectors work at run-time on a representation in which the notion of scope has long since been removed. For example, inlining and applications of liveness analysis destroy scope.
Tracing garbage collectors recycle space at some point after the last reference disappears. Liveness analysis can have references in the stack frame overwritten with other references even if the variable is still in scope because liveness analysis determined that the variable is never used again and, therefore, is no longer needed.
>
> It seems to me that a framework could deallocate as soon as the number of references reaches zero, but all implementations I've encountered wait a while and then deallocate many objects at a time. My question is, why?
>
>
>
Performance. You can reference count at the level of stack entries and registers but performance is absolutely terrible. All practical reference counting garbage collectors defer counter decrements to the end of scope in order to achieve reasonable ([but still bad](http://flyingfrogblog.blogspot.co.nz/2011/01/boosts-sharedptr-up-to-10-slower-than.html)) performance. State-of-the-art reference counting garbage collectors defer decrements in order to batch them up and can [allegedly](http://users.cecs.anu.edu.au/~steveb/downloads/pdf/rc-ismm-2012.pdf) attain competitive performance.
>
> I'm assuming that the framework keeps an integer for each object
>
>
>
Not necessarily. For example, OCaml uses a single bit.
>
> From a programming point of view, it would be nice if objects deallocated immediately after they go out of scope.
>
>
>
From a programming point of view, it would be nice if code ran 10x faster effortlessly.
Note that destructors inhibit tail call elimination, which is invaluable in functional programming.
>
> I'm astonished that it's quicker or is the preferred method for other reasons. It sounds like a lot of work.
>
>
>
Consider a program that solves the n-queens problem by manipulating lists of chess board coordinates. The input is a single integer. The output is a list containing a few board coordinates. The intermediate data is a huge spaghetti stack of linked list nodes. If you coded this up by pre-allocating a big enough stack of linked list nodes, manipulating them to get the answer, copy out the (small) answer and then calling `free` once on the entire stack then you'd be doing almost exactly the same thing that a generational garbage collector does. In particular, you'd only copy ~6% of your data and you'd deallocate the other ~94% with a single call to `free`.
That was a perfect happy day scenario for a generational garbage collector that adheres to the hypothesis that "most objects die young and old objects rarely refer to new object". A pathological counter example where generational garbage collectors struggle is filling a hash table with freshly allocated objects. The spine of the hash table is a big array that survives so it will be in the old generation. Every new object inserted into it is a backpointer from the old generation to the new generation. Every new object survives. So generational garbage collectors allocate quickly but then mark everything, copy everything and update pointers to everything and, therefore, run ~3x slower than a simple C or C++ solution would.
>
> Not only could we rely on destructors being executed when we want them to be (one of the Python gotchas is that `__del__` is not called at a predictable time), but it would become much easier to memory-profile a program
>
>
>
Note that destructors and garbage collection are orthogonal concepts. For example, .NET provides destructors in the form of `IDisposable`.
FWIW, in ~15 years of using garbage collected languages I have used memory profiling maybe 3 times.
>
> why not a hybrid approach? Deallocate objects as soon as their reference count hits zero and then also do periodic sweeps to look for circular references. Programmers working in such a framework would have a performance/determinism reason to stick to non-circular references as much as is feasible. It's often feasible (e.g. all data are in the form of JSON objects with no pointers to parents). Is this how any popular garbage collectors work?
>
>
>
CPython does that, I believe. Mathematica and Erlang restrict the heap to be a DAG by design so they can use reference counting alone. GC researchers have proposed related techniques such as trial-deletion as an auxiliary algorithm to detect cycles.
Note also that reference counting is theoretically asymptotically faster than tracing garbage collection as its performance is independent of the size of the (live) heap. In practice, tracing garbage collection is still *much* faster even with 100GB heaps. |
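The CPython hybrid mentioned above can be observed directly. The following demonstration (CPython-specific; PyPy and other implementations free objects on their own schedule) shows reference counting freeing an acyclic object immediately, while a reference cycle survives until the `gc` module's cycle detector runs:

```python
# Demonstrate CPython's hybrid scheme: refcounting frees acyclic
# objects at once; cycles need the gc module's collector.
import gc
import weakref

class Node:
    pass

gc.disable()                     # make the timing deterministic for the demo

# Acyclic case: dropping the last reference frees the object at once.
n = Node()
alive = weakref.ref(n)
del n
acyclic_freed_immediately = alive() is None

# Cyclic case: the refcounts never reach zero on their own.
a, b = Node(), Node()
a.other, b.other = b, a          # a reference cycle
watch = weakref.ref(a)
del a, b
cycle_survived_refcounting = watch() is not None

gc.collect()                     # the cycle detector reclaims the pair
cycle_freed_by_gc = watch() is None
gc.enable()

print(acyclic_freed_immediately,
      cycle_survived_refcounting,
      cycle_freed_by_gc)         # True True True
```

This is exactly the "deallocate at refcount zero, sweep periodically for cycles" behavior the question asks about.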
31,984,699 | I have a table with large amount of data and I need to get some information with only one query.
Content of `PROCESSDATA` table:
```
PROCESSID | FIELDTIME | FIELDNAME | FIELDVALUE
-------------------------------------------------------------------------
125869 | 10/08/15 10:43:47,139000000 | IDREQUEST | 1236968702
125869 | 10/08/15 10:45:14,168000000 | state | Corrected
125869 | 10/08/15 10:43:10,698000000 | state | Pending
125869 | 10/08/15 10:45:15,193000000 | MsgReq | correctly updated
```
I need to get this result:
```
125869 IDREQUEST 1236968702 state Corrected MsgReq correctly updated
```
So I made this kind of query:
```
SELECT PROCESSID,
       MAX(CASE WHEN FIELDNAME = 'IDREQUEST' THEN FIELDVALUE END) AS IDREQUEST,
MAX(CASE WHEN FIELDNAME = 'state' THEN FIELDVALUE END) AS state,
MAX(CASE WHEN FIELDNAME = 'MsgReq' THEN FIELDVALUE END) AS MsgReq
FROM PROCESSDATA
WHERE FIELDNAME IN ('IDREQUEST', 'state', 'MsgReq')
GROUP BY PROCESSID, FIELDNAME;
```
But I don't get exactly what I want:
```
125869 IDREQUEST 1236968702 state Pending MsgReq correctly updated
```
I need to get the `FIELDVALUE` of a `FIELDNAME` based on `FIELDTIME`. In this example `FIELDNAME = 'state'` has two values `'Pending'` and `'Corrected'`,
so I want to get `'Corrected'` because its `FIELDTIME` 10/08/15 10:45:14,168000000 > 10/08/15 10:43:10,698000000 | 2015/08/13 | [
"https://Stackoverflow.com/questions/31984699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5061027/"
] | Use `MAX( ... ) KEEP ( DENSE_RANK FIRST ORDER BY ... )` to get the maximum of a column based on the maximum of another column:
[SQL Fiddle](http://sqlfiddle.com/#!4/473c3f/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE PROCESSDATA ( PROCESSID, FIELDTIME, FIELDNAME, FIELDVALUE ) AS
SELECT 125869, TO_TIMESTAMP( '10/08/15 10:43:47,139000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'IDREQUEST', '1236968702' FROM DUAL
UNION ALL SELECT 125869, TO_TIMESTAMP( '10/08/15 10:45:14,168000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'state', 'Corrected' FROM DUAL
UNION ALL SELECT 125869, TO_TIMESTAMP( '10/08/15 10:43:10,698000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'state', 'Pending' FROM DUAL
UNION ALL SELECT 125869, TO_TIMESTAMP( '10/08/15 10:45:15,193000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'MsgReq', 'correctly updated' FROM DUAL
UNION ALL SELECT 125870, TO_TIMESTAMP( '10/08/15 10:43:47,139000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'IDREQUEST', '1236968702' FROM DUAL
UNION ALL SELECT 125870, TO_TIMESTAMP( '10/08/15 10:45:15,193000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'MsgReq', 'correctly updated' FROM DUAL
```
**Query 1**:
```
SELECT PROCESSID,
MAX( CASE FIELDNAME WHEN 'IDREQUEST' THEN FIELDVALUE END ) KEEP ( DENSE_RANK FIRST ORDER BY CASE FIELDNAME WHEN 'IDREQUEST' THEN FIELDTIME END DESC NULLS LAST ) AS IDREQUEST,
MAX( CASE FIELDNAME WHEN 'state' THEN FIELDVALUE END ) KEEP ( DENSE_RANK FIRST ORDER BY CASE FIELDNAME WHEN 'state' THEN FIELDTIME END DESC NULLS LAST ) AS state,
MAX( CASE FIELDNAME WHEN 'MsgReq' THEN FIELDVALUE END ) KEEP ( DENSE_RANK FIRST ORDER BY CASE FIELDNAME WHEN 'MsgReq' THEN FIELDTIME END DESC NULLS LAST ) AS MsgReq
FROM PROCESSDATA
GROUP BY PROCESSID
```
**[Results](http://sqlfiddle.com/#!4/473c3f/1/0)**:
```
| PROCESSID | IDREQUEST | STATE | MSGREQ |
|-----------|------------|-----------|-------------------|
| 125869 | 1236968702 | Corrected | correctly updated |
| 125870 | 1236968702 | (null) | correctly updated |
``` | Try this
```
select t1.PROCESSID, t1.FIELDTIME, t1.FIELDNAME, t1.FIELDVALUE
from PROCESSDATA t1
inner join
(
select PROCESSID, max(FIELDTIME) as FIELDTIME from PROCESSDATA
where FIELDNAME IN ('IDREQUEST', 'state', 'MsgReq')
group by PROCESSID
) t2 on t1.PROCESSID = t2.PROCESSID and t1.FIELDTIME = t2.FIELDTIME
where t1.FIELDNAME IN ('IDREQUEST', 'state', 'MsgReq')
``` |
31,984,699 | I have a table with large amount of data and I need to get some information with only one query.
Content of `PROCESSDATA` table:
```
PROCESSID | FIELDTIME | FIELDNAME | FIELDVALUE
-------------------------------------------------------------------------
125869 | 10/08/15 10:43:47,139000000 | IDREQUEST | 1236968702
125869 | 10/08/15 10:45:14,168000000 | state | Corrected
125869 | 10/08/15 10:43:10,698000000 | state | Pending
125869 | 10/08/15 10:45:15,193000000 | MsgReq | correctly updated
```
I need to get this result:
```
125869 IDREQUEST 1236968702 state Corrected MsgReq correctly updated
```
So I made this kind of query:
```
SELECT PROCESSID,
       MAX(CASE WHEN FIELDNAME = 'IDREQUEST' THEN FIELDVALUE END) AS IDREQUEST,
MAX(CASE WHEN FIELDNAME = 'state' THEN FIELDVALUE END) AS state,
MAX(CASE WHEN FIELDNAME = 'MsgReq' THEN FIELDVALUE END) AS MsgReq
FROM PROCESSDATA
WHERE FIELDNAME IN ('IDREQUEST', 'state', 'MsgReq')
GROUP BY PROCESSID, FIELDNAME;
```
But I don't get exactly what I want:
```
125869 IDREQUEST 1236968702 state Pending MsgReq correctly updated
```
I need to get the `FIELDVALUE` of a `FIELDNAME` based on `FIELDTIME`. In this example `FIELDNAME = 'state'` has two values `'Pending'` and `'Corrected'`,
so I want to get `'Corrected'` because its `FIELDTIME` 10/08/15 10:45:14,168000000 > 10/08/15 10:43:10,698000000 | 2015/08/13 | [
"https://Stackoverflow.com/questions/31984699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5061027/"
] | Use `MAX( ... ) KEEP ( DENSE_RANK FIRST ORDER BY ... )` to get the maximum of a column based on the maximum of another column:
[SQL Fiddle](http://sqlfiddle.com/#!4/473c3f/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE PROCESSDATA ( PROCESSID, FIELDTIME, FIELDNAME, FIELDVALUE ) AS
SELECT 125869, TO_TIMESTAMP( '10/08/15 10:43:47,139000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'IDREQUEST', '1236968702' FROM DUAL
UNION ALL SELECT 125869, TO_TIMESTAMP( '10/08/15 10:45:14,168000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'state', 'Corrected' FROM DUAL
UNION ALL SELECT 125869, TO_TIMESTAMP( '10/08/15 10:43:10,698000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'state', 'Pending' FROM DUAL
UNION ALL SELECT 125869, TO_TIMESTAMP( '10/08/15 10:45:15,193000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'MsgReq', 'correctly updated' FROM DUAL
UNION ALL SELECT 125870, TO_TIMESTAMP( '10/08/15 10:43:47,139000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'IDREQUEST', '1236968702' FROM DUAL
UNION ALL SELECT 125870, TO_TIMESTAMP( '10/08/15 10:45:15,193000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'MsgReq', 'correctly updated' FROM DUAL
```
**Query 1**:
```
SELECT PROCESSID,
MAX( CASE FIELDNAME WHEN 'IDREQUEST' THEN FIELDVALUE END ) KEEP ( DENSE_RANK FIRST ORDER BY CASE FIELDNAME WHEN 'IDREQUEST' THEN FIELDTIME END DESC NULLS LAST ) AS IDREQUEST,
MAX( CASE FIELDNAME WHEN 'state' THEN FIELDVALUE END ) KEEP ( DENSE_RANK FIRST ORDER BY CASE FIELDNAME WHEN 'state' THEN FIELDTIME END DESC NULLS LAST ) AS state,
MAX( CASE FIELDNAME WHEN 'MsgReq' THEN FIELDVALUE END ) KEEP ( DENSE_RANK FIRST ORDER BY CASE FIELDNAME WHEN 'MsgReq' THEN FIELDTIME END DESC NULLS LAST ) AS MsgReq
FROM PROCESSDATA
GROUP BY PROCESSID
```
**[Results](http://sqlfiddle.com/#!4/473c3f/1/0)**:
```
| PROCESSID | IDREQUEST | STATE | MSGREQ |
|-----------|------------|-----------|-------------------|
| 125869 | 1236968702 | Corrected | correctly updated |
| 125870 | 1236968702 | (null) | correctly updated |
``` | Here I wrote you a very fast and efficient query that gets the last 7 days of PROCESSDATA. It also handles the case where two records have exactly the same milliseconds: it will grab just one.
```
SELECT PD.PROCESSID,
(SELECT MAX(PD1.FIELDVALUE) FROM PROCESSDATA PD1 WHERE PD.PROCESSID = PD1.PROCESSID AND PD1.FIELDNAME = 'IDREQUEST' AND PD1.FIELDTIME = (SELECT MAX(PD2.FIELDTIME) FROM PROCESSDATA PD2 WHERE PD.PROCESSID = PD2.PROCESSID AND PD2.FIELDNAME = 'IDREQUEST')) AS IDREQUEST,
(SELECT MAX(PD3.FIELDVALUE) FROM PROCESSDATA PD3 WHERE PD.PROCESSID = PD3.PROCESSID AND PD3.FIELDNAME = 'state' AND PD3.FIELDTIME = (SELECT MAX(PD4.FIELDTIME) FROM PROCESSDATA PD4 WHERE PD.PROCESSID = PD4.PROCESSID AND PD4.FIELDNAME = 'state')) AS state,
(SELECT MAX(PD5.FIELDVALUE) FROM PROCESSDATA PD5 WHERE PD.PROCESSID = PD5.PROCESSID AND PD5.FIELDNAME = 'MsgReq' AND PD5.FIELDTIME = (SELECT MAX(PD6.FIELDTIME) FROM PROCESSDATA PD6 WHERE PD.PROCESSID = PD6.PROCESSID AND PD6.FIELDNAME = 'MsgReq')) AS MsgReq
FROM PROCESSDATA PD
WHERE PD.PROCESSID IN
(SELECT PD7.PROCESSID FROM PROCESSDATA PD7 WHERE PD7.FIELDTIME >= SYSDATE - 7)
``` |
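All of the answers above implement a "latest value per field" pivot. As a cross-check of the intended result, the same logic can be sketched outside the database in a few lines of Python (illustrative only — the row tuples and time strings below simply mirror the sample data):

```python
# Illustrative Python equivalent of the pivot: for each
# (PROCESSID, FIELDNAME), keep the FIELDVALUE with the greatest
# FIELDTIME, then lay the fields out as columns per PROCESSID.
rows = [
    (125869, "10:43:47.139", "IDREQUEST", "1236968702"),
    (125869, "10:45:14.168", "state",     "Corrected"),
    (125869, "10:43:10.698", "state",     "Pending"),
    (125869, "10:45:15.193", "MsgReq",    "correctly updated"),
]

latest = {}   # (processid, fieldname) -> (fieldtime, fieldvalue)
for pid, ftime, fname, fvalue in rows:
    key = (pid, fname)
    # Fixed-width HH:MM:SS.fff strings compare correctly lexicographically.
    if key not in latest or ftime > latest[key][0]:
        latest[key] = (ftime, fvalue)

pivot = {}
for (pid, fname), (_, fvalue) in latest.items():
    pivot.setdefault(pid, {})[fname] = fvalue

print(pivot[125869]["state"])   # the later "state" row wins: Corrected
```

This confirms the expected single-row result: `IDREQUEST = 1236968702`, `state = Corrected`, `MsgReq = correctly updated`.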
15,728 | When I budget for myself, I usually count savings as part of the expenses.
**Is it the correct way to do it?** Because of this, when I look at my budget at the end of the month, it feels like my expenses are a bit high. | 2012/06/25 | [
"https://money.stackexchange.com/questions/15728",
"https://money.stackexchange.com",
"https://money.stackexchange.com/users/4904/"
] | I have found it useful to have four (yes, four) categories in a budget, rather than the traditional two (income and expenses). This may not be the "correct" way to do it, but it makes sense to me.
First, **income**. This one is a no-brainer and is needed in any budget. In this scheme income also includes money taken out of deferred-spending savings (see below). I don't consider money taken out of savings "income", however.
Second, **expenses**. This is the money you actually *spent* in the given period (month, with most budgets, but if you are paid on a different schedule it may be bi-weekly, quarterly, or whatever else makes sense in your particular situation). Plain and simple, whatever goes into "expenses" is money that is no longer available to you in any way, shape, or form. I'd put credit card spending into this category as well (treat it the same as debit card spending) but as long as you aren't carrying a balance, that's simply a matter of taste. I'd also put loan payments into this category since "getting the money back" then means taking out a new loan.
Third, **deferred spending**. This category is for setting money aside for big-ticket items that you know are coming and which you either aren't sure exactly when they will turn into real expenses (money spent), or know when they will and they are simply too large to handle comfortably with the day-to-day cash flow. An example of the former might be a new car fund, and the latter might be something like my buying dog food about once every four months; the cost for such a batch is too large to handle comfortably within a single month's budget without making large adjustments elsewhere, so I set money aside each month and then use it every four months (or thereabouts). In this categorization, *setting the money aside* classifies as deferred spending, and *taking it back out* classifies as income. If done right, this category will trend toward zero over time, but may hold a substantial balance at times and will rarely or never actually *be at* zero. When you are just starting out, it may very well be negative unless you compensate by taking money from elsewhere. Note that some may refer to this category as targeted savings, which to me certainly is an overlapping term but does not carry quite the same meaning.
Fourth, **savings**. This is long term savings and investments that are not earmarked for a particular purpose or the purpose for which it is earmarked is very far into the future. How you define "particular purpose" and "very far into the future" are really a matter of definition, here, and you will have to come up with distinctions that work for you. **The point is** to separate saving for the future (savings) from saving to cover upcoming expenses *that you have already decided on or committed to* (deferred spending).
The deferred spending and savings categories can be further subdivided in much the same way as income and expenses, so you can keep track of the money going into your "new car" fund and your "travelling around the world" fund separately. The actual place where the money ends up is on your balance sheet, not the budget. (If you really want to keep track of your money, you will need both.)
In the end, the budget's bottom line becomes **(income + money out of deferred spending) - (expenses + money into deferred spending + savings)**. Just like a regular income/expenses budget, this will be zero if your budget is balanced. | Savings isn't really an expense, but I understand what you're trying to say: you are allocating a specific amount to save, which is definitely a good idea. If you're going over the budget then you have to decide what to cut down on. You can sacrifice some luxuries or, if you feel comfortable enough, you can cut back on how much you save. Clearly the more conservative choice would be to cut back on luxuries.
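The bottom-line identity from the four-category scheme above can be checked numerically. A minimal sketch (all amounts are invented for illustration):

```python
# Hypothetical monthly figures illustrating the four-category bottom line:
# (income + money out of deferred) - (expenses + money into deferred + savings)
income = 2000
out_of_deferred = 150            # e.g. drawing down the dog-food fund
expenses = 1400 + 150            # day-to-day spending + the dog-food purchase
into_deferred = 200              # set aside for the new-car fund
savings = 400                    # long-term, "off the top"

bottom_line = (income + out_of_deferred) - (expenses + into_deferred + savings)
print(bottom_line)               # 0 for a balanced budget
```

Note that the deferred amounts appear on both sides over time, so the category trends toward zero while income, expenses, and savings account for every dollar.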
15,728 | When I budget for myself, I usually count savings as part of the expenses.
**Is it the correct way to do it?** Because of this, when I look at my budget at the end of the month, it feels like my expenses are a bit high. | 2012/06/25 | [
"https://money.stackexchange.com/questions/15728",
"https://money.stackexchange.com",
"https://money.stackexchange.com/users/4904/"
] | It's a matter of semantics.
If a budget is done to include every last item, by definition there would be no extra. I save about 20% of my income, but if that's in my budget, the budget is a 100% view of my gross income, a pie chart (or table) of all cash flow.
Some people prefer to think of their budget as the spending, with what remains going into saving. In the end, that produces the same result: an accounting of all of your money, although this approach risks making the savings aspect appear an afterthought.
Most financial advisors would counsel a client to save "off the top" and not with what remains after everything else.
From looking at some of the other answers I see the issue is one of accounting. If it helps the matter, the piece you are missing in the discussion is the rest of the balance sheet, the assets accruing. If you gross $2000, account for $2000, but $400 went to savings, your balance sheet has improved $400.
By the way, paying a debt's principal will also improve the balance sheet. Your net worth grows for saving a dollar or paying off a dollar's worth of debt. | Savings isn't really an expense but I understand what your trying to say, you are allocating a specific amount to save. Which is definitely a good idea. If you're going over the budget then you have to decide what to cut down on. You can sacrifice some luxuries or if you feel comfortable enough you can cut back on how much you save. Clearly the more conservative choice would be to cut back on luxuries. |
15,728 | When I budget for myself, I usually count savings as part of the expenses.
**Is it the correct way to do it?** Because of this, when I look at my budget at the end of the month, it feels like my expenses are a bit high. | 2012/06/25 | [
"https://money.stackexchange.com/questions/15728",
"https://money.stackexchange.com",
"https://money.stackexchange.com/users/4904/"
] | I have found it useful to have four (yes, four) categories in a budget, rather than the traditional two (income and expenses). This may not be the "correct" way to do it, but it makes sense to me.
First, **income**. This one is a no-brainer and is needed in any budget. In this scheme income also includes money taken out of deferred-spending savings (see below). I don't consider money taken out of savings "income", however.
Second, **expenses**. This is the money you actually *spent* in the given period (month, with most budgets, but if you are paid on a different schedule it may be bi-weekly, quarterly, or whatever else makes sense in your particular situation). Plain and simple, whatever goes into "expenses" is money that is no longer available to you in any way, shape, or form. I'd put credit card spending into this category as well (treat it the same as debit card spending) but as long as you aren't carrying a balance, that's simply a matter of taste. I'd also put loan payments into this category since "getting the money back" then means taking out a new loan.
Third, **deferred spending**. This category is for setting money aside for big-ticket items that you know are coming and which you either aren't sure exactly when they will turn into real expenses (money spent), or know when they will and they are simply too large to handle comfortably with the day-to-day cash flow. An example of the former might be a new car fund, and the latter might be something like my buying dog food about once every four months; the cost for such a batch is too large to handle comfortably within a single month's budget without making large adjustments elsewhere, so I set money aside each month and then use it every four months (or thereabouts). In this categorization, *setting the money aside* classifies as deferred spending, and *taking it back out* classifies as income. If done right, this category will trend toward zero over time, but may hold a substantial balance at times and will rarely or never actually *be at* zero. When you are just starting out, it may very well be negative unless you compensate by taking money from elsewhere. Note that some may refer to this category as targeted savings, which to me certainly is an overlapping term but does not carry quite the same meaning.
Fourth, **savings**. This is long term savings and investments that are not earmarked for a particular purpose or the purpose for which it is earmarked is very far into the future. How you define "particular purpose" and "very far into the future" are really a matter of definition, here, and you will have to come up with distinctions that work for you. **The point is** to separate saving for the future (savings) from saving to cover upcoming expenses *that you have already decided on or committed to* (deferred spending).
The deferred spending and savings categories can be further subdivided in much the same way as income and expenses, so you can keep track of the money going into your "new car" fund and your "travelling around the world" fund separately. The actual place where the money ends up is on your balance sheet, not the budget. (If you really want to keep track of your money, you will need both.)
In the end, the budget's bottom line becomes **(income + money out of deferred spending) - (expenses + money into deferred spending + savings)**. Just like a regular income/expenses budget, this will be zero if your budget is balanced. | It's a matter of semantics.
If a budget is done to include every last item, by definition there would be no extra. I save about 20% of my income, but if that's in my budget, the budget is a 100% view of my gross income, a pie chart (or table) of all cash flow.
Some people prefer to think of their budget as the spending, with whatever remains going into savings. In the end, that produces the same result: an accounting of all of your money, although this approach strikes me as risking the savings aspect looking like an afterthought.
Most financial advisors would counsel a client to save "off the top" and not with what remains after everything else.
From looking at some of the other answers I see the issue is one of accounting. If it helps the matter, the piece you are missing in the discussion is the rest of the balance sheet, the assets accruing. If you gross $2000, account for $2000, but $400 went to savings, your balance sheet has improved $400.
By the way, paying a debt's principal will also improve the balance sheet. Your net worth grows for saving a dollar or paying off a dollar's worth of debt. |
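The balance-sheet claim above can be sketched in a few lines of Python (a hedged illustration; the function name and the $100 principal figure are my own, only the $2000 gross / $400 saved numbers come from the answer):

```python
# Each dollar saved or applied to debt principal improves the balance
# sheet by exactly one dollar, even though neither is ordinary spending.
def balance_sheet_improvement(saved, debt_principal_paid):
    return saved + debt_principal_paid

# Gross $2000, of which $400 went to savings (the answer's example),
# plus a hypothetical $100 of a loan payment that was principal:
print(balance_sheet_improvement(400, 100))  # 500
```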
10,317,755 | I am just curious to know whether there is any technical/theoretical reason for a Windows NT service created with C++ to be more stable than one created as a .NET application, or vice versa.
Actually I have two NT services, one made with C++ and the other with .NET. I observe both showing as started, but I need to restart the service created with .NET often (on average once every 2 days) to get it to respond. When I tried to find out about this strange behavior of the .NET service, some of my friends came up with answers related to OS internals and some said .NET was built like that. I am totally unaware of the .NET platform, so asking on this forum is one of my attempts at finding the reason.
Thanks
Anil | 2012/04/25 | [
"https://Stackoverflow.com/questions/10317755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319048/"
] | You would probably get somewhat better performance with C++ (if it is native) than with .NET, but only during startup. Once they are both up and running, there shouldn't be much of a difference.
However, creating a service through native C++ (as far as I can remember now) was a real pain and it took quite a bit of time. With .NET it is much easier and faster. To be honest, I never had a need to create some super-important, high-speed service. I have created quite a number of services in .NET and they successfully do their job. In these cases the business end result was more important than the actual performance.
It is really all about your needs, but as someone said in the comment, the service will be as stable as the programmer wrote it. If you are more comfortable creating a service for controlling a nuclear reactor in .NET, do it in .NET. :-) | The C++ Service, or any win32 executable, will continue to work across multiple versions of windows, both future and past. This gives win32 longevity and near immunity to changes in the server. Your C++ service will likely not have very many, or none at all, dependencies on anything installed on the server. This reduces the number of points of failure.
.Net applications are highly fragile, and a failed .Net application can cost a lot of money to repair. The fragility comes from an application being finicky about the version of the .Net framework installed on the server. Microsoft has a short lifespan on .Net versions. .Net has lots of dependencies that increase points of failure, such as a large number of assembly files and sensitive config files.
In conclusion, C++/Win32 will be much more stable due to the stability of win32 and having much fewer dependencies. |
13,218,191 | I have tried a number of available examples which help to load contacts from the phone. It works fine on the emulator, but when I try it on a real device it crashes. Can anyone send me a tested piece of code that works flawlessly, so I can compare it with my code?
one failed code example.
```
Cursor phones = getContentResolver().query(ContactsContract.CommonDataKinds.Phone.CONTENT_URI, null,null,null, null);
if (phones.getCount() > 0)
{
while (phones.moveToNext())
{
name=phones.getString(phones.getColumnIndex(ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME));
phoneNumber = phones.getString(phones.getColumnIndex(ContactsContract.CommonDataKinds.Phone.NUMBER));
}
phones.close();
}
```
Kindly help. | 2012/11/04 | [
"https://Stackoverflow.com/questions/13218191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1642500/"
] | This is me playing around with this data structure and some template tomfoolery.
At the bottom of all of this mess is accessing two flat arrays, one of them containing a tree of sums, the other containing a tree of carry values to propogate down later. Conceptually they form one binary tree.
The true value of a node in the binary tree is the value in stored sum tree, plus the number of leaves under the node times the sum of all carry tree values from the node back up to the root.
At the same time, the true value of each node in the tree is equal to the sum of the true values of the leaf nodes under it.
I wrote one function to do both carry and sum, because it turns out they are visiting the same nodes. And a read sometimes writes. So you get a sum by calling it with an `increase` of zero.
All the template tomfoolery does is do the math for where the offsets into each tree the nodes are, and where the left and right children are.
While I do use a `struct`, the `struct` is transient -- it is just a wrapper with some pre calculated values around an offset into an array. I do store a pointer to the start of the array, but every `block_ptr` uses the exact same `root` value in this program.
For debugging, I have some craptacular Assert() and Debug() macros, plus a trace nullary function that the recursive sum function calls (which I use to track the total number of calls into it). Once again, being needlessly complex to avoid global state. :)
```
#include <memory>
#include <iostream>
// note that you need more than 2^30 space to fit this
enum {max_tier = 30};
typedef long long intt;
#define Assert(x) (!(x)?(std::cout << "ASSERT FAILED: (" << #x << ")\n"):(void*)0)
#define DEBUG(x)
template<size_t tier, size_t count=0>
struct block_ptr
{
enum {array_size = 1+block_ptr<tier-1>::array_size * 2};
enum {range_size = block_ptr<tier-1>::range_size * 2};
intt* root;
size_t offset;
size_t logical_offset;
explicit block_ptr( intt* start, size_t index, size_t logical_loc=0 ):root(start),offset(index), logical_offset(logical_loc) {}
intt& operator()() const
{
return root[offset];
}
block_ptr<tier-1> left() const
{
return block_ptr<tier-1>(root, offset+1, logical_offset);
}
block_ptr<tier-1> right() const
{
return block_ptr<tier-1>(root, offset+1+block_ptr<tier-1>::array_size, logical_offset+block_ptr<tier-1>::range_size);
}
enum {is_leaf=false};
};
template<>
struct block_ptr<0>
{
enum {array_size = 1};
enum {range_size = 1};
enum {is_leaf=true};
intt* root;
size_t offset;
size_t logical_offset;
explicit block_ptr( intt* start, size_t index, size_t logical_loc=0 ):root(start),offset(index), logical_offset(logical_loc)
{}
intt& operator()() const
{
return root[offset];
}
// exists only to make some of the below code easier:
block_ptr<0> left() const { Assert(false); return *this; }
block_ptr<0> right() const { Assert(false); return *this; }
};
template<size_t tier>
void propogate_carry( block_ptr<tier> values, block_ptr<tier> carry )
{
if (carry() != 0)
{
values() += carry() * block_ptr<tier>::range_size;
if (!block_ptr<tier>::is_leaf)
{
carry.left()() += carry();
carry.right()() += carry();
}
carry() = 0;
}
}
// sums the values from begin to end, but not including end!
// ie, the half-open interval [begin, end) in the tree
// if increase is non-zero, increases those values by that much
// before returning it
template<size_t tier, typename trace>
intt query_or_modify( block_ptr<tier> values, block_ptr<tier> carry, int begin, int end, int increase=0, trace const& tr = [](){} )
{
tr();
DEBUG(
std::cout << begin << " " << end << " " << increase << "\n";
if (increase)
{
std::cout << "Increasing " << end-begin << " elements by " << increase << " starting at " << begin+values.offset << "\n";
}
else
{
std::cout << "Totaling " << end-begin << " elements starting at " << begin+values.logical_offset << "\n";
}
)
if (end <= begin)
return 0;
size_t mid = block_ptr<tier>::range_size / 2;
DEBUG( std::cout << "[" << values.logical_offset << ";" << values.logical_offset+mid << ";" << values.logical_offset+block_ptr<tier>::range_size << "]\n"; )
// exact match first:
bool bExact = (begin == 0 && end >= block_ptr<tier>::range_size);
if (block_ptr<tier>::is_leaf)
{
Assert(bExact);
}
bExact = bExact || block_ptr<tier>::is_leaf; // leaves are always exact
if (bExact)
{
carry()+=increase;
intt retval = (values()+carry()*block_ptr<tier>::range_size);
DEBUG( std::cout << "Exact sum is " << retval << "\n"; )
return retval;
}
// we don't have an exact match. Apply the carry and pass it down to children:
propogate_carry(values, carry);
values() += increase * (end - begin);
// Now delegate to children:
if (begin >= mid)
{
DEBUG( std::cout << "Right:"; )
intt retval = query_or_modify( values.right(), carry.right(), begin-mid, end-mid, increase, tr );
DEBUG( std::cout << "Right sum is " << retval << "\n"; )
return retval;
}
else if (end <= mid)
{
DEBUG( std::cout << "Left:"; )
intt retval = query_or_modify( values.left(), carry.left(), begin, end, increase, tr );
DEBUG( std::cout << "Left sum is " << retval << "\n"; )
return retval;
}
else
{
DEBUG( std::cout << "Left:"; )
intt left = query_or_modify( values.left(), carry.left(), begin, mid, increase, tr );
DEBUG( std::cout << "Right:"; )
intt right = query_or_modify( values.right(), carry.right(), 0, end-mid, increase, tr );
DEBUG( std::cout << "Right sum is " << left << " and left sum is " << right << "\n"; )
return left+right;
}
}
```
Here are some helper classes to make creating a segment tree of a given size easy. Note, however, that all you need is an array of the right size, and you can construct a block\_ptr from a pointer to element 0, and you are good to go.
```
template<size_t tier>
struct segment_tree
{
typedef block_ptr<tier> full_block_ptr;
intt block[full_block_ptr::array_size];
full_block_ptr root() { return full_block_ptr(&block[0],0); }
void init()
{
std::fill_n( &block[0], size_t(full_block_ptr::array_size), 0 );
}
};
template<size_t entries, size_t starting=0>
struct required_tier
{
enum{ tier =
block_ptr<starting>::array_size >= entries
?starting
:required_tier<entries, starting+1>::tier
};
enum{ error =
block_ptr<starting>::array_size >= entries
?false
:required_tier<entries, starting+1>::error
};
};
// max 2^30, to limit template generation.
template<size_t entries>
struct required_tier<entries, size_t(max_tier)>
{
enum{ tier = 0 };
enum{ error = true };
};
// really, these just exist to create an array of the correct size
typedef required_tier< 1000000 > how_big;
enum {tier = how_big::tier};
int main()
{
segment_tree<tier> values;
segment_tree<tier> increments;
Assert(!how_big::error); // can be a static assert -- fails if the enum of max tier is too small for the number of entries you want
values.init();
increments.init();
auto value_root = values.root();
auto carry_root = increments.root();
size_t count = 0;
auto tracer = [&count](){count++;};
intt zero = query_or_modify( value_root, carry_root, 0, 100000, 0, tracer );
std::cout << "zero is " << zero << " in " << count << " steps\n";
count = 0;
Assert( zero == 0 );
intt test2 = query_or_modify( value_root, carry_root, 0, 100, 10, tracer ); // increase everything from 0 to 100 by 10
Assert(test2 == 1000);
std::cout << "test2 is " << test2 << " in " << count << " steps \n";
count = 0;
intt test3 = query_or_modify( value_root, carry_root, 1, 1000, 0, tracer );
Assert(test3 == 990);
std::cout << "test3 is " << test3 << " in " << count << " steps\n";
count = 0;
intt test4 = query_or_modify( value_root, carry_root, 50, 5000, 87, tracer );
Assert(test4 == 10*(100-50) + 87*(5000-50) );
std::cout << "test4 is " << test4 << " in " << count << " steps\n";
count = 0;
}
```
While this isn't the answer you want, it might make it easier for someone to write it. And writing this amused me. So, hope it helps!
The code was tested and compiled on Ideone.com using a C++0x compiler. | Lazy propagation means updating only when required. Its a technique that allows range updates to be carried out with asymptotic time complexity O(logN) (N here is the range).
Say you want to update the range [0,15] then you update the nodes [0,15] and set a flag in the node that says that it's children nodes are to be updated (use a sentinel value in case the flag is not used) .
Possible stress test case: the update `0 1 100000` repeated Q times (where Q = 99999), with the 100000th query being `1 1 100000`.
In that case most implementations would sit flipping 100000 coins 99999 times just to answer one simple query in the end, and time out.
With Lazy propagation you just need to flip the node [0,100000] 99999 times and set/unset a flag that its children are to be updated. When the actual query itself is asked, you start traversing its children and start flipping them, push the flag down and unset the parent's flag.
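A minimal Python sketch of the flip-with-flag scheme just described (the class and method names are my own, not from the linked thread): a lazy flag marks a node whose children still owe a flip, and the flag is pushed down only when a query or update actually visits them.

```python
# Lazy segment tree for range-flip updates and range "count heads" queries.
class FlipSegmentTree:
    def __init__(self, n):
        self.n = n
        self.ones = [0] * (4 * n)       # heads count per node's range
        self.flip = [False] * (4 * n)   # pending-flip ("lazy") flag

    def _push(self, node, length):
        # Apply this node's pending flip and defer it to the children.
        if self.flip[node]:
            self.ones[node] = length - self.ones[node]
            if length > 1:  # internal node: pass the debt down
                self.flip[2 * node] ^= True
                self.flip[2 * node + 1] ^= True
            self.flip[node] = False

    def _update(self, node, lo, hi, l, r):
        self._push(node, hi - lo + 1)
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            # Whole range covered: just record the flip lazily.
            self.flip[node] = True
            self._push(node, hi - lo + 1)
            return
        mid = (lo + hi) // 2
        self._update(2 * node, lo, mid, l, r)
        self._update(2 * node + 1, mid + 1, hi, l, r)
        self.ones[node] = self.ones[2 * node] + self.ones[2 * node + 1]

    def _query(self, node, lo, hi, l, r):
        if r < lo or hi < l:
            return 0
        self._push(node, hi - lo + 1)
        if l <= lo and hi <= r:
            return self.ones[node]
        mid = (lo + hi) // 2
        return (self._query(2 * node, lo, mid, l, r) +
                self._query(2 * node + 1, mid + 1, hi, l, r))

    def flip_range(self, l, r):
        self._update(1, 0, self.n - 1, l, r)

    def count_heads(self, l, r):
        return self._query(1, 0, self.n - 1, l, r)
```

For example, flipping [0,4] and then [1,3] over five coins leaves heads only at positions 0 and 4; each operation touches O(log N) nodes.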
Oh and be sure you are using proper I/O routines (scanf and printf instead of cin and cout if its c++) Hope this has given you an idea of what lazy propagation means. More information : <http://www.spoj.pl/forum/viewtopic.php?f=27&t=8296> |
8,431,133 | I have an existing `SqlConnection conn;` in some controller (using `ASP.NET MVC3 + Razor`). Now, I would like to render a simple table depending on some SQL command.
**The question is:**
How to "bind" loaded data in Razor using `ViewBag`? Is it necessary to iterate row-after-row and produce `<tr>....</tr>` in Razor? | 2011/12/08 | [
"https://Stackoverflow.com/questions/8431133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/684534/"
There is no binding like this, but a simple foreach loop means it isn't much code either. For example...
```
<table>
@foreach(var row in Model.MyRows)
{
<tr>
@foreach(var col in row.Columns)
{
<td>@(col.Value)</td>
}
</tr>
}
</table>
```
Hope that gives you an idea anyway; this way you get a lot more control over the style of your rendered table.
57,446,980 | I am trying to write a recursive function which returns a copy of a list where neighbouring elements have been swapped. For example, swapElements([2, 3, 4, 9]) would return [3, 2, 9, 4].
This is my code as of now:
```
def swapElements(mylist):
if len(mylist) == 1:
pass
if len(mylist) == 2:
mylist[0], mylist[1] = mylist[1], mylist[0]
else:
mylist[0], mylist[1] = mylist[1], mylist[0]
swapElements(mylist[2:])
return mylist
```
When I run this function it only returns the list with the first two elements swapped, does anyone know why this function is not swapping any other elements other than the first two and how I could fix it? | 2019/08/11 | [
"https://Stackoverflow.com/questions/57446980",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | >
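A hedged note on the question above: `mylist[2:]` creates a new list, so the swaps made inside the recursive call never reach the caller's list, which is why only the first pair changes. One possible fix (a sketch; the snake_case name is my own) is to build and return a fresh list instead of mutating in place:

```python
def swap_elements(mylist):
    # Base case: zero or one element left, nothing to swap.
    if len(mylist) < 2:
        return mylist[:]
    # Swap the first pair, then recurse on the remainder of the list.
    return [mylist[1], mylist[0]] + swap_elements(mylist[2:])

print(swap_elements([2, 3, 4, 9]))  # [3, 2, 9, 4]
```

An odd-length list keeps its last element in place, e.g. `[1, 2, 3]` becomes `[2, 1, 3]`.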
> Comparing two slices for missing element
>
>
> I have two slices: `a []string`, `b []string`. `b` contains all the same
> elements as `a` plus an extra.
>
>
>
For example,
```
package main
import (
"fmt"
)
func missing(a, b []string) string {
ma := make(map[string]bool, len(a))
for _, ka := range a {
ma[ka] = true
}
for _, kb := range b {
if !ma[kb] {
return kb
}
}
return ""
}
func main() {
a := []string{"a", "b", "c", "d", "e", "f", "g"}
b := []string{"a", "1sdsdfsdfsdsdf", "c", "d", "e", "f", "g", "b"}
fmt.Println(missing(a, b))
}
```
Output:
```
1sdsdfsdfsdsdf
``` | ```
/*
An example of how to find the difference between two slices.
This example uses empty struct (0 bytes) for map values.
*/
package main
import (
"fmt"
)
// empty struct (0 bytes)
type void struct{}
// missing compares two slices and returns slice of differences
func missing(a, b []string) []string {
// create map with length of the 'a' slice
ma := make(map[string]void, len(a))
diffs := []string{}
// Convert first slice to map with empty struct (0 bytes)
for _, ka := range a {
ma[ka] = void{}
}
// find missing values in a
for _, kb := range b {
if _, ok := ma[kb]; !ok {
diffs = append(diffs, kb)
}
}
return diffs
}
func main() {
a := []string{"a", "b", "c", "d", "e", "f", "g"}
b := []string{"a", "c", "d", "e", "f", "g", "b"}
c := []string{"a", "b", "x", "y", "z"}
fmt.Println("a and b diffs", missing(a, b))
fmt.Println("a and c diffs", missing(a, c))
}
```
Output
```
a and b diffs []
a and c diffs [x y z]
``` |
7,173,238 | I have a page with a GridView that supports page-index changing, and for every record there is a check box. At the top of the page I have an ImageButton; when I click that button I redirect to another page. On that page I have a back button which redirects back to the present page with the check boxes and GridView.
What should I do to retain the checked state of the check boxes?
This is gridview paging:
```
protected void ManageCalenderShift_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
StoreOldValue();
EmployeeDetails.PageIndex = e.NewPageIndex;
SortedBindDataToGrid();
PupulateoldCheckValue();
}
private void StoreOldValue()
{
ArrayList categoryIDList = new ArrayList();
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
bool result = ((CheckBox)row.FindControl("Chkgrid")).Checked;
if (Session["CHECKED_ITEMS"] != null)
categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (result)
{
if (!categoryIDList.Contains(can_id.Text))
categoryIDList.Add(can_id.Text);
}
else
categoryIDList.Remove(can_id.Text);
}
if (categoryIDList != null && categoryIDList.Count > 0)
Session["CHECKED_ITEMS"] = categoryIDList;
}
private void PupulateoldCheckValue()
{
ArrayList categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (categoryIDList != null && categoryIDList.Count > 0)
{
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
if (categoryIDList.Contains(can_id.Text))
{
CheckBox myCheckBox = (CheckBox)row.FindControl("Chkgrid");
myCheckBox.Checked = true;
}
}
}
}
```
This is the redirect to another page code that goes to page1:
```
protected void imgView_Click(object sender, ImageClickEventArgs e)
{
StoreOldValue();
PupulateoldCheckValue();
Response.Redirect("page1.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
then in the "page1" i have back button which redirects to "page" aspx :
```
protected void imgimgBack_Click(object sender, ImageClickEventArgs e)
{
Response.Redirect("page.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
Now my issue is:
when I check any checkbox on "page.aspx", click the image button which redirects to "page1.aspx", and then come back to the current working "page.aspx", whatever checkboxes I had checked are cleared. | 2011/08/24 | [
"https://Stackoverflow.com/questions/7173238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/422570/"
] | I will answer my own question and give credit to Quentin, who commented on the question but did not write an answer (if you post the answer, I will select you as the correct answer).
There is an HTML version:
```
<div class="g-plusone" data-size="medium"></div>
``` | Well, the only solution you have is to change the taglib which you are using.
If you are using a taglib, the only thing you must do is add a static attribute named namespace with the name you want, like the following example:
```
static namespace = google1
``` |
7,173,238 | I have a page with a GridView that supports page-index changing, and for every record there is a check box. At the top of the page I have an ImageButton; when I click that button I redirect to another page. On that page I have a back button which redirects back to the present page with the check boxes and GridView.
What should I do to retain the checked state of the check boxes?
This is gridview paging:
```
protected void ManageCalenderShift_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
StoreOldValue();
EmployeeDetails.PageIndex = e.NewPageIndex;
SortedBindDataToGrid();
PupulateoldCheckValue();
}
private void StoreOldValue()
{
ArrayList categoryIDList = new ArrayList();
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
bool result = ((CheckBox)row.FindControl("Chkgrid")).Checked;
if (Session["CHECKED_ITEMS"] != null)
categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (result)
{
if (!categoryIDList.Contains(can_id.Text))
categoryIDList.Add(can_id.Text);
}
else
categoryIDList.Remove(can_id.Text);
}
if (categoryIDList != null && categoryIDList.Count > 0)
Session["CHECKED_ITEMS"] = categoryIDList;
}
private void PupulateoldCheckValue()
{
ArrayList categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (categoryIDList != null && categoryIDList.Count > 0)
{
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
if (categoryIDList.Contains(can_id.Text))
{
CheckBox myCheckBox = (CheckBox)row.FindControl("Chkgrid");
myCheckBox.Checked = true;
}
}
}
}
```
This is the redirect to another page code that goes to page1:
```
protected void imgView_Click(object sender, ImageClickEventArgs e)
{
StoreOldValue();
PupulateoldCheckValue();
Response.Redirect("page1.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
then in the "page1" i have back button which redirects to "page" aspx :
```
protected void imgimgBack_Click(object sender, ImageClickEventArgs e)
{
Response.Redirect("page.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
Now my issue is:
when I check any checkbox on "page.aspx", click the image button which redirects to "page1.aspx", and then come back to the current working "page.aspx", whatever checkboxes I had checked are cleared. | 2011/08/24 | [
"https://Stackoverflow.com/questions/7173238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/422570/"
] | Or you may output it like this in your gsp file:
```
${'<g:plusone annotation="inline" href="http://www.anythinggoeshere_xyz.com"></g:plusone>'}
``` | Well, the only solution you have is to change the taglib which you are using.
If you are using a taglib, the only thing you must do is add a static attribute named namespace with the name you want, like the following example:
```
static namespace = google1
``` |
7,173,238 | I have a page with a GridView that supports page-index changing, and for every record there is a check box. At the top of the page I have an ImageButton; when I click that button I redirect to another page. On that page I have a back button which redirects back to the present page with the check boxes and GridView.
What should I do to retain the checked state of the check boxes?
This is gridview paging:
```
protected void ManageCalenderShift_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
StoreOldValue();
EmployeeDetails.PageIndex = e.NewPageIndex;
SortedBindDataToGrid();
PupulateoldCheckValue();
}
private void StoreOldValue()
{
ArrayList categoryIDList = new ArrayList();
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
bool result = ((CheckBox)row.FindControl("Chkgrid")).Checked;
if (Session["CHECKED_ITEMS"] != null)
categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (result)
{
if (!categoryIDList.Contains(can_id.Text))
categoryIDList.Add(can_id.Text);
}
else
categoryIDList.Remove(can_id.Text);
}
if (categoryIDList != null && categoryIDList.Count > 0)
Session["CHECKED_ITEMS"] = categoryIDList;
}
private void PupulateoldCheckValue()
{
ArrayList categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (categoryIDList != null && categoryIDList.Count > 0)
{
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
if (categoryIDList.Contains(can_id.Text))
{
CheckBox myCheckBox = (CheckBox)row.FindControl("Chkgrid");
myCheckBox.Checked = true;
}
}
}
}
```
This is the redirect to another page code that goes to page1:
```
protected void imgView_Click(object sender, ImageClickEventArgs e)
{
StoreOldValue();
PupulateoldCheckValue();
Response.Redirect("page1.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
then in the "page1" i have back button which redirects to "page" aspx :
```
protected void imgimgBack_Click(object sender, ImageClickEventArgs e)
{
Response.Redirect("page.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
Now my issue is:
when I check any checkbox on "page.aspx", click the image button which redirects to "page1.aspx", and then come back to the current working "page.aspx", whatever checkboxes I had checked are cleared. | 2011/08/24 | [
"https://Stackoverflow.com/questions/7173238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/422570/"
] | I will answer my own question and give credit to Quentin, who commented on the question but did not write an answer (if you post the answer, I will select you as the correct answer).
There is an HTML version:
```
<div class="g-plusone" data-size="medium"></div>
``` | Or you may output it like this in your gsp file:
```
${'<g:plusone annotation="inline" href="http://www.anythinggoeshere_xyz.com"></g:plusone>'}
``` |
7,173,238 | I have a page with a GridView that supports page-index changing, and for every record there is a check box. At the top of the page I have an ImageButton; when I click that button I redirect to another page. On that page I have a back button which redirects back to the present page with the check boxes and GridView.
What should I do to retain the checked state of the check boxes?
This is gridview paging:
```
protected void ManageCalenderShift_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
StoreOldValue();
EmployeeDetails.PageIndex = e.NewPageIndex;
SortedBindDataToGrid();
PupulateoldCheckValue();
}
private void StoreOldValue()
{
ArrayList categoryIDList = new ArrayList();
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
bool result = ((CheckBox)row.FindControl("Chkgrid")).Checked;
if (Session["CHECKED_ITEMS"] != null)
categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (result)
{
if (!categoryIDList.Contains(can_id.Text))
categoryIDList.Add(can_id.Text);
}
else
categoryIDList.Remove(can_id.Text);
}
if (categoryIDList != null && categoryIDList.Count > 0)
Session["CHECKED_ITEMS"] = categoryIDList;
}
private void PupulateoldCheckValue()
{
ArrayList categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (categoryIDList != null && categoryIDList.Count > 0)
{
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
if (categoryIDList.Contains(can_id.Text))
{
CheckBox myCheckBox = (CheckBox)row.FindControl("Chkgrid");
myCheckBox.Checked = true;
}
}
}
}
```
This is the redirect to another page code that goes to page1:
```
protected void imgView_Click(object sender, ImageClickEventArgs e)
{
StoreOldValue();
PupulateoldCheckValue();
Response.Redirect("page1.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
then in the "page1" i have back button which redirects to "page" aspx :
```
protected void imgimgBack_Click(object sender, ImageClickEventArgs e)
{
Response.Redirect("page.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
Now my issue is:
when I check any checkbox on "page.aspx", click the image button which redirects to "page1.aspx", and then come back to the current working "page.aspx", whatever checkboxes I had checked are cleared. | 2011/08/24 | [
"https://Stackoverflow.com/questions/7173238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/422570/"
] | I will answer my own question and give credit to Quentin, who commented on the question but did not write an answer (if you post the answer, I will select you as the correct answer).
There is an HTML version:
```
<div class="g-plusone" data-size="medium"></div>
``` | Building on the [taglib solution by James Williams](http://jameswilliams.be/blog/entry/215), I was able to render a +1 button with this taglib, which I named GooglePlusOneTagLib.groovy in my taglib folder:
```
class GooglePlusOneTagLib {
def plusone = { attrs, body ->
out << "<g:plusone "+(attrs.size ? "size=${attrs.size}" : "") +"></g:plusone>"
}
def plusoneScript = { attrs, body ->
out << '''<script type="text/javascript">
(function() {
var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true;
po.src = 'https://apis.google.com/js/plusone.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s);
})();
</script>'''
}
}
```
and then you just use these tags in your gsp
```
<g:plusone size="medium"></g:plusone>
<g:plusoneScript />
``` |
7,173,238 | I have a page with a GridView that supports page-index changing, and for every record there is a check box. At the top of the page I have an ImageButton; when I click that button I redirect to another page. On that page I have a back button which redirects back to the present page with the check boxes and GridView.
What should I do to retain the checked state of the check boxes?
This is gridview paging:
```
protected void ManageCalenderShift_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
StoreOldValue();
EmployeeDetails.PageIndex = e.NewPageIndex;
SortedBindDataToGrid();
PupulateoldCheckValue();
}
private void StoreOldValue()
{
ArrayList categoryIDList = new ArrayList();
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
bool result = ((CheckBox)row.FindControl("Chkgrid")).Checked;
if (Session["CHECKED_ITEMS"] != null)
categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (result)
{
if (!categoryIDList.Contains(can_id.Text))
categoryIDList.Add(can_id.Text);
}
else
categoryIDList.Remove(can_id.Text);
}
if (categoryIDList != null && categoryIDList.Count > 0)
Session["CHECKED_ITEMS"] = categoryIDList;
}
private void PupulateoldCheckValue()
{
ArrayList categoryIDList = (ArrayList)Session["CHECKED_ITEMS"];
if (categoryIDList != null && categoryIDList.Count > 0)
{
foreach (GridViewRow row in EmployeeDetails.Rows)
{
Label can_id = (Label)row.FindControl("UserACENumber");
if (categoryIDList.Contains(can_id.Text))
{
CheckBox myCheckBox = (CheckBox)row.FindControl("Chkgrid");
myCheckBox.Checked = true;
}
}
}
}
```
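As an aside, the intent of `StoreOldValue`/`PupulateoldCheckValue` can be sketched in plain Python (hypothetical names, no ASP.NET API — just the merge-into-session / re-apply-from-session pattern the C# code implements):

```python
# Sketch of the store/restore pattern above: keep the set of checked
# serial numbers in the session, merge the current page's state into it
# before navigating away, and re-apply it when the page is shown again.
session = {}  # stands in for the ASP.NET Session object

def store_old_values(page_rows):
    """page_rows maps serial_number -> checked? for the rows on screen."""
    checked = set(session.get("CHECKED_ITEMS", set()))
    for serial, is_checked in page_rows.items():
        if is_checked:
            checked.add(serial)
        else:
            checked.discard(serial)  # unchecking removes a stored entry
    session["CHECKED_ITEMS"] = checked

def populate_old_values(page_serials):
    """Return which of the visible rows should be re-checked."""
    checked = session.get("CHECKED_ITEMS", set())
    return {serial: serial in checked for serial in page_serials}

# User ticks A100 on one grid page, leaves A200 unticked, navigates away...
store_old_values({"A100": True, "A200": False})
# ...and on coming back, A100 is restored.
print(populate_old_values(["A100", "A200"]))  # {'A100': True, 'A200': False}
```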
This is the redirect to another page code that goes to page1:
```
protected void imgView_Click(object sender, ImageClickEventArgs e)
{
StoreOldValue();
PupulateoldCheckValue();
Response.Redirect("page1.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
then in "page1.aspx" I have a back button that redirects to "page.aspx":
```
protected void imgimgBack_Click(object sender, ImageClickEventArgs e)
{
Response.Redirect("page.aspx?UserACENumber=" + (Server.UrlDecode(URLSecurity.Encrypt(UserContext.ACENumber))));
}
```
Now my issue is:
when I check any checkbox on "page.aspx", click the image button to go to "page1.aspx", and then come back to the current "page.aspx", whatever checkboxes I had checked disappear. | 2011/08/24 | [
"https://Stackoverflow.com/questions/7173238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/422570/"
] | Or you may output it like this in your gsp file:
```
${'<g:plusone annotation="inline" href="http://www.anythinggoeshere_xyz.com"></g:plusone>'}
``` | Building on the [taglib solution by James Williams](http://jameswilliams.be/blog/entry/215), I was able to render a +1 button with this taglib, which I named GooglePlusOneTagLib.groovy in my taglib folder:
```
class GooglePlusOneTagLib {
def plusone = { attrs, body ->
out << "<g:plusone "+(attrs.size ? "size=${attrs.size}" : "") +"></g:plusone>"
}
def plusoneScript = { attrs, body ->
out << '''<script type="text/javascript">
(function() {
var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true;
po.src = 'https://apis.google.com/js/plusone.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s);
})();
</script>'''
}
}
```
and then you just use these tags in your gsp
```
<g:plusone size="medium"></g:plusone>
<g:plusoneScript />
``` |
8,765,798 | OK, my file has this structure.
```
<system.serviceModel>
<services>
<service name="ManyaWCF.ServiceManya" behaviorConfiguration="ServiceBehaviour">
<!-- Service Endpoints -->
<!-- Unless fully qualified, address is relative to base address supplied above -->
<endpoint address="" binding="webHttpBinding" contract="ManyaWCF.IServiceManya" behaviorConfiguration="web">
<!--
Upon deployment, the following identity element should be removed or replaced to reflect the
identity under which the deployed service runs. If removed, WCF will infer an appropriate identity
automatically.
-->
</endpoint>
</service>
</services>
<behaviors>
<serviceBehaviors>
<behavior name="ServiceBehaviour">
<!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
<serviceMetadata httpGetEnabled="true" />
<!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
<serviceDebug includeExceptionDetailInFaults="false" />
</behavior>
</serviceBehaviors>
<endpointBehaviors>
<behavior name="web">
<webHttp />
</behavior>
</endpointBehaviors>
</behaviors>
<serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
</system.serviceModel>
```
I used the same web.config in another WCF service and it worked like a champ, of course with different folders and files.
My folder structure is the following.

When I try to run it, I get this:
```
Service
This is a Windows Β© Communication Foundation.
The metadata publishing for this service is currently disabled.
If you access the service, you can enable metadata publishing by completing the following steps to modify the configuration file or web application:
1. Create the following service behavior configuration, or add the item to a configuration <serviceMetadata> existing service behavior:
<behaviors>
<serviceBehaviors>
<behavior name="MyServiceTypeBehaviors">
httpGetEnabled="true" <serviceMetadata />
</ Behavior>
</ ServiceBehaviors>
</ Behaviors>
2. Add the behavior configuration to the service:
name="MyNamespace.MyServiceType" <service behaviorConfiguration="MyServiceTypeBehaviors">
Note: The service name must match the name of the configuration for the service implementation.
3. Add the following to end service configuration:
binding="mexHttpBinding" contract="IMetadataExchange" <endpoint address="mex" />
Note: the service must have an http base address to add this.
Here is an example of a service configuration file with metadata publishing enabled:
<configuration>
<system.serviceModel>
<services>
<! - Note: the service name must match the name of the configuration for the service implementation. ->
name="MyNamespace.MyServiceType" <service behaviorConfiguration="MyServiceTypeBehaviors">
<! - Add the following end. ->
<! - Note: the service must have an http base address to add this. ->
binding="mexHttpBinding" contract="IMetadataExchange" <endpoint address="mex" />
</ Service>
</ Services>
<behaviors>
<serviceBehaviors>
<behavior name="MyServiceTypeBehaviors">
<! - Add the following item to the service behavior configuration. ->
httpGetEnabled="true" <serviceMetadata />
</ Behavior>
</ ServiceBehaviors>
</ Behaviors>
</ System.serviceModel>
</ Configuration>
For more information about publishing metadata, see the following documentation: http://go.microsoft.com/fwlink/?LinkId=65455 (may be in English).
```
So far I have only built one WCF service, and it worked fine with the same web.config. My lack of experience and knowledge about this is killing me.
Any clue?
Thanks in advance. | 2012/01/06 | [
"https://Stackoverflow.com/questions/8765798",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/965856/"
] | As far as I know, you only need the endpoint with the `mexHttpBinding` if you want to expose the WSDL to clients. Visual Studio (or wcfutil.exe) needs the WSDL description to create the webservice client classes.
After these webservice client classes are created, you shouldn't need to expose the WSDL anymore.
**UPDATE:** The `<service>` element in your configuration file should look like this:
```
<service name="ManyaWCF.ServiceManya" behaviorConfiguration="ServiceBehaviour">
<endpoint address="" binding="webHttpBinding"
contract="ManyaWCF.IServiceManya" behaviorConfiguration="web" />
<endpoint address="mex" binding="mexHttpBinding"
contract="IMetadataExchange" />
</service>
``` | ```
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.web>
<compilation debug="true" targetFramework="4.0" />
<httpRuntime maxRequestLength="1048576" executionTimeout="3600" />
</system.web>
<appSettings>
</appSettings>
<connectionStrings>
<add name="SQLConnect" connectionString="Your_Connection_String;User id=sa;Password=welcome3#"/>
</connectionStrings>
<system.serviceModel>
<services>
<service name="WCFRestService.RestServiceSvc" behaviorConfiguration="serviceBehavior">
<endpoint address="" bindingConfiguration="secureHttpBinding" binding="webHttpBinding" contract="WCFRestService.IRestServiceSvc" behaviorConfiguration="web"></endpoint>
</service>
</services>
<behaviors>
<endpointBehaviors>
<behavior name="web">
<webHttp />
</behavior>
</endpointBehaviors>
<serviceBehaviors>
<behavior name="serviceBehavior">
<serviceMetadata httpGetEnabled="true" />
<serviceDebug includeExceptionDetailInFaults="false" />
</behavior>
<behavior name="">
<serviceMetadata httpGetEnabled="true" />
<serviceDebug includeExceptionDetailInFaults="false" />
</behavior>
</serviceBehaviors>
</behaviors>
<bindings>
<webHttpBinding>
<binding name="secureHttpBinding"
maxBufferPoolSize="2147483647"
maxReceivedMessageSize="2147483647"
maxBufferSize="2147483647" transferMode="Streamed">
</binding>
</webHttpBinding>
</bindings>
<serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
</system.serviceModel>
<system.webServer>
<modules runAllManagedModulesForAllRequests="true" />
<httpProtocol>
<customHeaders>
</customHeaders>
</httpProtocol>
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="1073741824" />
</requestFiltering>
</security>
<directoryBrowse enabled="true" />
</system.webServer>
<system.web.extensions>
<scripting>
<webServices>
<jsonSerialization maxJsonLength="50000000"/>
</webServices>
</scripting>
</system.web.extensions>
</configuration>
``` |
38,591,400 | I am a Java developer (I have often used Spring MVC to develop MVC web apps in Java) with very little knowledge of PHP, and I have to work on a PHP project that uses **CodeIgniter 2.1.3**.
So I have some doubts about how controllers work in **CodeIgniter**.
1) In Spring MVC I have a controller class with some annotated methods; each method handles a specific HTTP request (the annotation defines the URL handled by the method) and returns the name of the view that has to be shown.
Reading the official documentation of **CodeIgniter**, it seems to me that the logic of this framework is pretty different: <https://www.codeigniter.com/userguide3/general/controllers.html#what-is-a-controller>
So it seems that in **CodeIgniter** a controller is a class that handles a single URL of the application, with the URL matching the class name. Is that correct?
So I have this class:
```
class garanzieValoreFlex extends CI_Controller {
.....................................................
.....................................................
.....................................................
function __construct() {
parent::__construct();
$this->load->helper(array('form', 'url'));
$this->load->library(array('form_validation','session'));
}
public function reset() {
$this->session->unset_userdata("datiPreventivo");
$this->load->view('garanziavalore/garanzie_valore_questionario_bootstrap',array());
}
public function index() {
$this->load->model('Direct');
$flagDeroga = "true" ;
$this->session->userdata("flagDeroga");
$data = $this->session->userdata("datiPreventivo");
$this->load->model('GaranzieValoreFlexModel');
$data = $this->session->userdata("datiPreventivo");
$this->load->model('GaranzieValoreFlexModel');
$this->load->view('garanziavalore/index_bootstrap',$data);
}
public function back() {
$this->load->model('Direct');
$flagDeroga = "true" ;
$this->session->userdata("flagDeroga");
$data = $this->session->userdata("datiPreventivo");
$this->load->model('GaranzieValoreFlexModel');
//$this->load->view('garanziavalore/garanzie_valore_questionario_bootstrap',$data);
$this->load->view('garanziavalore/index_tornaIndietro_bootstrap',$data);
}
.....................................................
.....................................................
.....................................................
}
```
So, from what I understand, this controller basically handles only HTTP requests toward the URL: **<http://MYURL/garanzieValoreFlex>**.
And from what I understand, the method executed when I access the previous URL is **index()**, which by this line:
```
$this->load->view('garanziavalore/index_bootstrap',$data);
```
shows the **garanziavalore/index\_bootstrap.php** page that I found in the **views** directory of my project (is it a standard that views have to be in the **views** directory?).
Is my reasoning correct?
If so, I am loading the view, also passing it the **$data** variable, which I think is the model containing the data to be shown in the page; this variable is retrieved by:
```
$data = $this->session->userdata("datiPreventivo");
```
What exactly does this line do?
My last doubt is related to the other **back()** method that I found in the previous controller: is it a method of **CodeIgniter**'s CI\_Controller class, or something totally custom defined by the developer who worked on this application before me? | 2016/07/26 | [
"https://Stackoverflow.com/questions/38591400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1833945/"
] | The `GC.KeepAlive` method is [empty](http://referencesource.microsoft.com/#mscorlib/system/gc.cs,279). All it does is ensure that a particular variable is read from at that point in the code, because otherwise that variable is never read from again and is thus not a valid reference to keep an object alive.
It's pointless here because the same variable that you're passing to `KeepAlive` *is* read from again at a later point in time - during the hidden `finally` block when `Dispose` is called. So, the `GC.KeepAlive` achieves nothing here. | It is pretty easy to test. Here is a quick test program; be sure it is run in release mode without a debugger attached.
```
using System;
namespace SandboxConsole
{
class Program
{
static void Main(string[] args)
{
using (var context = new TestClass())
{
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Console.WriteLine("After collection");
}
Console.WriteLine("After dispose, before 2nd collection");
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Console.WriteLine("After 2nd collection");
Console.ReadLine();
}
}
internal class TestClass : IDisposable
{
public void Dispose()
{
Dispose(true);
}
~TestClass()
{
Console.WriteLine("In finalizer");
Dispose(false);
}
private void Dispose(bool isDisposing)
{
Console.WriteLine("In Dispose: {0}", isDisposing);
if (isDisposing)
{
//uncomment this line out to have the finalizer never run
//GC.SuppressFinalize(this);
}
}
}
}
```
It will always output
```
After collection
In Dispose: True
After dispose, before 2nd collection
In finalizer
In Dispose: False
After 2nd collection
```
---
For more concrete proof, here is the IL for the above program's Main method
```
.method private hidebysig static void Main(string[] args) cil managed
{
.entrypoint
// Code size 85 (0x55)
.maxstack 1
.locals init ([0] class SandboxConsole.TestClass context)
IL_0000: newobj instance void SandboxConsole.TestClass::.ctor()
IL_0005: stloc.0
.try
{
IL_0006: call void [mscorlib]System.GC::Collect()
IL_000b: call void [mscorlib]System.GC::WaitForPendingFinalizers()
IL_0010: call void [mscorlib]System.GC::Collect()
IL_0015: ldstr "After collection"
IL_001a: call void [mscorlib]System.Console::WriteLine(string)
IL_001f: leave.s IL_002b
} // end .try
finally
{
IL_0021: ldloc.0
IL_0022: brfalse.s IL_002a
IL_0024: ldloc.0
IL_0025: callvirt instance void [mscorlib]System.IDisposable::Dispose()
IL_002a: endfinally
} // end handler
IL_002b: ldstr "After dispose, before 2nd collection"
IL_0030: call void [mscorlib]System.Console::WriteLine(string)
IL_0035: call void [mscorlib]System.GC::Collect()
IL_003a: call void [mscorlib]System.GC::WaitForPendingFinalizers()
IL_003f: call void [mscorlib]System.GC::Collect()
IL_0044: ldstr "After 2nd collection"
IL_0049: call void [mscorlib]System.Console::WriteLine(string)
IL_004e: call string [mscorlib]System.Console::ReadLine()
IL_0053: pop
IL_0054: ret
} // end of method Program::Main
```
You can see there is a hidden finally block that checks if the object is null then calls Dispose on it. That reference will keep the object alive the entire scope of the using block.
UPDATE: See [Damien's comment below](https://stackoverflow.com/users/15498/damien-the-unbeliever): this *specific* example does have the opportunity to actually call the finalizer early, because I never use any variables that rely on an implicit `this` in the Dispose method. To guarantee the behavior, be sure to use an instance-level variable (which my short example has none of) or have `GC.SuppressFinalize(this);` uncommented.
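As a hedged cross-language aside (Python semantics, not the CLR's): the lifetime rule being tested above — a finalizer runs only once no live reference to the object remains — can be demonstrated with `weakref.finalize`:

```python
import gc
import weakref

class Resource:
    pass

events = []
r = Resource()
# Register a finalizer callback that records when the object is collected.
weakref.finalize(r, events.append, "finalized")

gc.collect()
print(events)  # [] - the object is still referenced, so no finalization

del r          # drop the last reference
gc.collect()   # CPython frees it on refcount 0; collect() covers other VMs
print(events)  # ['finalized']
```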
28,115,526 | I'm completely useless regarding databases, but currently I'm having to work with it.
I need to make a query that compares date values between two different entries of my table. I have a query like this:
```
SELECT t1.serial_number, t1.fault_type, t2.fault_type
FROM shipped_products t1
JOIN shipped_products t2 ON t1.serial_number=t2.serial_number
WHERE ABS(DATEDIFF(t2.date_rcv,t1.date_rcv))<90;
```
But it's taking forever to run. Really, I left it running for 18 hours and it never stopped. Is this query correct? Is there a better, more clever way to do this?
Thank you very much guys.
BTW: I'll automate all the process with python scripts, so if you know of a better way to do this inside python without all the logic having to be inside the query, it would also help.
EDIT:
My question seems unclear, so I'll explain better what I need to do.
I have a problem where sometimes products go to repair centers and are shipped back to clients as "No Defect Found". After that, the client ships them to repair centers again because they present the same issue. So I need a query to count how many products have been to repair centers twice within an interval of 90 days. The unique ID for each single product is its serial number, and that's why I'm searching for serial number duplicates. | 2015/01/23 | [
"https://Stackoverflow.com/questions/28115526",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1044735/"
] | Every record is going to match itself (in t1 and t2) in this join, since the DATEDIFF will be zero and thus less than 90. Make sure you are not matching to the same record. If you have an ID field in your table you could do this:
```
SELECT t1.serial_number, t1.fault_type, t2.fault_type
FROM shipped_products t1
JOIN shipped_products t2
ON t1.serial_number=t2.serial_number
AND t1.ID <> t2.ID
WHERE ABS(DATEDIFF(t2.date_rcv,t1.date_rcv))<90;
```
Also make sure you have a key on serial\_number. | It is unclear to me why you would want duplicates in the results. If you have two rows that meet the condition, then both will be in the result set. Why not just look at records that come later? If you phrase the query like this:
```
SELECT t1.serial_number, t1.fault_type, t2.fault_type
FROM shipped_products t1 JOIN
shipped_products t2
ON t1.serial_number = t2.serial_number
WHERE t2.date_rcv >= t1.date_rcv and
      t2.date_rcv < t1.date_rcv + interval 90 day;
```
Then the resulting query can take advantage of an index on `shipped_products(serial_number, date_rcv)`. Note: Perhaps the 90 should be 180.
I am suspicious when I see this type of self-join. Sometimes, it can be replaced with an aggregation query (sometimes not). However, what you actually want to do is unclear. |
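Since the question mentions driving this from Python, here is a minimal runnable sketch of the repeat-repair query using `sqlite3` with made-up sample rows (the table and column names follow the question; SQLite has no `DATEDIFF`, so `julianday()` differences stand in for it, and a hypothetical `id` column excludes self-matches as the first answer suggests):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shipped_products (
        id INTEGER PRIMARY KEY,
        serial_number TEXT,
        fault_type TEXT,
        date_rcv TEXT
    )
""")
# Fabricated sample data: SN1 comes back within 90 days, SN2 does not.
rows = [
    (1, "SN1", "no defect found", "2014-01-10"),
    (2, "SN1", "screen flicker",  "2014-03-01"),  # 50 days later -> match
    (3, "SN2", "no defect found", "2014-01-10"),
    (4, "SN2", "battery",         "2014-08-01"),  # >90 days -> no match
]
conn.executemany("INSERT INTO shipped_products VALUES (?,?,?,?)", rows)

# Pair each visit with a later visit of the same serial within 90 days.
query = """
    SELECT t1.serial_number, t1.fault_type, t2.fault_type
    FROM shipped_products t1
    JOIN shipped_products t2
      ON t1.serial_number = t2.serial_number
     AND t2.id <> t1.id
    WHERE julianday(t2.date_rcv) >= julianday(t1.date_rcv)
      AND julianday(t2.date_rcv) - julianday(t1.date_rcv) < 90
"""
matches = conn.execute(query).fetchall()
print(matches)  # [('SN1', 'no defect found', 'screen flicker')]
```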
34,041,384 | I created a setup by Advanced Installer software for my program that I've written by c#. I installed my program on VMWare Windows7.
When I try to run it this message is displayed:
 | 2015/12/02 | [
"https://Stackoverflow.com/questions/34041384",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5394345/"
] | What if `monthNumber` is not between 1 and 12? In that case, `monthString` won't be initialized. You should give it some default value when you declare it:
```
String monthString = null; // or ""
``` | monthString is a local variable within main(), therefore, it must be initialized to prevent the compiler error.
If monthString were a Class variable then it does not have to be initialized explicitly.
You can do this by moving monthString outside of main() and declaring it as:
static String monthString; |
34,041,384 | I created a setup by Advanced Installer software for my program that I've written by c#. I installed my program on VMWare Windows7.
When I try to run it this message is displayed:
 | 2015/12/02 | [
"https://Stackoverflow.com/questions/34041384",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5394345/"
] | It would be a good idea to add a default case to your switch statement.
Example:
```
switch (monthNumber) {
case 1: monthString = "January";
break;
//other cases...
default: monthString = "Invalid Month Number";
break;
}
```
This way if `monthNumber` is not 1-12 then there is still a default case for the switch statement to flow to. | Maybe this link will help you get a proper understanding.
```
http://stackoverflow.com/questions/5478996/should-java-string-method-local-variables-be-initialized-to-null-or
``` |
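As a cross-language aside, the default-case advice above maps naturally onto Python, where a dict lookup with an explicit fallback plays the role of the switch statement's `default` (names here are illustrative, not taken from the original code):

```python
# A dict lookup with a fallback mirrors switch-with-default: month_name()
# always returns a value, even for out-of-range input.
MONTHS = {
    1: "January", 2: "February", 3: "March", 4: "April",
    5: "May", 6: "June", 7: "July", 8: "August",
    9: "September", 10: "October", 11: "November", 12: "December",
}

def month_name(month_number):
    return MONTHS.get(month_number, "Invalid Month Number")

print(month_name(1))   # January
print(month_name(13))  # Invalid Month Number
```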
34,041,384 | I created a setup by Advanced Installer software for my program that I've written by c#. I installed my program on VMWare Windows7.
When I try to run it this message is displayed:
 | 2015/12/02 | [
"https://Stackoverflow.com/questions/34041384",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5394345/"
] | It would be a good idea to add a default case to your switch statement.
Example:
```
switch (monthNumber) {
case 1: monthString = "January";
break;
//other cases...
default: monthString = "Invalid Month Number";
break;
}
```
This way if `monthNumber` is not 1-12 then there is still a default case for the switch statement to flow to. | monthString is a local variable within main(), therefore, it must be initialized to prevent the compiler error.
If monthString were a Class variable then it does not have to be initialized explicitly.
You can do this by moving monthString outside of main() and declaring it as:
static String monthString; |