Columns:
- qid (int64): 1 to 74.7M
- question (string): 0 to 58.3k characters
- date (string): 10 characters
- metadata (sequence)
- response_j (string): 2 to 48.3k characters
- response_k (string): 2 to 40.5k characters
21,668,250
I'm trying to make a program that will get the user input of a new file name, create the file, and write to it. It works, but it will only write the first word of the string to the file. How can I get it to write the full string? Thanks. ``` #include "stdafx.h" #include <fstream> #include <iostream> #include <string> using namespace std; int main() { for (;;) { char *myFile = " "; string f = " "; string w = " "; cout <<"What is the name of the file you would like to write to? " <<endl; cin >>f; ofstream myStream(f,ios_base::ate|ios_base::out); cout <<"What would you like to write to " <<f <<" ? "; cin >>w; myStream <<w; if (myStream.bad()) { myStream <<"A serious error has occured."; myStream.close(); break; } } } ```
2014/02/10
[ "https://Stackoverflow.com/questions/21668250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3150762/" ]
According to [this post](https://stackoverflow.com/questions/5455802/how-to-read-a-complete-line-from-the-user-using-cin), you should consult [this reference](http://www.cplusplus.com/reference/fstream/fstream/) to use a method like getline(). Also, when you are writing out I recommend that you flush the output (cout.flush()) before ending the program, especially in this case, since I presume you are ending the program with a ctrl-C break. In formulating a suggestion, I will read data into char\*, and convert them to "string" in case you will use them elsewhere in your program. I tested this code in MS Visual C++ Express. ``` #include "stdafx.h" #include <fstream> #include <iostream> #include <string> using namespace std; int main() { for (;;) { char *myFile = new char[200]; // modified this line //added this line char *myInput = new char[200]; string f = " "; string w = " "; cout << "What is the name of the file you would like to write to? " << endl; cin.getline(myFile, 200);//modified this line f = (myFile);//added this line cin.clear(); //added this line ofstream myStream(f, ios_base::ate | ios_base::out); cout << "What would you like to write to " << f << " ? "; cin.getline(myInput, 200);//edited this line w = string(myInput);//added this line myStream << w; myStream.flush();//added this line if (myStream.bad()) { myStream << "A serious error has occured."; myStream.close(); break; } delete[] myFile; delete[] myInput; // arrays allocated with new[] must be released with delete[] } } ```
`cin >> w;` stops consuming input as soon as it encounters whitespace (spaces, tabs and other non-printing characters). You should probably use `std::getline()` instead; take a look at this page for reference: <http://en.cppreference.com/w/cpp/string/basic_string/getline>. Or you can use a manipulator so that whitespace is not skipped.
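To illustrate the `std::getline` suggestion made in both answers, here is a minimal sketch (not the asker's exact program: the prompts are simplified, the loop is reduced to a single pass, and error handling is kept to a return code):

```cpp
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::string fileName;
    std::string text;

    std::cout << "What is the name of the file you would like to write to? ";
    std::getline(std::cin, fileName);   // reads the whole line, spaces included

    std::cout << "What would you like to write to " << fileName << "? ";
    std::getline(std::cin, text);       // the full sentence, not just the first word

    std::ofstream myStream(fileName.c_str(), std::ios_base::out | std::ios_base::ate);
    myStream << text;
    return myStream.good() ? 0 : 1;     // non-zero exit if the write failed
}
```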
31,870
I'm trying to find undirected random graphs $G(V,E)$ with $|V| = d^2$ for $d \in \mathbb{N}$ such that $\forall v \in V: deg(v) = d$. For $d \in 2\mathbb{N} +1$ this is trivially impossible, as no such graph exists: the number of incidences (connections between vertices and edges) is given by $|V|\cdot d = d^3 = 8k^3 + 12k^2 + 6k + 1$ (for some $k$). Since the number of incidences is always twice the number of edges, we would need $|E| = d^3/2$, which is not an integer, a contradiction. This argument, however, doesn't work for $d \in 2\mathbb{N}$. My first guess was that just constructing a random graph would do; however, this can get stuck in a local maximum. For instance, for $d = 2$: ``` +---+ example for | / an incomplete | / graph that |/ cannot be + + completed ``` A similar example can be constructed for $d = 4$, leaving up to two unconnectable vertices (essentially by using a 4-hypercube). I strongly suspect that for each $d$ the number of valid graphs significantly outweighs the number of incomplete graphs, but I would like to know **how likely it is to end up with an incomplete graph**, and whether there is a **better way to find these graphs** than the random algorithm above (which could perhaps be fixed by breaking apart incomplete graphs, but that would not be guaranteed to terminate).
2015/06/30
[ "https://cstheory.stackexchange.com/questions/31870", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1908/" ]
The standard simple way of generating random regular graphs is: * while the degree < d + choose a random perfect matching from the edges that can still be added to the graph + if no perfect matching is possible, restart the process. The problem with this is that the higher the degree you want, the more likely the algorithm is to get stuck. I see many papers limiting themselves to $|V|>d^3$, so I don't know whether this process will work for you. A rough sketch of this procedure is given below.
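A minimal sketch of that matching-with-restarts idea, assuming Python with networkx is acceptable. The function name, the restart limit, and the use of random edge weights to vary which matching is picked are all my own; this shows the structure of the procedure, not a uniformly random choice of perfect matching:

```python
import random
import networkx as nx

def random_regular_graph(d, n, max_restarts=100):
    """Sketch: try to build a d-regular graph on n vertices (n * d must be even)."""
    for _ in range(max_restarts):
        G = nx.empty_graph(n)
        for _ in range(d):                              # add one perfect matching per round
            H = nx.complement(G)                        # pairs that can still become edges
            for u, v in H.edges():
                H[u][v]["weight"] = random.random()     # randomise which matching is chosen
            M = nx.max_weight_matching(H, maxcardinality=True)
            if 2 * len(M) < n:                          # no perfect matching left: restart
                break
            G.add_edges_from(M)
        else:
            return G                                    # all d rounds succeeded
    raise RuntimeError("no d-regular graph found within the restart limit")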
Much progress was made recently in this area, see in particular this FOCS'19 paper: ``` Fast uniform generation of random graphs with given degree sequences Andrii Arman, Pu Gao, Nicholas Wormald ``` and the [more extensive arxiv version](https://arxiv.org/abs/1905.03446). This paper presents an $O(nd+d^4)$ time algorithm for $d$-regular graphs when $d=o(\sqrt{n})$, and [an implementation is provided](https://mathoverflow.net/questions/365865/fast-uniform-generation-of-random-graphs-with-given-degree-sequences-any-imple).
34,601,754
JavaScript experts, I want my page to redirect to example.com if the script below exists in my template code: ``` <script> $(document).ready(function(){ $("#wrapper").hide(); }) </script> ``` If the above script exists in my template, then the page should redirect to example.com. **Attention:** please add a condition to that script, something like this: ``` <script> $(document).ready(function(){ If //#wrapper is hidden $("#wrapper").hide(); //then it should redirect to example.com </script> ``` I hope someone will figure this out and share the code with me. Thanks.
2016/01/04
[ "https://Stackoverflow.com/questions/34601754", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5741391/" ]
If you need this functionality somewhere after the bit of code you show, this would work: ``` var $wrapper=$("#wrapper"); if($wrapper.length>0 && !$wrapper.is(':visible')){ // #wrapper exists on the page but is not visible, redirect user window.location.href = "http://example.com"; } ```
What Taplar says is: ``` <script> $(document).ready(function(){ // $("#wrapper").hide(); window.location.href = "http://example.com"; }) </script> ``` If you need this behaviour in another place in your code, then see DelightedD0D answer. Very good point by DelightedD0D, I've fixed the code. ;) DelightedD0D, I'd give you another point if I could.
15,369,589
I use this script to compress all .txt and .cpi files into the backup folder as separate files with 7-Zip. After the files are zipped I delete the original files. However, this script has a logical flaw: if the 7-Zip program fails to run, the files still get deleted. How can I change the script so that it does not delete the files unless they have been zipped first? Also, how can I change this script so that it only zips files older than 7 days? Thanks for your help. @echo off setlocal ``` set _source=C:\test7zip\bak set _dest=C:\test7zip\bak set _wrpath=C:\Program Files\7-Zip if NOT EXIST %_dest% md %_dest% for %%I in (%_source%\*.txt,%_source%\*.cpi) do "%_wrpath%\7z" a "%_dest%\%%~nI.7z" "%%I" & del "%%I" ``` pause
2013/03/12
[ "https://Stackoverflow.com/questions/15369589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2120466/" ]
This is a partial solution to a partial question, but generally you would use: `ORDER BY YEAR(TimeColumn), DATEPART(m, TimeColumn)`
Try this: ``` ORDER BY DatePart(y,time),DatePart(m,time) ```
15,369,589
I use this script to compress all .txt and .cpi files into the backup folder as separate files with 7-Zip. After the files are zipped I delete the original files. However, this script has a logical flaw: if the 7-Zip program fails to run, the files still get deleted. How can I change the script so that it does not delete the files unless they have been zipped first? Also, how can I change this script so that it only zips files older than 7 days? Thanks for your help. @echo off setlocal ``` set _source=C:\test7zip\bak set _dest=C:\test7zip\bak set _wrpath=C:\Program Files\7-Zip if NOT EXIST %_dest% md %_dest% for %%I in (%_source%\*.txt,%_source%\*.cpi) do "%_wrpath%\7z" a "%_dest%\%%~nI.7z" "%%I" & del "%%I" ``` pause
2013/03/12
[ "https://Stackoverflow.com/questions/15369589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2120466/" ]
This is a partial solution to a partial question, but generally you would use: `ORDER BY YEAR(TimeColumn), DATEPART(m, TimeColumn)`
Repeating what has already been said ``` SELECT '01 jan 2012' AS Dt INTO #tempDates UNION SELECT '01 jun 2012' UNION SELECT '10 oct 2012' UNION SELECT '01 jan 2013' --1 SELECT Dt FROM #tempDates ORDER BY DatePart(YY,Dt),DatePart(m,Dt) --2 SELECT Dt FROM #tempDates ORDER BY YEAR(Dt), MONTH(Dt) --3 SELECT Dt FROM #tempDates ORDER BY YEAR(Dt), DatePart(m,Dt) ```
82,435
Is the full version of Quake II free now? If so, where can I find it? I don't see it available on id's site, even in the Store section. I see that there's a GPL source release available, and I recall id having made others free as well. Is there an authoritative source?
2012/09/03
[ "https://gaming.stackexchange.com/questions/82435", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/16833/" ]
While id has [open-sourced the *engine*](https://github.com/id-Software/Quake-2), the game itself is not free (as in beer or speech). > > All of the Q2 data files remain copyrighted and licensed under the > original terms, so you cannot redistribute data from the original game, but if > you do a true total conversion, you can create a standalone game based on > this code. > > > This essentially means that you can download the source code, and use it as long as you comply with the terms of the license. Additionally, you can download versions of the Q2 binaries/source that other people have modified or made available, if you so desire. If you want to play the *game,* you'll need the game files from the original disc or some other source. As far as the game is concerned, you can [pick it up on Steam](http://store.steampowered.com/app/2320/?snr=1_7_suggest__13), along with most of the rest of the id library.
Quake 2's engine has been released as an open source product. The best proof of this I can find is the id Software [public git repository](https://github.com/id-Software). The actual game content, however, has not been, and you still need to buy a copy from Steam if you want to play it. The source code can also be found zipped on id Software's FTP site [here](ftp://ftp.idsoftware.com/idstuff/source/). This [link on slashdot](http://games.slashdot.org/story/01/12/22/053211/quake-2-source-code-released-under-the-gpl) points to the original id Software page, which has been updated since but which also points to the git repository.
2,805,674
I am experiencing problems creating a connection pool in glassfish v3, just for reference i am using the Java EE glassfish bundle. my enviroment vars are as follows ``` Url: jdbc:oracle:thin:@localhost:1521:xe User: sys Password : xxxxxxxx ``` which i think is all i need to make a connection. but i get the following exception ``` WARNING: Can not find resource bundle for this logger. class name that failed: com.sun.gjc.common.DataSourceObjectBuilder SEVERE: jdbc.exc_cnfe_ds java.lang.ClassNotFoundException: oracle.jdbc.pool.OracleDataSource at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at java.lang.ClassLoader.loadClass(ClassLoader.java:248) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:247) at com.sun.gjc.common.DataSourceObjectBuilder.getDataSourceObject(DataSourceObjectBuilder.java:279) at com.sun.gjc.common.DataSourceObjectBuilder.constructDataSourceObject(DataSourceObjectBuilder.java:108) at com.sun.gjc.spi.ManagedConnectionFactory.getDataSource(ManagedConnectionFactory.java:1167) at com.sun.gjc.spi.DSManagedConnectionFactory.getDataSource(DSManagedConnectionFactory.java:135) at com.sun.gjc.spi.DSManagedConnectionFactory.createManagedConnection(DSManagedConnectionFactory.java:90) at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.getManagedConnection(ConnectorConnectionPoolAdminServiceImpl.java:520) at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.getUnpooledConnection(ConnectorConnectionPoolAdminServiceImpl.java:630) at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.testConnectionPool(ConnectorConnectionPoolAdminServiceImpl.java:442) at com.sun.enterprise.connectors.ConnectorRuntime.pingConnectionPool(ConnectorRuntime.java:898) at org.glassfish.admin.amx.impl.ext.ConnectorRuntimeAPIProviderImpl.pingJDBCConnectionPool(ConnectorRuntimeAPIProviderImpl.java:570) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.glassfish.admin.amx.impl.mbean.AMXImplBase.invoke(AMXImplBase.java:1038) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761) at javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:288) at org.glassfish.admin.amx.util.jmx.MBeanProxyHandler.invoke(MBeanProxyHandler.java:453) at org.glassfish.admin.amx.core.proxy.AMXProxyHandler._invoke(AMXProxyHandler.java:822) at org.glassfish.admin.amx.core.proxy.AMXProxyHandler.invoke(AMXProxyHandler.java:526) at $Proxy233.pingJDBCConnectionPool(Unknown Source) at org.glassfish.admingui.common.handlers.JdbcTempHandler.pingJdbcConnectionPool(JdbcTempHandler.java:99) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.jsftemplating.layout.descriptors.handler.Handler.invoke(Handler.java:442) at 
com.sun.jsftemplating.layout.descriptors.LayoutElementBase.dispatchHandlers(LayoutElementBase.java:420) at com.sun.jsftemplating.layout.descriptors.LayoutElementBase.dispatchHandlers(LayoutElementBase.java:394) at com.sun.jsftemplating.layout.event.CommandActionListener.invokeCommandHandlers(CommandActionListener.java:150) at com.sun.jsftemplating.layout.event.CommandActionListener.processAction(CommandActionListener.java:98) at javax.faces.event.ActionEvent.processListener(ActionEvent.java:88) at javax.faces.component.UIComponentBase.broadcast(UIComponentBase.java:772) at javax.faces.component.UICommand.broadcast(UICommand.java:300) at com.sun.webui.jsf.component.WebuiCommand.broadcast(WebuiCommand.java:160) at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:775) at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1267) at com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:82) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:312) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:343) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215) at com.sun.webui.jsf.util.UploadFilter.doFilter(UploadFilter.java:240) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:239) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) WARNING: RAR8054: Exception while 
creating an unpooled [test] connection for pool [ testingManagmentDataConnection ], Class name is wrong or classpath is not set for : oracle.jdbc.pool.OracleDataSource WARNING: Can not find resource bundle for this logger. class name that failed: com.sun.gjc.common.DataSourceObjectBuilder ``` Does anyone have any ideas what I am doing wrong, or what I will have to do to correct this issue? Thanks for your time, Jon
2010/05/10
[ "https://Stackoverflow.com/questions/2805674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/337609/" ]
Copy the JDBC jar to $glassfish-v3/glassfish/domains/domain1/lib/ext/ and restart GlassFish. This should fix the problem.
It looks like the server doesn't have the jar that contains the class oracle.jdbc.pool.OracleDataSource on its classpath. [There are a couple of methods to fix this](http://docs.sun.com/app/docs/doc/820-7692/ghatb?l=en&a=view). The easiest way to get the class onto the classpath is to copy the jar that holds the class into your domain's lib directory, which is usually $glassfish-v3-install-root/glassfish/domains/domain1/lib. An illustrative command sequence is sketched below.
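For example (the jar name ojdbc6.jar and the install path are illustrative only; use the Oracle driver jar that matches your database and JDK):

```
cp ojdbc6.jar /opt/glassfish-v3/glassfish/domains/domain1/lib/ext/
asadmin stop-domain domain1
asadmin start-domain domain1
```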
72,411,979
I'm trying to style a group of images or paragraphs in a certain way, ```css .products { margin: 40px; border-radius: 4px solid black; } ``` ```html <div class="products"> <img src="images/bodybutter1.jpg" style="width 250px; height: 300px; border-radius: 15px;" alt="bodybutter"> <img src="images/bodybutter1.jpg" style="width 250px; height: 300px; border-radius: 15px;" alt="bodybutter"> <img src="images/bodybutter1.jpg" style="width 250px; height: 300px; border-radius: 15px;" alt="bodybutter"> <img src="images/bodybutter1.jpg" style="width 250px; height: 300px; border-radius: 15px;" alt="bodybutter"> </div> <div class="products"> <img src="images/bodybutter1.jpg" style="width 250px; height: 300px; border-radius: 15px;" alt="bodybutter"> <img src="images/bodybutter1.jpg" style="width 250px; height: 300px; border-radius: 15px;" alt="bodybutter"> <img src="images/bodybutter1.jpg" style="width 250px; height: 300px; border-radius: 15px;" alt="bodybutter"> <img src="images/bodybutter1.jpg" style="width 250px; height: 300px; border-radius: 15px;" alt="bodybutter"> </div> ```
2022/05/27
[ "https://Stackoverflow.com/questions/72411979", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19210232/" ]
You'll need to target the img tag inside the parent instead of targeting the parent div. See code below: ``` .products img { margin: 40px; border-radius: 4px solid black; } ``` You can use this to reduce code duplication.
Note: the first inline property you have (`width 250px`) is missing its colon, which breaks your code and may be why you are not seeing those styles applied to your image elements. Because you are reusing the same style for every image, you should remove those inline style attributes and use CSS to target the elements instead. Check out the code for context. Hope this helps! ```css .products { margin: 40px; border-radius: 4px solid black; } .products img { width: 250px; height: auto; border-radius: 15px; object-fit: cover; max-width: 100%; } ``` ```html <div class="products"> <img src="https://wtwp.com/wp-content/uploads/2015/06/placeholder-image.png" alt="bodybutter"> <img src="https://wtwp.com/wp-content/uploads/2015/06/placeholder-image.png" alt="bodybutter"> <img src="https://wtwp.com/wp-content/uploads/2015/06/placeholder-image.png" alt="bodybutter"> <img src="https://wtwp.com/wp-content/uploads/2015/06/placeholder-image.png" alt="bodybutter"> </div> <div class="products"> <img src="https://wtwp.com/wp-content/uploads/2015/06/placeholder-image.png" alt="bodybutter"> <img src="https://wtwp.com/wp-content/uploads/2015/06/placeholder-image.png" alt="bodybutter"> <img src="https://wtwp.com/wp-content/uploads/2015/06/placeholder-image.png" alt="bodybutter"> <img src="https://wtwp.com/wp-content/uploads/2015/06/placeholder-image.png" alt="bodybutter"> </div> ```
24,300
In R, if I set.seed(), and then use the sample function to randomize a list, can I guarantee I won't generate the same permutation? ie... ``` set.seed(25) limit <- 3 myindex <- seq(0,limit) for (x in seq(1,factorial(limit))) { permutations <- sample(myindex) print(permutations) } ``` This produces ``` [1] 1 2 0 3 [1] 0 2 1 3 [1] 0 3 2 1 [1] 3 1 2 0 [1] 2 3 0 1 [1] 0 1 3 2 ``` will all permutations printed be unique permutations? Or is there some chance, based on the way this is implemented, that I could get some repeats? I want to be able to do this without repeats, guaranteed. How would I do that? (I also want to avoid having to use a function like permn(), which has a very mechanistic method for generating all permutations---it doesn't look random.) Also, sidenote---it looks like this problem is O((n!)!), if I'm not mistaken.
2012/03/08
[ "https://stats.stackexchange.com/questions/24300", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7420/" ]
The question has many valid interpretations. The comments--especially the one indicating permutations of 15 or more elements are needed (15! = 1307674368000 is getting big)--suggest that what is wanted is a *relatively small* random sample, without replacement, of all n! = n\*(n-1)*(n-2)*...\*2\*1 permutations of 1:n. If this is true, there exist (somewhat) efficient solutions. The following function, `rperm`, accepts two arguments `n` (the size of the permutations to sample) and `m` (the number of permutations of size n to draw). If m approaches or exceeds n!, the function will take a long time and return many NA values: it is intended for use when n is relatively big (say, 8 or more) and m is much smaller than n!. It works by caching a string representation of the permutations found so far and then generating new permutations (randomly) until a new one is found. It exploits R's associative list-indexing ability to search the list of previously-found permutations quickly. ``` rperm <- function(m, size=2) { # Obtain m unique permutations of 1:size # Function to obtain a new permutation. newperm <- function() { count <- 0 # Protects against infinite loops repeat { # Generate a permutation and check against previous ones. p <- sample(1:size) hash.p <- paste(p, collapse="") if (is.null(cache[[hash.p]])) break # Prepare to try again. count <- count+1 if (count > 1000) { # 1000 is arbitrary; adjust to taste p <- NA # NA indicates a new permutation wasn't found hash.p <- "" break } } cache[[hash.p]] <<- TRUE # Update the list of permutations found p # Return this (new) permutation } # Obtain m unique permutations. cache <- list() replicate(m, newperm()) } # Returns a `size` by `m` matrix; each column is a permutation of 1:size. ``` The nature of `replicate` is to return the permutations as *column* vectors; *e.g.*, the following reproduces an example in the original question, *transposed*: ``` > set.seed(17) > rperm(6, size=4) [,1] [,2] [,3] [,4] [,5] [,6] [1,] 1 2 4 4 3 4 [2,] 3 4 1 3 1 2 [3,] 4 1 3 2 2 3 [4,] 2 3 2 1 4 1 ``` Timings are excellent for small to moderate values of m, up to about 10,000, but degrade for larger problems. For example, a sample of m = 10,000 permutations of n = 1000 elements (a matrix of 10 million values) was obtained in 10 seconds; a sample of m = 20,000 permutations of n = 20 elements required 11 seconds, even though the output (a matrix of 400,000 entries) was much smaller; and computing sample of m = 100,000 permutations of n = 20 elements was aborted after 260 seconds (I didn't have the patience to wait for completion). This scaling problem appears to be related to scaling inefficiencies in R's associative addressing. One can work around it by generating samples in groups of, say, 1000 or so, then combining those samples into a large sample and removing duplicates. R experts might be able to suggest more efficient solutions or better workarounds. ### Edit **We can achieve near linear asymptotic performance** by breaking the cache into a hierarchy of two caches, so that R never has to search through a large list. Conceptually (although not as implemented), create an array indexed by the first $k$ elements of a permutation. Entries in this array are lists of all permutations sharing those first $k$ elements. To check whether a permutation has been seen, use its first $k$ elements to find its entry in the cache and then search for that permutation within that entry. We can choose $k$ to balance the expected sizes of all the lists. 
The actual implementation does not use a $k$-fold array, which would be hard to program in sufficient generality, but instead uses another list. Here are some elapsed times in seconds for a range of permutation sizes and numbers of distinct permutations requested: ``` Number Size=10 Size=15 Size=1000 size=10000 size=100000 10 0.00 0.00 0.02 0.08 1.03 100 0.01 0.01 0.07 0.64 8.36 1000 0.08 0.09 0.68 6.38 10000 0.83 0.87 7.04 65.74 100000 11.77 10.51 69.33 1000000 195.5 125.5 ``` (The apparently anomalous speedup from size=10 to size=15 is because the first level of the cache is larger for size=15, reducing the average number of entries in the second-level lists, thereby speeding up R's associative search. At some cost in RAM, execution could be made faster by increasing the upper-level cache size. Just increasing `k.head` by 1 (which multiplies the upper-level size by 10) sped up `rperm(100000, size=10)` from 11.77 seconds to 8.72 seconds, for instance. Making the upper-level cache 10 times bigger yet achieved no appreciable gain, clocking at 8.51 seconds.) Except for the case of 1,000,000 unique permutations of 10 elements (a substantial portion of all 10! = about 3.63 million such permutations), practically no collisions were ever detected. In this exceptional case, there were 169,301 collisions, but no complete failures (one million unique permutations were in fact obtained). Note that with large permutation sizes (greater than 20 or so), the chance of obtaining two identical permutations even in a sample as large as 1,000,000,000 is vanishingly small. Thus, this solution is applicable primarily in situations where (a) large numbers of unique permutations of (b) between $n=5$ and $n=15$ or so elements are to be generated but even so, (c) substantially fewer than all $n!$ permutations are needed. Working code follows. ``` rperm <- function(m, size=2) { # Obtain m unique permutations of 1:size max.failures <- 10 # Function to index into the upper-level cache. prefix <- function(p, k) { # p is a permutation, k is the prefix size sum((p[1:k] - 1) * (size ^ ((1:k)-1))) + 1 } # Returns a value from 1 through size^k # Function to obtain a new permutation. newperm <- function() { # References cache, k.head, and failures in parent context. # Modifies cache and failures. count <- 0 # Protects against infinite loops repeat { # Generate a permutation and check against previous ones. p <- sample(1:size) k <- prefix(p, k.head) ip <- cache[[k]] hash.p <- paste(tail(p,-k.head), collapse="") if (is.null(ip[[hash.p]])) break # Prepare to try again. n.failures <<- n.failures + 1 count <- count+1 if (count > max.failures) { p <- NA # NA indicates a new permutation wasn't found hash.p <- "" break } } if (count <= max.failures) { ip[[hash.p]] <- TRUE # Update the list of permutations found cache[[k]] <<- ip } p # Return this (new) permutation } # Initialize the cache. k.head <- min(size-1, max(1, floor(log(m / log(m)) / log(size)))) cache <- as.list(1:(size^k.head)) for (i in 1:(size^k.head)) cache[[i]] <- list() # Count failures (for benchmarking and error checking). n.failures <- 0 # Obtain (up to) m unique permutations. s <- replicate(m, newperm()) s[is.na(s)] <- NULL list(failures=n.failures, sample=matrix(unlist(s), ncol=size)) } # Returns an m by size matrix; each row is a permutation of 1:size. ```
I'm going to sidestep your first question a bit and suggest that if you are dealing with relatively short vectors, you could simply generate all the permutations using `permn` and then randomly order *those* using `sample`: ``` x <- combinat:::permn(1:3) > x[sample(factorial(3),factorial(3),replace = FALSE)] [[1]] [1] 1 2 3 [[2]] [1] 3 2 1 [[3]] [1] 3 1 2 [[4]] [1] 2 1 3 [[5]] [1] 2 3 1 [[6]] [1] 1 3 2 ```
24,300
In R, if I set.seed(), and then use the sample function to randomize a list, can I guarantee I won't generate the same permutation? ie... ``` set.seed(25) limit <- 3 myindex <- seq(0,limit) for (x in seq(1,factorial(limit))) { permutations <- sample(myindex) print(permutations) } ``` This produces ``` [1] 1 2 0 3 [1] 0 2 1 3 [1] 0 3 2 1 [1] 3 1 2 0 [1] 2 3 0 1 [1] 0 1 3 2 ``` will all permutations printed be unique permutations? Or is there some chance, based on the way this is implemented, that I could get some repeats? I want to be able to do this without repeats, guaranteed. How would I do that? (I also want to avoid having to use a function like permn(), which has a very mechanistic method for generating all permutations---it doesn't look random.) Also, sidenote---it looks like this problem is O((n!)!), if I'm not mistaken.
2012/03/08
[ "https://stats.stackexchange.com/questions/24300", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7420/" ]
The question has many valid interpretations. The comments--especially the one indicating permutations of 15 or more elements are needed (15! = 1307674368000 is getting big)--suggest that what is wanted is a *relatively small* random sample, without replacement, of all n! = n\*(n-1)*(n-2)*...\*2\*1 permutations of 1:n. If this is true, there exist (somewhat) efficient solutions. The following function, `rperm`, accepts two arguments `n` (the size of the permutations to sample) and `m` (the number of permutations of size n to draw). If m approaches or exceeds n!, the function will take a long time and return many NA values: it is intended for use when n is relatively big (say, 8 or more) and m is much smaller than n!. It works by caching a string representation of the permutations found so far and then generating new permutations (randomly) until a new one is found. It exploits R's associative list-indexing ability to search the list of previously-found permutations quickly. ``` rperm <- function(m, size=2) { # Obtain m unique permutations of 1:size # Function to obtain a new permutation. newperm <- function() { count <- 0 # Protects against infinite loops repeat { # Generate a permutation and check against previous ones. p <- sample(1:size) hash.p <- paste(p, collapse="") if (is.null(cache[[hash.p]])) break # Prepare to try again. count <- count+1 if (count > 1000) { # 1000 is arbitrary; adjust to taste p <- NA # NA indicates a new permutation wasn't found hash.p <- "" break } } cache[[hash.p]] <<- TRUE # Update the list of permutations found p # Return this (new) permutation } # Obtain m unique permutations. cache <- list() replicate(m, newperm()) } # Returns a `size` by `m` matrix; each column is a permutation of 1:size. ``` The nature of `replicate` is to return the permutations as *column* vectors; *e.g.*, the following reproduces an example in the original question, *transposed*: ``` > set.seed(17) > rperm(6, size=4) [,1] [,2] [,3] [,4] [,5] [,6] [1,] 1 2 4 4 3 4 [2,] 3 4 1 3 1 2 [3,] 4 1 3 2 2 3 [4,] 2 3 2 1 4 1 ``` Timings are excellent for small to moderate values of m, up to about 10,000, but degrade for larger problems. For example, a sample of m = 10,000 permutations of n = 1000 elements (a matrix of 10 million values) was obtained in 10 seconds; a sample of m = 20,000 permutations of n = 20 elements required 11 seconds, even though the output (a matrix of 400,000 entries) was much smaller; and computing sample of m = 100,000 permutations of n = 20 elements was aborted after 260 seconds (I didn't have the patience to wait for completion). This scaling problem appears to be related to scaling inefficiencies in R's associative addressing. One can work around it by generating samples in groups of, say, 1000 or so, then combining those samples into a large sample and removing duplicates. R experts might be able to suggest more efficient solutions or better workarounds. ### Edit **We can achieve near linear asymptotic performance** by breaking the cache into a hierarchy of two caches, so that R never has to search through a large list. Conceptually (although not as implemented), create an array indexed by the first $k$ elements of a permutation. Entries in this array are lists of all permutations sharing those first $k$ elements. To check whether a permutation has been seen, use its first $k$ elements to find its entry in the cache and then search for that permutation within that entry. We can choose $k$ to balance the expected sizes of all the lists. 
The actual implementation does not use a $k$-fold array, which would be hard to program in sufficient generality, but instead uses another list. Here are some elapsed times in seconds for a range of permutation sizes and numbers of distinct permutations requested: ``` Number Size=10 Size=15 Size=1000 size=10000 size=100000 10 0.00 0.00 0.02 0.08 1.03 100 0.01 0.01 0.07 0.64 8.36 1000 0.08 0.09 0.68 6.38 10000 0.83 0.87 7.04 65.74 100000 11.77 10.51 69.33 1000000 195.5 125.5 ``` (The apparently anomalous speedup from size=10 to size=15 is because the first level of the cache is larger for size=15, reducing the average number of entries in the second-level lists, thereby speeding up R's associative search. At some cost in RAM, execution could be made faster by increasing the upper-level cache size. Just increasing `k.head` by 1 (which multiplies the upper-level size by 10) sped up `rperm(100000, size=10)` from 11.77 seconds to 8.72 seconds, for instance. Making the upper-level cache 10 times bigger yet achieved no appreciable gain, clocking at 8.51 seconds.) Except for the case of 1,000,000 unique permutations of 10 elements (a substantial portion of all 10! = about 3.63 million such permutations), practically no collisions were ever detected. In this exceptional case, there were 169,301 collisions, but no complete failures (one million unique permutations were in fact obtained). Note that with large permutation sizes (greater than 20 or so), the chance of obtaining two identical permutations even in a sample as large as 1,000,000,000 is vanishingly small. Thus, this solution is applicable primarily in situations where (a) large numbers of unique permutations of (b) between $n=5$ and $n=15$ or so elements are to be generated but even so, (c) substantially fewer than all $n!$ permutations are needed. Working code follows. ``` rperm <- function(m, size=2) { # Obtain m unique permutations of 1:size max.failures <- 10 # Function to index into the upper-level cache. prefix <- function(p, k) { # p is a permutation, k is the prefix size sum((p[1:k] - 1) * (size ^ ((1:k)-1))) + 1 } # Returns a value from 1 through size^k # Function to obtain a new permutation. newperm <- function() { # References cache, k.head, and failures in parent context. # Modifies cache and failures. count <- 0 # Protects against infinite loops repeat { # Generate a permutation and check against previous ones. p <- sample(1:size) k <- prefix(p, k.head) ip <- cache[[k]] hash.p <- paste(tail(p,-k.head), collapse="") if (is.null(ip[[hash.p]])) break # Prepare to try again. n.failures <<- n.failures + 1 count <- count+1 if (count > max.failures) { p <- NA # NA indicates a new permutation wasn't found hash.p <- "" break } } if (count <= max.failures) { ip[[hash.p]] <- TRUE # Update the list of permutations found cache[[k]] <<- ip } p # Return this (new) permutation } # Initialize the cache. k.head <- min(size-1, max(1, floor(log(m / log(m)) / log(size)))) cache <- as.list(1:(size^k.head)) for (i in 1:(size^k.head)) cache[[i]] <- list() # Count failures (for benchmarking and error checking). n.failures <- 0 # Obtain (up to) m unique permutations. s <- replicate(m, newperm()) s[is.na(s)] <- NULL list(failures=n.failures, sample=matrix(unlist(s), ncol=size)) } # Returns an m by size matrix; each row is a permutation of 1:size. ```
Using `unique` in the right way ought to do the trick: ``` set.seed(2) limit <- 3 myindex <- seq(0,limit) endDim<-factorial(limit) permutations<-sample(myindex) while(is.null(dim(unique(permutations))) || dim(unique(permutations))[1]!=endDim) { permutations <- rbind(permutations,sample(myindex)) } # Resulting permutations: unique(permutations) # Compare to set.seed(2) permutations<-sample(myindex) for(i in 1:endDim) { permutations<-rbind(permutations,sample(myindex)) } permutations # which contains the same permutation twice ```
43,950,988
I am using ExtJS version 6. I have a panel element and I am dynamically updating the panel's HTML as ``` panel.update("<img src=app/resources/first.jpg ></img>"); ``` After updating the HTML, the image is not loading. I tried `panel.updateLayout();` but the image is still not loading. Please advise.
2017/05/13
[ "https://Stackoverflow.com/questions/43950988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/373142/" ]
If you want the list of lines without the trailing newline character you can use the `str.splitlines()` method: read the whole file as a string using `file_obj.read()`, then call `splitlines()` on that string. However, there is no need for that when the `open` function already gives you an iterator over your lines (you can simply strip the trailing newline while processing the lines), or you can call `str.strip()` with a `map` to create an iterator of stripped lines: ``` with open('dictionary.txt') as f: stripped_lines = map(str.strip, f) ``` But if you just want to count the words in a Pythonic way you can use a generator expression within the `sum` function, like the following: ``` with open('dictionary.txt') as f: word_count = sum(len(line.split()) for line in f) ``` Note that there is no need to strip the newlines while you're splitting the line, e.g. ``` In [14]: 'sd f\n'.split() Out[14]: ['sd', 'f'] ``` But if you still want all the words in a list you can use a list comprehension instead of a generator expression: ``` with open('dictionary.txt') as f: all_words = [word for line in f for word in line.split()] word_count = len(all_words) ```
if you want to return a list of lines without `\n` and then print the length of this list: ``` def line_list(fname): with open(fname) as file: return file.read().splitlines() word_list = line_list('dictionary.txt') # 1 word per line print(len(word_list)) ```
50,008,690
I'm having some trouble implementing the boostrap/js/jquery date picker into my webpage. I am not sure if there is code I am missing or hwaat. These are the includes in the head ``` <!-- Include Required Prerequisites --> <script type="text/javascript" src="//cdn.jsdelivr.net/jquery/1/jquery.min.js"></script> <script type="text/javascript" src="//cdn.jsdelivr.net/momentjs/latest/moment.min.js"></script> <link rel="stylesheet" type="text/css" href="//cdn.jsdelivr.net/bootstrap/3/css/bootstrap.css" /> <!-- Include Date Range Picker --> <script type="text/javascript" src="//cdn.jsdelivr.net/bootstrap.daterangepicker/2/daterangepicker.js"></script> <link rel="stylesheet" type="text/css" href="//cdn.jsdelivr.net/bootstrap.daterangepicker/2/daterangepicker.css" /> ``` This is the Mark Up ``` <div class="container"> <div class="well"> <h1><span class="glyphicon glyphicon-calendar"></span></h1> <h3>Choose Your Dates</h3> <input type="text" id="datepicker" value="01/01/2015 - 01/31/2015"/> </div> </div> ``` This is the script ``` <script type="text/javascript"> $(function(){ $(".datepicker").daterangepicker(); }); </script> ``` UPDATE Looks like content loading problem [![console errors](https://i.stack.imgur.com/yiRvA.png)](https://i.stack.imgur.com/yiRvA.png)
2018/04/24
[ "https://Stackoverflow.com/questions/50008690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8569800/" ]
Try ``` $("#datepicker").daterangepicker(); ```
It looks like your selector in your JavaScript is wrong. In your HTML, you've defined datepicker as an id, but in your JavaScript you're referencing it as a class.
50,008,690
I'm having some trouble implementing the boostrap/js/jquery date picker into my webpage. I am not sure if there is code I am missing or hwaat. These are the includes in the head ``` <!-- Include Required Prerequisites --> <script type="text/javascript" src="//cdn.jsdelivr.net/jquery/1/jquery.min.js"></script> <script type="text/javascript" src="//cdn.jsdelivr.net/momentjs/latest/moment.min.js"></script> <link rel="stylesheet" type="text/css" href="//cdn.jsdelivr.net/bootstrap/3/css/bootstrap.css" /> <!-- Include Date Range Picker --> <script type="text/javascript" src="//cdn.jsdelivr.net/bootstrap.daterangepicker/2/daterangepicker.js"></script> <link rel="stylesheet" type="text/css" href="//cdn.jsdelivr.net/bootstrap.daterangepicker/2/daterangepicker.css" /> ``` This is the Mark Up ``` <div class="container"> <div class="well"> <h1><span class="glyphicon glyphicon-calendar"></span></h1> <h3>Choose Your Dates</h3> <input type="text" id="datepicker" value="01/01/2015 - 01/31/2015"/> </div> </div> ``` This is the script ``` <script type="text/javascript"> $(function(){ $(".datepicker").daterangepicker(); }); </script> ``` UPDATE Looks like content loading problem [![console errors](https://i.stack.imgur.com/yiRvA.png)](https://i.stack.imgur.com/yiRvA.png)
2018/04/24
[ "https://Stackoverflow.com/questions/50008690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8569800/" ]
Try ``` $("#datepicker").daterangepicker(); ```
You are referencing your datepicker input via a class name `.datepicker` that it doesn't have. Either reference it via its id, with `#datepicker`, or add `class="datepicker"` to the input element.
1,334,121
I have several Ubuntu servers running on my Windows 2019 Hyper-V however I lost the ISO image. I downloaded `ubuntu-20.04.2-live-server-amd64.iso` and when I create a new VM it bombs every time: [![enter image description here](https://i.stack.imgur.com/wRhUU.png)](https://i.stack.imgur.com/wRhUU.png) What am I doing wrong? I have: * Windows 2019 Server w/24 Cores, 256 GB of RAM, and 4 TB of usable space. * Generation 2, Secure Boot unchecked * 32 GB of RAM * 12 Virtual Processors * 1 TB of hard disk space (tried 25 GB, 125 GB, and 512 GB) * SCSI Controller * All but Guest Services checked. I must have forgotten a step. Any ideas?
2021/04/26
[ "https://askubuntu.com/questions/1334121", "https://askubuntu.com", "https://askubuntu.com/users/1225018/" ]
Use the `numbered` backup type when using `mv`: ``` numbered, t make numbered backups ``` [man mv](https://linux.die.net/man/1/mv) `mv --backup=TYPE`, with TYPE being one of these: * none, off never make backups (even if --backup is given) * numbered, t make numbered backups * existing, nil numbered if numbered backups exist, simple otherwise * simple, never always make simple backups A short example follows below.
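A small illustration (the file names are made up): if `archive/report.txt` already exists, the move keeps the old file as a numbered backup instead of silently overwriting it:

```
$ mv --backup=numbered report.txt archive/
$ ls archive/
report.txt  report.txt.~1~
```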
I would suggest using the Thunar file manager to rename and relocate files in bulk. You can install it with this command: `sudo apt-get install -y thunar`
31,560,712
I am digging for quite a while and I am wondering how do I open an HttpClient connection in Java (Android) and then close the socket(s) right away without getting CLOSE\_WAIT and TIME\_WAIT TCP statuses while I am checking network monitoring tools. What I am doing is (Found this solution on stackoverflow site): ``` String url = "http://example.com/myfile.php"; String result = null; InputStream is = null; StringBuilder sb = null; try { HttpClient httpclient = new DefaultHttpClient(); HttpPost httppost = new HttpPost(url); HttpResponse response = httpclient.execute(httppost); HttpEntity entity = response.getEntity(); is = entity.getContent(); } catch (Exception e) { Log.e("log_tag", "Error in http connection" + e.toString()); } // convert response to string try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); sb = new StringBuilder(); sb.append(reader.readLine() + "\n"); String line = "0"; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); result = sb.toString(); } catch (Exception e) { } Toast.makeText(getApplicationContext(), result, Toast.LENGTH_LONG).show(); ``` After I run this code - The PHP file is executed well, I get the response back to TOAST, BUT - when I analyze the networking environment of my mobile device with external network analyzer tool - I see that the connection(s) stay in CLOSE\_WAIT or/and TIME\_WAIT for about 1 minute and only then they move to CLOSED state. The problem is: I am calling the above function every ~2 to 5 seconds in an infinite loop, which result over time a huge amount of CLOSE\_WAITs and TIME\_WAITs - which affect the overall performance of my Android app, until it gets stuck and useless ! What I want to do is (And need your answer if possible): I wish to really close the connection RIGHT AWAY after I TOAST the response message without any open sockets. No TIME\_WAIT and no CLOSE\_WAIT. No left overs at all - close all communication IMMEDIATELY at the split second that I run code that should do so. I don't need the connection anymore until the next iteration of the loop. How can I accomplish that ? I have in mind that I don't want the application to halt or have poor performance over time, since it should run in a service/stay open forever. I would really appreciate if you could write simple code that work after I do copy-paste. I am new to Java and Android, so I will try to figure out the code that you write, so please keep it as simple as possible. Thanks a lot ! Question asker.
2015/07/22
[ "https://Stackoverflow.com/questions/31560712", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5142880/" ]
1. If you want to write "#002s", why not write it all at once? Maybe the serial device can't identify the control code when you write each character separately. ``` void Test_Serial::writeDataToSerialPort() { QByteArray input = QString("#002s").toLocal8Bit(); serial->write(input); } ``` 2. And there is no need for this reading part. ``` serial->waitForReadyRead(100); QByteArray output = serial->readAll(); ui->label_2->setText(output); ``` `Test_Serial::serialReceived` will be called anyway when you have the response from the serial device. 3. And you can catch errors on opening the port by using the `error` signal from `QSerialPort`: ``` connect(serial,SIGNAL(error(QSerialPort::SerialPortError)),this,SLOT(serialPortError(QSerialPort::SerialPortError))); void Test_Serial::serialPortError(QSerialPort::SerialPortError error) { //Print error etc. } ```
The issue ended up being that the `readyRead` signal is only emitted if there is data to read. However, I was sending the data too quickly for the external device to receive it. This meant that some data was lost, so the device never recognised it as a valid command and was still waiting for the message to finish, hence the "IRP\_MJ\_DEVICE\_CONTROL (IOCTL\_SERIAL\_WAIT\_ON\_MASK) UP STATUS\_CANCELLED COM1" error message upon closing the program. This also explains why there were no error messages with regard to writing data, and why the same program occasionally managed to read data and at other times failed (even without rebuilding the program, just re-running it). When the data was read, the processor was more heavily loaded, i.e. other programs were running in the background; the data was therefore transmitted more slowly, so the external device could recognise the commands and reply. A sketch of one way to pace the writes is given below.
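One possible way to pace the writes described above, as a rough Qt sketch. It assumes `serial` is the open QSerialPort from the question; the 100 ms and 50 ms figures are arbitrary and would need tuning to the device, and in a real application a QTimer would be preferable to blocking the GUI thread:

```cpp
#include <QThread>

// Sketch: give the device time to consume one command before the next is sent.
QByteArray cmd = QString("#002s").toLocal8Bit();
serial->write(cmd);
serial->waitForBytesWritten(100);   // block until the bytes have actually left the port
QThread::msleep(50);                // crude settling delay for the external device
```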
1,387,296
I am using `nl2br()` to convert `\n` characters to the `<br />` tag but I do not want more than one `<br />` tag at a time. For example, `Hello \n\n\n\n Everybody` should become `Hello <br /> Everybody`. How can I do this?
2009/09/07
[ "https://Stackoverflow.com/questions/1387296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169141/" ]
The most direct approach might be to first replace the multiple newlines with one using a simple regular expression: ``` nl2br(preg_replace("/\n+/", "\n", $input)); ```
I'd try replacing repeated newlines with single newlines using preg\_replace() first, then using nl2br to convert to HTML tags. `nl2br(preg_replace('/\n+/', "\n", $the_string))` should do the trick (untested). Note the double quotes around the replacement, so that "\n" is a real newline rather than a literal backslash and n.
1,387,296
I am using `nl2br()` to convert `\n` characters to the `<br />` tag but I do not want more than one `<br />` tag at a time. For example, `Hello \n\n\n\n Everybody` should become `Hello <br /> Everybody`. How can I do this?
2009/09/07
[ "https://Stackoverflow.com/questions/1387296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169141/" ]
The most direct approach might be to first replace the multiple newlines with one using a simple regular expression: ``` nl2br(preg_replace("/\n+/", "\n", $input)); ```
If you have php 5.2.4+ you can use preg\_replace and the vertical whitespace character type `\v` ``` $str = preg_replace('/\v+/','<br>', $str); ```
1,387,296
I am using `nl2br()` to convert `\n` characters to the `<br />` tag but I do not want more than one `<br />` tag at a time. For example, `Hello \n\n\n\n Everybody` should become `Hello <br /> Everybody`. How can I do this?
2009/09/07
[ "https://Stackoverflow.com/questions/1387296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169141/" ]
If you have php 5.2.4+ you can use preg\_replace and the vertical whitespace character type `\v` ``` $str = preg_replace('/\v+/','<br>', $str); ```
I'd try replacing repeated newlines with single newlines using preg\_replace() first, then using nl2br to convert to HTML tags. `nl2br(preg_replace('/\n+/', "\n", $the_string))` should do the trick (untested). Note the double quotes around the replacement, so that "\n" is a real newline rather than a literal backslash and n.
48,497,134
I have two DataFrames recommendations and movies. Columns rec1-rec3 in recommendations represent movie id from movies dataframe. ``` val recommendations: DataFrame = List( (0, 1, 2, 3), (1, 2, 3, 4), (2, 1, 3, 4)).toDF("id", "rec1", "rec2", "rec3") val movies = List( (1, "the Lord of the Rings"), (2, "Star Wars"), (3, "Star Trek"), (4, "Pulp Fiction")).toDF("id", "name") ``` What I want: ```none +---+------------------------+------------+------------+ | id| rec1| rec2| rec3| +---+------------------------+------------+------------+ | 0| the Lord of the Rings| Star Wars| Star Trek| | 1| Star Wars| Star Trek|Pulp Fiction| | 2| the Lord of the Rings| Star Trek| Star Trek| +---+------------------------+------------+------------+ ```
2018/01/29
[ "https://Stackoverflow.com/questions/48497134", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123568/" ]
We can also use the functions `stack()` and `pivot()` to arrive at your expected output, joining the two dataframes only once. ``` // First rename 'id' column to 'ids' avoid duplicate names further downstream val moviesRenamed = movies.withColumnRenamed("id", "ids") recommendations.select($"id", expr("stack(3, 'rec1', rec1, 'rec2', rec2, 'rec3', rec3) as (rec, movie_id)")) .where("rec is not null") .join(moviesRenamed, col("movie_id") === moviesRenamed.col("ids")) .groupBy("id") .pivot("rec") .agg(first("name")) .show() +---+--------------------+---------+------------+ | id| rec1| rec2| rec3| +---+--------------------+---------+------------+ | 0|the Lord of the R...|Star Wars| Star Trek| | 1| Star Wars|Star Trek|Pulp Fiction| | 2|the Lord of the R...|Star Trek|Pulp Fiction| +---+--------------------+---------+------------+ ```
I figured it out. You should create aliases for your columns just like in SQL. ``` val joined = recommendation .join(movies.select(col("id").as("id1"), 'name.as("n1")), 'id1 === recommendation.col("rec1")) .join(movies.select(col("id").as("id2"), 'name.as("n2")), 'id2 === recommendation.col("rec2")) .join(movies.select(col("id").as("id3"), 'name.as("n3")), 'id3 === recommendation.col("rec3")) .select('id, 'n1, 'n2, 'n3) joined.show() ``` Query will result in ``` +---+--------------------+---------+------------+ | id| n1| n2| n3| +---+--------------------+---------+------------+ | 0|the Lord of the R...|Star Wars| Star Trek| | 1| Star Wars|Star Trek|Pulp Fiction| | 2|the Lord of the R...|Star Trek|Pulp Fiction| +---+--------------------+---------+------------+ ```
31,979,254
in my app Im trying to give the user points every time they create an event. I am setting up a PFQuery to retrieve the current score then saving the required points back to the class. My problem is that I can't update the score once it has been created so I need a way to "Update" the current score data with the added score. This is my code: ``` // Give the User Points let saveScore = PFUser.currentUser() var query = PFQuery(className:"User") query.whereKey("score", equalTo: saveScore!) query.findObjectsInBackgroundWithBlock ({ objects, error in if error == nil { // The find succeeded. println("Successfully retrieved \(objects!.count) scores.") // Do something with the found objects if let objects = objects as? [PFObject] { for object in objects { let Score = object["score"] as! String println(object.objectId) let Points = ("100" + Score) saveScore!.setObject(Points, forKey: "score") saveScore!.saveInBackgroundWithBlock { (success: Bool,error: NSError?) -> Void in println("Score added to User."); } } } } else { // Log details of the failure println("Error: \(error!) \(error!.userInfo!)") } }) ``` Can anyone help? Thanks
2015/08/13
[ "https://Stackoverflow.com/questions/31979254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5002014/" ]
Since you already have the current user there's no reason to query it. However you should fetch it if needed to make sure you're working with the latest data. Once fetched set your score variable, add the 100 string and then save the updated score variable, like so: ``` if let currentUser = PFUser.currentUser() { currentUser.fetchIfNeededInBackgroundWithBlock({ (foundUser: PFObject?, error: NSError?) -> Void in // Get and update score if foundUser != nil { let score = foundUser!["score"] as! String let points = "100" + score foundUser!["score"] = points foundUser?.saveInBackgroundWithBlock({ (succeeded: Bool, error: NSError?) -> Void in if succeeded { println("score added to user") } }) } }) } ```
You need to query for the saved object and then just save like normal. It will update like so: ``` var query = PFQuery(className:"GameScore") query.getObjectInBackgroundWithId("xWMyZEGZ") { (gameScore: PFObject?, error: NSError?) -> Void in if error != nil { println(error) } else if let gameScore = gameScore { gameScore["cheatMode"] = true gameScore["score"] = 1338 gameScore.saveInBackground() } } ```
25,670,647
I have set my GAE web app with all the appropriate endpoints and deployed it locally on my ubuntu pc. I get connected to my home network, I found my computer's local IP, selected the correct port (8888 is the default for GAE web apps) tried to connect (from Chrome and my android device) but no luck. I get `Google Chrome's connection attempt to 192.168.1.2 was rejected. The website may be down, or your network may not be properly configured.` `Error code: ERR_CONNECTION_REFUSED` Where should I look? Is this a GAE, network or Ubuntu issue?
2014/09/04
[ "https://Stackoverflow.com/questions/25670647", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2583086/" ]
Make sure you set the --address=192.168.1.2 flag (for Java) or --host=192.168.1.2 (for Python) when you start up the app
192.168.1.2 is generally the router address, although it's possible you have set it as your local computer's IP. The --address flag will work as long as you have the correct LAN IP address; in your case, I think you got the wrong one. If you are on Windows, go to your cmd and type ipconfig. Look for the IPv4 Address field; it should have your local IP address, which looks similar to 192.168.1.2. Let me know if that works for you, thanks.
355,367
I have been using Ubuntu 11.10 on a VMware VM for some time now. I did a lot of customization on it (e.g. removed Unity, installed my applications, etc.). Last week I managed to get another laptop and installed Ubuntu 11.10 on it. I was wondering if it's possible to make these two Ubuntu systems identical, i.e. move everything (settings, applications, etc.) from my VM Ubuntu to my laptop Ubuntu.
2012/01/31
[ "https://serverfault.com/questions/355367", "https://serverfault.com", "https://serverfault.com/users/70984/" ]
You can get a list of all installed packages on machine A by running:
> `sudo dpkg --get-selections > packagelist.txt`
On machine B, you can mark all of those packages for installation by running:
> `sudo dpkg --set-selections < packagelist.txt`
and then install them with `sudo apt-get dselect-upgrade`.
1. Boot the laptop with a LiveCD 2. Make the appropriate partitions 3. Rsync your files to the laptop; use -AHaXx --numeric-ids 4. Mount the target system somewhere: / /boot /proc /dev /sys ... 5. chroot into it 6. Install grub on it 7. Done. :)
355,367
I have been using Ubuntu 11.10 on a VMware VM for some time now. I did a lot of customization on it (e.g. removed Unity, installed my applications, etc.). Last week I managed to get another laptop and installed Ubuntu 11.10 on it. I was wondering if it's possible to make these two Ubuntu systems identical, i.e. move everything (settings, applications, etc.) from my VM Ubuntu to my laptop Ubuntu.
2012/01/31
[ "https://serverfault.com/questions/355367", "https://serverfault.com", "https://serverfault.com/users/70984/" ]
You can get a list of all installed packages on machine A by running:
> `sudo dpkg --get-selections > packagelist.txt`
On machine B, you can mark all of those packages for installation by running:
> `sudo dpkg --set-selections < packagelist.txt`
and then install them with `sudo apt-get dselect-upgrade`.
You can try [clonezilla](http://clonezilla.org/). It supports several file systems.
355,367
I have been using Ubuntu 11.10 on a VMware VM for some time now. I did a lot of customization on it (e.g. removed Unity, installed my applications, etc.). Last week I managed to get another laptop and installed Ubuntu 11.10 on it. I was wondering if it's possible to make these two Ubuntu systems identical, i.e. move everything (settings, applications, etc.) from my VM Ubuntu to my laptop Ubuntu.
2012/01/31
[ "https://serverfault.com/questions/355367", "https://serverfault.com", "https://serverfault.com/users/70984/" ]
You can try [clonezilla](http://clonezilla.org/). It supports several file systems.
1. Boot the laptop with a LiveCD 2. Make the appropriate partitions 3. Rsync your files to the laptop; use -AHaXx --numeric-ids 4. Mount the target system somewhere: / /boot /proc /dev /sys ... 5. chroot into it 6. Install grub on it 7. Done. :)
355,367
I have been using Ubuntu 11.10 on a VMware VM for some time now. I did a lot of customization on it (e.g. removed Unity, installed my applications, etc.). Last week I managed to get another laptop and installed Ubuntu 11.10 on it. I was wondering if it's possible to make these two Ubuntu systems identical, i.e. move everything (settings, applications, etc.) from my VM Ubuntu to my laptop Ubuntu.
2012/01/31
[ "https://serverfault.com/questions/355367", "https://serverfault.com", "https://serverfault.com/users/70984/" ]
You may also be able to create a template of the original VM and move it over to the laptop. <http://communities.vmware.com/servlet/JiveServlet/downloadBody/2664-102-1-2324/V2P.doc> That document discusses V2P migration for Linux. I did this once, but it was from a VM to a desktop with almost identical hardware. There were networking issues that were resolved with a simple ifdown/ifup. Also, if anyone is interested, the Windows steps are: <http://www.vmware.com/support/v2p/doc/V2P_TechNote.pdf> I've never done them, however. EDIT: This isn't really making a template of the original VM; I guess that's a misnomer. However, this worked for me. Making a template and redeploying it on the laptop as a guest OS is another option, but that requires you to have VMware installed, which is probably not the path you're looking for.
1. Boot the laptop with a LiveCD 2. Make the appropriate partitions 3. Rsync your files to the laptop; use -AHaXx --numeric-ids 4. Mount the target system somewhere: / /boot /proc /dev /sys ... 5. chroot into it 6. Install grub on it 7. Done. :)
355,367
I have been using Ubuntu 11.10 on a VMware VM for some time now. I did a lot of customization on it (e.g. removed Unity, installed my applications, etc.). Last week I managed to get another laptop and installed Ubuntu 11.10 on it. I was wondering if it's possible to make these two Ubuntu systems identical, i.e. move everything (settings, applications, etc.) from my VM Ubuntu to my laptop Ubuntu.
2012/01/31
[ "https://serverfault.com/questions/355367", "https://serverfault.com", "https://serverfault.com/users/70984/" ]
You may also be able to create a template of the original VM and move it over to the laptop. <http://communities.vmware.com/servlet/JiveServlet/downloadBody/2664-102-1-2324/V2P.doc> That document discusses V2P migration for Linux. I did this once, but it was from a VM to a desktop with almost identical hardware. There were networking issues that were resolved with a simple ifdown/ifup. Also, if anyone is interested, the Windows steps are: <http://www.vmware.com/support/v2p/doc/V2P_TechNote.pdf> I've never done them, however. EDIT: This isn't really making a template of the original VM; I guess that's a misnomer. However, this worked for me. Making a template and redeploying it on the laptop as a guest OS is another option, but that requires you to have VMware installed, which is probably not the path you're looking for.
You can try [clonezilla](http://clonezilla.org/). It supports several file systems.
13,930,049
I was wondering if the following is possible, and hoping someone could help me. I would like to create a 'download zip' feature: when the individual clicks to download, the button fetches images from my external domain, bundles them into a zip, and then downloads it for them. I have looked into how to do this but can't find any good way of grabbing the images and forcing them into a zip for download. I was hoping someone could assist.
2012/12/18
[ "https://Stackoverflow.com/questions/13930049", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1152045/" ]
``` # define file array $files = array( 'https://www.google.com/images/logo.png', 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/53/Wikipedia-logo-en-big.png/220px-Wikipedia-logo-en-big.png', ); # create new zip object $zip = new ZipArchive(); # create a temp file & open it $tmp_file = tempnam('.', ''); $zip->open($tmp_file, ZipArchive::CREATE); # loop through each file foreach ($files as $file) { # download file $download_file = file_get_contents($file); #add it to the zip $zip->addFromString(basename($file), $download_file); } # close zip $zip->close(); # send the file to the browser as a download header('Content-disposition: attachment; filename="my file.zip"'); header('Content-type: application/zip'); readfile($tmp_file); unlink($tmp_file); ``` Note: This solution assumes you have `allow_url_fopen` enabled. Otherwise look into using cURL to download the file.
I hope I haven't misunderstood. <http://php.net/manual/en/book.zip.php> I haven't tried this, but it seems like what you're looking for.
```
<?php
$zip = new ZipArchive;
if ($zip->open('my_archive.zip') === TRUE) {
    $zip->addFile($url, basename($url));
    $zip->close();
    echo 'ok';
} else {
    echo 'failed';
}
?>
```
20,213,739
I am trying to animate a footer section in a webpage, and found out that the jQuery `.animate()` method doesn't 'animate' when using percentages (apparently is a bug still not fixed as of jQuery 1.10.2?). So, as suggested by [another answer on SO](https://stackoverflow.com/questions/6877081/jquery-animate-and-property-values-in-percentage) I should use notation for the parent's height, plus `'px'` to allow jQuery to animate properly. So I tried that, but this breaks the function :/ From my [JS Fiddle](http://jsfiddle.net/zYzT2/), I have this code: ``` $(function () { var open = false; $('#footbutton').click(function () { if (open === false) { $('#footcontent').animate({ height: '100%' }); open = true; } else { $('#footcontent').animate({ height: '0px' }); open = false; } }); }); ``` This works, except, the `#footcontent` div instantly pops open with no animation (but works fine when closing, as I have height set in px). Replacing `height: '100%'` with `height: $('footer').height() +'px'` breaks the function, where for some reason jQuery now sees the parent's height at 0px and cannot make any changes in height. How can the two be different? I've tried replacing `$('footer').height()` with `$(#footcontent).parent().height()` and it's a no-go as well. Surely I'm missing something here or just being dense...it is 4am my time........
2013/11/26
[ "https://Stackoverflow.com/questions/20213739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1077783/" ]
Just throwing out ideas there. The current setup looks like this ``` +------------+ +------------+ | CGI |----------->| Apache | | Program |<-----------| Server | +------------+ +------------+ ``` How about something like this ``` +------------+ +------------+ +------------+ | Daemon |--->| CGI |--->| Apache | | Program |<---| Passthru |<---| Server | +------------+ +------------+ +------------+ ``` Basically, move all of the functionality of your current program into a daemon that is launched once at startup. Then create a tiny passthru program for Apache to launch via CGI. The passthru program attaches itself to the daemon using IPC, either shared memory, or sockets. Everything that the passthru program receives on stdin, it forwards to the daemon. Everything that the passthru program receives from the daemon, it forwards to Apache on stdout. That way Apache can launch/kill the passthru program as it pleases, without affecting what you're trying to do on the backend.
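For illustration only, here is a rough sketch (not from the original answer) of what the passthru piece could look like in C. It assumes, purely as an example, that the daemon listens on a Unix-domain socket at `/tmp/mydaemon.sock`, and it glosses over CGI details such as `CONTENT_LENGTH` handling and error recovery:
```
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    /* Connect to the (hypothetical) daemon's Unix-domain socket */
    int sock = socket(AF_UNIX, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/mydaemon.sock", sizeof(addr.sun_path) - 1);
    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    char buf[4096];
    ssize_t n;

    /* Forward the request Apache hands us on stdin to the daemon */
    while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0)
        write(sock, buf, (size_t)n);
    shutdown(sock, SHUT_WR);   /* tell the daemon we are done sending */

    /* Relay the daemon's reply back to Apache on stdout */
    while ((n = read(sock, buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    close(sock);
    return 0;
}
```
The daemon side would accept connections on that socket and carry out the long-running work at its own pace, so it no longer matters when Apache reaps the CGI process.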
You might also try to catch the `TERM` signal and ignore it until you're done processing.
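If you go that route, a minimal sketch (not part of the original answer) of ignoring `TERM` with `sigaction` might look like this; note that Apache can still follow up with `KILL`, which cannot be caught or ignored:
```
#include <signal.h>
#include <stddef.h>

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = SIG_IGN;      /* ignore SIGTERM from here on */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGTERM, &sa, NULL);

    /* ... finish the long-running work here ... */
    return 0;
}
```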
20,213,739
I am trying to animate a footer section in a webpage, and found out that the jQuery `.animate()` method doesn't 'animate' when using percentages (apparently is a bug still not fixed as of jQuery 1.10.2?). So, as suggested by [another answer on SO](https://stackoverflow.com/questions/6877081/jquery-animate-and-property-values-in-percentage) I should use notation for the parent's height, plus `'px'` to allow jQuery to animate properly. So I tried that, but this breaks the function :/ From my [JS Fiddle](http://jsfiddle.net/zYzT2/), I have this code: ``` $(function () { var open = false; $('#footbutton').click(function () { if (open === false) { $('#footcontent').animate({ height: '100%' }); open = true; } else { $('#footcontent').animate({ height: '0px' }); open = false; } }); }); ``` This works, except, the `#footcontent` div instantly pops open with no animation (but works fine when closing, as I have height set in px). Replacing `height: '100%'` with `height: $('footer').height() +'px'` breaks the function, where for some reason jQuery now sees the parent's height at 0px and cannot make any changes in height. How can the two be different? I've tried replacing `$('footer').height()` with `$(#footcontent).parent().height()` and it's a no-go as well. Surely I'm missing something here or just being dense...it is 4am my time........
2013/11/26
[ "https://Stackoverflow.com/questions/20213739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1077783/" ]
Just throwing out ideas there. The current setup looks like this ``` +------------+ +------------+ | CGI |----------->| Apache | | Program |<-----------| Server | +------------+ +------------+ ``` How about something like this ``` +------------+ +------------+ +------------+ | Daemon |--->| CGI |--->| Apache | | Program |<---| Passthru |<---| Server | +------------+ +------------+ +------------+ ``` Basically, move all of the functionality of your current program into a daemon that is launched once at startup. Then create a tiny passthru program for Apache to launch via CGI. The passthru program attaches itself to the daemon using IPC, either shared memory, or sockets. Everything that the passthru program receives on stdin, it forwards to the daemon. Everything that the passthru program receives from the daemon, it forwards to Apache on stdout. That way Apache can launch/kill the passthru program as it pleases, without affecting what you're trying to do on the backend.
You might check out [FastCGI](https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html). It allows you to stick closely to the CGI programming model but decouples request lifetime from process lifetime. You could then do something like this: ``` while (FCGI_Accept() >= 0) { // handle normal request FCGI_Finish(); // do synchronous I/O outside of request lifetime } ```
20,213,739
I am trying to animate a footer section in a webpage, and found out that the jQuery `.animate()` method doesn't 'animate' when using percentages (apparently is a bug still not fixed as of jQuery 1.10.2?). So, as suggested by [another answer on SO](https://stackoverflow.com/questions/6877081/jquery-animate-and-property-values-in-percentage) I should use notation for the parent's height, plus `'px'` to allow jQuery to animate properly. So I tried that, but this breaks the function :/ From my [JS Fiddle](http://jsfiddle.net/zYzT2/), I have this code: ``` $(function () { var open = false; $('#footbutton').click(function () { if (open === false) { $('#footcontent').animate({ height: '100%' }); open = true; } else { $('#footcontent').animate({ height: '0px' }); open = false; } }); }); ``` This works, except, the `#footcontent` div instantly pops open with no animation (but works fine when closing, as I have height set in px). Replacing `height: '100%'` with `height: $('footer').height() +'px'` breaks the function, where for some reason jQuery now sees the parent's height at 0px and cannot make any changes in height. How can the two be different? I've tried replacing `$('footer').height()` with `$(#footcontent).parent().height()` and it's a no-go as well. Surely I'm missing something here or just being dense...it is 4am my time........
2013/11/26
[ "https://Stackoverflow.com/questions/20213739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1077783/" ]
An option to consider is: 1. CGI program generates output to Apache. 2. Before closing standard output, it forks. 3. The parent process closes its standard output and then uses one of the emergency exit functions (`_exit()` or `_Exit()` or one of their relatives) to bail out. This avoids flushing the other I/O streams. 4. The child processes closes file descriptor 1 with `close()` — it avoids using `fclose()` because the parent is writing to standard output. 5. The child process then deals with the rest of the writing work. You might need to isolate the child from its parent by setting its process group. Apache can only send the child process a signal via the process group (because it doesn't know the PID of the child process), so by disassociating from the original process group, your child process becomes immune to Apache sending it signals. Another option worth considering has the parent do the work, then fork. The parent process doesn't flush standard output; it just uses the emergency exit function. The child process isolates itself from the parent, and then closes standard output with `fclose()`, flushing any pending output. Apache sees the file is closed and continues on its merry way. The child process then does its cleanup. Again, setting the child into its own process group is crucial. The advantage of this is that the output is not closed until after the child has isolated itself, so Apache shouldn't be able to use timing coincidences to send a signal to the child (Apache's grandchild). You might even just isolate the process in its own process group before forking…that also eliminates the window of vulnerability for the child. You shouldn't do that isolation until you're about to fork and exit, though; otherwise, you defeat the protection mechanisms that Apache provides. 
--- ### Example code Here's a simulation of what you might do, in a source file `playcgi.c` compiled to a program `playcgi`: ``` #include <errno.h> #include <signal.h> #include <stdarg.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/wait.h> #include <time.h> #include <unistd.h> static void err_exit(const char *fmt, ...); static void be_childish(void); static void be_parental(pid_t pid, int fd[2]); int main(void) { int fd[2]; if (pipe(fd) != 0) err_exit("pipe()"); pid_t pid = fork(); if (pid < 0) err_exit("fork()"); else if (pid == 0) { dup2(fd[1], STDOUT_FILENO); close(fd[0]); close(fd[1]); be_childish(); } else be_parental(pid, fd); return 0; } static void be_parental(pid_t pid, int fd[2]) { close(fd[1]); char buffer[1024]; int nbytes; while ((nbytes = read(fd[0], buffer, sizeof(buffer))) > 0) write(STDOUT_FILENO, buffer, nbytes); kill(-pid, SIGTERM); struct timespec nap = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 }; nanosleep(&nap, 0); int status; pid_t corpse = waitpid(pid, &status, WNOHANG); if (corpse <= 0) { kill(-pid, SIGKILL); corpse = waitpid(pid, &status, 0); } printf("PID %5d died 0x%.4X\n", corpse, status); } static void be_childish(void) { /* Simulate activity on pipe */ for (int i = 0; i < 10; i++) printf("Data block %d from child (%d)\n", i, (int)getpid()); fflush(stdout); /* Create new pipe to coordinate between child and grandchild */ int fd[2]; if (pipe(fd) != 0) err_exit("child's pipe()"); pid_t pid = fork(); if (pid < 0) err_exit("child's fork()"); if (pid > 0) { char buffer[4]; fprintf(stderr, "Child (%d) waiting\n", (int)getpid()); close(fd[1]); read(fd[0], buffer, sizeof(buffer)); close(fd[0]); fprintf(stderr, "Child (%d) exiting\n", (int)getpid()); close(STDOUT_FILENO); _exit(0); } else { /* Grandchild continues - with no standard output */ close(STDOUT_FILENO); pid_t sid = setsid(); fprintf(stderr, "Grandchild (%d) in session %d\n", (int)getpid(), (int)sid); /* Let child know grandchild has set its own session */ close(fd[0]); close(fd[1]); struct timespec nap = { .tv_sec = 2, .tv_nsec = 0 }; nanosleep(&nap, 0); for (int i = 0; i < 10; i++) { fprintf(stderr, "Data block %d from grandchild (%d)\n", i, (int)getpid()); } fprintf(stderr, "Grandchild (%d) exiting\n", (int)getpid()); } } static void err_vexit(const char *fmt, va_list args) { int errnum = errno; vfprintf(stderr, fmt, args); if (fmt[strlen(fmt)-1] != '\n') putc('\n', stderr); if (errno != 0) fprintf(stderr, "%d: %s\n", errnum, strerror(errnum)); exit(EXIT_FAILURE); } static void err_exit(const char *fmt, ...) { va_list args; va_start(args, fmt); err_vexit(fmt, args); va_end(args); } ``` The functions `main()` and `be_parental()` simulate what Apache might do. There's a pipe which becomes the standard output of the CGI child process. The parent reads from the pipe, then sends a terminate signal to the child, and snoozes for 10 milliseconds, and then looks for a corpse. (This is probably the least convincing part of the code, but…) If it doesn't find one, it sends a SIGKILL signal and collects the corpse of the dead child. It reports on how the child died and returns (and in this simulation, exits successfully). The function `be_childish()` is the CGI child process. It writes some output to its standard output, and flushes standard output. It then creates a pipe so that the child and grandchild can synchronize their activity. The child forks. 
The surviving child reports that it is waiting on the grandchild, closes the write end of the pipe, reads on the read end of the pipe (for data that will never arrive, so the read will return 0 indicating EOF). It closes the read end of the pipe, reports (on standard error) that it will exit, closes its standard output, and does exit. Meanwhile, the grandchild closes standard output, and then makes itself into a session leader with [`setsid()`](http://pubs.opengroup.org/onlinepubs/9699919799/functions/setsid.html). It reports on its new status, closes both ends of the pipe, thus releasing its parent (the orginal child process) so it can exit. It then takes a 2-second sleep — plenty of time for the parent and grandparent to exit – and then writes some information to standard error, an 'exiting' message and exits. ### Sample output ``` $ ./playcgi Data block 0 from child (60867) Data block 1 from child (60867) Data block 2 from child (60867) Data block 3 from child (60867) Data block 4 from child (60867) Data block 5 from child (60867) Data block 6 from child (60867) Data block 7 from child (60867) Data block 8 from child (60867) Data block 9 from child (60867) Child (60867) waiting Grandchild (60868) in session 60868 Child (60867) exiting PID 60867 died 0x0000 $ Data block 0 from grandchild (60868) Data block 1 from grandchild (60868) Data block 2 from grandchild (60868) Data block 3 from grandchild (60868) Data block 4 from grandchild (60868) Data block 5 from grandchild (60868) Data block 6 from grandchild (60868) Data block 7 from grandchild (60868) Data block 8 from grandchild (60868) Data block 9 from grandchild (60868) Grandchild (60868) exiting ``` You can hit return to enter an empty command line and get another command prompt. There's a perceptible pause between the 'PID 60867 died 0x0000' message (and the prompt that appears) and the output of the 'Data block 0 from grandchild (60868)' message. This shows that the child continues despite the parent dying, etc. You can dink around with the child snoozing (so it gets signalled), or with the parent process sending signals to the process group (`kill(-pid, SIGTERM)` and `kill(-pid, SIGKILL)`), etc. But I believe the grandchild will survive to do its writing.
You might also try to catch the `TERM` signal and ignore it until you're done processing.
20,213,739
I am trying to animate a footer section in a webpage, and found out that the jQuery `.animate()` method doesn't 'animate' when using percentages (apparently is a bug still not fixed as of jQuery 1.10.2?). So, as suggested by [another answer on SO](https://stackoverflow.com/questions/6877081/jquery-animate-and-property-values-in-percentage) I should use notation for the parent's height, plus `'px'` to allow jQuery to animate properly. So I tried that, but this breaks the function :/ From my [JS Fiddle](http://jsfiddle.net/zYzT2/), I have this code: ``` $(function () { var open = false; $('#footbutton').click(function () { if (open === false) { $('#footcontent').animate({ height: '100%' }); open = true; } else { $('#footcontent').animate({ height: '0px' }); open = false; } }); }); ``` This works, except, the `#footcontent` div instantly pops open with no animation (but works fine when closing, as I have height set in px). Replacing `height: '100%'` with `height: $('footer').height() +'px'` breaks the function, where for some reason jQuery now sees the parent's height at 0px and cannot make any changes in height. How can the two be different? I've tried replacing `$('footer').height()` with `$(#footcontent).parent().height()` and it's a no-go as well. Surely I'm missing something here or just being dense...it is 4am my time........
2013/11/26
[ "https://Stackoverflow.com/questions/20213739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1077783/" ]
An option to consider is: 1. CGI program generates output to Apache. 2. Before closing standard output, it forks. 3. The parent process closes its standard output and then uses one of the emergency exit functions (`_exit()` or `_Exit()` or one of their relatives) to bail out. This avoids flushing the other I/O streams. 4. The child processes closes file descriptor 1 with `close()` — it avoids using `fclose()` because the parent is writing to standard output. 5. The child process then deals with the rest of the writing work. You might need to isolate the child from its parent by setting its process group. Apache can only send the child process a signal via the process group (because it doesn't know the PID of the child process), so by disassociating from the original process group, your child process becomes immune to Apache sending it signals. Another option worth considering has the parent do the work, then fork. The parent process doesn't flush standard output; it just uses the emergency exit function. The child process isolates itself from the parent, and then closes standard output with `fclose()`, flushing any pending output. Apache sees the file is closed and continues on its merry way. The child process then does its cleanup. Again, setting the child into its own process group is crucial. The advantage of this is that the output is not closed until after the child has isolated itself, so Apache shouldn't be able to use timing coincidences to send a signal to the child (Apache's grandchild). You might even just isolate the process in its own process group before forking…that also eliminates the window of vulnerability for the child. You shouldn't do that isolation until you're about to fork and exit, though; otherwise, you defeat the protection mechanisms that Apache provides. 
--- ### Example code Here's a simulation of what you might do, in a source file `playcgi.c` compiled to a program `playcgi`: ``` #include <errno.h> #include <signal.h> #include <stdarg.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/wait.h> #include <time.h> #include <unistd.h> static void err_exit(const char *fmt, ...); static void be_childish(void); static void be_parental(pid_t pid, int fd[2]); int main(void) { int fd[2]; if (pipe(fd) != 0) err_exit("pipe()"); pid_t pid = fork(); if (pid < 0) err_exit("fork()"); else if (pid == 0) { dup2(fd[1], STDOUT_FILENO); close(fd[0]); close(fd[1]); be_childish(); } else be_parental(pid, fd); return 0; } static void be_parental(pid_t pid, int fd[2]) { close(fd[1]); char buffer[1024]; int nbytes; while ((nbytes = read(fd[0], buffer, sizeof(buffer))) > 0) write(STDOUT_FILENO, buffer, nbytes); kill(-pid, SIGTERM); struct timespec nap = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 }; nanosleep(&nap, 0); int status; pid_t corpse = waitpid(pid, &status, WNOHANG); if (corpse <= 0) { kill(-pid, SIGKILL); corpse = waitpid(pid, &status, 0); } printf("PID %5d died 0x%.4X\n", corpse, status); } static void be_childish(void) { /* Simulate activity on pipe */ for (int i = 0; i < 10; i++) printf("Data block %d from child (%d)\n", i, (int)getpid()); fflush(stdout); /* Create new pipe to coordinate between child and grandchild */ int fd[2]; if (pipe(fd) != 0) err_exit("child's pipe()"); pid_t pid = fork(); if (pid < 0) err_exit("child's fork()"); if (pid > 0) { char buffer[4]; fprintf(stderr, "Child (%d) waiting\n", (int)getpid()); close(fd[1]); read(fd[0], buffer, sizeof(buffer)); close(fd[0]); fprintf(stderr, "Child (%d) exiting\n", (int)getpid()); close(STDOUT_FILENO); _exit(0); } else { /* Grandchild continues - with no standard output */ close(STDOUT_FILENO); pid_t sid = setsid(); fprintf(stderr, "Grandchild (%d) in session %d\n", (int)getpid(), (int)sid); /* Let child know grandchild has set its own session */ close(fd[0]); close(fd[1]); struct timespec nap = { .tv_sec = 2, .tv_nsec = 0 }; nanosleep(&nap, 0); for (int i = 0; i < 10; i++) { fprintf(stderr, "Data block %d from grandchild (%d)\n", i, (int)getpid()); } fprintf(stderr, "Grandchild (%d) exiting\n", (int)getpid()); } } static void err_vexit(const char *fmt, va_list args) { int errnum = errno; vfprintf(stderr, fmt, args); if (fmt[strlen(fmt)-1] != '\n') putc('\n', stderr); if (errno != 0) fprintf(stderr, "%d: %s\n", errnum, strerror(errnum)); exit(EXIT_FAILURE); } static void err_exit(const char *fmt, ...) { va_list args; va_start(args, fmt); err_vexit(fmt, args); va_end(args); } ``` The functions `main()` and `be_parental()` simulate what Apache might do. There's a pipe which becomes the standard output of the CGI child process. The parent reads from the pipe, then sends a terminate signal to the child, and snoozes for 10 milliseconds, and then looks for a corpse. (This is probably the least convincing part of the code, but…) If it doesn't find one, it sends a SIGKILL signal and collects the corpse of the dead child. It reports on how the child died and returns (and in this simulation, exits successfully). The function `be_childish()` is the CGI child process. It writes some output to its standard output, and flushes standard output. It then creates a pipe so that the child and grandchild can synchronize their activity. The child forks. 
The surviving child reports that it is waiting on the grandchild, closes the write end of the pipe, reads on the read end of the pipe (for data that will never arrive, so the read will return 0 indicating EOF). It closes the read end of the pipe, reports (on standard error) that it will exit, closes its standard output, and does exit. Meanwhile, the grandchild closes standard output, and then makes itself into a session leader with [`setsid()`](http://pubs.opengroup.org/onlinepubs/9699919799/functions/setsid.html). It reports on its new status, closes both ends of the pipe, thus releasing its parent (the orginal child process) so it can exit. It then takes a 2-second sleep — plenty of time for the parent and grandparent to exit – and then writes some information to standard error, an 'exiting' message and exits. ### Sample output ``` $ ./playcgi Data block 0 from child (60867) Data block 1 from child (60867) Data block 2 from child (60867) Data block 3 from child (60867) Data block 4 from child (60867) Data block 5 from child (60867) Data block 6 from child (60867) Data block 7 from child (60867) Data block 8 from child (60867) Data block 9 from child (60867) Child (60867) waiting Grandchild (60868) in session 60868 Child (60867) exiting PID 60867 died 0x0000 $ Data block 0 from grandchild (60868) Data block 1 from grandchild (60868) Data block 2 from grandchild (60868) Data block 3 from grandchild (60868) Data block 4 from grandchild (60868) Data block 5 from grandchild (60868) Data block 6 from grandchild (60868) Data block 7 from grandchild (60868) Data block 8 from grandchild (60868) Data block 9 from grandchild (60868) Grandchild (60868) exiting ``` You can hit return to enter an empty command line and get another command prompt. There's a perceptible pause between the 'PID 60867 died 0x0000' message (and the prompt that appears) and the output of the 'Data block 0 from grandchild (60868)' message. This shows that the child continues despite the parent dying, etc. You can dink around with the child snoozing (so it gets signalled), or with the parent process sending signals to the process group (`kill(-pid, SIGTERM)` and `kill(-pid, SIGKILL)`), etc. But I believe the grandchild will survive to do its writing.
You might check out [FastCGI](https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html). It allows you to stick closely to the CGI programming model but decouples request lifetime from process lifetime. You could then do something like this: ``` while (FCGI_Accept() >= 0) { // handle normal request FCGI_Finish(); // do synchronous I/O outside of request lifetime } ```
20,213,739
I am trying to animate a footer section in a webpage, and found out that the jQuery `.animate()` method doesn't 'animate' when using percentages (apparently is a bug still not fixed as of jQuery 1.10.2?). So, as suggested by [another answer on SO](https://stackoverflow.com/questions/6877081/jquery-animate-and-property-values-in-percentage) I should use notation for the parent's height, plus `'px'` to allow jQuery to animate properly. So I tried that, but this breaks the function :/ From my [JS Fiddle](http://jsfiddle.net/zYzT2/), I have this code: ``` $(function () { var open = false; $('#footbutton').click(function () { if (open === false) { $('#footcontent').animate({ height: '100%' }); open = true; } else { $('#footcontent').animate({ height: '0px' }); open = false; } }); }); ``` This works, except, the `#footcontent` div instantly pops open with no animation (but works fine when closing, as I have height set in px). Replacing `height: '100%'` with `height: $('footer').height() +'px'` breaks the function, where for some reason jQuery now sees the parent's height at 0px and cannot make any changes in height. How can the two be different? I've tried replacing `$('footer').height()` with `$(#footcontent).parent().height()` and it's a no-go as well. Surely I'm missing something here or just being dense...it is 4am my time........
2013/11/26
[ "https://Stackoverflow.com/questions/20213739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1077783/" ]
There is no way to do this when using mod\_cgi and a single process. It's heavy on resources, but a generally accepted method of achieving this is called a double fork. It goes something like this:
```
pid_t kid, grandkid;
if ((kid = fork())) {
    waitpid(kid, NULL, 0);
} else if ((grandkid = fork())) {
    exit(0);
} else {
    // code here
    // do something long-lasting
    exit(0);
}
```
[Adapted from this Perl code](http://modperlbook.org/html/10-2-4-Avoiding-Zombie-Processes.html#pmodperl-CHP-10-EX-18)
You might also try to catch the `TERM` signal and ignore it until you're done processing.
20,213,739
I am trying to animate a footer section in a webpage, and found out that the jQuery `.animate()` method doesn't 'animate' when using percentages (apparently is a bug still not fixed as of jQuery 1.10.2?). So, as suggested by [another answer on SO](https://stackoverflow.com/questions/6877081/jquery-animate-and-property-values-in-percentage) I should use notation for the parent's height, plus `'px'` to allow jQuery to animate properly. So I tried that, but this breaks the function :/ From my [JS Fiddle](http://jsfiddle.net/zYzT2/), I have this code: ``` $(function () { var open = false; $('#footbutton').click(function () { if (open === false) { $('#footcontent').animate({ height: '100%' }); open = true; } else { $('#footcontent').animate({ height: '0px' }); open = false; } }); }); ``` This works, except, the `#footcontent` div instantly pops open with no animation (but works fine when closing, as I have height set in px). Replacing `height: '100%'` with `height: $('footer').height() +'px'` breaks the function, where for some reason jQuery now sees the parent's height at 0px and cannot make any changes in height. How can the two be different? I've tried replacing `$('footer').height()` with `$(#footcontent).parent().height()` and it's a no-go as well. Surely I'm missing something here or just being dense...it is 4am my time........
2013/11/26
[ "https://Stackoverflow.com/questions/20213739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1077783/" ]
There is no way to do this when using mod\_cgi and a single process. It's heavy on resources, but a generally accepted method of achieving this is called a double fork. It goes something like this:
```
pid_t kid, grandkid;
if ((kid = fork())) {
    waitpid(kid, NULL, 0);
} else if ((grandkid = fork())) {
    exit(0);
} else {
    // code here
    // do something long-lasting
    exit(0);
}
```
[Adapted from this Perl code](http://modperlbook.org/html/10-2-4-Avoiding-Zombie-Processes.html#pmodperl-CHP-10-EX-18)
You might check out [FastCGI](https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html). It allows you to stick closely to the CGI programming model but decouples request lifetime from process lifetime. You could then do something like this: ``` while (FCGI_Accept() >= 0) { // handle normal request FCGI_Finish(); // do synchronous I/O outside of request lifetime } ```
51,605,079
I'm playing with co-routines, and they're often described as useful for pipelines. I found a lecture from Berkeley that was very helpful ( <http://wla.berkeley.edu/~cs61a/fa11/lectures/streams.html#coroutines> ), but there's one thing I'm having trouble with. In that lecture, there's a diagram where a pipeline forks, then re-combines later. If order doesn't matter, recombining is easy, the consumer has one yield, but two producers are send()ing to it. But what if order matters? What if I want strict alternation (get a value from left fork, get a value from right fork, lather, rinse, repeat)? Is this possible? Trivial recombine: ``` def producer1(ncr): while True: ncr.send(0) def producer2(ncr): while True: ncr.send(1) def combine(): while True: s=(yield) print(s) chain = combine() chain.__next__() producer1(chain) producer2(chain) ``` I get an output of 0 1 0 1 etc, but I'm pretty sure that's a side effect of scheduling. Is there a way to guarantee the ordering, like a yield-from-1,yield-from-2? To be clear, I know of `yield from` and `__await__`, but I haven't understood them yet.
2018/07/31
[ "https://Stackoverflow.com/questions/51605079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6454568/" ]
This isn't difficult if you "pull" through your pipeline rather than "push":
```
def producer1():
    while True:
        yield 0

def producer2():
    while True:
        yield 1

def combine(*producers):
    while True:
        for producer in producers:
            val = next(producer)
            print(val)

combine(producer1(), producer2())
```
This should reliably produce alternating 0s and 1s. You can also have the final consumer (the thing that does work with each value, printing in this case) act as a receiver with no reference to the producers if you really want:
```
def producer1():
    while True:
        yield 0

def producer2():
    while True:
        yield 1

def combine_to_push(co, *producers):
    while True:
        for producer in producers:
            s = next(producer)
            co.send(s)

def consumer():
    while True:
        val = (yield)
        print(val)

co = consumer()
co.__next__()
combine_to_push(co, producer1(), producer2())
```
I think I figured out how to do it. It works with trivial pipelines, anyhow. Here's my combine:
```
from collections import deque
from functools import wraps

def coroutine(func):
    # standard priming decorator (assumed; not shown in the original post):
    # advances the generator to its first yield so it is ready to receive send()
    @wraps(func)
    def primer(*args, **kwargs):
        gen = func(*args, **kwargs)
        next(gen)
        return gen
    return primer

class combiner():
    def __init__(self, n, ncr):
        self.q = [deque() for i in range(n)]
        self.n = n
        self.x = 0
        self.ncr = ncr

    @coroutine
    def receiver(self, n):
        while True:
            s = (yield)
            self.q[n].append(s)
            self.sender()

    def sender(self):
        while True:
            if self.q[self.x]:
                self.ncr.send(self.q[self.x].popleft())
                self.x = (self.x + 1) % self.n
            else:
                break
```
This will round-robin between n streams. Basically, combiner.receiver() is a coroutine that takes data from a stream and puts it into a queue. There's a unique queue per stream. combiner.sender() will flush out as much of the queue as it can manage and then return. I'm a little worried that calling a function from a generator that then does a send might be *bad*, but I could just roll sender into receiver and that issue goes away....
55,599,038
I have a customers table and an orders table. I want to display the customer and all of his/her order dates on one row, rather than multiple rows. Here is what I have and what I'm looking for: ![Picture of both tables, current results, what i'm looking for](https://i.stack.imgur.com/IQclF.png) Basic code to get results: ``` select customerid, name, orderdate from customer_table c inner join order_table o on c.customerid = o.customerid ```
2019/04/09
[ "https://Stackoverflow.com/questions/55599038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8260859/" ]
This will work; the only thing you can't do this way is show the dates in separate columns (with NULLs for the missing ones):
```
select customerid, name,
       LISTAGG(orderdate, ', ') WITHIN GROUP (ORDER BY orderdate) as orderdates
from (select c.customerid, name, orderdate
      from customer_table c
      inner join order_table o on c.customerid = o.customerid)
group by customerid, name;
```
Try this if you use MS SQL Server: ``` -- 1st, get the number of columns declare @columnnumber int = 1 select @columnnumber = max(a.count) from ( select c.cid,c.name, count(o.orderdate) as count from customer_table c join order_table o on c.cid = o.cid group by c.cid,c.name)a print @columnnumber -- Compose the column names for Pivot declare @columnname varchar(max) = '' declare @int int = 1 while @int <= @columnnumber begin set @columnname = @columnname + '[date' + cast(@int as varchar(10))+ '],' set @int = @int + 1 end set @columnname = '('+left(@columnname,len(@columnname)-1)+')' print @columnname --Pivot !!! + Dynamic SQL declare @str varchar(max) set @str = 'SELECT * FROM (SELECT c.cid,c.name, o.orderdate,concat(''date'',row_number() over (partition by c.cid,c.name order by o.orderdate)) rnk FROM customer_table c join order_table o on c.cid = o.cid) AS s PIVOT ( min(s.orderdate) FOR s.rnk IN '+ @columnname+ ' ) AS PivotTable' print @str execute (@str) ``` Please change the column name. I used cid as your customerid. Output: ``` cid name date1 date2 date3 12 John 2017-03-04 2017-05-26 2017-12-01 4 Nancy 2017-02-01 NULL NULL ```
55,599,038
I have a customers table and an orders table. I want to display the customer and all of his/her order dates on one row, rather than multiple rows. Here is what I have and what I'm looking for: ![Picture of both tables, current results, what i'm looking for](https://i.stack.imgur.com/IQclF.png) Basic code to get results: ``` select customerid, name, orderdate from customer_table c inner join order_table o on c.customerid = o.customerid ```
2019/04/09
[ "https://Stackoverflow.com/questions/55599038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8260859/" ]
Try this if you use MS SQL Server: ``` -- 1st, get the number of columns declare @columnnumber int = 1 select @columnnumber = max(a.count) from ( select c.cid,c.name, count(o.orderdate) as count from customer_table c join order_table o on c.cid = o.cid group by c.cid,c.name)a print @columnnumber -- Compose the column names for Pivot declare @columnname varchar(max) = '' declare @int int = 1 while @int <= @columnnumber begin set @columnname = @columnname + '[date' + cast(@int as varchar(10))+ '],' set @int = @int + 1 end set @columnname = '('+left(@columnname,len(@columnname)-1)+')' print @columnname --Pivot !!! + Dynamic SQL declare @str varchar(max) set @str = 'SELECT * FROM (SELECT c.cid,c.name, o.orderdate,concat(''date'',row_number() over (partition by c.cid,c.name order by o.orderdate)) rnk FROM customer_table c join order_table o on c.cid = o.cid) AS s PIVOT ( min(s.orderdate) FOR s.rnk IN '+ @columnname+ ' ) AS PivotTable' print @str execute (@str) ``` Please change the column name. I used cid as your customerid. Output: ``` cid name date1 date2 date3 12 John 2017-03-04 2017-05-26 2017-12-01 4 Nancy 2017-02-01 NULL NULL ```
First create a view like this:
```
create view order_view as
select count(*) over (partition by customerId order by orderDate) as ord,
       CustomerId, orderdate
from order_table
```
then you can use this query:
```
select c.customerid,
       o1.orderdate,
       o2.orderdate,
       o3.orderdate
       . . .
       o12.orderdate
from customer_table c
left join order_view o1 on c.customerid = o1.customerid and o1.ord = 1
left join order_view o2 on c.customerid = o2.customerid and o2.ord = 2
left join order_view o3 on c.customerid = o3.customerid and o3.ord = 3
. . .
left join order_view o12 on c.customerid = o12.customerid and o12.ord = 12
```
55,599,038
I have a customers table and an orders table. I want to display the customer and all of his/her order dates on one row, rather than multiple rows. Here is what I have and what I'm looking for: ![Picture of both tables, current results, what i'm looking for](https://i.stack.imgur.com/IQclF.png) Basic code to get results: ``` select customerid, name, orderdate from customer_table c inner join order_table o on c.customerid = o.customerid ```
2019/04/09
[ "https://Stackoverflow.com/questions/55599038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8260859/" ]
This will work; the only thing you can't do this way is show the dates in separate columns (with NULLs for the missing ones):
```
select customerid, name,
       LISTAGG(orderdate, ', ') WITHIN GROUP (ORDER BY orderdate) as orderdates
from (select c.customerid, name, orderdate
      from customer_table c
      inner join order_table o on c.customerid = o.customerid)
group by customerid, name;
```
You can use the following because you're bound by the 12 order limit. If you expand to an unknown number of orders with no upper limit, then you would need to use dynamic SQL and even then it would be tricky, because you would also need to dynamically create unique column names. This query batches by customer and sets the order values. They will be in dated order and set as NULL if there's no more orders. It kinda assumes that at least one customer has 12 orders. You'll get a column of all NULLS if that's not the case ``` IF OBJECT_ID('tempdb..#Results') IS NOT NULL DROP TABLE #Results IF OBJECT_ID('tempdb..#sortedRows') IS NOT NULL DROP TABLE #SortedRows DECLARE @CustomerList TABLE(CustomerID INT, RowNo INT); INSERT INTO @CustomerList SELECT DISTINCT CustomerID, ROW_NUMBER() OVER(ORDER BY CustomerID) RowNo FROM Customer_Table (NOLOCK) DECLARE @Count INT = (SELECT COUNT(DISTINCT CustomerID) RowNumber FROM @CustomerList) DECLARE @Counter INT = 0 DECLARE @CustToProcess INT CREATE TABLE #Results(CustomerID INT, [Name] VARCHAR(50), OrderDate1 DATETIME, OrderDate2 DATETIME, OrderDate3 DATETIME, OrderDate4 DATETIME, OrderDate5 DATETIME, OrderDate6 DATETIME, OrderDate7 DATETIME, OrderDate8 DATETIME, OrderDate9 DATETIME, OrderDate10 DATETIME, OrderDate11 DATETIME, OrderDate12 DATETIME) INSERT INTO #Results(CustomerID, Name) SELECT DISTINCT CustomerID, Name FROM Customer_Table SELECT ROW_NUMBER() OVER(PARTITION BY c.CustomerID ORDER BY OrderDate) RowNo, c.CustomerID, c.Name, t.OrderDate INTO #SortedRows FROM Customer_Table c (NOLOCK) JOIN Order_Table t ON c.CustomerID = t.CustomerID WHILE @Counter < @Count BEGIN SET @Counter += 1 SET @CustToProcess = (SELECT CustomerID FROM @CustomerList WHERE RowNo = @Counter) PRINT @CustToProcess SELECT * INTO #RowsForProcessing FROM #SortedRows WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate1 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 1) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate2 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 2) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate3 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 3) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate4 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 4) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate5 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 5) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate6 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 6) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate7 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 7) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate8 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 8) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate9 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 9) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate10 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 10) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate11 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 11) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate12 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 12) WHERE CustomerID = @CustToProcess DROP Table #RowsForProcessing END SELECT * FROM #Results ```
55,599,038
I have a customers table and an orders table. I want to display the customer and all of his/her order dates on one row, rather than multiple rows. Here is what I have and what I'm looking for: ![Picture of both tables, current results, what i'm looking for](https://i.stack.imgur.com/IQclF.png) Basic code to get results: ``` select customerid, name, orderdate from customer_table c inner join order_table o on c.customerid = o.customerid ```
2019/04/09
[ "https://Stackoverflow.com/questions/55599038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8260859/" ]
This will work; the only thing you can't do this way is show the dates in separate columns (with NULLs for the missing ones):
```
select customerid, name,
       LISTAGG(orderdate, ', ') WITHIN GROUP (ORDER BY orderdate) as orderdates
from (select c.customerid, name, orderdate
      from customer_table c
      inner join order_table o on c.customerid = o.customerid)
group by customerid, name;
```
First create a view like this:
```
create view order_view as
select count(*) over (partition by customerId order by orderDate) as ord,
       CustomerId, orderdate
from order_table
```
then you can use this query:
```
select c.customerid,
       o1.orderdate,
       o2.orderdate,
       o3.orderdate
       . . .
       o12.orderdate
from customer_table c
left join order_view o1 on c.customerid = o1.customerid and o1.ord = 1
left join order_view o2 on c.customerid = o2.customerid and o2.ord = 2
left join order_view o3 on c.customerid = o3.customerid and o3.ord = 3
. . .
left join order_view o12 on c.customerid = o12.customerid and o12.ord = 12
```
55,599,038
I have a customers table and an orders table. I want to display the customer and all of his/her order dates on one row, rather than multiple rows. Here is what I have and what I'm looking for: ![Picture of both tables, current results, what i'm looking for](https://i.stack.imgur.com/IQclF.png) Basic code to get results: ``` select customerid, name, orderdate from customer_table c inner join order_table o on c.customerid = o.customerid ```
2019/04/09
[ "https://Stackoverflow.com/questions/55599038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8260859/" ]
This will work; the only thing you can't do this way is show the dates in separate columns (with NULLs for the missing ones):
```
select customerid, name,
       LISTAGG(orderdate, ', ') WITHIN GROUP (ORDER BY orderdate) as orderdates
from (select c.customerid, name, orderdate
      from customer_table c
      inner join order_table o on c.customerid = o.customerid)
group by customerid, name;
```
Like others said, it's in principle impossible (in pure SQL) to generate this not knowing how many orders you have for a single customer. I like @nikhil-sugandh's answer, it works great if you are OK with having all orders comma-separated in a single column. If you insist on having multiple columns, you can build on that answer, by replacing `LISTAGG` with `ARRAY_AGG` and postprocessing it. It will be MUCH more efficient than e.g. the proposed solution with multiple joins. You can also use `ARRAY_SLICE` to handle cases when there are more orders than you are prepared for. Example (note, I added an extra order to demonstrate handling more-than-expected orders ``` create or replace table customer_table(customerId int, name varchar) as select * from values (12,'John'),(4,'Nancy'); create or replace table order_table(orderId int, customerId int, orderDate date) as select * from values (1,12,'3/4/2017'),(2,12,'5/26/2017'),(3,12,'12/1/2017'),(4,4,'2/1/2017'),(5,12,'1/1/2019'); with subq as ( select c.customerid, name, array_agg(orderdate) within group (order by orderdate) as orders from customer_table c inner join order_table o on c.customerid = o.customerid group by c.customerid, c.name ) select customerid, name, orders[0]::date AS order1, orders[1]::date AS order2, array_to_string(array_slice(orders, 2, 999), ' , ') AS overflow from subq; ------------+-------+------------+------------+-------------------------+ CUSTOMERID | NAME | ORDER1 | ORDER2 | OVERFLOW | ------------+-------+------------+------------+-------------------------+ 4 | Nancy | 2017-02-01 | [NULL] | | 12 | John | 2017-03-04 | 2017-05-26 | 2017-12-01 , 2019-01-01 | ------------+-------+------------+------------+-------------------------+ ```
55,599,038
I have a customers table and an orders table. I want to display the customer and all of his/her order dates on one row, rather than multiple rows. Here is what I have and what I'm looking for: ![Picture of both tables, current results, what i'm looking for](https://i.stack.imgur.com/IQclF.png) Basic code to get results: ``` select customerid, name, orderdate from customer_table c inner join order_table o on c.customerid = o.customerid ```
2019/04/09
[ "https://Stackoverflow.com/questions/55599038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8260859/" ]
You can use the following because you're bound by the 12 order limit. If you expand to an unknown number of orders with no upper limit, then you would need to use dynamic SQL and even then it would be tricky, because you would also need to dynamically create unique column names. This query batches by customer and sets the order values. They will be in dated order and set as NULL if there's no more orders. It kinda assumes that at least one customer has 12 orders. You'll get a column of all NULLS if that's not the case ``` IF OBJECT_ID('tempdb..#Results') IS NOT NULL DROP TABLE #Results IF OBJECT_ID('tempdb..#sortedRows') IS NOT NULL DROP TABLE #SortedRows DECLARE @CustomerList TABLE(CustomerID INT, RowNo INT); INSERT INTO @CustomerList SELECT DISTINCT CustomerID, ROW_NUMBER() OVER(ORDER BY CustomerID) RowNo FROM Customer_Table (NOLOCK) DECLARE @Count INT = (SELECT COUNT(DISTINCT CustomerID) RowNumber FROM @CustomerList) DECLARE @Counter INT = 0 DECLARE @CustToProcess INT CREATE TABLE #Results(CustomerID INT, [Name] VARCHAR(50), OrderDate1 DATETIME, OrderDate2 DATETIME, OrderDate3 DATETIME, OrderDate4 DATETIME, OrderDate5 DATETIME, OrderDate6 DATETIME, OrderDate7 DATETIME, OrderDate8 DATETIME, OrderDate9 DATETIME, OrderDate10 DATETIME, OrderDate11 DATETIME, OrderDate12 DATETIME) INSERT INTO #Results(CustomerID, Name) SELECT DISTINCT CustomerID, Name FROM Customer_Table SELECT ROW_NUMBER() OVER(PARTITION BY c.CustomerID ORDER BY OrderDate) RowNo, c.CustomerID, c.Name, t.OrderDate INTO #SortedRows FROM Customer_Table c (NOLOCK) JOIN Order_Table t ON c.CustomerID = t.CustomerID WHILE @Counter < @Count BEGIN SET @Counter += 1 SET @CustToProcess = (SELECT CustomerID FROM @CustomerList WHERE RowNo = @Counter) PRINT @CustToProcess SELECT * INTO #RowsForProcessing FROM #SortedRows WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate1 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 1) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate2 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 2) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate3 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 3) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate4 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 4) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate5 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 5) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate6 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 6) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate7 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 7) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate8 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 8) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate9 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 9) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate10 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 10) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate11 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 11) WHERE CustomerID = @CustToProcess UPDATE #Results SET OrderDate12 = (SELECT OrderDate FROM #RowsForProcessing WHERE Rowno = 12) WHERE CustomerID = @CustToProcess DROP Table #RowsForProcessing END SELECT * FROM #Results ```
First create a view like this:
```
create view order_view as
select count(*) over (partition by customerId order by orderDate) as ord,
       CustomerId, orderdate
from order_table
```
then you can use this query:
```
select c.customerid,
       o1.orderdate,
       o2.orderdate,
       o3.orderdate
       . . .
       o12.orderdate
from customer_table c
left join order_view o1 on c.customerid = o1.customerid and o1.ord = 1
left join order_view o2 on c.customerid = o2.customerid and o2.ord = 2
left join order_view o3 on c.customerid = o3.customerid and o3.ord = 3
. . .
left join order_view o12 on c.customerid = o12.customerid and o12.ord = 12
```
55,599,038
I have a customers table and an orders table. I want to display the customer and all of his/her order dates on one row, rather than multiple rows. Here is what I have and what I'm looking for: ![Picture of both tables, current results, what i'm looking for](https://i.stack.imgur.com/IQclF.png) Basic code to get results: ``` select customerid, name, orderdate from customer_table c inner join order_table o on c.customerid = o.customerid ```
2019/04/09
[ "https://Stackoverflow.com/questions/55599038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8260859/" ]
Like others said, it's in principle impossible (in pure SQL) to generate this not knowing how many orders you have for a single customer.

I like @nikhil-sugandh's answer, it works great if you are OK with having all orders comma-separated in a single column.

If you insist on having multiple columns, you can build on that answer, by replacing `LISTAGG` with `ARRAY_AGG` and postprocessing it. It will be MUCH more efficient than e.g. the proposed solution with multiple joins. You can also use `ARRAY_SLICE` to handle cases when there are more orders than you are prepared for.

Example (note: I added an extra order to demonstrate handling more-than-expected orders):

```
create or replace table customer_table(customerId int, name varchar) 
as select * from values (12,'John'),(4,'Nancy');
create or replace table order_table(orderId int, customerId int, orderDate date) 
as select * from values (1,12,'3/4/2017'),(2,12,'5/26/2017'),(3,12,'12/1/2017'),(4,4,'2/1/2017'),(5,12,'1/1/2019');

with subq as (
  select c.customerid, name, array_agg(orderdate) within group (order by orderdate) as orders
  from customer_table c inner join order_table o on c.customerid = o.customerid
  group by c.customerid, c.name
)
select customerid, name, 
  orders[0]::date AS order1, 
  orders[1]::date AS order2, 
  array_to_string(array_slice(orders, 2, 999), ' , ') AS overflow 
from subq;

------------+-------+------------+------------+-------------------------+
 CUSTOMERID | NAME  | ORDER1     | ORDER2     | OVERFLOW                |
------------+-------+------------+------------+-------------------------+
 4          | Nancy | 2017-02-01 | [NULL]     |                         |
 12         | John  | 2017-03-04 | 2017-05-26 | 2017-12-01 , 2019-01-01 |
------------+-------+------------+------------+-------------------------+
```
First create a view like this:

```
create view order_view as
select row_number() over (partition by customerId order by orderDate) as ord,
       CustomerId, orderdate
from order_table 
```

Then you can use this query:

```
select c.customerid, 
       o1.orderdate,
       o2.orderdate,
       o3.orderdate,
       .
       .
       .
       o12.orderdate
from customer_table c
left join order_view o1 on c.customerid = o1.customerid and o1.ord = 1
left join order_view o2 on c.customerid = o2.customerid and o2.ord = 2
left join order_view o3 on c.customerid = o3.customerid and o3.ord = 3
.
.
.
left join order_view o12 on c.customerid = o12.customerid and o12.ord = 12
```
20,165,576
I want to add 10 points to each of my students grades with PLSQL. ``` UPDATE AverageView SET AverageModifier = 10 WHERE COURSE_ID = 'INFO101' AND GROUP_ID = 101 AND SEMESTER = 'SUMER14'; ``` So when I try to update the view, I want this trigger activated and I want to use INSTEAD OF UPDATE and modify the real tables. Like this : ``` CREATE OR REPLACE TRIGGER ChangeAverage INSTEAD OF UPDATE ON AverageView FOR EACH ROW BEGIN UPDATE INSCRIPTIONS SET grade = (grade + 10) WHERE COURSE_ID = :NEW.COURSE_ID AND GROUP_ID = :NEW.GROUP_ID AND SEMESTER = :NEW.SEMESTER ; END; / ``` I added a +10 ''manually'' but eventually it will be a just a variable. ``` UPDATE INSCRIPTIONS SET grade = (grade + modifier) .... ``` I think it doesn't work because there is more than one grades to update, and I am stuck there. When I remove the grade and set a static value it ''kind of'' work, but it's setting all the student grades to 10. ``` CREATE OR REPLACE TRIGGER ChangeAverage INSTEAD OF UPDATE ON AverageView FOR EACH ROW BEGIN UPDATE INSCRIPTIONS SET grade = 10 WHERE COURSE_ID = :NEW.COURSE_ID AND GROUP_ID = :NEW.GROUP_ID AND SEMESTER = :NEW.SEMESTER ; END; / ``` Should I use a loop and a cursor ? My inscriptions table looks like this : ``` STUDENT_ID CHAR(12) NOT NULL, COURSE_ID CHAR(12) NOT NULL, GROUP_ID INTEGER NOT NULL, SEMESTER CHAR(12) NOT NULL, REGISTRATION_DATE DATE NOT NULL, GRADE INTEGER, ``` My AverageView is : ``` CREATE OR REPLACE VIEW AverageView AS SELECT COURSE_ID, GROUP_ID, SEMESTER, AVG(GRADE) AS Average FROM Inscriptions GROUP BY COURSE_ID, GROUP_ID, SEMESTER / ``` From what I know of the views, they cant be modified. Instead I modify the inscription table. Lets say my inscriptions table look like this ``` Student A, INFO101, 101, SUMER14, ramdom_date, 70 Student B, INFO101, 101, SUMER14, ramdom_date, 50 ``` My view works perfectly : It outputs an average of 60 for this course. Now, with my trigger, I want to boost the average. ``` UPDATE AverageView SET AverageModifier = 10 WHERE COURSE_ID = 'INFO101' AND GROUP_ID = 101 AND SEMESTER = 'SUMER14'; ``` But from what I know, I cannot modify the content of an view, so I want to add 10 (for now) to every grade in the table inscriptions. So the trigger will result something like that : ``` Student A, INFO101, 101, SUMER14, ramdom_date, 80 Student B, INFO101, 101, SUMER14, ramdom_date, 60 ``` Best regards
2013/11/23
[ "https://Stackoverflow.com/questions/20165576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1352575/" ]
Try ``` CREATE OR REPLACE TRIGGER ChangeAverage INSTEAD OF UPDATE ON AverageView FOR EACH ROW BEGIN UPDATE INSCRIPTIONS SET grade = (:old.grade + 10) WHERE COURSE_ID = :NEW.COURSE_ID AND GROUP_ID = :NEW.GROUP_ID AND SEMESTER = :NEW.SEMESTER ; END; / ```
It turns out that I had a constraint which had not been updated and was keeping me from updating within certain parameters. I guess we can delete the question!
9,271,724
I'm trying to create a function that will check to make sure all the input a user provided is numeric. ``` function wholeFormValid() { var inp = document.getElementsByClassName('userInput'); //userInput is a class name i provide to all my non-hidden input fields. //I have over 20 hidden values (I need to use hidden values to store state in a session). //Wanted to print out what my function is getting. I keep seeing undefined values. var string= "justToInitializeThis"; for(var m in inp) { string = string + " " + inp.value; } alert(string); //Actual function that will be used once I track down the bug. for(var i in inp) { if(inp.value != "") { if(!(/^[0-9]+$/.test(inp.value))) { return false; } } } return true; ``` } The function does get the right input fields back, I can tell from my different pages and they vary in the amount of input a user can give. But what i can't understand is why all my values are returned as null instead of what the user entered. I'm fairly new to HTML & Javascript and just needed a second pair of eyes on this :) Thanks in Advance.
2012/02/14
[ "https://Stackoverflow.com/questions/9271724", "https://Stackoverflow.com", "https://Stackoverflow.com/users/855127/" ]
Use this ``` var inp = document.getElementsByClassName('userInput'); var string= "justToInitializeThis"; for(var i=0; i < inp.length; i++) { string = string + " " + inp[i].value; } alert(string); ``` Same for another loop too
***Change:*** ``` for(var m in inp) { string = string + " " + inp.value; } ``` ***To:*** ``` for(var m in inp) { string = string + " " + inp[m].value; } ``` > > A for ... in loop iterates over the properties of an object > > > <https://developer.mozilla.org/en/JavaScript/Reference/Statements/for...in>
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
MS-Access doesn't support ROW\_NUMBER(). Use TOP 1: ``` SELECT TOP 1 * FROM [MyTable] ORDER BY [MyIdentityCOlumn] ``` If you need the 15th row - MS-Access has no simple, built-in, way to do this. You can simulate the rownumber by using reverse nested ordering to get this: ``` SELECT TOP 1 * FROM ( SELECT TOP 15 * FROM [MyTable] ORDER BY [MyIdentityColumn] ) t ORDER BY [MyIdentityColumn] DESC ```
Though this is an old question, this has worked for me, but I've never tested its efficiency... ``` SELECT (SELECT COUNT(t1.SourceID) FROM [SourceTable] t1 WHERE t1.SourceID<t2.SourceID) AS RowID, t2.field2, t2.field3, t2.field4, t2.field5 FROM SourceTable AS t2 ORDER BY t2.SourceID; ``` Some advantages of this method: * It doesn't rely on the order of the table, either - the `RowID` is calculated on its actual value and those that are less than it. * This method can be applied to any (primary key) type (e.g. `Number`, `String` or `Date`). * This method is *fairly* SQL agnostic, or requires very little adaptation. **Notable Disadvantages** Though this will work with practically any data type, I must emphasise that, for some, it *may* create other problems. For instance, with strings, consider: | ID | Description | ROWID | | --- | --- | --- | | aaa | Aardvark | 1 | | bbb | Bear | 2 | | ccc | Canary | 3 | If I were to insert: `bba Boar`, then the `Canary` `RowID` will change... | ID | Description | ROWID | | --- | --- | --- | | aaa | Aardvark | 1 | | bbb | Bear | 2 | | bba | Boar | 3 | | ccc | Canary | 4 | You cannot rely on the IDs remaining in place unless the table remains unsorted.
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
MS-Access doesn't support ROW\_NUMBER(). Use TOP 1: ``` SELECT TOP 1 * FROM [MyTable] ORDER BY [MyIdentityCOlumn] ``` If you need the 15th row - MS-Access has no simple, built-in, way to do this. You can simulate the rownumber by using reverse nested ordering to get this: ``` SELECT TOP 1 * FROM ( SELECT TOP 15 * FROM [MyTable] ORDER BY [MyIdentityColumn] ) t ORDER BY [MyIdentityColumn] DESC ```
Another way to assign a row number in a query is to use the `DCount` function. ``` SELECT *, DCount("[ID]","[mytable]","[ID]<=" & [ID]) AS row_id FROM [mytable] WHERE row_id=15 ```
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
Another way to assign a row number in a query is to use the `DCount` function. ``` SELECT *, DCount("[ID]","[mytable]","[ID]<=" & [ID]) AS row_id FROM [mytable] WHERE row_id=15 ```
I needed the best *x* results of points per team. Ranking does not solve this problem when there are results with equal points, so I need a `recordnumber`. I made a VBA function in Access to create a `recordnumber` that resets on ID change. You then have to query this query with `WHERE recordnumber <= x` to get the points per team.

NB: Access changes the record number

1. when you query the query filtered on record number
2. when you filter out some results
3. when you change the sort order

That is not what I thought would happen. I solved this by using a temporary table and saving the `recordnumbers` and keys, or an extra field in the table.

```
SELECT ID, Points, RecordNumberOffId([ID]) AS Recordnumber
FROM Team
ORDER BY ID ASC, Points DESC;
```

It uses 3 module-level variables to remember values between calls

```vb
Dim PreviousID As Long
Dim PreviousRecordNumber As Long
Dim TimeLastID As Date

Public Function RecordNumberOffID(ID As Long) As Long
'ID is sortgroup identity
    'Reset if last call longer than nn seconds in the past
    If Time() - TimeLastID > 0.0003 Then '0.000277778 = 1 second
        PreviousID = 0
        PreviousRecordNumber = 0
    End If

    If ID <> PreviousID Then
        PreviousRecordNumber = 0
        PreviousID = ID
    End If
    PreviousRecordNumber = PreviousRecordNumber + 1
    RecordNumberOffID = PreviousRecordNumber
    TimeLastID = Time()
End Function
```
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
Another way to assign a row number in a query is to use the `DCount` function. ``` SELECT *, DCount("[ID]","[mytable]","[ID]<=" & [ID]) AS row_id FROM [mytable] WHERE row_id=15 ```
Though this is an old question, this has worked for me, but I've never tested its efficiency... ``` SELECT (SELECT COUNT(t1.SourceID) FROM [SourceTable] t1 WHERE t1.SourceID<t2.SourceID) AS RowID, t2.field2, t2.field3, t2.field4, t2.field5 FROM SourceTable AS t2 ORDER BY t2.SourceID; ``` Some advantages of this method: * It doesn't rely on the order of the table, either - the `RowID` is calculated on its actual value and those that are less than it. * This method can be applied to any (primary key) type (e.g. `Number`, `String` or `Date`). * This method is *fairly* SQL agnostic, or requires very little adaptation. **Notable Disadvantages** Though this will work with practically any data type, I must emphasise that, for some, it *may* create other problems. For instance, with strings, consider: | ID | Description | ROWID | | --- | --- | --- | | aaa | Aardvark | 1 | | bbb | Bear | 2 | | ccc | Canary | 3 | If I were to insert: `bba Boar`, then the `Canary` `RowID` will change... | ID | Description | ROWID | | --- | --- | --- | | aaa | Aardvark | 1 | | bbb | Bear | 2 | | bba | Boar | 3 | | ccc | Canary | 4 | You cannot rely on the IDs remaining in place unless the table remains unsorted.
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
I needed the best *x* results of points per team. Ranking does not solve this problem when there are results with equal points, so I need a `recordnumber`. I made a VBA function in Access to create a `recordnumber` that resets on ID change. You then have to query this query with `WHERE recordnumber <= x` to get the points per team.

NB: Access changes the record number

1. when you query the query filtered on record number
2. when you filter out some results
3. when you change the sort order

That is not what I thought would happen. I solved this by using a temporary table and saving the `recordnumbers` and keys, or an extra field in the table.

```
SELECT ID, Points, RecordNumberOffId([ID]) AS Recordnumber
FROM Team
ORDER BY ID ASC, Points DESC;
```

It uses 3 module-level variables to remember values between calls

```vb
Dim PreviousID As Long
Dim PreviousRecordNumber As Long
Dim TimeLastID As Date

Public Function RecordNumberOffID(ID As Long) As Long
'ID is sortgroup identity
    'Reset if last call longer than nn seconds in the past
    If Time() - TimeLastID > 0.0003 Then '0.000277778 = 1 second
        PreviousID = 0
        PreviousRecordNumber = 0
    End If

    If ID <> PreviousID Then
        PreviousRecordNumber = 0
        PreviousID = ID
    End If
    PreviousRecordNumber = PreviousRecordNumber + 1
    RecordNumberOffID = PreviousRecordNumber
    TimeLastID = Time()
End Function
```
I might be late. Simply add a new field ID to the table with type AutoNumber. This will generate unique IDs and can be used in Access too.
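If you would rather do that with DDL than through the table designer, a minimal sketch in Access SQL could look like this; the table and column names are placeholders, `COUNTER` is the Access SQL name for the AutoNumber type, and the statement can be run from a query's SQL view or via `CurrentDb.Execute`:

```
ALTER TABLE MyTable ADD COLUMN ID COUNTER;
```

Keep in mind that an AutoNumber only gives you a stable unique identifier; it is not a gap-free row number for an arbitrary sort order, so it only replaces `ROW_NUMBER()` when all you need is a per-row key.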
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
Another way to assign a row number in a query is to use the `DCount` function. ``` SELECT *, DCount("[ID]","[mytable]","[ID]<=" & [ID]) AS row_id FROM [mytable] WHERE row_id=15 ```
I might be late. Simply add a new field ID to the table with type AutoNumber. This will generate unique IDs and can be used in Access too.
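If you would rather do that with DDL than through the table designer, a minimal sketch in Access SQL could look like this; the table and column names are placeholders, `COUNTER` is the Access SQL name for the AutoNumber type, and the statement can be run from a query's SQL view or via `CurrentDb.Execute`:

```
ALTER TABLE MyTable ADD COLUMN ID COUNTER;
```

Keep in mind that an AutoNumber only gives you a stable unique identifier; it is not a gap-free row number for an arbitrary sort order, so it only replaces `ROW_NUMBER()` when all you need is a per-row key.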
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
Though this is an old question, this has worked for me, but I've never tested its efficiency... ``` SELECT (SELECT COUNT(t1.SourceID) FROM [SourceTable] t1 WHERE t1.SourceID<t2.SourceID) AS RowID, t2.field2, t2.field3, t2.field4, t2.field5 FROM SourceTable AS t2 ORDER BY t2.SourceID; ``` Some advantages of this method: * It doesn't rely on the order of the table, either - the `RowID` is calculated on its actual value and those that are less than it. * This method can be applied to any (primary key) type (e.g. `Number`, `String` or `Date`). * This method is *fairly* SQL agnostic, or requires very little adaptation. **Notable Disadvantages** Though this will work with practically any data type, I must emphasise that, for some, it *may* create other problems. For instance, with strings, consider: | ID | Description | ROWID | | --- | --- | --- | | aaa | Aardvark | 1 | | bbb | Bear | 2 | | ccc | Canary | 3 | If I were to insert: `bba Boar`, then the `Canary` `RowID` will change... | ID | Description | ROWID | | --- | --- | --- | | aaa | Aardvark | 1 | | bbb | Bear | 2 | | bba | Boar | 3 | | ccc | Canary | 4 | You cannot rely on the IDs remaining in place unless the table remains unsorted.
Thanks for your solutions above! DCount did the trick for me too! I had to use a combination of date columns and a unique identifier for the sorting portion of it (as well as some additional conditions), so here is what I ended up doing:

1) I had to check if DateColumnA was null, then check if DateColumnB was null, then use DateColumnC; then, if multiple records have the same date value, they all end up with the same id!

2) So, I figured I would use the integer unique ID of the table, and add it to the date as "minutes". This will always provide different results.

3) Finally, the logic above results in the count starting at 0... so just add 1!

```
SELECT 1+DCount("[RequestID]","[Request]","Archived=0 and ProjectPhase <> 2 and  iif(isnull(DateColumnA)=true,iif(isnull(DateColumnB)=true,DateColumnC,DateColumnB),DateColumnA)+(RequestID/3600) < #" & iif(isnull(DateColumnA)=true,iif(isnull(DateColumnB)=true,DateColumnC,DateColumnB),DateColumnA) + (RequestID/3600) & "#") AS RowID
FROM Request
ORDER BY 1
```

I hope this helps you out!
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
Another way to assign a row number in a query is to use the `DCount` function. ``` SELECT *, DCount("[ID]","[mytable]","[ID]<=" & [ID]) AS row_id FROM [mytable] WHERE row_id=15 ```
Thanks for your solutions above! DCount did the trick for me too! I had to use a combination of date columns and a unique identifier for the sorting portion of it (as well as some additional conditions), so here is what I ended up doing:

1) I had to check if DateColumnA was null, then check if DateColumnB was null, then use DateColumnC; then, if multiple records have the same date value, they all end up with the same id!

2) So, I figured I would use the integer unique ID of the table, and add it to the date as "minutes". This will always provide different results.

3) Finally, the logic above results in the count starting at 0... so just add 1!

```
SELECT 1+DCount("[RequestID]","[Request]","Archived=0 and ProjectPhase <> 2 and  iif(isnull(DateColumnA)=true,iif(isnull(DateColumnB)=true,DateColumnC,DateColumnB),DateColumnA)+(RequestID/3600) < #" & iif(isnull(DateColumnA)=true,iif(isnull(DateColumnB)=true,DateColumnC,DateColumnB),DateColumnA) + (RequestID/3600) & "#") AS RowID
FROM Request
ORDER BY 1
```

I hope this helps you out!
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
Another way to assign a row number in a query is to use the `DCount` function. ``` SELECT *, DCount("[ID]","[mytable]","[ID]<=" & [ID]) AS row_id FROM [mytable] WHERE row_id=15 ```
Since I am sorting alphabetically on a string field and NOT by ID, the Count(\*) and DCOUNT() approaches didn't work for me. My solution was to write a function that returns the Row Number: ``` Option Compare Database Option Explicit Private Rst As Recordset Public Function GetRowNum(ID As Long) As Long If Rst Is Nothing Then Set Rst = CurrentDb.OpenRecordset("SELECT ID FROM FileList ORDER BY RealName") End If Rst.FindFirst "ID=" & ID GetRowNum = Rst.AbsolutePosition + 1 ' Release the Rst 1 sec after it's last use '------------------------------------------ SetTimer Application.hWndAccessApp, 1, 1000, AddressOf ReleaseRst End Function Private Sub ReleaseRst(ByVal hWnd As LongPtr, ByVal uMsg As Long, ByVal nIDEEvent As Long, ByVal dwTime As Long) KillTimer Application.hWndAccessApp, 1 Set Rst = Nothing End Sub ```
17,279,320
I always use this query in sql server to get Row number in a table: ``` SELECT * FROM (SELECT *, Row_number() OVER( ORDER BY [myidentitycolumn]) RowID FROM mytable) sub WHERE rowid = 15 ``` Now I am working in Access 2010 and this seems to be not working. Is there any replacement for this query in Access?
2013/06/24
[ "https://Stackoverflow.com/questions/17279320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1901125/" ]
I needed the best *x* results of points per team. Ranking does not solve this problem when there are results with equal points, so I need a `recordnumber`. I made a VBA function in Access to create a `recordnumber` that resets on ID change. You then have to query this query with `WHERE recordnumber <= x` to get the points per team.

NB: Access changes the record number

1. when you query the query filtered on record number
2. when you filter out some results
3. when you change the sort order

That is not what I thought would happen. I solved this by using a temporary table and saving the `recordnumbers` and keys, or an extra field in the table.

```
SELECT ID, Points, RecordNumberOffId([ID]) AS Recordnumber
FROM Team
ORDER BY ID ASC, Points DESC;
```

It uses 3 module-level variables to remember values between calls

```vb
Dim PreviousID As Long
Dim PreviousRecordNumber As Long
Dim TimeLastID As Date

Public Function RecordNumberOffID(ID As Long) As Long
'ID is sortgroup identity
    'Reset if last call longer than nn seconds in the past
    If Time() - TimeLastID > 0.0003 Then '0.000277778 = 1 second
        PreviousID = 0
        PreviousRecordNumber = 0
    End If

    If ID <> PreviousID Then
        PreviousRecordNumber = 0
        PreviousID = ID
    End If
    PreviousRecordNumber = PreviousRecordNumber + 1
    RecordNumberOffID = PreviousRecordNumber
    TimeLastID = Time()
End Function
```
Thanks for your solutions above! DCount did the trick for me too! I had to use a combination of date columns and a unique identifier for the sorting portion of it (as well as some additional conditions), so here is what I ended up doing:

1) I had to check if DateColumnA was null, then check if DateColumnB was null, then use DateColumnC; then, if multiple records have the same date value, they all end up with the same id!

2) So, I figured I would use the integer unique ID of the table, and add it to the date as "minutes". This will always provide different results.

3) Finally, the logic above results in the count starting at 0... so just add 1!

```
SELECT 1+DCount("[RequestID]","[Request]","Archived=0 and ProjectPhase <> 2 and  iif(isnull(DateColumnA)=true,iif(isnull(DateColumnB)=true,DateColumnC,DateColumnB),DateColumnA)+(RequestID/3600) < #" & iif(isnull(DateColumnA)=true,iif(isnull(DateColumnB)=true,DateColumnC,DateColumnB),DateColumnA) + (RequestID/3600) & "#") AS RowID
FROM Request
ORDER BY 1
```

I hope this helps you out!
22,284,380
I am new to reactive programming and confused about composing observables that have dependencies. Here is the scenario: There are two observables **A**, **B**. Observable **A** depends on a value emitted by **B**. (Therefore A needs to observe B). Is there a way to create an Observable **C** that composes **A** and **B**, and emits **V**? I am just looking for pointers in the RxJava [documentation](https://github.com/Netflix/RxJava/wiki).
2014/03/09
[ "https://Stackoverflow.com/questions/22284380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3398673/" ]
You question is a bit vague on how A depends on B so I'll try to give a several examples of how to combine observables. Example - **A** cannot be created without **B** - Use map() ```java public class B { public final int value; public B(int value) { this.value = value; } } public class A { public final B b; public A(B b) { this.b = b; } } public Observable<B> createObservableB() { return Observable.from(new B(0), new B(1), new B(2), new B(3)); } public Observable<A> createObservableA() { return createObservableB() .map(new Func1<B, A>() { @Override public A call(B b) { return new A(b); } }); } ``` Example - Each occurrence of **B** can create zero or more **A** - Use flatMap() ```java public class B { public final int value; public B(int value) { this.value = value; } } public class A { public final int value; public A(int value) { this.value = value; } } public Observable<B> createObservableB() { return Observable.from(new B(0), new B(1), new B(2), new B(3)); } public Observable<A> createObservableA() { return createObservableB() .flatMap(new Func1<B, Observable<? extends A>>() { @Override public Observable<? extends A> call(final B b) { return Observable.create(new Observable.OnSubscribe<A>() { @Override public void call(Subscriber<? super A> subscriber) { for (int i = 0; i < b.value; i++) { subscriber.onNext(new A(i)); } subscriber.onCompleted(); } }); } }); } ``` I'm not exactly sure what you are asking with Observables **C** and **V** so let's look at a few more ways to combine observables. Example - Combine each pair of items emitted by two observables - Use zip() ```java public class A { public final int value; public A(int value) { this.value = value; } } public class B { public final int value; public B(int value) { this.value = value; } } public class C { private final A a; private final B b; public C(A a, B b) { this.a = a; this.b = b; } } public Observable<B> createObservableB() { return Observable.from(new B(0), new B(1), new B(2), new B(3)); } public Observable<A> createObservableA() { return Observable.from(new A(0), new A(1), new A(2), new A(3)); } public Observable<C> createObservableC() { return Observable.zip(createObservableA(), createObservableB(), new Func2<A, B, C>() { @Override public C call(A a, B b) { return new C(a, b); } } ); } ``` Example - Combine the last item of two Observables - Use combineLatest() ```java // Use the same class definitions from previous example. public Observable<C> createObservableC1() { return Observable.combineLatest(createObservableA(), createObservableB(), new Func2<A, B, C>() { @Override public C call(A a, B b) { return new C(a, b); } } ); } ```
I am also new to reactive programming, and just put together some code that may be interesting for your case > > A needs to observe B > > > ``` import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.testng.annotations.Test; import rx.Observable; import rx.Subscriber; import rx.functions.Func1; import java.util.concurrent.atomic.AtomicBoolean; import static org.testng.Assert.assertTrue; public class Q22284380TestCase { private static final Logger LOGGER = LoggerFactory.getLogger( Q22284380TestCase.class); private AtomicBoolean completed = new AtomicBoolean(false); @Test public void testName() throws Exception { final Observable.OnSubscribe<Integer> onSubProduceTwoValues = new Observable.OnSubscribe<Integer>() { @Override public void call(final Subscriber<? super Integer> subscriber) { final Thread thread = new Thread(new Runnable() { public Integer i = 0; @Override public void run() { final Integer max = 2; while (i < max) { subscriber.onNext(i); i++; } subscriber.onCompleted(); } }); thread.start(); } }; final Observable<Integer> values = Observable.create(onSubProduceTwoValues); final Observable<Integer> byTwoMultiplier = values .flatMap(new Func1<Integer, Observable<Integer>>() { @Override public Observable<Integer> call(Integer aValue) { return doubleIt(aValue); } }); byTwoMultiplier.subscribe(new Subscriber<Integer>() { @Override public void onNext(Integer a) { LOGGER.info("" + a); } @Override public void onCompleted() { completed.set(true); } @Override public void onError(Throwable e) { LOGGER.error(e.getMessage()); } }); Thread.sleep(1000L); assertTrue(completed.get()); } private Observable<Integer> doubleIt(final Integer value) { return Observable.create(new Observable.OnSubscribe<Integer>() { @Override public void call(final Subscriber<? super Integer> subscriber) { final Thread thread = new Thread(new Runnable() { @Override public void run() { try { subscriber.onNext(value * 2); subscriber.onCompleted(); } catch (Throwable e) { subscriber.onError(e); } } }); thread.start(); } }); } } ``` Having a producer of values, it just uses [flatMap](https://github.com/Netflix/RxJava/wiki/Transforming-Observables#wiki-flatmap) to apply a doubleIt function to the output. To do something different, you can maybe read [zip](https://github.com/Netflix/RxJava/wiki/Combining-Observables#combine-observables-together-via-a-specified-function-and-emit-items-based-on-the-results-of-this-function) if you want to have a V that is a combination of A and B.
22,284,380
I am new to reactive programming and confused about composing observables that have dependencies. Here is the scenario: There are two observables **A**, **B**. Observable **A** depends on a value emitted by **B**. (Therefore A needs to observe B). Is there a way to create an Observable **C** that composes **A** and **B**, and emits **V**? I am just looking for pointers in the RxJava [documentation](https://github.com/Netflix/RxJava/wiki).
2014/03/09
[ "https://Stackoverflow.com/questions/22284380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3398673/" ]
You question is a bit vague on how A depends on B so I'll try to give a several examples of how to combine observables. Example - **A** cannot be created without **B** - Use map() ```java public class B { public final int value; public B(int value) { this.value = value; } } public class A { public final B b; public A(B b) { this.b = b; } } public Observable<B> createObservableB() { return Observable.from(new B(0), new B(1), new B(2), new B(3)); } public Observable<A> createObservableA() { return createObservableB() .map(new Func1<B, A>() { @Override public A call(B b) { return new A(b); } }); } ``` Example - Each occurrence of **B** can create zero or more **A** - Use flatMap() ```java public class B { public final int value; public B(int value) { this.value = value; } } public class A { public final int value; public A(int value) { this.value = value; } } public Observable<B> createObservableB() { return Observable.from(new B(0), new B(1), new B(2), new B(3)); } public Observable<A> createObservableA() { return createObservableB() .flatMap(new Func1<B, Observable<? extends A>>() { @Override public Observable<? extends A> call(final B b) { return Observable.create(new Observable.OnSubscribe<A>() { @Override public void call(Subscriber<? super A> subscriber) { for (int i = 0; i < b.value; i++) { subscriber.onNext(new A(i)); } subscriber.onCompleted(); } }); } }); } ``` I'm not exactly sure what you are asking with Observables **C** and **V** so let's look at a few more ways to combine observables. Example - Combine each pair of items emitted by two observables - Use zip() ```java public class A { public final int value; public A(int value) { this.value = value; } } public class B { public final int value; public B(int value) { this.value = value; } } public class C { private final A a; private final B b; public C(A a, B b) { this.a = a; this.b = b; } } public Observable<B> createObservableB() { return Observable.from(new B(0), new B(1), new B(2), new B(3)); } public Observable<A> createObservableA() { return Observable.from(new A(0), new A(1), new A(2), new A(3)); } public Observable<C> createObservableC() { return Observable.zip(createObservableA(), createObservableB(), new Func2<A, B, C>() { @Override public C call(A a, B b) { return new C(a, b); } } ); } ``` Example - Combine the last item of two Observables - Use combineLatest() ```java // Use the same class definitions from previous example. public Observable<C> createObservableC1() { return Observable.combineLatest(createObservableA(), createObservableB(), new Func2<A, B, C>() { @Override public C call(A a, B b) { return new C(a, b); } } ); } ```
I think it depends on the kind of composition between A and B that you need to do and also on how A depends on B. Does C compose A and B pair after pair (A1 combined with B1, A2 combined with B2, etc.) - then `zip` would be the function you want. But, in that case I wonder if you could not just do that work when you transform B to A in the first place - after all I assume that you transform B to A element by element (in which case map would be the way to go). If instead you want to create a new A for each value emitted by B (but want to combine all those As into one Observable), then `flatMap` is what you need. If you really first need B to *create* A and then need it again to *combine* A and B, then you might want to `cache` B to save you the trouble of calculating everything again. There are other functions that could be of interest here (like `reduce` or `combineLatest`). Maybe you could give some more details on what you want to do?
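To make the difference concrete, here is a minimal sketch in the anonymous-class RxJava 1.x style used elsewhere in this thread; the `Integer`/`String` types, the class name and the way A is derived from B are only placeholders for whatever your real A, B and C look like:

```java
import rx.Observable;
import rx.functions.Action1;
import rx.functions.Func1;
import rx.functions.Func2;

public class CompositionSketch {
    public static void main(String[] args) {
        Observable<Integer> b = Observable.from(new Integer[]{1, 2, 3});

        // zip(): combine the n-th A with the n-th B into one value (a "C")
        Observable<String> zipped = Observable.zip(
                b.map(new Func1<Integer, String>() {
                    @Override
                    public String call(Integer value) {
                        return "A" + value;                     // derive A from B element by element
                    }
                }),
                b,
                new Func2<String, Integer, String>() {
                    @Override
                    public String call(String a, Integer bValue) {
                        return a + " combined with B" + bValue; // the composition step
                    }
                });

        // flatMap(): emit zero or more A values for each B and merge them into one stream
        Observable<String> flattened = b.flatMap(new Func1<Integer, Observable<String>>() {
            @Override
            public Observable<String> call(Integer value) {
                return Observable.from(new String[]{"A" + value + ".1", "A" + value + ".2"});
            }
        });

        Action1<String> print = new Action1<String>() {
            @Override
            public void call(String s) {
                System.out.println(s);
            }
        };
        zipped.subscribe(print);
        flattened.subscribe(print);
    }
}
```

If B really does have to be consulted twice (once to build A and again to combine the two), putting `cache()` on the B observable, as mentioned above, avoids re-running its work for the second subscription.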
22,284,380
I am new to reactive programming and confused about composing observables that have dependencies. Here is the scenario: There are two observables **A**, **B**. Observable **A** depends on a value emitted by **B**. (Therefore A needs to observe B). Is there a way to create an Observable **C** that composes **A** and **B**, and emits **V**? I am just looking for pointers in the RxJava [documentation](https://github.com/Netflix/RxJava/wiki).
2014/03/09
[ "https://Stackoverflow.com/questions/22284380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3398673/" ]
You question is a bit vague on how A depends on B so I'll try to give a several examples of how to combine observables. Example - **A** cannot be created without **B** - Use map() ```java public class B { public final int value; public B(int value) { this.value = value; } } public class A { public final B b; public A(B b) { this.b = b; } } public Observable<B> createObservableB() { return Observable.from(new B(0), new B(1), new B(2), new B(3)); } public Observable<A> createObservableA() { return createObservableB() .map(new Func1<B, A>() { @Override public A call(B b) { return new A(b); } }); } ``` Example - Each occurrence of **B** can create zero or more **A** - Use flatMap() ```java public class B { public final int value; public B(int value) { this.value = value; } } public class A { public final int value; public A(int value) { this.value = value; } } public Observable<B> createObservableB() { return Observable.from(new B(0), new B(1), new B(2), new B(3)); } public Observable<A> createObservableA() { return createObservableB() .flatMap(new Func1<B, Observable<? extends A>>() { @Override public Observable<? extends A> call(final B b) { return Observable.create(new Observable.OnSubscribe<A>() { @Override public void call(Subscriber<? super A> subscriber) { for (int i = 0; i < b.value; i++) { subscriber.onNext(new A(i)); } subscriber.onCompleted(); } }); } }); } ``` I'm not exactly sure what you are asking with Observables **C** and **V** so let's look at a few more ways to combine observables. Example - Combine each pair of items emitted by two observables - Use zip() ```java public class A { public final int value; public A(int value) { this.value = value; } } public class B { public final int value; public B(int value) { this.value = value; } } public class C { private final A a; private final B b; public C(A a, B b) { this.a = a; this.b = b; } } public Observable<B> createObservableB() { return Observable.from(new B(0), new B(1), new B(2), new B(3)); } public Observable<A> createObservableA() { return Observable.from(new A(0), new A(1), new A(2), new A(3)); } public Observable<C> createObservableC() { return Observable.zip(createObservableA(), createObservableB(), new Func2<A, B, C>() { @Override public C call(A a, B b) { return new C(a, b); } } ); } ``` Example - Combine the last item of two Observables - Use combineLatest() ```java // Use the same class definitions from previous example. public Observable<C> createObservableC1() { return Observable.combineLatest(createObservableA(), createObservableB(), new Func2<A, B, C>() { @Override public C call(A a, B b) { return new C(a, b); } } ); } ```
If you are looking to make async observables work, I suggest having a quick look at this question: [RxJava Fetching Observables In Parallel](https://stackoverflow.com/questions/26249030/rxjava-fetching-observables-in-parallel). Ben (author of RxJava) helped clarify my doubts on the topic. Hope this helps, anand
53,153,288
So basically, what I want is a single property name whose type can be overridden by a child of that property's parent class. The practical application here is maintaining a Model/View system, where the views are derived from the same base view class; however, some require a more complex model to perform their function. I was wondering if there is a best-practice pattern for this, since my current approach of just hiding variable namespaces seems too messy. Here's some sample pseudo-code of what I mean:

```
public class ParentPropertyType{
  public string bar = "bar";
}

public class ChildPropertyType: ParentPropertyType{
  public string foo = "foo";
}

public class ChildManager:ParentManager {
  public virtual ChildPropertyType PropertyType {get; set;}
}

public class ParentManager{
  public override ParentPropertyType PropertyType {get; set;}
}

...

Debug.Log(new ParentManager().PropertyType.foo);
```
2018/11/05
[ "https://Stackoverflow.com/questions/53153288", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2941307/" ]
Seems like you should make your `ParentManager`-class generic:

```
public class ChildManager : ParentManager<ChildPropertyType> { }

public class ParentManager<T> where T: ParentPropertyType
{
    public T PropertyType { get; set; }
}
```

Now you don't even need `virtual` and `override`, as the property has the correct type depending on the class it is used in. Therefore the following compiles fine, because PropertyType returns an instance of ChildPropertyType:

```
var manager = new ChildManager();
ChildPropertyType c = manager.PropertyType;
```
In C#, there is no way to override a method and change its return type to a descendant type. "Sad but true". If I were you, I would reintroduce (using "new") the method / property to return the desired type.
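As an illustration, a minimal sketch of that re-introduction with the `new` modifier, reusing the type names from the question (treat it as a pattern sketch, not a drop-in implementation):

```
public class ParentPropertyType { public string bar = "bar"; }
public class ChildPropertyType : ParentPropertyType { public string foo = "foo"; }

public class ParentManager
{
    public ParentPropertyType PropertyType { get; set; }
}

public class ChildManager : ParentManager
{
    // 'new' hides the base property and narrows its static type for ChildManager callers;
    // both members share the same underlying value through the base property.
    public new ChildPropertyType PropertyType
    {
        get { return (ChildPropertyType)base.PropertyType; }
        set { base.PropertyType = value; }
    }
}
```

With this in place, `new ChildManager().PropertyType.foo` compiles, while code that holds a `ParentManager`-typed reference still sees the base-typed member; that is the usual caveat of member hiding.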
7,517,885
I wanted to give all of the child's div elements a background-color of parent div. But, as I see, child divs' style overwrite parent's style even though child's did not have that property. For example, ``` <!-- Parent's div --> <div style="background-color:#ADADAD;"> some codes here... <!-- child's div --> <div style="position:absolute;font-size:12px; left:600px;top:100px;"> again some codes... </div> </div> ``` In here, If i delete the style of child div, it works fine. I think my problem may be solved if i did the same thing with external css file also. But, I have already done hundreds of divs exactly like this. So, is there anyway to force parent's style to child style, just for background-color?(new in css)
2011/09/22
[ "https://Stackoverflow.com/questions/7517885", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893970/" ]
> > But, as i see, chid divs' style overwrite parent's style even though child's did not have that property. > > > No, they just don't inherit the value by default, so they get whatever value they would otherwise have (which is usually `transparent`). You can (in theory) get what you want with background-color: [inherit](http://www.w3.org/TR/CSS2/cascade.html#value-def-inherit). That has [problems in older versions of IE](https://stackoverflow.com/questions/511066/ie7-css-inherit-problem/511108#511108) though.
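For illustration, a minimal sketch using the inline styles from the question (the colour value is just the one used above):

```
<div style="background-color:#ADADAD;">
  some codes here...
  <div style="position:absolute; font-size:12px; left:600px; top:100px; background-color:inherit;">
    this child now shows #ADADAD instead of transparent
  </div>
</div>
```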
Use the inherit property on the child div : ``` background:inherit <div style="position:absolute;font-size:12px; left:600px;top:100px; background:inherit"> ```
7,517,885
I wanted to give all of the child's div elements a background-color of parent div. But, as I see, child divs' style overwrite parent's style even though child's did not have that property. For example, ``` <!-- Parent's div --> <div style="background-color:#ADADAD;"> some codes here... <!-- child's div --> <div style="position:absolute;font-size:12px; left:600px;top:100px;"> again some codes... </div> </div> ``` In here, If i delete the style of child div, it works fine. I think my problem may be solved if i did the same thing with external css file also. But, I have already done hundreds of divs exactly like this. So, is there anyway to force parent's style to child style, just for background-color?(new in css)
2011/09/22
[ "https://Stackoverflow.com/questions/7517885", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893970/" ]
> > But, as i see, chid divs' style overwrite parent's style even though child's did not have that property. > > > No, they just don't inherit the value by default, so they get whatever value they would otherwise have (which is usually `transparent`). You can (in theory) get what you want with background-color: [inherit](http://www.w3.org/TR/CSS2/cascade.html#value-def-inherit). That has [problems in older versions of IE](https://stackoverflow.com/questions/511066/ie7-css-inherit-problem/511108#511108) though.
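For illustration, a minimal sketch using the inline styles from the question (the colour value is just the one used above):

```
<div style="background-color:#ADADAD;">
  some codes here...
  <div style="position:absolute; font-size:12px; left:600px; top:100px; background-color:inherit;">
    this child now shows #ADADAD instead of transparent
  </div>
</div>
```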
Use CSS selectors like this to make the background of child divs inherit from their parent:

```
<!-- Parent's div -->
<div id="thisparticulardiv"> some codes here... 

  <!-- child's div -->
  <div class="childrendiv"> again some codes... 
  </div>
</div>
```

CSS: 

```
#thisparticulardiv {
 background-color:#ADADAD;
 ...
}

#thisparticulardiv div {
 background: inherit; 
 position:absolute;
 font-size:12px; 
 left:600px;
 top:100px;
}
```
7,517,885
I wanted to give all of the child's div elements a background-color of parent div. But, as I see, child divs' style overwrite parent's style even though child's did not have that property. For example, ``` <!-- Parent's div --> <div style="background-color:#ADADAD;"> some codes here... <!-- child's div --> <div style="position:absolute;font-size:12px; left:600px;top:100px;"> again some codes... </div> </div> ``` In here, If i delete the style of child div, it works fine. I think my problem may be solved if i did the same thing with external css file also. But, I have already done hundreds of divs exactly like this. So, is there anyway to force parent's style to child style, just for background-color?(new in css)
2011/09/22
[ "https://Stackoverflow.com/questions/7517885", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893970/" ]
Use CSS selectors like this to make the background of child divs inherit from their parent:

```
<!-- Parent's div -->
<div id="thisparticulardiv"> some codes here... 

  <!-- child's div -->
  <div class="childrendiv"> again some codes... 
  </div>
</div>
```

CSS: 

```
#thisparticulardiv {
 background-color:#ADADAD;
 ...
}

#thisparticulardiv div {
 background: inherit; 
 position:absolute;
 font-size:12px; 
 left:600px;
 top:100px;
}
```
Use the inherit property on the child div : ``` background:inherit <div style="position:absolute;font-size:12px; left:600px;top:100px; background:inherit"> ```
422,681
I'm trying to figure out how to integrate one command that updates multiple Aggregates in different contexts in a video game. The components/considerations for my particular design are DDD, CQRS, Ports and Adapters, Event-Driven Architecture, Aggregate+Event Sourcing, and Entity-Component-System. It's a lot to manage, but there are a few cases that can simplify quite a number of the friction points. [![diagram showing interaction between purchase item command and inventory, shop, and wallet contexts](https://i.stack.imgur.com/O7LGC.png)](https://i.stack.imgur.com/O7LGC.png) This diagram kind of illustrates what I'm working with. I have 5 different models, Inventory, Item, Shop, Currency, and Wallet. This is a collaborative, disconnected environment. What I mean by that is, each of these models is designed to not necessarily know about each other, except when necessary. I want these models to be reusable, by unknown third-parties, resulting in other compositions, aside from the ones that I can think of, or the one I'll talk about here. For instance, the Item model doesn't know about the Currency model, or the Wallet model, or the Shop model, or the Inventory model. But, the Inventory deals with storing Items, so it knows about Items. The Shop model knows about assigning prices to Items, so it knows about Items and Currencies. The Wallet model knows about storing and keeping track of money, so it knows about Currencies. Items can be added to a specific Inventory by using an AddItemToInventory command. Money can be removed from a specific Wallet by using a RemoveMoneyFromWallet command. When a Player wants to buy an Item from a Shop, they select the given Item, and if they have enough of the specified type of money in their Wallet, they confirm they want to buy it. At that point, the money is removed from their Wallet, and the specified Item is added to their Inventory. Because this is in a video game, I don't want to create places where duplication can occur. There are two particular cases I want to avoid. * An Item is added to a Player's Inventory, but money is not removed from their Wallet, allowing them to get the Item for free. * Money is removed from a Player's Wallet, but the Item is not added to their Inventory, resulting in lost money. I've been trying to figure out how I can avoid these particular issues. My initial thinking was that this would be managed in a single transaction. The problem though is that these are in different models, different contexts, different Aggregates. Because they are different Aggregates, they shouldn't be updated in the same transaction. Other Aggregates could be updated asynchronously, in my case via published Domain Events. But because they're updated asynchronously, that still leaves open the problems, where items are given, but money isn't taken, or money is taken, and items aren't given. In the diagram, I have the `PurchaseItem` command going into a `???` component. I don't know what it's supposed to be. I didn't want it to be given to the Shop Aggregate, because then the Shop context would have to know about Inventories and Wallets. It knows about Items and Currencies, but it feels improper for it to also know about Inventories and Wallets. The Shop should work in exactly the same way, whether the Item is added to an Inventory, and funds removed from a Wallet; or if the Item is added to a Package, and funds removed in one batch at the end of the day from a Customer Account, or any other combination of things. 
So more than one Aggregate can't be updated at a time. I'm thinking that either another Aggregate needs to exist somewhere else, which becomes responsible for this interaction. This though would move the Source of Truth to this new Aggregate, making these targeted contexts, sometimes the Source of Truth, sometimes not. More like Eventual Source of Truth. Or, maybe it's acceptable to create a new Aggregate, in a different context, which uses the same Entities and Value Objects of these other contexts. But these other contexts would still own their own data, so it would still be Eventually Consistent. I read in Implementing Domain-Driven Design about a bargain basement Domain Model, where three separate models were used, and a question was posed about whether it makes sense to introduce a new model for the interaction. My problem with that approach is that I still need these other contexts to exist as they are, because third-party developers will use them for completely different purposes. Using Items for Equipment, Money for Quest Rewards, In-game and out-of-game Shops. So I can't just condense these all into a single model, without also forcing those developers to NOT use most of the model that's available. Additionally, all Aggregates in my design are sourced from Events. On the development side, the game developer will create a new Item by assigning a new Item Id. This gets recorded in the game files as an ItemIdAssigned event. Same thing for Currency, Shop, Wallet, Inventory, etc. But once these things are created on the developer side, they're interacted with in completely different ways by the Player. A developer can add an Item to a Shop, a Player can't. A Player can buy an Item from a Shop, a developer can't. This also indicates to me that there are different Aggregates that are at play here that I can't identify. Ones that may be sourced from all, or part, of a different Aggregate's Event Stream. Or that these Aggregates are missing functionality that I haven't been able to understand it's place. How can I model this?
2021/02/24
[ "https://softwareengineering.stackexchange.com/questions/422681", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/161992/" ]
**This answer is based on the presumption that this is not a hobby project:**

Probably that is not what you want to hear, but trying to go for DDD, CQRS, Event-Driven Architecture and Event Sourcing all at the same time sounds like too big a bite, one that will lead either to nowhere or to a very overengineered system.

If it is a hobby project, put that in the description and I will try to answer it. Otherwise my answer is: re-think the architecture so that it involves fewer buzzwords. Pick some concepts from these architectures, or just pick one of them, and do not try to build a system only to test these architectures.

I was once part of a project that was trying to use TDD, CQRS, event-driven architecture, behaviour testing, Domain-Driven Design and a microservice architecture all at the same time. A project that normally should take months was a year and a half in development with no visible end, because of the added complexity of the different architectures.

I find CQRS one of the main sources of overengineering. I think the majority of developers take it very literally that there MUST BE one read store and one write store, which raises multiple problems related to keeping the two stores synchronized. I have personally always thought that the main principle of CQRS is that you have two different channels, regardless of whether you actually have one read store and one write store or the read and write store are the same. As long as you have two separate channels, all is good.
There is good news, bad news, and more good news. The good news is that this is a *solved* problem! You have found yourself in the exact space that every ATM in the world needs to navigate before dispensing money. The bad news is that the infrastructure and complexity necessary to mediate this kind of distributed transaction is considerable (and not perfect!). This is a pretty good summary of how they work: [How are transactions on ATMs and their banks typically synchronized?](https://softwareengineering.stackexchange.com/questions/378358/how-are-transactions-on-atms-and-their-banks-typically-synchronized) The final bit of good news is that *you do not need to go the above route* (and if/when you do reach that point, it will be a good problem to have). How much easier would it be if you just locked and updated your data in a single transaction? I urge you to make a list of the pros and cons. You will find that the only real issue with a single transaction is that it limits availability. Are you having problems with availability? DDD has nothing to say about database transactions. The boundaries around aggregates are *logical* boundaries. This is often a point of confusion. DDD doesn't mandate eventual consistency.
422,681
I'm trying to figure out how to integrate one command that updates multiple Aggregates in different contexts in a video game. The components/considerations for my particular design are DDD, CQRS, Ports and Adapters, Event-Driven Architecture, Aggregate+Event Sourcing, and Entity-Component-System. It's a lot to manage, but there are a few cases that can simplify quite a number of the friction points. [![diagram showing interaction between purchase item command and inventory, shop, and wallet contexts](https://i.stack.imgur.com/O7LGC.png)](https://i.stack.imgur.com/O7LGC.png) This diagram kind of illustrates what I'm working with. I have 5 different models, Inventory, Item, Shop, Currency, and Wallet. This is a collaborative, disconnected environment. What I mean by that is, each of these models is designed to not necessarily know about each other, except when necessary. I want these models to be reusable, by unknown third-parties, resulting in other compositions, aside from the ones that I can think of, or the one I'll talk about here. For instance, the Item model doesn't know about the Currency model, or the Wallet model, or the Shop model, or the Inventory model. But, the Inventory deals with storing Items, so it knows about Items. The Shop model knows about assigning prices to Items, so it knows about Items and Currencies. The Wallet model knows about storing and keeping track of money, so it knows about Currencies. Items can be added to a specific Inventory by using an AddItemToInventory command. Money can be removed from a specific Wallet by using a RemoveMoneyFromWallet command. When a Player wants to buy an Item from a Shop, they select the given Item, and if they have enough of the specified type of money in their Wallet, they confirm they want to buy it. At that point, the money is removed from their Wallet, and the specified Item is added to their Inventory. Because this is in a video game, I don't want to create places where duplication can occur. There are two particular cases I want to avoid. * An Item is added to a Player's Inventory, but money is not removed from their Wallet, allowing them to get the Item for free. * Money is removed from a Player's Wallet, but the Item is not added to their Inventory, resulting in lost money. I've been trying to figure out how I can avoid these particular issues. My initial thinking was that this would be managed in a single transaction. The problem though is that these are in different models, different contexts, different Aggregates. Because they are different Aggregates, they shouldn't be updated in the same transaction. Other Aggregates could be updated asynchronously, in my case via published Domain Events. But because they're updated asynchronously, that still leaves open the problems, where items are given, but money isn't taken, or money is taken, and items aren't given. In the diagram, I have the `PurchaseItem` command going into a `???` component. I don't know what it's supposed to be. I didn't want it to be given to the Shop Aggregate, because then the Shop context would have to know about Inventories and Wallets. It knows about Items and Currencies, but it feels improper for it to also know about Inventories and Wallets. The Shop should work in exactly the same way, whether the Item is added to an Inventory, and funds removed from a Wallet; or if the Item is added to a Package, and funds removed in one batch at the end of the day from a Customer Account, or any other combination of things. 
So more than one Aggregate can't be updated at a time. I'm thinking that either another Aggregate needs to exist somewhere else, which becomes responsible for this interaction. This though would move the Source of Truth to this new Aggregate, making these targeted contexts, sometimes the Source of Truth, sometimes not. More like Eventual Source of Truth. Or, maybe it's acceptable to create a new Aggregate, in a different context, which uses the same Entities and Value Objects of these other contexts. But these other contexts would still own their own data, so it would still be Eventually Consistent. I read in Implementing Domain-Driven Design about a bargain basement Domain Model, where three separate models were used, and a question was posed about whether it makes sense to introduce a new model for the interaction. My problem with that approach is that I still need these other contexts to exist as they are, because third-party developers will use them for completely different purposes. Using Items for Equipment, Money for Quest Rewards, In-game and out-of-game Shops. So I can't just condense these all into a single model, without also forcing those developers to NOT use most of the model that's available. Additionally, all Aggregates in my design are sourced from Events. On the development side, the game developer will create a new Item by assigning a new Item Id. This gets recorded in the game files as an ItemIdAssigned event. Same thing for Currency, Shop, Wallet, Inventory, etc. But once these things are created on the developer side, they're interacted with in completely different ways by the Player. A developer can add an Item to a Shop, a Player can't. A Player can buy an Item from a Shop, a developer can't. This also indicates to me that there are different Aggregates that are at play here that I can't identify. Ones that may be sourced from all, or part, of a different Aggregate's Event Stream. Or that these Aggregates are missing functionality that I haven't been able to understand it's place. How can I model this?
2021/02/24
[ "https://softwareengineering.stackexchange.com/questions/422681", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/161992/" ]
**This answer is based on the presumption that this is not a hobby project:** Probably this is not what you want to hear, but trying to adopt DDD, CQRS, Event-Driven Architecture, and Event Sourcing all at the same time sounds like a big bite that will lead either nowhere or to a very over-engineered system. If it is a hobby project, say so in the description and I will try to answer accordingly. Otherwise my answer is: re-think the architecture to involve fewer buzzwords. Pick a few concepts from these architectures, or just one of them, rather than building a system only to exercise all of them. I was once part of a project that tried to use TDD, CQRS, Event-Driven Architecture, behaviour testing, Domain-Driven Design, and a microservice architecture at the same time. A project that should normally have taken months was a year and a half in development with no visible end, because of the complexity the combined architectures added. I find CQRS to be one of the main sources of over-engineering. The majority of developers take it very literally that there MUST BE one read store and one write store, which raises multiple problems around keeping the two stores synchronized. I have personally always understood the main principle of CQRS to be that you have two different channels; whether you actually have one read store and one write store, or the read and write store are the same, does not matter. As long as you have two separate channels, all is good.
Create a new entity called "Transaction" or "Purchase." Each instance of this entity records the event when a user requested and completed a purchase (Note that "requested" and "completed" are not the same thing). The workflow goes like this: 1. User requests a purchase 2. System creates a purchase record with an identifier specifying which item was requested for purchase 3. The purchase record contains blank slots to store (a) The identifier that is created as a side effect of collecting payment (e.g. a confirmation code or transaction ID). This is left blank until payment is actually collected. (b) An identifier that specifies which item was moved to the user's inventory. This is left blank until the item is actually moved. 4. After all identifiers are populated, the purchase record is marked as "completed." At any given moment, if your system is not certain of the integrity of a transaction, it can go back and look at the record. If the payment identifier is blank, it knows to go back and collect payment. And if the item identifier is blank, it knows to go back and move the item to the user's inventory. This new entity must know about items and payments (obviously), but items and payments don't need to know about the new entity, preserving your loosely-coupled design.
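As a rough illustration of that record-with-blank-slots workflow (my sketch, not part of the answer; the `wallet_service` and `inventory_service` collaborators are invented stand-ins), the purchase record can be a small state object that a recovery job re-drives until both slots are filled:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PurchaseRecord:
    purchase_id: str
    item_id: str
    payment_ref: Optional[str] = None    # filled in once money has actually been taken
    inventory_ref: Optional[str] = None  # filled in once the item has actually been granted

    @property
    def completed(self) -> bool:
        return self.payment_ref is not None and self.inventory_ref is not None

def reconcile(record: PurchaseRecord, wallet_service, inventory_service) -> None:
    """Re-drive whichever step is still missing; safe to call repeatedly."""
    if record.payment_ref is None:
        record.payment_ref = wallet_service.collect(record.purchase_id, record.item_id)
    if record.inventory_ref is None:
        record.inventory_ref = inventory_service.grant(record.purchase_id, record.item_id)
```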
22,556,020
I am using `Gson` to parse data from Json String. Everything is working fine. But now, Main `Json String` contains Inner Json String and I want that Inner `Json String`'s data. How to get it ? **Check my `Json String`:** ``` [{"Question":"Are you inclined to:","QId":"2","Options":[{"Option":"Argue or debate issues","OptionId":"4"},{"Option":"Avoid arguments","OptionId":"5"},{"Option":"Swutch topics","OptionId":"6"}]}] ``` **Here, I want value of "Options". My Sample code is as below :** ``` PrimaryTest[] testlist = gson.fromJson(result, PrimaryTest[].class); // result is Json String. List<PrimaryTest> lstTest = Arrays.asList(testlist); PrimaryTest objTest = lstTest.get(0); String Question = objTest.getQuestion(); ``` Here, I am getting Question value perfectly but don't know how to get Options value. If I am using same method then It is giving error : `Invalid Json Data`. **PrimaryTest.java Code :** ``` public class PrimaryTest { private String Question; public PrimaryTest(String Question) { this.Question = Question; } public void setQuestion(String Question) { this.Question = Question; } public String getQuestion() { return this.Question; } } ```
2014/03/21
[ "https://Stackoverflow.com/questions/22556020", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3234665/" ]
Write your class so that it includes the parameters your JSON string contains; then you can parse the string directly into a class object. Your `PrimaryTest` class should contain the following members: ``` public class PrimaryTest { private String Question; private int QId; private List<Option> Options; //Getter and Setter methods here } ``` And your `Option` class should contain the members shown below: ``` public class Option{ private String Option; private int OptionId; // Getter and Setter methods } ``` Now parse your JSON string into a `PrimaryTest[]` and get the `Options` list from the class's getter method. Below is the test code: ``` PrimaryTest[] response = gson.fromJson(json, PrimaryTest[].class); List<PrimaryTest> lstTest = Arrays.asList(response); PrimaryTest objTest = lstTest.get(0); List<Option> options = objTest.getOptions(); for(Option option : options){ System.out.println(option.getOption()); } ```
A hint at how the objects could be structured: ``` OptionClass{ String Option int/long OptionId } QuestionClass{ String Question int/long QId Collection<OptionClass> Options } ``` Note: this is not a Java class, only a sketch. Design your classes based on these hints.
377,189
I'm implementing my own simple database with disk storage, and I'm not sure how to go about modifying and deleting entries. The problem is that as you delete a record from arbitrary position within a file, a "hole" is left there. As you insert a new entry, you may or may not be able to plug it into the hole. Modifying an entry in-place may be possible if the new value is smaller, leaving another hole. Or the new one may be larger, so one has to insert it some place else and delete the old one. Another hole. If implemented like this, the database file starts looking like Swiss cheese after a while. The obvious solution is to run optimization every now and then to compact the file, but that is a tedious and not trivial to implement task as well. For instance, if the file is much larger than the amount of RAM, and you must carefully juggle the records in the file. My question is: are there other approaches to database storage file management? And how do the big database management systems store the data on persistent storage? How do they deal with these problems? I tried Googling but didn't get much info, possibly because I don't even know the right keywords.
2018/08/20
[ "https://softwareengineering.stackexchange.com/questions/377189", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29016/" ]
The approach you describe is the same which is used, for instance, by SQL Server. With time, the data file grows, and you have to run a maintenance plan in order to [shrink it](https://docs.microsoft.com/en-us/sql/relational-databases/databases/shrink-a-database?view=sql-server-2017) by moving pages from the end of the file to its beginning. The only difference is that you're talking about *records*, while the usual notion is the one of *pages*. Similarly, many file systems have a notion of *fragmentation*, which is eventually solved by performing a *defragmentation* on regular basis. Note that: * If you're creating your own database for learning purposes and shrinking looks too complicated, then maybe you can leave it alone and focus on the things which are fun for your learning project. Just let the file grow over time—it's not like you're expecting to store terabytes of data in a home made database system anyway. * If you're creating your own database because you think you can do a better job compared to all existent database software products, then you may want to reconsider your choice. Note that if relational databases don't fit your needs, you may be better using other types of databases: the ones which store records, the hierarchical ones, the key-value stores, etc.
> > are there other approaches to database storage file management? > > > Don't *actually* delete records. Only *mark* them as deleted, then have your "DBMS" ignore the "marked" ones. Of course, this means that your tables get "fragmented", with lots of "holes" that you're constantly skipping over as you try to read the useful ones. To counter this, you need maintenance processes that re-write the file, actually removing all the dead records. Postgres implements something akin to this in its VACUUM process. Trying to work with any shared resource (file) in a multi-user, multi-threaded fashion is scary. Here's a possible, small-scale, alternative, if you feel like *really* thrashing your file system: You might play with a "File per Record" implementation. Instead of trying to managing a shared file containing many records, you could put each (and every) record into a *file of its own* (a table is, therefore, represented by a directory). Want to delete a record? Delete the corresponding file. > > And how do the big database management systems store the data on persistent storage? How do they deal with these problems? > > > Every DBMS stores things differently and each has its own way of dealing with these problems ... but their owners are *not* very likely to give you chapter and verse on how they go about it.
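A minimal sketch of the mark-then-compact idea (my illustration, not from the answer), assuming fixed-width records and a data file opened in `r+b` mode:

```python
RECORD_SIZE = 64           # fixed-width records make in-place tombstoning easy
TOMBSTONE = b"\x00"        # first byte of a record: 0 = deleted, anything else = live

def delete_record(f, index):
    """Logical delete: overwrite only the flag byte, leaving the data in place."""
    f.seek(index * RECORD_SIZE)
    f.write(TOMBSTONE)

def compact(src_path, dst_path):
    """Maintenance pass (a 'vacuum'): copy only live records into a fresh file."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            record = src.read(RECORD_SIZE)
            if not record:
                break
            if record[0:1] != TOMBSTONE:
                dst.write(record)
```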
12,659,035
How do I get access to custom-named views instead of using the view named after the action method in Zend Framework 2.0? For example: under the index action, "return new ViewModel();" will call index.phtml, but I want to call another view here.
2012/09/30
[ "https://Stackoverflow.com/questions/12659035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1608722/" ]
Just call model view with view you want: ``` $model = new ViewModel(); $model->setTemplate('edit'); return $model; ``` More info: <http://framework.zend.com/manual/2.0/en/modules/zend.view.renderer.php-renderer.html>
Within your controller, you can use the `ViewModel`'s `setTemplate` method to change which script will be rendered: ```php public function someAction() { // do stuff here $viewModel = new ViewModel($anArrayOfVariablesForTheView); $viewModel->setTemplate('application/view/arbitrary'); return $viewModel; } ``` Note that you don't need to specify the `.phtml`.
38,562,968
I know how to dismiss a keyboard, I use this extension: ``` extension UIViewController { func hideKeyboardWhenTappedAround() { let tap: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: "dismissKeyboard") view.addGestureRecognizer(tap) } func dismissKeyboard() { view.endEditing(true) } } ``` And called `hideKeyboardWhenTappedAround` in `viewDidLoad` But my problem now is I added a `UITextField` to a `navigationBar`, and this extension no longer works! This is how I added the `UITextField`: ``` let textField = UITextField(frame: CGRectMake(0,0,textfieldW,0.8*ram.navigationBarHeight) ) textField.borderStyle = UITextBorderStyle.RoundedRect textField.center.y = centerView.center.y centerView.addSubview(textField) self.navigationItem.titleView = centerView ``` How to dismiss a keyboard brought from a `UITextField` that lurks in a navigation bar? [![enter image description here](https://i.stack.imgur.com/NTwM3.png)](https://i.stack.imgur.com/NTwM3.png)
2016/07/25
[ "https://Stackoverflow.com/questions/38562968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5312361/" ]
Make a reference to this text field, like: ``` var navigationBarField : UITextField? ``` Then initialize it, configuring the same field throughout: ``` let field = UITextField(frame: CGRectMake(0, 0, textfieldW, 0.8*ram.navigationBarHeight)) field.borderStyle = UITextBorderStyle.RoundedRect field.center.y = centerView.center.y centerView.addSubview(field) navigationBarField = field self.navigationItem.titleView = centerView ``` And when you want to remove the keyboard, call: ``` navigationBarField?.resignFirstResponder() ```
Instead of declaring an instance, you just need to also call the `endEditing` method of the `navigationController`'s `view`, like this: ``` func dismissKeyboard() { navigationController?.view.endEditing(true) view.endEditing(true) } ```
3,459,990
I have a program that fundamentally requires a lot of memory. However, for some reason java gives me an error when I try to set the max heap space above 1.5GB. That is, running ``` java -Xmx1582m [my program] ``` is okay, but ``` java -Xmx1583m [my program] ``` gives the error ``` Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. ``` I got the same error in both Windows command line and Eclipse. Here are my system configurations: Windows 7 (64-bit) Intel Core 2 Quad CPU Installed RAM: 8.00 GB Java version 1.6.0 It is weird that I can only set 1.5GB memory even though I'm running 64-bit OS with 8 GB RAM. Is there a way to work around this?
2010/08/11
[ "https://Stackoverflow.com/questions/3459990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/298724/" ]
The likely case is that while your *operating system* is 64-bit, your JVM is not. Opening a command line and typing `java -version` will give you the verbose version information, which should indicate whether your installed JVM is a 32 or 64-bit build. A 64-bit JVM should have no problem with the higher memory limits.
The flag ``` -XX:MaxPermSize=64m ``` sets the permanent generation size, not the main heap; the maximum heap size is controlled by `-Xmx`.
25,882,522
How can I display results with continuous serial numbers from a MySQL table when using PHP pagination?
2014/09/17
[ "https://Stackoverflow.com/questions/25882522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4048865/" ]
Try this. It's working for me ``` $pageno = $this->uri->segment(2); // ( $this->uri->segment(2) ) : this is for codeigniter if(empty($pageno) || $pageno == 1){ $srno = 1; } else{ $temp = $pageno-1; $new_temp = $temp.'1'; $srno = (int)$new_temp; } ```
**Try these functions; they will display serial numbers for pagination in PHP.** <http://code.stephenmorley.org/php/creating-pagination-links/> **or** <http://www.otallu.com/tutorials/simple-php-mysql-pagination/#sthash.7WPg1FdO.dpbs> **or you can use** ``` function pagination($item_count, $limit, $cur_page, $link) { $page_count = ceil($item_count/$limit); $current_range = array(($cur_page-2 < 1 ? 1 : $cur_page-2), ($cur_page+2 > $page_count ? $page_count : $cur_page+2)); // First and Last pages $first_page = $cur_page > 3 ? '<a href="'.sprintf($link, '1').'">1</a>'.($cur_page < 5 ? ', ' : ' ... ') : null; $last_page = $cur_page < $page_count-2 ? ($cur_page > $page_count-4 ? ', ' : ' ... ').'<a href="'.sprintf($link, $page_count).'">'.$page_count.'</a>' : null; // Previous and next page $previous_page = $cur_page > 1 ? '<a href="'.sprintf($link, ($cur_page-1)).'">Previous</a> | ' : null; $next_page = $cur_page < $page_count ? ' | <a href="'.sprintf($link, ($cur_page+1)).'">Next</a>' : null; // Display pages that are in range for ($x=$current_range[0];$x <= $current_range[1]; ++$x) $pages[] = '<a href="'.sprintf($link, $x).'">'.($x == $cur_page ? '<strong>'.$x.'</strong>' : $x).'</a>'; if ($page_count > 1) return '<p class="pagination"><strong>Pages:</strong> '.$previous_page.$first_page.implode(', ', $pages).$last_page.$next_page.'</p>'; } ``` **Usage** ``` pagination( total amount of item/rows/whatever, limit of items per page, current page number, url ); ```
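Neither snippet spells out the underlying arithmetic, so here is a tiny sketch of it (my addition, not from either answer): with `limit` rows per page, the first serial number shown on 1-based page `p` is `(p - 1) * limit + 1`, and you keep counting from there.

```python
def first_serial_number(page: int, limit: int) -> int:
    """Serial number of the first row on a given 1-based page."""
    return (page - 1) * limit + 1

# With 10 rows per page: page 1 starts at 1, page 2 at 11, page 3 at 21.
print([first_serial_number(p, 10) for p in (1, 2, 3)])  # [1, 11, 21]
```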
73,271,313
I need to develop a function whereby a type property gets appended to an array if it doesn't exist. So the key will be of type and it should have a value of "dog". Can someone kindly point out how I can iterate over the array to check if the key "type" exists and also provide guidance on how to append {type:"dog"} to the inner array if it doesn't exist. I tried animalArray[0][1]={type:"dog"} but it doesnt seem to work. A typical array of animals will look like this: ``` labelTheDogs( [ {name: 'Obi'}, {name: 'Felix', type: 'cat'} ] ) // should return [ {name: 'Obi', type: 'dog'}, {name: 'Felix', type: 'cat'} ] ```
2022/08/07
[ "https://Stackoverflow.com/questions/73271313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19364797/" ]
This is not a nested array ```js function labelTheDogs(dogs) { dogs.forEach(dog => { if (!dog.type) { dog.type = 'dog' } }) return dogs } const dogs = labelTheDogs( [{ name: 'Obi' }, { name: 'Felix', type: 'cat' } ] ) console.log(dogs) ```
You can use the map function to do that; just make sure the callback returns the element on every path, otherwise map produces undefined entries: ```js const newArray = yourArray.map((element) => { if (!element.type) { element.type = "dog"; } return element; }) ```
1,662,967
In general, if $X$ is a random variable defined on a probability space $(Ω, Σ, P)$, then the expected value of $X$ is defined as \begin{align} \int\_\Omega X \, \mathrm{d}P = \int\_\Omega X(\omega) P(\mathrm{d}\omega) \end{align} Let $X\_1, X\_2, \dots$ be a sequence of independent random variables identically distributed with probability measure $P$. The empirical measure $P\_n$ is given by \begin{align} P\_n =\frac{1}{n}\sum\_{i=1}^n \delta\_{X\_i} \end{align} Is the following the correct notation for the expectation with respect of the empirical measure? \begin{align} \int\_\Omega X \, \mathrm{d}P\_n = \int\_\Omega X(\omega) P\_n(\mathrm{d}\omega) = \frac{1}{n}\sum\_{i=1}^n X\_i \end{align} **Edit:** What is important for me, is that $P\_n$ is an approximation for $P$. I think $X$ and $X\_1,\dotsc,X\_n$ should map from $(Ω, Σ)$ to some measureable space $(\mathcal{F},\mathscr{F})$.
2016/02/19
[ "https://math.stackexchange.com/questions/1662967", "https://math.stackexchange.com", "https://math.stackexchange.com/users/148666/" ]
It's not correct. The empirical measure isn't a measure on the sample space $\Omega$, it's a (random) measure on $\mathbb{R}$. Notationally, I think most people reserve letters like $P, P\_n$, etc, for measures on $\Omega$, using letters like $\mu, \nu$ for measures on other spaces. So I'd call your empirical measure $\mu\_n$ and then write its mean as $$\int\_{\mathbb{R}} x\,\mu\_n(dx) = \frac{1}{n} \sum\_{i=1}^n X\_i.$$ Note that the left-hand side denotes the integral over $\mathbb{R}$, with respect to the measure $\mu\_n$, of the identity function $f : \mathbb{R} \to \mathbb{R}$ given by $f(x) = x$. The lower-case $x$ is intentional and not a typo.
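As a small numerical illustration of that display (my addition, not part of the answer): integrating the identity function against the empirical measure is nothing more than taking the sample mean, which approaches the true mean as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=10_000)  # X_1, ..., X_n i.i.d. N(2, 1)

empirical_mean = x.mean()   # the integral of x against the empirical measure mu_n
print(empirical_mean)       # close to the true mean 2.0
```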
Let us formalize this a bit. Let $(\Omega\_0,\Sigma\_0,P\_0)$ be a probability space and take $X\_1,X\_2,\ldots$ be a sequence of independent random variables on $\Omega\_0$ with values in $\mathbb R,$ we consider the random measure $P\_n$ on $\mathbb R$ given by $$P\_n=\frac{1}{n}\sum\limits\_{i=1}^n\delta\_{X\_i}$$ where $\delta\_{X\_i}(A)=1$ when $X\_i\in A,$ that is with probability $P(A).$ Fix now $\omega\_0\in\Omega\_0$ and consider the measure $P\_n(\omega\_0).$ Additionally, take some random variable $Y$ on $\mathbb{R}.$ Then $$\int\limits\_{\mathbb{R}}Y\mathrm{d}P\_n(\omega\_0)=\frac{1}{n}\sum\limits\_{i=1}^n\int\limits\_{\mathbb{R}}Y\mathrm{d}\delta\_{X\_i(\omega\_0)}.$$ Since $$f(x)=\int\limits\_{\mathbb{R}}f\mathrm{d}\delta\_x,$$ we obtain $$\int\limits\_{\mathbb{R}}Y\mathrm{d}P\_n(\omega\_0)=\frac{1}{n}\sum\limits\_{i=1}^n(Y\circ X\_i)(\omega\_0).$$ In other worlds, $$\frac{1}{n}\sum\limits\_{i=1}^n(Y\circ X\_i)=\int\limits\_{\mathbb{R}}Y\mathrm{d}P\_n:\Omega\_0\to\mathbb{R}$$ is a random variable on $\Omega\_0.$ The random measure $P\_n$ is a good approximation of $P$ in the following sense. Pick some $A\in\mathcal{B}(\mathbb R).$ Then $(\delta\_{X\_i}(A))\_{i\in\mathbb{N}}$ are iid random variables on $\Omega\_0$ with expectation $P(A).$ The strong law of large number implies $$P\_n(A)=\frac{1}{n}\sum\limits\_{i=1}^n\delta\_{X\_i}(A)\xrightarrow{n\to\infty}\mathbb{E}[\delta\_{X\_i}(A)]=P(A)$$ almost surely, so that $P\_n\to P$ pointwise, almost surely. From the comment below, it even can be shown that this convergence is uniform.
10,550,398
Are the controller or any other JS files of a Sencha Touch application built with Sencha Architect 2 editable or not?
2012/05/11
[ "https://Stackoverflow.com/questions/10550398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/861521/" ]
Manually or via the UI? Manually you add a controller and then in the designer view choose CODE and then OVERWRITE. The original is saved (look in the dir structure) Next time open this screen, choose (in the first selector of code view) the VIEW OVERRIDE CODE so that you see your latest work. I work with intellij to edit the file. Then paste it into the field, and it automatically updates both files (there's a .js and an Architect metadata file) Via the UI of course: That's simple. Use the config inspector to select the views models and stores you wish to control. Moshe
If you are using architect each time you save the project architect edits the code of all the files in the app folder & app.html & app.js so any manual edits you make on the actual file will always be overwritten... This is actually displayed as comment in each file eg: ``` /* * File: app/controller/MyController.js * * This file was generated by Sencha Architect version 2.1.0. * http://www.sencha.com/products/architect/ * * This file requires use of the Sencha Touch 2.0.x library, under independent license. * License of Sencha Architect does not include license for Sencha Touch 2.0.x. For more * details see http://www.sencha.com/license or contact [email protected]. * * This file will be auto-generated each and everytime you save your project. * * Do NOT hand edit this file. */ ``` I hope this was what you were looking for
50,905
One channel of my split audio is cutting out when the cables are plugged into two devices. I have a computer audio output that I have [this splitter](https://rads.stackoverflow.com/amzn/click/com/B000067RC4) connected to. From the splitter I connect a cable to a standard headset. Then I connect the other cable from the splitter to [this bass amplifier](https://rads.stackoverflow.com/amzn/click/com/B00HWINLAE)'s Input 1/4" jack (with an adapter). This is when the issue occurs. Before I plug into the amp, the headset has sound coming from both ears. After I plug in, all sound stops coming through the right ear of the headset. Unplugging from the amp restores the sound to the right ear. I can't duplicate the output into two separate jacks, so it has to be split from a single one somehow. The behavior is also not limited to that single headset, splitter, or cables. I've tried switching out every part of the setup except for the bass (since I only have one.) I'm sure this is some normal phenomenon, but I can't find anything about it to save my life.
2021/08/24
[ "https://sound.stackexchange.com/questions/50905", "https://sound.stackexchange.com", "https://sound.stackexchange.com/users/34185/" ]
Guitar amp input impedance ≅ 1MΩ Headset impedance ≅ 50 - 300Ω Totally mis-matched. However, this doesn't add up to what you're hearing. The amp should be the one suffering lack of signal, though it can make up the gain significantly. Best guess, as your amp is mono, it is shorting one side of the stereo signal, resulting in loss of one side on the headset. In short [pardon the pun] - you really don't want to be doing it that way. Splitters like this really need to be splitting to two very similar devices. You'll need some kind of passive mixer at least to do this even vaguely properly.
Welcome to Sound Design. Your bass amp is mono. The 1/4" input jack is mono. When you plug in to the 1/4" jack it shorts out your right channel so you lose the sound in the right channel of your headset. This amp also has a 3.5 mm (1/8") auxiliary jack which accepts a stereo connection. Use that instead of the 1/4" main input and your headset should retain the right channel. (Inside the amplifier the left and right channels will be combined to give a mono sound output. However this will be done in a way that doesn't short out the right channel.)
41,234,161
Given two `numpy` arrays of `nx3` and `mx3`, what is an efficient way to determine the row indices (counter) wherein the rows are common in the two arrays. For instance I have the following solution, which is significantly slow for not even much larger arrays ``` def arrangment(arr1,arr2): hits = [] for i in range(arr2.shape[0]): current_row = np.repeat(arr2[i,:][None,:],arr1.shape[0],axis=0) x = current_row - arr1 for j in range(arr1.shape[0]): if np.isclose(x[j,0],0.0) and np.isclose(x[j,1],0.0) and np.isclose(x[j,2],0.0): hits.append(j) return hits ``` It checks if rows of `arr2` exist in `arr1` and returns the row indices of `arr1` where the rows match. I need this arrangement to be always sequentially ascending in terms of rows of `arr2`. For instance given ``` arr1 = np.array([[-1., -1., -1.], [ 1., -1., -1.], [ 1., 1., -1.], [-1., 1., -1.], [-1., -1., 1.], [ 1., -1., 1.], [ 1., 1., 1.], [-1., 1., 1.]]) arr2 = np.array([[-1., 1., -1.], [ 1., 1., -1.], [ 1., 1., 1.], [-1., 1., 1.]]) ``` The function should return: ``` [3, 2, 6, 7] ```
2016/12/20
[ "https://Stackoverflow.com/questions/41234161", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6345567/" ]
quick and dirty answer ``` (arr1[:, None] == arr2).all(-1).argmax(0) array([3, 2, 6, 7]) ``` --- Better answer Takes care of chance a row in `arr2` doesn't match anything in `arr1` ``` t = (arr1[:, None] == arr2).all(-1) np.where(t.any(0), t.argmax(0), np.nan) array([ 3., 2., 6., 7.]) ``` --- As pointed out by @Divakar `np.isclose` accounts for rounding error in comparing floats ``` t = np.isclose(arr1[:, None], arr2).all(-1) np.where(t.any(0), t.argmax(0), np.nan) ```
I had a similar [problem in the past](https://stackoverflow.com/questions/35234571/finding-a-set-of-indices-that-maps-the-rows-of-one-numpy-ndarray-to-another) and I came up with a fairly optimised solution for it. First you need a generalisation of `numpy.unique` for multidimensional arrays, which for the sake of completeness I would [copy](https://stackoverflow.com/a/35242447/2750396) it here ``` def unique2d(arr,consider_sort=False,return_index=False,return_inverse=False): """Get unique values along an axis for 2D arrays. input: arr: 2D array consider_sort: Does permutation of the values within the axis matter? Two rows can contain the same values but with different arrangements. If consider_sort is True then those rows would be considered equal return_index: Similar to numpy unique return_inverse: Similar to numpy unique returns: 2D array of unique rows If return_index is True also returns indices If return_inverse is True also returns the inverse array """ if consider_sort is True: a = np.sort(arr,axis=1) else: a = arr b = np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * a.shape[1]))) if return_inverse is False: _, idx = np.unique(b, return_index=True) else: _, idx, inv = np.unique(b, return_index=True, return_inverse=True) if return_index == False and return_inverse == False: return arr[idx] elif return_index == True and return_inverse == False: return arr[idx], idx elif return_index == False and return_inverse == True: return arr[idx], inv else: return arr[idx], idx, inv ``` Now all you need is to concatenate (`np.vstack`) your arrays and find the unique rows. The reverse mapping together with `np.searchsorted` will give you the indices you need. So lets write another function similar to `numpy.in2d` but for multidimensional (2D) arrays ``` def in2d_unsorted(arr1, arr2, axis=1, consider_sort=False): """Find the elements in arr1 which are also in arr2 and sort them as the appear in arr2""" assert arr1.dtype == arr2.dtype if axis == 0: arr1 = np.copy(arr1.T,order='C') arr2 = np.copy(arr2.T,order='C') if consider_sort is True: sorter_arr1 = np.argsort(arr1) arr1 = arr1[np.arange(arr1.shape[0])[:,None],sorter_arr1] sorter_arr2 = np.argsort(arr2) arr2 = arr2[np.arange(arr2.shape[0])[:,None],sorter_arr2] arr = np.vstack((arr1,arr2)) _, inv = unique2d(arr, return_inverse=True) size1 = arr1.shape[0] size2 = arr2.shape[0] arr3 = inv[:size1] arr4 = inv[-size2:] # Sort the indices as they appear in arr2 sorter = np.argsort(arr3) idx = sorter[arr3.searchsorted(arr4, sorter=sorter)] return idx ``` Now all you need to do is call `in2d_unsorted` with your input parameters ``` >>> in2d_unsorted(arr1,arr2) array([ 3, 2, 6, 7]) ``` While may not be fully optimised this approach is much faster. 
Let's benchmark it against `@piRSquared`s solutions ``` def indices_piR(arr1,arr2): t = np.isclose(arr1[:, None], arr2).all(-1) return np.where(t.any(0), t.argmax(0), np.nan) ``` with the following arrays ``` n=150 arr1 = np.random.permutation(n).reshape(n//3, 3) idx = np.random.permutation(n//3) arr2 = arr1[idx] In [13]: np.allclose(in2d_unsorted(arr1,arr2),indices_piR(arr1,arr2)) True In [14]: %timeit indices_piR(arr1,arr2) 10000 loops, best of 3: 181 µs per loop In [15]: %timeit in2d_unsorted(arr1,arr2) 10000 loops, best of 3: 85.7 µs per loop ``` Now, for `n=1500` ``` In [24]: %timeit indices_piR(arr1,arr2) 100 loops, best of 3: 10.3 ms per loop In [25]: %timeit in2d_unsorted(arr1,arr2) 1000 loops, best of 3: 403 µs per loop ``` and for `n=15000` ``` In [28]: %timeit indices_piR(A,B) 1 loop, best of 3: 1.02 s per loop In [29]: %timeit in2d_unsorted(arr1,arr2) 100 loops, best of 3: 4.65 ms per loop ``` So for large`ish` arrays this is over **200X** faster compared to `@piRSquared`'s vectorised solution.
684,882
Environment ----------- Operating System: CentOS Linux 8 Virtual Machine running on VMWare Workstation 16 Pro on Windows 11 host. Question -------- I recently had an application lockup while using my VM and I had to power down the VM to recover. When I restarted the VM, I no longer had network access. In Gnome, I lost the network section of the settings application, and if I try to enable the interface using `nmtui` I get `Could not activate connection: Connection 'Host NAT' is not available on device ens160 because device is strictly unmanaged` The device appears in the nmcli status list as type `ethernet` and state `unmanaged`. `ip a` lists the interface with the state UP, but doesn't have an IP address. I tried: * `nmcli device set ens160 managed yes` - No change (still shows unmanaged) * manually editing `ifconfig-ens160` but the settings appear correct * Restarting the Guest OS * Restarting the Host OS One more note: None of the other VMs are showing the same symptoms, so I don't believe its the host OS or the VM configuration. I can't figure out how to restore my interface. Thoughts? Update ------ The interfaces came up when I resumed the VM this morning, so I still don't know what caused it. To answer Alex's questions however: ifcfg-ens160: ``` TYPE=Ethernet PROXY_METHOD=none BROWSER_ONLY=no BOOTPROTO=dhcp DEFROUTE=yes IPV4_FAILURE_FATAL=yes IPV6_DISABLED=yes IPV6INIT=no NAME="Host NAT" UUID=89af5f75-265c-4766-891e-01003ef5a906 DEVICE=ens160 ONBOOT=yes ``` Output of `nmcli con up ens160` ``` Error: unknown connection 'ens160'. ``` Output of `nmcli con up "Host NAT"` ``` Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7) ```
2022/01/03
[ "https://unix.stackexchange.com/questions/684882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96613/" ]
I was able to resolve my issue by suspending the virtual machine and resuming it. I can't quite explain what happened, but it would appear that VMWare changes the state of the network interfaces when suspending a VM and in my case, it wasn't resumed properly. I've had it happen a second time and the suspend/resume approach solved it again.
Did you clone your VM? I also had the same issue and the same solution worked. In my case I had cloned a VM from an existing Linux VM. It's better to do the basic configuration on the primary VM from which you will be cloning. The following article is a little old but addresses the same context. <https://kb.vmware.com/s/article/2002767> Thanks.
41,877,490
I'm trying to get the create function to have the user selected values entered into the database. When the create button is pushed, no error is thrown but, the data is not populated. I'm pretty sure my frequency fields are causing the issue but have been unable to come with a solution. There are two different types of frequencies a user can select depending upon their "Notification Name" selection. One selection has 3 separate fields for a numerical value, time frame (week, month etc.), and a before/after selection. The other simply states instantaneous as a static text field. Regardless of which option is chosen the frequency data should be populated into one cell within the database which is then separated using piping where necessary. I'm still pretty new to C# MVC so any help is greatly appreciated. Controller: ``` [HttpPost] [ValidateAntiForgeryToken] public ActionResult Create([Bind(Include = "Id,notificationType1,recipientTypeId,frequency")] NotificationType notificationType) { if (ModelState.IsValid) { db.NotificationType.Add(notificationType); db.SaveChanges(); return RedirectToAction("Create"); } ViewBag.recipientTypeId = new SelectList(db.RecipientType, "Id", "recipientRole", notificationType.recipientTypeId); return View(notificationType); } ``` View ``` @using (Html.BeginForm()) { @Html.AntiForgeryToken() <div class="form-horizontal"> <hr /> @Html.ValidationSummary(true, "", new { @class = "text-danger" }) <div class="form-group"> @Html.LabelFor(model => model.notificationType1, "Notification Name", htmlAttributes: new { @class = "control-label col-md-2 helper-format" }) <div class="col-md-10" id="type_selection"> @Html.DropDownList("notificationType1", new List<SelectListItem> { new SelectListItem { Text = "Make a Selection", Value="" }, new SelectListItem { Text = "Incomplete Documents", Value= "Incomplete Documents" }, new SelectListItem { Text = "All Documents Complete", Value = "All Documents Complete" }, new SelectListItem { Text = "Documents Requiring Action", Value = "Documents Requiring Action" } }, new { @class = "helper-format", @id = "value_select", style = "font-family: 'Roboto', Sans Serif;" }) @Html.ValidationMessageFor(model => model.notificationType1, "", new { @class = "text-danger" }) </div> </div> <div class="form-group" id="frequency_group"> @Html.LabelFor(model => model.frequency, "Frequency", htmlAttributes: new { @class = "control-label col-md-2" }) <div class="col-sm-3" id="frequency_group"> @Html.TextBoxFor(model => model.frequency, new { @class = "textbox-width", @placeholder = "42" }) @Html.DropDownList("frequency", new List<SelectListItem> { new SelectListItem { Text = "Day(s)", Value= "| Day"}, new SelectListItem { Text = "Week(s)", Value= "| Week"}, new SelectListItem { Text = "Month(s)", Value= "| Month"} }) @Html.DropDownList("frequency", new List<SelectListItem> { new SelectListItem { Text = "Before", Value= "| Before"}, new SelectListItem { Text = "After", Value= "| After"} }) </div> <p class="col-sm-2" id="psdatetext">The Beginning</p> </div> <div class="form-group" id="freq_instant"> @Html.LabelFor(model => model.frequency, "Frequency", htmlAttributes: new { @class = "control-label col-md-2" }) <div class="instant_text"> <p>Instantaneous</p></div> </div> <div class="form-group"> @Html.LabelFor(model => model.recipientTypeId, "Notification For", htmlAttributes: new { @class = "control-label col-md-2" }) <div class="col-md-10"> @Html.DropDownList("recipientTypeId", new List<SelectListItem> { new SelectListItem { Text = "Me", Value= "Me"}, new 
SelectListItem { Text = "Account Manager", Value="Account Manager" }, new SelectListItem { Text = "Candidate", Value= "Candidate"}, new SelectListItem { Text = "Recruiter", Value="Recruiter" }, new SelectListItem { Text = "Manager", Value= "Manager"} }) </div> </div> <div class="form-group"> <div class="col-md-offset-1 col-md-10"> <div id="hovercreate"> <button type="submit" value="CREATE" class="btn btn-primary" id="createbtn">CREATE</button> </div> </div> </div> </div> } ``` JS for frequency options ``` @Scripts.Render("~/bundles/jquery") <script type="text/javascript"> $(document).ready(function () { $('#frequency_group').hide() $('#freq_instant').hide() $('#value_select').change(function () { var selection = $('#value_select').val(); $('#frequency_group').hide(); switch (selection) { case 'Incomplete Documents': $('#frequency_group').show(); break; case 'All Documents Complete': $('#frequency_group').show(); break; } }); $('#value_select').on('change', function () { if (this.value == 'Documents Requiring Action') { $("#freq_instant").show(); } else { $("#freq_instant").hide(); } }); }); ```
2017/01/26
[ "https://Stackoverflow.com/questions/41877490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6104319/" ]
Assuming you want the first 10% of rows ordered by Value (desc), you can achieve that by using window functions: ``` select * from ( select ID, Value, COUNT(*) over (partition by ID) as countrows, ROW_NUMBER() over (partition by ID order by Value desc) as rowno from mytable) as innertab where rowno <= floor(countrows*0.1+0.9) order by ID, rowno ``` The floor-thing brings 1 row per 1-10 rows, 2 rows for 11-20 rows and so on.
Alternatively you could use CROSS APPLY and specify TOP n PERCENT ``` SELECT x.* FROM ( SELECT DISTINCT ID FROM tab ) a CROSS APPLY ( SELECT TOP 10 PERCENT ID, Value FROM tab b WHERE b.ID = a.ID) x ``` TOP n PERCENT will produce at least one row.
62,270
I have read the comments from [this question](https://meta.stackexchange.com/questions/62191/questions-getting-closed-too-fast-within-hours-give-it-some-time-to-live) which gave me another proposal. So some people are 'annoyed' if some type of questions are on the front page. Is this a real problem? SO gets a lot of questions that the front page is a moving list. If you want the question out of the front page, have a voting system for moving it instead of closing it. A lot of people do not get the fact that a closed question is basically a signal to users NOT to bother adding more comments or answers to the question. A real DISSERVICE to the poster who is seeking a badly needed answer. A lot of people do not know about re-opening a question or not even care to offer any more help. (Why waste time with a closed questions. Let me run to the new ones) Having a single action, closing, is killing some good questions. Have some other actions like moving questions to another pile or away from the front page. Yeah I know about the bounty system. You lose the points even if you didn't get any answers. It's discouraging to use it unless you really need an answer.
2010/08/25
[ "https://meta.stackexchange.com/questions/62270", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/134444/" ]
A closed question isn't moved off the front page, though, and can be bumped back onto it. Conversely, [a negatively voted question is barred from the front page](https://meta.stackexchange.com/questions/62261/are-edited-questions-no-longer-bumped-to-the-front-page/62262#62262), so we already have a non-closing-based mechanism for removing questions from the front page. For the purpose of front-page management, I find that the downvotes are good enough and that we don't need to install yet another voting system separate from it.
Your points are valid, but I think your proposed solution is wrong. Implementing a separate voting system might alleviate some of the symptoms, but doesn't address the real problem. What we really need to do is educate users about what question closure really means. We should be more active in letting people know about the reopen feature, and the criteria that make a question a candidate for reopening. I know I read closed questions, and I encourage others to do the same. **I don't buy that SO users won't "even care to offer any more help" for closed questions if they understand the system**; they're here to answer questions, and what is answering questions if not caring and helping? You're right that bounties aren't the solution here; the contract with bounties is rep for increased exposure, not necessarily rep for good answers. (I've actually had a related feature request for closed questions brewing in my head for a few months; I haven't quite worked out the right wording and details for it yet.)
62,270
I have read the comments from [this question](https://meta.stackexchange.com/questions/62191/questions-getting-closed-too-fast-within-hours-give-it-some-time-to-live) which gave me another proposal. So some people are 'annoyed' if some type of questions are on the front page. Is this a real problem? SO gets a lot of questions that the front page is a moving list. If you want the question out of the front page, have a voting system for moving it instead of closing it. A lot of people do not get the fact that a closed question is basically a signal to users NOT to bother adding more comments or answers to the question. A real DISSERVICE to the poster who is seeking a badly needed answer. A lot of people do not know about re-opening a question or not even care to offer any more help. (Why waste time with a closed questions. Let me run to the new ones) Having a single action, closing, is killing some good questions. Have some other actions like moving questions to another pile or away from the front page. Yeah I know about the bounty system. You lose the points even if you didn't get any answers. It's discouraging to use it unless you really need an answer.
2010/08/25
[ "https://meta.stackexchange.com/questions/62270", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/134444/" ]
Closing is not about whether or not a question is appropriate for the front page, it is about whether or not it is appropriate for the site at all. A separate but related issue. If a *question* is not appropriate for the front page, then it is **not** appropriate for the site at all. On Stack Overflow, all content is first tier. There isn't a section for good questions and a section for "off topic" questions. There are **only** good questions. Anything else *should* be removed and closed. > > A lot of people do not get the fact that a closed question is basically a signal to users NOT to bother adding more comments or answers to the question. > > > That is *literally* the purpose of closing questions. When I close something as a duplicate, or off topic, or subjective, I am saying that this questions does **not** belong and should **not** be answered or commented on. Adding a "move off front page" action implies that it is *acceptable* to have content which is *not good enough* to be presentable. We don't have tiers. It's got to be good enough or it's got to go.
Your points are valid, but I think your proposed solution is wrong. Implementing a separate voting system might alleviate some of the symptoms, but doesn't address the real problem. What we really need to do is educate users about what question closure really means. We should be more active in letting people know about the reopen feature, and the criteria that make a question a candidate for reopening. I know I read closed questions, and I encourage others to do the same. **I don't buy that SO users won't "even care to offer any more help" for closed questions if they understand the system**; they're here to answer questions, and what is answering questions if not caring and helping? You're right that bounties aren't the solution here; the contract with bounties is rep for increased exposure, not necessarily rep for good answers. (I've actually had a related feature request for closed questions brewing in my head for a few months; I haven't quite worked out the right wording and details for it yet.)
43,880,219
Probably very simple but got me stumped. I have an overlay menu and some navigation links inside it. What I'd like to do is a simple "slide up" text effect, wherein the text seems to "rise up" from the baseline. It's a commonly seen effect, and I have achieved it playing with line-height and a super simple animation. The jQuery: basically, the idea is that when somebody clicks on the menu icon, the text appears AS it slides up. **The problem: the effect "works", however, when I open the menu, the text actually appears for a split second BEFORE the effect kicks in.** HTML: ``` <ul> <li><span>Hello</span></li> <li><span>Dog</span></li> </ul> ``` CSS: ``` li { overflow: hidden; line-height: 1; } .reveal { display: block; animation: reveal 1.5s cubic-bezier(0.77, 0, 0.175, 1) 0.5s; } @keyframes reveal { 0% { transform: translate(0, 100%); } 100% { transform: translate(0, 0); } } ``` JQUERY: ``` $(document).ready(function() { $('button').click(function () { //make overlay appear $('li span').addClass('reveal'); //adds the animation to the text }); }); ``` This [jsFiddle](https://jsfiddle.net/m4rbot/v7qvdu46/) will show you the effect I'm going for and the problem. Note: the code is super broken, I just need help with the effect itself. I understand why it happens: I'm telling the browser to translate the text 100% AFTER it has already appeared on screen, without hiding it first. How do I hide the text until the animation kicks in? Nothing I've tried has worked. I just want the text to be invisible UNTIL it slides up into view. What am I doing wrong?
2017/05/09
[ "https://Stackoverflow.com/questions/43880219", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7459478/" ]
You can use `animation-fill-mode: forwards` to maintain the end state of the animation. And then translate the starting state: ``` span{ transform: translate(0, 100%); } .reveal { display: block; animation: reveal 1.5s cubic-bezier(0.77, 0, 0.175, 1) 0.5s forwards; } ```
To start out in the animation's 0% state, you want to add that same setting to the span initially: `li span { transform: translate(0, 100%); }`
1,166,068
The **Perron Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$ Now I find my eigenvector, taking $v\_{11} = 1$, $v\_{21} = 1$ I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ $v\_1 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$ **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
For example, there are important applications in the theory of finite Markov chains. Among other things, it implies that a Markov chain with strictly positive transition probabilities has a unique stationary distribution.
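To make that concrete, here is a small sketch (my addition, with a made-up transition matrix): because all entries are strictly positive, the chain has a unique stationary distribution, and simple power iteration converges to it.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # row-stochastic, strictly positive transition matrix

pi = np.array([1.0, 0.0])    # any starting distribution
for _ in range(100):
    pi = pi @ P              # power iteration
print(pi)                    # ~[0.8333, 0.1667], the unique stationary distribution
print(pi @ P)                # unchanged: pi is a left eigenvector with eigenvalue 1
```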
First of all, the conclusion is more interesting if you also try an example where the matrix entries are still real, but not positive. For example, if $A$ is the $2\times2$ rotation matrix $$A=\pmatrix{\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta}$$ then you can check that the eigenvectors are no longer real, and there's no longer a unique eigenvalue of maximum modulus: the two eigenvalues are $e^{i\theta}$ and $e^{-i\theta}$. I second Robert Israel's nomination of Markov chains as a great application. I'll add the following, though. (I'll remark that you don't actually need the matrix to have positive entries - just for some power of it to have positive entries.) Suppose you have a finite connected graph (or strongly connected digraph) such that the gcd of the cycle lengths is $1$. Then if $A$ is the adjacency matrix for the graph, some power $A^r$ will have positive entries so Perron-Frobenius applies. Since the entries of $A^n$ count paths in the graph, we conclude that for any pair of vertices in the graph, the number of paths of length $n$ between them is $c\lambda\_1^n(1+o(1))$, where $\lambda\_1$ is the Perron-Frobenius eigenvalue and $c$ is a computable constant. Applications of this particular consequence of Perron-Frobenius include asymptotic growth rate results for "regular languages," which are defined in terms of paths on graphs. In particular, there is a cute formula giving the $n$-th Fibonacci number as $$F\_n=\left\langle \frac1{\sqrt5}\phi^n\right\rangle$$ where $\phi$ is the "golden ratio" $(1+\sqrt5)/2$ and $\langle\cdot\rangle$ denotes "closest integer." This formula, and many others like it, become less surprising once you check that $$\pmatrix{1&1\\1&0}^n=\pmatrix{F\_{n+1}&F\_n\\ F\_n&F\_{n-1}}$$ and note that $\phi$ is the Perron-Frobenius eigenvalue (and the other eigenvalue is smaller than $1$ in absolute value.)
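A quick numerical check of that closest-integer formula (my addition, not part of the answer):

```python
phi = (1 + 5 ** 0.5) / 2

fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])   # F_0 .. F_19

for n in range(1, 20):
    # F_n is the integer closest to phi^n / sqrt(5)
    assert fib[n] == round(phi ** n / 5 ** 0.5)
print("formula verified for n = 1..19")
```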
1,166,068
The **Perron Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$ Now I find my eigenvector, taking $v\_{11} = 1$, $v\_{21} = 1$ I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ $v\_1 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$ **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
First of all, the conclusion is more interesting if you also try an example where the matrix entries are still real, but not positive. For example, if $A$ is the $2\times2$ rotation matrix $$A=\pmatrix{\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta}$$ then you can check that the eigenvectors are no longer real, and there's no longer a unique eigenvalue of maximum modulus: the two eigenvalues are $e^{i\theta}$ and $e^{-i\theta}$. I second Robert Israel's nomination of Markov chains as a great application. I'll add the following, though. (I'll remark that you don't actually need the matrix to have positive entries - just for some power of it to have positive entries.) Suppose you have a finite connected graph (or strongly connected digraph) such that the gcd of the cycle lengths is $1$. Then if $A$ is the adjacency matrix for the graph, some power $A^r$ will have positive entries so Perron-Frobenius applies. Since the entries of $A^n$ count paths in the graph, we conclude that for any pair of vertices in the graph, the number of paths of length $n$ between them is $c\lambda\_1^n(1+o(1))$, where $\lambda\_1$ is the Perron-Frobenius eigenvalue and $c$ is a computable constant. Applications of this particular consequence of Perron-Frobenius include asymptotic growth rate results for "regular languages," which are defined in terms of paths on graphs. In particular, there is a cute formula giving the $n$-th Fibonacci number as $$F\_n=\left\langle \frac1{\sqrt5}\phi^n\right\rangle$$ where $\phi$ is the "golden ratio" $(1+\sqrt5)/2$ and $\langle\cdot\rangle$ denotes "closest integer." This formula, and many others like it, become less surprising once you check that $$\pmatrix{1&1\\1&0}^n=\pmatrix{F\_{n+1}&F\_n\\ F\_n&F\_{n-1}}$$ and note that $\phi$ is the Perron-Frobenius eigenvalue (and the other eigenvalue is smaller than $1$ in absolute value.)
I'll add an answer from the world of symbolic dynamics and tiling theory. If you have a substitution on some alphabet $\mathcal{A}=\{a\_1,\ldots,a\_k\}$, say something like the Fibonacci substitution $$\sigma\colon\begin{cases}a\mapsto ab\\ b\mapsto a\end{cases}$$ then you have an associated *transition matrix* $M\_{\sigma}$ where $m\_{ij}$ is the number of times the letter $a\_i$ appears in the word $\sigma(a\_j)$. So for example the transition matrix for the Fibonacci substitution is $M\_{\sigma}=\left(\begin{smallmatrix}1&1\\ 1&0\end{smallmatrix}\right)$. Whenever there exists a $k$ such that $M\_{\sigma}^k$ has strictly positive entries, we say that $\sigma$ is a *primitive* substitution. Transition matrices of primitive substitutions therefore satisfy the hypothesis of the Perron Frobenius theorem and we can say the following thanks to it. > > **Theorem** > Let $|w|$ be the length of a word. > If $\sigma$ is a primitive substitution, then the PF eigenvalue $\lambda\_{PF}$ has the property that $\lim\_{n\to \infty} |\sigma^n(a\_i)|/\lambda\_{PF} = 1$ for any $a\_i\in\mathcal{A}$. That is, the length of the words $\sigma^n(a\_i)$ are roughly $\lambda\_{PF}^n$. > > > If $v\_{PF}$ is the associated dominant right eigenvector then the frequency of letters in the limit word $\sigma^{\infty}(a\_j)$ is given by $\mbox{freq}(a\_i)=(v\_{PF})\_i/\|v\_{PF}\|\_1$, and this is independent of choice of seed letter $a\_j$. > > > We can use the left eigenvector to assign lengths to intervals in the line which are labelled by the symbols $a\_i$, and we can then think of $\lambda\_{PF}$ as being a natural *expansion factor* for a geometric substitution, whereby we apply $\sigma$ to the interval assigned the letter $a\_i$ by expanding it by a factor of $\lambda\_{PF}$ and then cutting it up into intervals of lengths associated to the eigenvector, and in the order prescribed by the substitution. A good paper to read more about this stuff, including how we can similarly do substitutions in higher dimensions is [this paper](http://www.sciencedirect.com/science/article/pii/S0723086908000042#) by Natalie Priebe-Frank.
1,166,068
The **Perron Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$ Now I find my eigenvector, taking $v\_{11} = 1$, $v\_{21} = 1$ I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ $v\_1 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$ **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
The Perron-Frobenius theorem is used in Google's PageRank. It's one of the things that make sure that the algorithm works. [Here](http://www.math.pku.edu.cn/teachers/yaoy/Fall2011/lecture07.pdf) it's explained why the theorem is useful; you can also find a lot of information with easy explanations on Google.
To check the stability of dynamical systems, one can try to search for a real-valued positive function called a [Lyapunov function](https://en.wikipedia.org/wiki/Lyapunov_function). If the dynamical system is [LTI](https://en.wikipedia.org/wiki/Linear_time-invariant_theory), that is, of the form $$ \dot{x}(t) = Ax(t) $$ or $$ x(k+1) = Ax(k),$$ one can restrict the search to quadratic Lyapunov functions, that is, to functions $V: \mathbb{R}^n \rightarrow \mathbb{R}$ of the form $V(x) = x^T P x$. If the LTI system is also positive (that is, the state is always guaranteed to be in the positive orthant of $\mathbb{R}^n$), it is possible to use the **Perron-Frobenius theorem** to prove that this search can be restricted to linear functions, simplifying the problem of stability analysis even further. For details, see slides 31-37 of Prof. Boyd's [lecture notes on Perron-Frobenius theory](https://stanford.edu/class/ee363/lectures/pf.pdf). You can also check the following reference for a broad survey of applications: Pillai, S. Unnikrishna, Torsten Suel, and Seunghun Cha. "[*The Perron-Frobenius theorem: some of its applications*](http://ieeexplore.ieee.org/abstract/document/1406483/)." IEEE Signal Processing Magazine 22.2 (2005): 62-75.
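As a rough illustration of the positive-systems point, here is a minimal sketch assuming Python with NumPy and a discrete-time positive system $x(k+1)=Ax(k)$; the matrix entries are made up and the code is not taken from Boyd's notes or the cited survey:

```
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.6]])              # entrywise positive; example values only

vals, left_vecs = np.linalg.eig(A.T)    # eigenvectors of A^T are left eigenvectors of A
k = int(np.argmax(vals.real))
lam = vals[k].real                      # Perron-Frobenius eigenvalue (here 0.7)
c = np.abs(left_vecs[:, k].real)        # positive left PF eigenvector

# V(x) = c^T x is a linear Lyapunov function for x(k+1) = A x(k) when lam < 1:
# V(A x) = c^T A x = lam * c^T x = lam * V(x) < V(x) for every x > 0.
print("PF eigenvalue:", lam, "-> stable:", lam < 1)
```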
1,166,068
The **Perron-Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$. Now I find my eigenvectors: taking $v\_{11} = 1$, $v\_{21} = 1$, I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ and $v\_2 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$. **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
The Perron-Frobenius theorem is used in Google's PageRank. It's one of the things that make sure that the algorithm works. [Here](http://www.math.pku.edu.cn/teachers/yaoy/Fall2011/lecture07.pdf) it's explained why the theorem is useful; you can also find a lot of information with easy explanations on Google.
I'll add an answer from the world of symbolic dynamics and tiling theory. If you have a substitution on some alphabet $\mathcal{A}=\{a\_1,\ldots,a\_k\}$, say something like the Fibonacci substitution $$\sigma\colon\begin{cases}a\mapsto ab\\ b\mapsto a\end{cases}$$ then you have an associated *transition matrix* $M\_{\sigma}$ where $m\_{ij}$ is the number of times the letter $a\_i$ appears in the word $\sigma(a\_j)$. So for example the transition matrix for the Fibonacci substitution is $M\_{\sigma}=\left(\begin{smallmatrix}1&1\\ 1&0\end{smallmatrix}\right)$. Whenever there exists a $k$ such that $M\_{\sigma}^k$ has strictly positive entries, we say that $\sigma$ is a *primitive* substitution. Transition matrices of primitive substitutions therefore satisfy the hypothesis of the Perron-Frobenius theorem and we can say the following thanks to it. > > **Theorem** > Let $|w|$ be the length of a word. > If $\sigma$ is a primitive substitution, then the PF eigenvalue $\lambda\_{PF}$ has the property that $\lim\_{n\to \infty} |\sigma^n(a\_i)|^{1/n} = \lambda\_{PF}$ for any $a\_i\in\mathcal{A}$. That is, the lengths of the words $\sigma^n(a\_i)$ are roughly $\lambda\_{PF}^n$. > > > If $v\_{PF}$ is the associated dominant right eigenvector then the frequency of letters in the limit word $\sigma^{\infty}(a\_j)$ is given by $\mbox{freq}(a\_i)=(v\_{PF})\_i/\|v\_{PF}\|\_1$, and this is independent of the choice of seed letter $a\_j$. > > > We can use the left eigenvector to assign lengths to intervals in the line which are labelled by the symbols $a\_i$, and we can then think of $\lambda\_{PF}$ as being a natural *expansion factor* for a geometric substitution, whereby we apply $\sigma$ to the interval assigned the letter $a\_i$ by expanding it by a factor of $\lambda\_{PF}$ and then cutting it up into intervals of lengths associated to the eigenvector, and in the order prescribed by the substitution. A good paper to read more about this stuff, including how we can similarly do substitutions in higher dimensions, is [this paper](http://www.sciencedirect.com/science/article/pii/S0723086908000042#) by Natalie Priebe-Frank.
1,166,068
The **Perron-Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$. Now I find my eigenvectors: taking $v\_{11} = 1$, $v\_{21} = 1$, I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ and $v\_2 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$. **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
For example, there are important applications in the theory of finite Markov chains. Among other things, it implies that a Markov chain with strictly positive transition probabilities has a unique stationary distribution.
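A small sketch of that claim, assuming Python with NumPy and a made-up strictly positive transition matrix: the stationary distribution comes out as the (normalised) left Perron-Frobenius eigenvector.

```
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])        # rows sum to 1, all entries strictly positive

vals, vecs = np.linalg.eig(P.T)        # left eigenvectors of P
k = int(np.argmax(vals.real))          # the PF eigenvalue of a stochastic matrix is 1
pi = np.abs(vecs[:, k].real)
pi = pi / pi.sum()                     # normalise to a probability vector

print(pi)                              # stationary distribution
print(pi @ P)                          # equals pi (up to rounding)
```

Because all entries are strictly positive, the eigenvalue $1$ is simple, which is exactly why the stationary distribution is unique.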
I'll add an answer from the world of symbolic dynamics and tiling theory. If you have a substitution on some alphabet $\mathcal{A}=\{a\_1,\ldots,a\_k\}$, say something like the Fibonacci substitution $$\sigma\colon\begin{cases}a\mapsto ab\\ b\mapsto a\end{cases}$$ then you have an associated *transition matrix* $M\_{\sigma}$ where $m\_{ij}$ is the number of times the letter $a\_i$ appears in the word $\sigma(a\_j)$. So for example the transition matrix for the Fibonacci substitution is $M\_{\sigma}=\left(\begin{smallmatrix}1&1\\ 1&0\end{smallmatrix}\right)$. Whenever there exists a $k$ such that $M\_{\sigma}^k$ has strictly positive entries, we say that $\sigma$ is a *primitive* substitution. Transition matrices of primitive substitutions therefore satisfy the hypothesis of the Perron-Frobenius theorem and we can say the following thanks to it. > > **Theorem** > Let $|w|$ be the length of a word. > If $\sigma$ is a primitive substitution, then the PF eigenvalue $\lambda\_{PF}$ has the property that $\lim\_{n\to \infty} |\sigma^n(a\_i)|^{1/n} = \lambda\_{PF}$ for any $a\_i\in\mathcal{A}$. That is, the lengths of the words $\sigma^n(a\_i)$ are roughly $\lambda\_{PF}^n$. > > > If $v\_{PF}$ is the associated dominant right eigenvector then the frequency of letters in the limit word $\sigma^{\infty}(a\_j)$ is given by $\mbox{freq}(a\_i)=(v\_{PF})\_i/\|v\_{PF}\|\_1$, and this is independent of the choice of seed letter $a\_j$. > > > We can use the left eigenvector to assign lengths to intervals in the line which are labelled by the symbols $a\_i$, and we can then think of $\lambda\_{PF}$ as being a natural *expansion factor* for a geometric substitution, whereby we apply $\sigma$ to the interval assigned the letter $a\_i$ by expanding it by a factor of $\lambda\_{PF}$ and then cutting it up into intervals of lengths associated to the eigenvector, and in the order prescribed by the substitution. A good paper to read more about this stuff, including how we can similarly do substitutions in higher dimensions, is [this paper](http://www.sciencedirect.com/science/article/pii/S0723086908000042#) by Natalie Priebe-Frank.
1,166,068
The **Perron-Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$. Now I find my eigenvectors: taking $v\_{11} = 1$, $v\_{21} = 1$, I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ and $v\_2 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$. **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
The Perron-Frobenius theorem is used in Google's PageRank. It's one of the things that make sure that the algorithm works. [Here](http://www.math.pku.edu.cn/teachers/yaoy/Fall2011/lecture07.pdf) it's explained why the theorem is useful; you can also find a lot of information with easy explanations on Google.
One of the nice things about it is that for $v$ a vector with all positive entries, if we let $v\_k = A^k v$, then $\lim\_{k\to\infty} \frac{v\_k}{\| v\_k \|}$ exists and is an eigenvector for the Perron-Frobenius eigenvalue.
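That statement is what power iteration exploits. A minimal sketch, assuming Python with NumPy and an arbitrary positive matrix (the entries and the 50 iterations are illustrative choices, not from the answer):

```
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])             # any matrix with positive entries will do

v = np.array([1.0, 1.0])               # any starting vector with positive entries
for _ in range(50):
    v = A @ v
    v = v / np.linalg.norm(v)          # renormalise so the iterates stay bounded

print(v)                               # approximate PF eigenvector
print((A @ v) / v)                     # both ratios are close to the PF eigenvalue
```

The gap between the Perron-Frobenius eigenvalue and the other eigenvalues is what makes the iteration converge from any positive starting vector.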
1,166,068
The **Perron-Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$. Now I find my eigenvectors: taking $v\_{11} = 1$, $v\_{21} = 1$, I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ and $v\_2 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$. **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
For example, there are important applications in the theory of finite Markov chains. Among other things, it implies that a Markov chain with strictly positive transition probabilities has a unique stationary distribution.
One of the nice things about it is that for $v$ a vector with all positive entries, if we let $v\_k = A^k v$, then $\lim\_{k\to\infty} \frac{v\_k}{\| v\_k \|}$ exists and is an eigenvector for the Perron-Frobenius eigenvalue.
1,166,068
The **Perron-Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$. Now I find my eigenvectors: taking $v\_{11} = 1$, $v\_{21} = 1$, I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ and $v\_2 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$. **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
I would like to add an engineering application. I am a researcher from a wireless communication background. One of the most fundamental problems in our area is the so-called power control problem, wherein a mobile tower transmitting individual data to several mobiles has to minimize its own power consumption while ensuring a minimum level of signal quality at each mobile. This can be formalized as a non-convex optimization problem. However, using theoretical tools based on Perron-Frobenius theory, you can find a simple, iterative algorithm which finds the true solution to this problem. This result has been a breakthrough in our field. The Perron-Frobenius eigenvector gives the (positive) numbers that state how much power the mobile tower has to use to serve its users. A well-cited paper in this regard is "A framework for uplink power control in cellular radio systems", in case you are interested in reading more about this.
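The kind of fixed-point iteration alluded to above can be sketched as follows, assuming Python with NumPy; the gain matrix, the SINR targets and the noise level are invented for the illustration and the code is not taken from the cited paper:

```
import numpy as np

G = np.array([[1.0, 0.1, 0.2],
              [0.1, 1.0, 0.1],
              [0.2, 0.1, 1.0]])        # G[i, j]: gain from transmitter j to receiver i
gamma = np.array([1.0, 1.5, 1.2])      # per-user target SINR (made up)
noise = 0.01

p = np.ones(3)                          # initial transmit powers
for _ in range(100):
    interference = G @ p - np.diag(G) * p + noise
    sinr = np.diag(G) * p / interference
    p = (gamma / sinr) * p              # scale each power toward its SINR target

print(p)                                # converged power allocation
print(np.diag(G) * p / (G @ p - np.diag(G) * p + noise))  # SINRs close to gamma
```

Whether such an iteration converges is itself a Perron-Frobenius question: it does when the spectral radius of the target-scaled, normalised cross-gain matrix is below one.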
To check the stability of dynamical systems, one can try to search for a real-valued positive function called a [Lyapunov function](https://en.wikipedia.org/wiki/Lyapunov_function). If the dynamical system is [LTI](https://en.wikipedia.org/wiki/Linear_time-invariant_theory), that is, of the form $$ \dot{x}(t) = Ax(t) $$ or $$ x(k+1) = Ax(k),$$ one can restrict the search to quadratic Lyapunov functions, that is, to functions $V: \mathbb{R}^n \rightarrow \mathbb{R}$ of the form $V(x) = x^T P x$. If the LTI system is also positive (that is, the state is always guaranteed to be in the positive orthant of $\mathbb{R}^n$), it is possible to use the **Perron-Frobenius theorem** to prove that this search can be restricted to linear functions, simplifying the problem of stability analysis even further. For details, see slides 31-37 of Prof. Boyd's [lecture notes on Perron-Frobenius theory](https://stanford.edu/class/ee363/lectures/pf.pdf). You can also check the following reference for a broad survey of applications: Pillai, S. Unnikrishna, Torsten Suel, and Seunghun Cha. "[*The Perron-Frobenius theorem: some of its applications*](http://ieeexplore.ieee.org/abstract/document/1406483/)." IEEE Signal Processing Magazine 22.2 (2005): 62-75.
1,166,068
The **Perron-Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$. Now I find my eigenvectors: taking $v\_{11} = 1$, $v\_{21} = 1$, I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ and $v\_2 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$. **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
The Perron-Frobenius theorem is used in Google's PageRank. It's one of the things that make sure that the algorithm works. [Here](http://www.math.pku.edu.cn/teachers/yaoy/Fall2011/lecture07.pdf) it's explained why the theorem is useful; you can also find a lot of information with easy explanations on Google.
First of all, the conclusion is more interesting if you also try an example where the matrix entries are still real, but not positive. For example, if $A$ is the $2\times2$ rotation matrix $$A=\pmatrix{\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta}$$ then you can check that the eigenvectors are no longer real, and there's no longer a unique eigenvalue of maximum modulus: the two eigenvalues are $e^{i\theta}$ and $e^{-i\theta}$. I second Robert Israel's nomination of Markov chains as a great application. I'll add the following, though. (I'll remark that you don't actually need the matrix to have positive entries - just for some power of it to have positive entries.) Suppose you have a finite connected graph (or strongly connected digraph) such that the gcd of the cycle lengths is $1$. Then if $A$ is the adjacency matrix for the graph, some power $A^r$ will have positive entries so Perron-Frobenius applies. Since the entries of $A^n$ count paths in the graph, we conclude that for any pair of vertices in the graph, the number of paths of length $n$ between them is $c\lambda\_1^n(1+o(1))$, where $\lambda\_1$ is the Perron-Frobenius eigenvalue and $c$ is a computable constant. Applications of this particular consequence of Perron-Frobenius include asymptotic growth rate results for "regular languages," which are defined in terms of paths on graphs. In particular, there is a cute formula giving the $n$-th Fibonacci number as $$F\_n=\left\langle \frac1{\sqrt5}\phi^n\right\rangle$$ where $\phi$ is the "golden ratio" $(1+\sqrt5)/2$ and $\langle\cdot\rangle$ denotes "closest integer." This formula, and many others like it, become less surprising once you check that $$\pmatrix{1&1\\1&0}^n=\pmatrix{F\_{n+1}&F\_n\\ F\_n&F\_{n-1}}$$ and note that $\phi$ is the Perron-Frobenius eigenvalue (and the other eigenvalue is smaller than $1$ in absolute value.)
1,166,068
The **Perron-Frobenius theorem** states: > > Any square matrix $A$ with positive entries has a unique eigenvector with > positive entries (up to a multiplication by a positive scalar), and > the corresponding eigenvalue has multiplicity one and is strictly > greater than the absolute value of any other eigenvalue. > > > --- So I **tempted fate** using this matrix: $$ A =\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$ I find my eigenvalues to be $\lambda\_1 = 0, \lambda\_2 = 2$. Now I find my eigenvectors: taking $v\_{11} = 1$, $v\_{21} = 1$, I find $v\_1 = \begin{bmatrix} v\_{11} & v\_{12} \end{bmatrix}$ = $\begin{bmatrix} 1 & -1 \end{bmatrix}$ and $v\_2 = \begin{bmatrix} v\_{21} & v\_{22} \end{bmatrix}$ = $\begin{bmatrix} 1 & 1 \end{bmatrix}$. **This verifies the Perron-Frobenius theorem.** --- Now what is the great implication that every positive square matrix has a real eigenvector with an eigenvalue that is the largest of all eigenvalues? Can someone show me an application of this theorem?
2015/02/26
[ "https://math.stackexchange.com/questions/1166068", "https://math.stackexchange.com", "https://math.stackexchange.com/users/105951/" ]
For example, there are important applications in the theory of finite Markov chains. Among other things, it implies that a Markov chain with strictly positive transition probabilities has a unique stationary distribution.
I would like to add an engineering application. I am a researcher from a wireless communication background. One of the most fundamental problems in our area is the so-called power control problem, wherein a mobile tower transmitting individual data to several mobiles has to minimize its own power consumption while ensuring a minimum level of signal quality at each mobile. This can be formalized as a non-convex optimization problem. However, using theoretical tools based on Perron-Frobenius theory, you can find a simple, iterative algorithm which finds the true solution to this problem. This result has been a breakthrough in our field. The Perron-Frobenius eigenvector gives the (positive) numbers that state how much power the mobile tower has to use to serve its users. A well-cited paper in this regard is "A framework for uplink power control in cellular radio systems", in case you are interested in reading more about this.
64,918,101
My primary `nav li` elements shift downwards whenever I hover over them. I thought this was due to use of margin causing this, but I am still receiving this issue after removing margin use, and I'm not sure what it is. I know it's most likely something simple. Any help would be appreciated. Thank you. ```css /*primary nav bar*/ .primarynav { background-color: #ffffff; border: solid 1px #f76f4d; position: relative; height: 50px; width: 1430px; top: 10px; } .primarynav ul { position: relative; padding-bottom: 10px; text-decoration: none; padding-left: 100px; } .primarynav a { position: relative; display: inline-block; text-decoration: none; color: #fd886b; width: 115px; height: 50px; padding: 17px 0px 0px 0px; font-weight: bold; border: 1px solid orangered; } /*primary navigation effects*/ /*.primarynav a:hover::before { background-color: #fd886b; } */ .primarynav a:hover { color: white; background-color: #fd886b; border: 2px solid orangered; border-radius: 3px; } .mainnavigation li { display: inline-block; bottom: 51px; padding-top: 50px; text-align: center; position: relative; font-size: 15px; left: 200px; } ``` ```html <header class="primarynav"> <div class="primaryContainer"> <!-- Main top of page navigation --> <nav alt="worldmainnavigation"><!-- Main navigation buttons on the top of the page (6) --> <ul class= "mainnavigation"> <li><a href="Index.php">Home</a></li> <li><a href="#">Items</a></li> <li><a href="#">Categories</a></li> <li><a href="#">Favourites</a></li> <li><a href="#">Deals</a></li> <li><a href="#">List An Item</a></li> </ul> </nav> </div> </header> ```
2020/11/19
[ "https://Stackoverflow.com/questions/64918101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14390214/" ]
This: ``` if [ "$1" == "" ] ``` should be changed to: ``` if [ -z "$1" ] ``` `-z` is true if the string is zero length. `==` is used with `[[ ]]` while `=` is used with `[ ]`. You can read more about bash string comparison in [How to Compare Strings in Bash](https://linuxize.com/post/how-to-compare-strings-in-bash/).
You can also check for... ``` pingit(){ ping -c1 ${1} } if [ ${#} -gt 0 ] then pingit ${1} fi ``` ...the number of arguments. Then you can source it without argument or use it with argument... ``` # . pingit.sh # pingit localhost PING localhost(localhost (::1)) 56 data bytes 64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.058 ms --- localhost ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms # sh pingit.sh localhost PING localhost(localhost (::1)) 56 data bytes 64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.031 ms --- localhost ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms ```
43,276,990
I'm trying to write code to do the following: 1. Using text-to-columns, the data should get divided into different columns. The data in Cells A1-A8 is like this: [![enter image description here](https://i.stack.imgur.com/uqIgn.png)](https://i.stack.imgur.com/uqIgn.png) This data should appear in different columns.
2017/04/07
[ "https://Stackoverflow.com/questions/43276990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7832343/" ]
When you log in or log out from child component, emit 'login' or 'logout' event respectively. ``` <template> <div id="app"> <router-view name='login' @login="login=true"></router-view> <router-view name='header' @logout="login=false"></router-view> <keep-alive v-if="login"> <router-view name='write'></router-view> </keep-alive> <router-view name='management'></router-view> <router-view name='account'></router-view> <router-view name='statistics'></router-view> <router-view name='view'></router-view> <router-view name='password'></router-view> </div> </template> <script> export default { name: 'app', data () { return { login: true } } } </script> <style> #app { font-family: 'Avenir', Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; color: #2c3e50; margin-top: 0; height: 100%; } </style> ```
I had a similar problem, and one simple solution I found was to force-reload the web application via `location.reload()`, which clears the `keep-alive` cache. You might also find this discussion interesting: <https://github.com/vuejs/vue/issues/6259>
43,276,990
I'm trying to write code to do the following: 1. Using text-to-columns, the data should get divided into different columns. The data in Cells A1-A8 is like this: [![enter image description here](https://i.stack.imgur.com/uqIgn.png)](https://i.stack.imgur.com/uqIgn.png) This data should appear in different columns.
2017/04/07
[ "https://Stackoverflow.com/questions/43276990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7832343/" ]
When you log in or log out from child component, emit 'login' or 'logout' event respectively. ``` <template> <div id="app"> <router-view name='login' @login="login=true"></router-view> <router-view name='header' @logout="login=false"></router-view> <keep-alive v-if="login"> <router-view name='write'></router-view> </keep-alive> <router-view name='management'></router-view> <router-view name='account'></router-view> <router-view name='statistics'></router-view> <router-view name='view'></router-view> <router-view name='password'></router-view> </div> </template> <script> export default { name: 'app', data () { return { login: true } } } </script> <style> #app { font-family: 'Avenir', Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; color: #2c3e50; margin-top: 0; height: 100%; } </style> ```
There is a pretty easy way to do this. Thanks to @loomchild for the link. You can define on routes a `meta` object. Define it in such a way for routes using keep-alive that you want to invalidate. ``` { path: '/invalidate_me', ... meta: {uuid: generateUUID()} // You'll need a method for this or something similar }, ``` Check out [Create GUID / UUID in JavaScript?](https://stackoverflow.com/questions/105034/create-guid-uuid-in-javascript) for help with UUID if need be. Then, in your keep alive definition: ``` <router-view :key="$route.path + ($route.params.id ? $route.params.id : '').toString() + ($route.meta && $route.meta.uuid ? $route.meta.uuid.toString() : '')"></router-view> ``` Something like that. If you don't include ids or don't want to cache by those, fine. Those are not related to the question, but something that I am using. So you really only need the `meta` part. We're basically telling Vue to consider a component to be defined by the combination of the id on the route (if it exists) and the uuid on the route (if it exists) Then, when you want to invalidate inside a component, change the uuid. You may need to find the route in the lists on $router, or if you're already inside the correct component, you can do: ``` this.$route.meta.uuid = generateUUID(); ``` This gets a new uuid and clears the cache. Hope this helps someone else!
6,360,499
What does @property(retain) do? It doesn't actually retain my object, based on my tests: ``` id obj = getObjectSomehow(); NSLog(@"%d", [obj retainCount]); propertyWithRetain = obj; NSLog(@"%d", [obj retainCount]); // output: // 1 // 1 ``` How can I make a property that will really retain the object?
2011/06/15
[ "https://Stackoverflow.com/questions/6360499", "https://Stackoverflow.com", "https://Stackoverflow.com/users/362589/" ]
You're not using your property there, that's why it's not retaining! Try this : ``` id obj = getObjectSomehow(); NSLog(@"%d", [obj retainCount]); self.propertyWithRetain = obj; // Note the self. :) NSLog(@"%d", [obj retainCount]); ``` Using `self.` will use the property. Just using the variable name won't. --- EDIT especially for @bbum (who raises a very fair point in the comments) Don't rely on using retainCount - you don't know what else has retained your object and you don't know if some of those retains are actually scheduled autoreleases so it's usually a misleading number :)
``` propertyWithRetain = obj; ``` That just sets the ivar backing the property directly. When an @property is synthesized, if there is no instance variable declared, then one is generated automatically. The above is using that ivar directly. ``` self.propertyWithRetain = obj; ``` That would actually go through the `@synthesize`d setter and bump the retain count. Which is also why many of us use `@synthesize propertyWithRetain = propertyWithRetain_;` to cause the iVar to be named differently. Note that, even in this, calling `retainCount` can be horribly misleading. Try it with `[NSNumber numberWithInt: 2];` or a constant string. Really, don't call `retainCount`. Not ever.
19,104
When making a roll on Majesty to Awe people, is a Vampire permitted to make use of the +2 (or +4) social bonus for the merit of being exceptionally attractive?
2012/11/29
[ "https://rpg.stackexchange.com/questions/19104", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/5649/" ]
Yes. ---- Unless explicitly forbidden, Merits (like Striking Looks) do apply to Discipline rolls.
I've gone with the rule of thumb that effects can be stacked unless the rules explicitly say they don't. I don't see anything in the rules saying they don't. Logically it makes sense that beauty would play a role in successfully Awing someone. If I were running the game, I'd allow it.
19,104
When making a roll on Majesty to Awe people, is a Vampire permitted to make use of the +2 (or +4) social bonus for the merit of being exceptionally attractive?
2012/11/29
[ "https://rpg.stackexchange.com/questions/19104", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/5649/" ]
I've gone with the rule of thumb that effects can be stacked unless the rules explicitly say they don't. I don't see anything in the rules saying they don't. Logically it makes sense that beauty would play a role in successfully Awing someone. If I were running the game, I'd allow it.
Striking Looks explicitly does add to Majesty rolls, as I recall. It's also a +1/+2 bonus, not a +2/+4.
19,104
When making a roll on Majesty to Awe people, is a Vampire permitted to make use of the +2 (or +4) social bonus for the merit of being exceptionally attractive?
2012/11/29
[ "https://rpg.stackexchange.com/questions/19104", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/5649/" ]
Yes. ---- Unless explicitly forbidden, Merits (like Striking Looks) do apply to Discipline rolls.
Striking Looks explicitly does add to Majesty rolls, as I recall. It's also a +1/+2 bonus, not a +2/+4.
12,866,808
How do I use regex to convert ``` 11111aA$xx1111xxdj$%% ``` to ``` aA$xx1111xxdj$%% ``` So, in other words, I want to remove (or match) the FIRST grouping of `1`'s.
2012/10/12
[ "https://Stackoverflow.com/questions/12866808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/140811/" ]
Depending on the language, you should have a way to replace a string by regex. In Java, you can do it like this: ``` String s = "11111aA$xx1111xxdj$%%"; String res = s.replaceAll("^1+", ""); ``` The `^` "anchor" indicates that the beginning of the input must be matched. The `1+` means a sequence of one or more `1` characters. Here is a [link to ideone](http://ideone.com/MhsdN) with this running program. The same program in C#: ``` var rx = new Regex("^1+"); var s = "11111aA$xx1111xxdj$%%"; var res = rx.Replace(s, ""); Console.WriteLine(res); ``` ([link to ideone](http://ideone.com/fGilt)) In general, if you would like to make a match of anything only at the beginning of a string, add a `^` prefix to your expression; similarly, adding a `$` at the end makes the match accept only strings at the end of your input.
If you only want to replace consecutive "1"s at the beginning of the string, replace the following with an empty string: ``` ^1+ ``` If the consecutive "1"s won't necessarily be the first characters in the string (but you still only want to replace one group), replace the following with the contents of the first capture group (usually `\1` or `$1`): ``` 1+(.*) ``` Note that this is only necessary if you only have a "replace all" capability available to you, but most regex implementations also provide a way to replace only one instance of a match, in which case you could just replace `1+` with an empty string.
12,866,808
How do I use regex to convert ``` 11111aA$xx1111xxdj$%% ``` to ``` aA$xx1111xxdj$%% ``` So, in other words, I want to remove (or match) the FIRST grouping of `1`'s.
2012/10/12
[ "https://Stackoverflow.com/questions/12866808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/140811/" ]
Depending on the language, you should have a way to replace a string by regex. In Java, you can do it like this: ``` String s = "11111aA$xx1111xxdj$%%"; String res = s.replaceAll("^1+", ""); ``` The `^` "anchor" indicates that the beginning of the input must be matched. The `1+` means a sequence of one or more `1` characters. Here is a [link to ideone](http://ideone.com/MhsdN) with this running program. The same program in C#: ``` var rx = new Regex("^1+"); var s = "11111aA$xx1111xxdj$%%"; var res = rx.Replace(s, ""); Console.WriteLine(res); ``` ([link to ideone](http://ideone.com/fGilt)) In general, if you would like to make a match of anything only at the beginning of a string, add a `^` prefix to your expression; similarly, adding a `$` at the end makes the match accept only strings at the end of your input.
I'm not sure, but you can try this: `[^1](\w*\d*\W)*` - it matches everything as a single group except the starting "1" symbols.
12,866,808
How do I use regex to convert ``` 11111aA$xx1111xxdj$%% ``` to ``` aA$xx1111xxdj$%% ``` So, in other words, I want to remove (or match) the FIRST grouping of `1`'s.
2012/10/12
[ "https://Stackoverflow.com/questions/12866808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/140811/" ]
Depending on the language, you should have a way to replace a string by regex. In Java, you can do it like this: ``` String s = "11111aA$xx1111xxdj$%%"; String res = s.replaceAll("^1+", ""); ``` The `^` "anchor" indicates that the beginning of the input must be matched. The `1+` means a sequence of one or more `1` characters. Here is a [link to ideone](http://ideone.com/MhsdN) with this running program. The same program in C#: ``` var rx = new Regex("^1+"); var s = "11111aA$xx1111xxdj$%%"; var res = rx.Replace(s, ""); Console.WriteLine(res); ``` ([link to ideone](http://ideone.com/fGilt)) In general, if you would like to make a match of anything only at the beginning of a string, add a `^` prefix to your expression; similarly, adding a `$` at the end makes the match accept only strings at the end of your input.
In Javascript ``` var str = '11111aA$xx1111xxdj$%%'; var patt = /^1+/g; str = str.replace(patt,""); ```
12,866,808
How do I use regex to convert ``` 11111aA$xx1111xxdj$%% ``` to ``` aA$xx1111xxdj$%% ``` So, in other words, I want to remove (or match) the FIRST grouping of `1`'s.
2012/10/12
[ "https://Stackoverflow.com/questions/12866808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/140811/" ]
If this is the beginning, you can use this: ``` ^[1]* ``` As far as replacing, it depends on the language. In powershell, I would do this: ``` [regex]::Replace("11111aA$xx1111xxdj$%%","^[1]*","") ``` This will return: ``` aA$xx1111xxdj$%% ```
If you only want to replace consecutive "1"s at the beginning of the string, replace the following with an empty string: ``` ^1+ ``` If the consecutive "1"s won't necessarily be the first characters in the string (but you still only want to replace one group), replace the following with the contents of the first capture group (usually `\1` or `$1`): ``` 1+(.*) ``` Note that this is only necessary if you only have a "replace all" capability available to you, but most regex implementations also provide a way to replace only one instance of a match, in which case you could just replace `1+` with an empty string.
12,866,808
How do I use regex to convert ``` 11111aA$xx1111xxdj$%% ``` to ``` aA$xx1111xxdj$%% ``` So, in other words, I want to remove (or match) the FIRST grouping of `1`'s.
2012/10/12
[ "https://Stackoverflow.com/questions/12866808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/140811/" ]
If this is the beginning, you can use this: ``` ^[1]* ``` As far as replacing, it depends on the language. In powershell, I would do this: ``` [regex]::Replace("11111aA$xx1111xxdj$%%","^[1]*","") ``` This will return: ``` aA$xx1111xxdj$%% ```
I'm not sure, but you can try this: `[^1](\w*\d*\W)*` - it matches everything as a single group except the starting "1" symbols.