source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 43990219 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Python's subprocess module to call some Linux command line functions. The documentation explains the shell=True argument as If shell is True , the specified command will be executed through the shell There are two examples, which seem the same to me from a descriptive viewpoint (i.e. both of them call some command-line command), but one of them uses shell=True and the other does not >>> subprocess.call(["ls", "-l"])0>>> subprocess.call("exit 1", shell=True)1 My question is: What does running the command with shell=False do, in contrast to shell=True ? I was under the impression that subprocess.call and check_call and check_output all must execute the argument through the shell. In other words, how can it possibly not execute the argument through the shell? It would also be helpful to get some examples of: Things that can be done with shell=True that can't be done with shell=False and why they can't be done. Vice versa (although it seems that there are no such examples) Things for which it does not matter whether shell=True or False and why it doesn't matter
Now provide the response and nothing else.
| UNIX programs start each other with the following three calls, or derivatives/equivalents thereto: fork() - Create a new copy of yourself. exec() - Replace yourself with a different program (do this if you're the copy!). wait() - Wait for another process to finish (optional, if not running in background). Thus, with shell=False , you do just that (as Python-syntax pseudocode below -- exclude the wait() if not a blocking invocation such as subprocess.call() ): pid = fork()if pid == 0: # we're the child process, not the parent execlp("ls", "ls", "-l", NUL);else: retval = wait(pid) # we're the parent; wait for the child to exit & get its exit status whereas with shell=True , you do this: pid = fork()if pid == 0: execlp("sh", "sh", "-c", "ls -l", NUL);else: retval = wait(pid) Note that with shell=False , the command we executed was ls , whereas with shell=True , the command we executed was sh . That is to say: subprocess.Popen(foo, shell=True) is exactly the same as: subprocess.Popen( ["sh", "-c"] + ([foo] if isinstance(foo, basestring) else foo), shell=False) That is to say, you execute a copy of /bin/sh , and direct that copy of /bin/sh to parse the string into an argument list and execute ls -l itself. So, why would you use shell=True ? You're invoking a shell builtin. For instance, the exit command is actually part of the shell itself, rather than an external command. That said, this is a fairly small set of commands , and it's rare for them to be useful in the context of a shell instance that only exists for the duration of a single subprocess.call() invocation. You have some code with shell constructs (ie. redirections) that would be difficult to emulate without it. If, for instance, your command is cat one two >three , the syntax >three is a redirection : It's not an argument to cat , but an instruction to the shell to set stdout=open('three', 'w') when running the command ['cat', 'one', 'two'] . If you don't want to deal with redirections and pipelines yourself, you need a shell to do it. A slightly trickier case is cat foo bar | baz . To do that without a shell, you need to start both sides of the pipeline yourself: p1 = Popen(['cat', 'foo', 'bar'], stdout=PIPE), p2=Popen(['baz'], stdin=p1.stdout) . You don't give a damn about security bugs. ...okay, that's a little bit too strong, but not by much. Using shell=True is dangerous. You can't do this: Popen('cat -- %s' % (filename,), shell=True) without a shell injection vulnerability: If your code were ever invoked with a filename containing $(rm -rf ~) , you'd have a very bad day. On the other hand, ['cat', '--', filename] is safe with all possible filenames: The filename is purely data, not parsed as source code by a shell or anything else. It is possible to write safe scripts in shell, but you need to be careful about it. Consider the following: filenames = ['file1', 'file2'] # these can be user-providedsubprocess.Popen(['cat -- "$@" | baz', '_'] + filenames, shell=True) That code is safe (well -- as safe as letting a user read any file they want ever is), because it's passing your filenames out-of-band from your script code -- but it's safe only because the string being passed to the shell is fixed and hardcoded, and the parameterized content is external variables (the filenames list). And even then, it's "safe" only to a point -- a bug like Shellshock that triggers on shell initialization would impact it as much as anything else. 
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43990219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2593878/']} | jdg_70141 |
stackexchange | llm_judgeable_groundtruth_similarity | 2199076 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: Why #include <stdio.h> is not required to use printf()? Both printf and scanf have been declared in stdio.h. But they work even without that, dropping just a warning message? What's the theory behind this?
Now provide the response and nothing else.
| Calling a function without declaring it will create an implicit declaration based on the parameters you give and an assumed return type of int . This lets it get past the compilation stage, since the function could exist somewhere else that isn’t known until link time — C didn’t always have function prototypes, so this is for backwards compatibility. (In C++, it’s an error, and in C99 GCC gives a warning.) If you look at the man page (on FreeBSD and Darwin , at least) for printf , scanf , puts , etc., it says that it comes from the “Standard C Library (libc, -lc)”. GCC implicitly links with the standard C library. If you link with the -nostdlib flag, you'll get the “undefined symbols” error that you're expecting. (In fact, when I turn off libc , my GNU/Linux system complains about the absence of _start as well, and my OpenBSD system complains about _start , __guard , and __stack_smash_handler .) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2199076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/241681/']} | jdg_70142 |
stackexchange | llm_judgeable_groundtruth_similarity | 4201480 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Theorem $7.13$ , Walter Rudin's Real and Complex Analysis . Why does it suffice to prove for the case $\mu\ge 0$ ? $\mu = \mu^+ - \mu^-$ is the Jordan decomposition of $\mu$ , where $\mu^+,\mu^-$ are positive measures. How does the general case follow from $\mu\ge 0$ and the Jordan decomposition? Why is $\overline D\mu$ a Borel function? I see that $\sup_{0<r<1/n} Q_r\mu(x)$ decreases as $n$ increases, and is a lower-semicontinuous function for each $n$ (following the reasoning in Section $7.2$ .). However, I don't see how this implies that $\overline D\mu$ is a Borel function. I also know that any lower-semicontinuous function is Borel, so it'd suffice to prove lower-semicontinuity, if we can. Why is it true that for every $x\in K^c$ , $$(\overline D\mu)(x) = (\overline D\mu_2)(x)$$ Proof attached for reference: Thank you!
Now provide the response and nothing else.
| The fact that $\sup_{0<r<1/n} Q_r\mu(x)$ decreases as $n$ increases is used to assert the existence of the limit in $(\overline D \mu )(x)$. Limits of Borel measurable functions are Borel measurable. $\mu_1(E)=\mu(E\cap K)=0$ for any Borel set $E$ contained in $K^{c}$. So $\mu (E)=\mu_2(E)$ for such sets. From this and the definition of $(\overline D \mu )(x)$ and $(\overline D \mu_2 )(x)$ it follows that $(\overline D \mu )(x)=(\overline D \mu_2 )(x)$ for $x \in K^{c}$. [Note that if $x \in K^{c}$ then $B(x,\frac 1 n)$ is contained in $K^{c}$ for $n$ sufficiently large]. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4201480', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/425395/']} | jdg_70143
stackexchange | llm_judgeable_groundtruth_similarity | 6433492 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For a single-value check, which of the two is preferred and why?

    $string = 'The quick brown fox jumps over the lazy dog';
    if(strpos($string, 'fox') !== false){
        // do the routine
    }
    # versus
    if(preg_match('/fox/i', $string)){
        // do the routine
    }
Now provide the response and nothing else.
| I would prefer the strpos over preg_match , because regexes are generally more expensive to execute. According to the official php docs for preg_match : Do not use preg_match() if you only want to check if one string is contained in another string. Use strpos() or strstr() instead as they will be faster. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6433492', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/393406/']} | jdg_70144 |
stackexchange | llm_judgeable_groundtruth_similarity | 102631 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have had thoughts of trying to write a simple crawler that might crawl and produce a list of its findings for our NPO's websites and content. Does anybody have any thoughts on how to do this? Where do you point the crawler to get started? How does it send back its findings and still keep crawling? How does it know what it finds, etc,etc.
Now provide the response and nothing else.
| You'll be reinventing the wheel, to be sure. But here's the basics:

- A list of unvisited URLs - seed this with one or more starting pages
- A list of visited URLs - so you don't go around in circles
- A set of rules for URLs you're not interested in - so you don't index the whole Internet

Put these in persistent storage, so you can stop and start the crawler without losing state. Algorithm is:

    while(list of unvisited URLs is not empty) {
        take URL from list
        remove it from the unvisited list and add it to the visited list
        fetch content
        record whatever it is you want to about the content
        if content is HTML {
            parse out URLs from links
            foreach URL {
                if it matches your rules and it's not already in either the visited or unvisited list
                    add it to the unvisited list
            }
        }
    } | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/102631', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_70145
stackexchange | llm_judgeable_groundtruth_similarity | 1391526 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to prove: No Lie algebra over $\Bbb R$ or $\Bbb C$ can have a unit element. Now I am not sure how to take this in regard to the Lie bracket. I.e. I have no idea where to start. $[x,e]=[e,x]=x$ or something? That doesn't make sense to me, since we are just looking at an abstract bracket it seems.
Now provide the response and nothing else.
| If $[x,e]=[e,x]$, then what does the antisymmetry of the Lie bracket tell you? | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1391526', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/258811/']} | jdg_70146 |
stackexchange | llm_judgeable_groundtruth_similarity | 52310060 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to override the Vuetify style by class, for example to change the background color of a button from Vuetify. So, I create a button with a class on it:

    <div id="app"> <v-btn class="some" color="success">Success</v-btn></div>

    .some {background-color:red;}

But the background-color red is overridden by Vuetify. How can I solve this issue without using !important and themes? Here is an example: https://stackblitz.com/edit/vue-js-gpkj6k
Now provide the response and nothing else.
| Having worked with Vuetify and its various styling... eccentricities... I believe it all boils down to writing CSS that has more specificity than Vuetify's. It's never great practice to style elements directly ( img ); instead, apply your own classes. This way you could declare .my-card.v-card and win the specificity war, all the while keeping styles scoped (non-scoped styles can be the devil to debug in vue files). Some Vuetify style declarations use !important ... so the only way I've found to override these is to also use !important on the override. IMO it was a terrible decision from Vuetify to have any !important styles. It's also good to get your head around >>>, /deep/, ::v-deep as these can provide a solution where styles are not filtering through. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52310060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10022822/']} | jdg_70147
stackexchange | llm_judgeable_groundtruth_similarity | 1557835 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a series of shapes (Approx. 50), that each have 5 points, and a color (with alpha transparency). I want to render these shapes onto a pixel grid. I program as an amateur, so I have no idea how I should go about doing this. Can someone give me a starting point, or some pseudo-code? Thanks in advance.
Now provide the response and nothing else.
| import java.awt.Graphics;import java.awt.Graphics2D;import java.util.List;import java.util.ArrayList;import java.awt.Point;import javax.swing.JFrame;import java.awt.Color;import java.util.Random;import java.awt.Polygon;import java.awt.Shape;public class GraphicsTest extends JFrame { private List<ColoredShape> shapes; private static final int NUM_SHAPES = 50; private static final int NUM_POINTS_PER_SHAPE = 5; private static final int WIDTH = 640; private static final int HEIGHT = 480; private Random randomGenerator; public GraphicsTest(String title) { super(title); setVisible(true); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setSize(WIDTH, HEIGHT); randomGenerator = new Random(); initShapes(); } private void initShapes() { shapes = new ArrayList<ColoredShape>(NUM_SHAPES); for (int i = 0; i < NUM_SHAPES; i++) { Point[] points = getRandomPoints(); Color color = getRandomColor(); shapes.add(i, new ColoredShape(points, color)); } } private Point[] getRandomPoints() { Point[] points = new Point[NUM_POINTS_PER_SHAPE]; for (int i = 0; i < points.length; i++) { int x = randomGenerator.nextInt(WIDTH); int y = randomGenerator.nextInt(HEIGHT); points[i] = new Point(x, y); } return points; } /** * @return a Color with random values for alpha, red, green, and blue values */ private Color getRandomColor() { float alpha = randomGenerator.nextFloat(); float red = randomGenerator.nextFloat(); float green = randomGenerator.nextFloat(); float blue = randomGenerator.nextFloat(); return new Color(red, green, blue, alpha); } public void paint(Graphics g) { Graphics2D g2 = (Graphics2D) g; for (ColoredShape shape : shapes) { g2.setColor(shape.getColor()); g2.fill(shape.getOutline()); } } public static void main(String[] args) { GraphicsTest b = new GraphicsTest("Testing polygons"); } private class ColoredShape { private Polygon outline; private Color color; public ColoredShape(Point[] points, Color color) { this.color = color; // Would be better to separate out into xpoints, ypoints, npoints // but I'm lazy outline = new Polygon(); for (Point p : points) { outline.addPoint((int) p.getX(), (int) p.getY()); } } public Color getColor() { return color; } public Shape getOutline() { return outline; } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1557835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_70148 |
stackexchange | llm_judgeable_groundtruth_similarity | 42291978 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a duplicate of this question . There are bunch of similar questions/answers as well but none helped me. As there are hundreds of developers accepted some answers I am probably wrong somewhere and have no idea where is my problem! This is my sample class and I want to test its method. final class NavigationBuilder { @VisibleForTesting List<Intent> mIntentList = new ArrayList<>(5); @VisibleForTesting void addNextScreenBasedOnBookingStatus(final Booking booking) { final ChatMsgDbAsyncHelper helper = new ChatMsgDbAsyncHelper(); if (booking == null) { helper.cleanAllMessages(mContext); // <= Crash here } }} This is my test class: @RunWith(PowerMockRunner.class)@PrepareForTest({ChatMsgDbAsyncHelper.class, SplashActivity.class})public class NavigationBuilderTest { private SplashActivity mActivity; private NavigationBuilder mNavBuilder; @Before public void setUp() throws Exception { mActivity = new SplashActivity(); ISplashView view = mock(ISplashView.class); PassengerStorage passengerStorage = mock(PassengerStorage.class); mNavBuilder = new NavigationBuilder(mActivity, view, passengerStorage); } @Test public void addNextScreenBasedOnBookingStatus_whenBookingIsNull() throws Exception { ChatMsgDbAsyncHelper spy = PowerMockito.spy(new ChatMsgDbAsyncHelper()); PowerMockito.doNothing().when(spy).cleanAllMessages(mActivity); mNavBuilder.addNextScreenBasedOnBookingStatus(null); assertTrue(mNavBuilder.mIntentList.isEmpty()); }} Test fails and the reason is NullPointerException because test is running logic inside of helper.cleanAllMessages(mContext); . My expectation from above mock is those logic should not be performed. Caused by: java.lang.NullPointerException at com.xxx.xxx.db.entities.ChatMessageTable.(ChatMessageTable.java:23)
Now provide the response and nothing else.
| For a more efficient query, use the $reduce operator to flatten the arrays. This will allow you to concat any number of arrays, so instead of just doing a union of the two arrays from docs 1 and 2, this will also apply for other arrays as well. Consider running the following aggregate operation: db.coll.aggregate([ { "$match": { "_id": { "$in": [1, 2] } } }, { "$group": { "_id": 0, "data": { "$push": "$data" } } }, { "$project": { "data": { "$reduce": { "input": "$data", "initialValue": [], "in": { "$setUnion": ["$$value", "$$this"] } } } } }]) Sample Output { "_id" : 0, "data" : [ 0, 4, 5, 6, 7 ]} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42291978', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/513413/']} | jdg_70149 |
stackexchange | llm_judgeable_groundtruth_similarity | 13419947 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Java Program containing a class Application inheriting from JFrame. I want to display a message which asks the user if he wants to exit the program upon clicking the X button at the top right of the window. This is my code so far: I got this code from a tutorial I found online. I coded the WindowClosing event handler myself. However, I have trouble registering the window listener (addWindowListener). It is telling me that WindowAdapter is abstract and cannot be instantiated. How can I solve this problem please?
Now provide the response and nothing else.
| Basically, you got it almost correct. There are a few things not put together correctly and a typo. First remove your WindowClosing method (it's window, not Window). Then replace your addWindowListener(new WindowAdapter()); with the code below:

    addWindowListener(new WindowAdapter() {
        public void windowClosing(WindowEvent e) {
            int confirmed = JOptionPane.showConfirmDialog(null,
                    "Are you sure you want to exit the program?",
                    "Exit Program Message Box", JOptionPane.YES_NO_OPTION);
            if (confirmed == JOptionPane.YES_OPTION) {
                dispose();
            }
        }
    }); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13419947', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1124249/']} | jdg_70150
stackexchange | llm_judgeable_groundtruth_similarity | 16976 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
With NumericQ[symbol] = True , I can declare that a symbol is numeric. I want the expressions matching: $$e_{\text{i$\_$}?\text{IntegerQ}}^2$$ to be treated as numerical expressions too. e /: NumericQ[Subscript[e, i_?IntegerQ]^2] = True; doesn't work. Also, I can't use NumericFunction because it's too restrictive. Is there something like NumericPattern ?
Now provide the response and nothing else.
| Here is another way: you can fool the depth-1 tag rule for UpValues with a few temporary symbols. Here is an example: ClearAll[e];e /: Subscript[e, i_?IntegerQ] := e /: Subscript[e, i] = Module[{el}, el /: el^p_ := el /: el^p = Module[{elp}, elp /: NumericQ[elp] = True; Format[elp] := TraditionalForm[Subscript["e", i]^p]; elp ]; el /: NumericQ[el] = True; Format[el] := TraditionalForm[Subscript["e", i]]; el] What this does is to substitute Subscript[e, i_?IntegerQ] by some symbol, which will print just as the original one, but will have some rules attached which will do what you need. Now, NumericQ[Subscript[e,1]](* True *)NumericQ[Subscript[e,1]^2](* True *) The advantage of this method is that it is flexible. You are not tied to just powers of your subscript, it can be easily generalized to other functions. | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/16976', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/5185/']} | jdg_70151 |
stackexchange | llm_judgeable_groundtruth_similarity | 27918 |
Below is a question asked on the forum quantumcomputing.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Background Suppose I have a quantum channel $\Phi:B(\mathcal{H}_1)\rightarrow B(\mathcal{H}_1)\otimes B(\mathcal{H}_2)$ , such that there is some small $\epsilon$ such that for any two input states $\rho$ and $\sigma$ $$ \Vert \rho - \sigma\Vert_1 (1-\epsilon) \leq \Vert\text{Tr}_2(\Phi(\rho)) - \text{Tr}_2(\Phi(\sigma))\Vert_1.\tag{1}$$ That is, the channel almost preserves distance even if we trace out the second system. This makes me think that the second system can't have much dependence on the first system, i.e., there is some channel $\Phi_1:B(\mathcal{H}_1)\rightarrow B(\mathcal{H}_1)$ such that $\Phi$ is close to $\Phi_1\otimes \rho_0$ , i.e., a channel that just applies some channel to the first system and tacks on a fixed state to the second system. Somehow this needs to use something like no-cloning, because an ill-defined map $\Psi:\rho\mapsto \rho\otimes \rho$ satisfies the above inequality, but is not a quantum channel. Question Is there any way to prove that $\Phi$ has this form of "close to just adding a constant state to the second system"? To phrase this formally: For any $\delta>0$ , is there an $\epsilon >0$ such that for any channel $\Phi$ satisfying equation (1) for all input states, then there exists a channel $\Phi_1:B(\mathcal{H}_1)\rightarrow B(\mathcal{H}_1)$ and a state $\rho_0\in B(\mathcal{H}_2)$ such that $\Phi$ is within distance $\delta$ of the channel $\tilde{\Phi}:\rho\mapsto \Phi_1(\rho)\otimes \rho_0$ ? Additional context I am imagining two unitarities $U_1$ and $U_2$ whose action differs only on 2 basis states. That is, $\Vert U_1 - U_2\Vert = 2$ , but there is a subspace $V$ of almost the full space such that $U_1\vert_V = U_2\vert_V$ . Now I only have noisy channels $\mathcal{\tilde{U}}_{i}$ that implement these, i.e. $\mathcal{\tilde{U}_i} = (1-p)U_i + p \mathcal{D}$ for some noise channel $\mathcal{D}$ . Then considering the channel $$ (I\otimes \mathcal{\tilde{U}}_i)\Phi \tag{2}$$ I want to argue that there is some trade-off between the fidelity of this channel and its ability to distinguish between $U_1$ and $U_2$ . That is, if I have two input states $\rho$ and $\sigma$ that distinguish $U_1$ and $U_2$ , then after I apply $\Phi$ , if too much information about the input state is in the second system, then the system has an irrecoverable loss because $\mathcal{\tilde{U}}_i$ is noisy, but if not enough information about the input state is int he second system, then it can't distinguish $U_1$ from $U_2$ . The original question should solve this (if $\Phi$ is close to $\Phi_1\otimes\rho_0$ , then it can't distinguish $U_1$ from $U_2$ very well) but maybe there are other approaches.
Now provide the response and nothing else.
| I suppose you're asking the following: for any $\epsilon\ge0$ and $\Phi$ that satisfy conditions, is there $\delta_\epsilon \rightarrow 0$ when $\epsilon \rightarrow 0$ , such that there exists a special channel in delta proximity? I think yes. This is not a full proof but the idea is following. Let $\Psi(\rho) = {\rm Tr}_2(Φ(\rho))$ , where $\Psi:B(\mathcal{H}_1)\rightarrow B(\mathcal{H}_1)$ is also a channel. If $\epsilon=0$ , i.e. $\Psi$ is distance preserving, then $\Psi(\rho) = U\rho U^\dagger$ for some unitary $U$ . See, e.g., Theorem 7 in On Partially Trace Distance Preserving Maps and Reversible Quantum Channels .The way to prove this theorem is to observe that if $p$ and $q$ have orthogonal supports then $\Vert p-q \Vert_1 =2$ . Hence $\Psi(p)$ and $\Psi(q)$ also have orthogonal supports. It follows that for any basis $|b_i\rangle \in \mathcal{H}_1$ the supports of operators $\Psi(|b_i\rangle\langle b_i|)$ are orthogonal to each other. Thus they are rank-1 projectors (assuming finite dimensional case). This means $\Psi$ is rank preserving (on Hermitian operators). Distance preservation implies that fidelity is also preserved for rank-1 projectors. Which means that for any unit vectors $|a\rangle,|b\rangle$ $${\rm Tr}(\Psi(|a\rangle\langle a|)\Psi(|b\rangle\langle b|)) = |\langle a|b \rangle|^2.$$ Though, I don't see an easy way to complete the proof of Theorem 7 from here. Anyway, once we know that $\Psi(\rho) = U\rho U^\dagger$ we can show that $\Phi = \Psi \otimes \rho_0$ . This is indeed a version of no-cloning. It's well known that if the reduced state $\rho_A$ of a bipartite pure state $\rho_{AB}$ is pure, then the state must be a tensor product of pure states, i.e. $\rho_{AB} = \rho_A \otimes \rho_B$ . Hence, if for a mixed $\rho_{AB}$ the reduced state $\rho_A$ is pure, then $\rho_{AB} = \rho_A \otimes \rho_B$ as well due to linearity. For any pure $\rho$ we have that ${\rm Tr}_2(\Phi(\rho)) = U\rho U^\dagger$ is pure. Thus $\Phi(\rho) = U\rho U^\dagger \otimes {\rm Tr}_1(\Phi(\rho))$ for pure $\rho$ . Now assume that ${\rm Tr}_1(Φ(\rho_1)) \neq {\rm Tr}_1(Φ(\rho_2))$ for two different pure non-orthogonal states $\rho_1,\rho_2$ . It's easy to see that given channels $Φ$ and $\Psi$ we can clone ${\rm Tr}_1(Φ(\rho))$ as much as we want given only a single copy of $\rho$ . Therefore we can discriminate $\rho_1,\rho_2$ with the access to channels $Φ, \Psi$ , which is known to be impossible in theory. Thus ${\rm Tr}_1(Φ(\rho))$ must be constant. Now let $\epsilon > 0$ . In this case $\Psi$ is almost distance preserving. Yet, it's possible to prove that $\Psi$ must be close to a distance preserving map if $\epsilon$ is close to $0$ . Again, we have that $\Vert \Psi(\rho_i) - \Psi(\rho_j) \Vert_1 \approx 2$ for a complete set of pure $\{\rho_i\}_i$ where $\rho_i \perp \rho_j$ . Using the inequality $D(p,q) \le \sqrt{1-F(p,q)}$ between distance and fidelity you can show that $F(\Psi(\rho_i),\Psi(\rho_j)) \approx 0$ , and thus ${\rm Tr}(\Psi(\rho_i),\Psi(\rho_j)) \approx 0$ . So that $\Psi(\rho_i)$ are almost orthogonal to each other. It follows that they are almost rank-1 projections. Thus $\Psi$ is close to $\Psi'$ that preserves distances exactly. That is, $\Psi'(\rho) = U\rho U^\dagger$ . Consider the channel $\Pi(\rho) = (U^\dagger \otimes I)\Phi(\rho)(U \otimes I)$ . The channel ${\rm Tr}_2(\Pi(\rho)) = U^\dagger\Psi(\rho)U$ must be close to identity. To prove that ${\rm Tr}_1(\Pi(\rho))$ is close to a constant we can use the same no-cloning argument. 
That is, we can clone ${\rm Tr}_1(\Pi(\rho))$ by iterative application of $\Pi$ , up to some error dependent on $\epsilon$ and iteration step. Of course, $\epsilon$ has to be small enough for this to work. Update A bit more details. Let $d = \dim(\mathcal{H}_1)$ , and for a set of pure $\{\rho_i\}_{i=1}^d$ where $\rho_i \perp \rho_j $ we have $${\rm Tr}(\Psi(\rho_i)\Psi(\rho_j)) < \epsilon_1 $$ for any $i\neq j$ and small $\epsilon_1>0$ .Consider the sum $$S = \frac{1}{d} \sum_{i=1}^d \Psi(\rho_i).$$ It's a state since $S\ge 0$ and ${\rm Tr}(S)=1$ . For any state we have ${\rm Tr}(S^2) \ge \frac{1}{d}$ with the equality only if $S$ is maximally mixed. Let $$m = \sum_i {\rm Tr}(\Psi(\rho_i)^2).$$ Then $$\frac{1}{d} \le {\rm Tr}(S^2) = \frac{1}{d^2} (\sum_i {\rm Tr}(\Psi(\rho_i)^2) + \sum_{i \neq j}{\rm Tr}(\Psi(\rho_i)\Psi(\rho_j)) ) \le \frac{1}{d^2} (m + d(d-1)\epsilon_1).$$ Thus $$\sum_i {\rm Tr}(\Psi(\rho_i)^2) = m \ge d - d(d-1)\epsilon_1 \approx d,$$ hence ${\rm Tr}(\Psi(\rho_i)^2) \approx 1$ for all $i$ , so that each $\Psi(\rho_i)$ must be close to pure. It's indeed not that trivial to show from this that $\Psi$ is close to a unitary channel, even though we have such a property in the exact case. But it looks very natural. I think there must be a proof of Theorem 7 that could be modified for this case. | {} | {'log_upvote_score': 4, 'links': ['https://quantumcomputing.stackexchange.com/questions/27918', 'https://quantumcomputing.stackexchange.com', 'https://quantumcomputing.stackexchange.com/users/4517/']} | jdg_70152 |
stackexchange | llm_judgeable_groundtruth_similarity | 804493 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is this a mathematical statement: Suppose this statement is false. I know what a mathematical statement is: it's either true or false. But the suppose is what's confusing me.
Now provide the response and nothing else.
| First I'm going to evaluate $$\int_{-\infty}^{\infty} \frac{\cos ax}{\cosh x} \ dx .$$ Integrate the function $ \displaystyle f(z) = \frac{e^{iaz}}{\cosh z}$ around a rectangle on the complex plane with vertices at $z= R$, $ z= R + i \pi$, $z= -R + i \pi$, and $z= - R$. As $R \to \infty$, $ \displaystyle \int f(z) \ dz$ vanishes on the left and right sides of the rectangle. So going around the rectangle counterclockwise, we get $$ \int_{-\infty}^{\infty} f(x) \ dx + \int_{\infty}^{-\infty} f(t + i \pi) \ dt = 2 \pi i \ \text{Res} [f(z),i \pi] ,$$ which implies $$ (1+ e^{- a \pi}) \int_{-\infty}^{\infty} \frac{e^{iax}}{\cosh x} \ dx = 2 \pi i \lim_{z \to i \pi /2} \frac{e^{iaz}}{\sinh z} = 2 \pi \ e^{- a \pi /2} .$$ And equating the real parts on both sides of the equation, we get $$ \int_{-\infty}^{\infty} \frac{\cos ax}{\cosh x} \ dx = \frac{2 \pi}{e^{a \pi /2} + e^{- a \pi/2}} = \pi \ \text{sech} \left( \frac{a \pi}{2}\right) .$$ Then $$ \begin{align} \int_{0}^{a} \int_{-\infty}^{\infty} \frac{\cos ax}{\cosh x} \ dx \ da &= \int_{-\infty}^{\infty} \int_{0}^{a} \frac{\cos ax}{\cosh x} \ da \ dx \\ &= \int_{-\infty}^{\infty} \frac{\sin ax}{x \cosh x} \ dx \\ &= \pi \int_{0}^{a} \text{sech} \left(\frac{a \pi}{2} \right) \ da \\ &= 2 \int_{0}^{a \pi /2} \text{sech}(u) \ du \\ &= 4 \int_{0}^{a \pi /2} \frac{e^{u}}{1+e^{2u}} \ du \\ &= 4 \int_{1}^{e^{a \pi /2}} \frac{1}{1+w^{2}} \ dw \\ &= 4 \left(\arctan (e^{a \pi /2}) - \frac{\pi}{4} \right) . \end{align}$$ Therefore, $$ \int_{-\infty}^{\infty} \frac{\sin x}{x \cosh x} \ dx = 4 \arctan (e^{\pi /2}) - \pi \approx 2.3217507819 . $$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/804493', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/152525/']} | jdg_70153 |
stackexchange | llm_judgeable_groundtruth_similarity | 1771 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm still not sure about the mechanics that lead to rabies being incurable. I know that it can be treated before any symptoms show up, but why is it that once symptoms show the person is a dead man walking?
Now provide the response and nothing else.
| This is because rabies is a viral infection of nervous tissue that propagates through peripheral nerves into the brain and causes brain tissue inflammation (encephalitis). As long as the virus is in the brain there is no way to get rid of it. The main trade-off here is that everything that would kill the virus will be as (or even more) aggressive against the brain tissue, and impairment of the latter will lead to really heavy deficits in vital functions like breathing and thermoregulation. The first manifestations of rabies are those due to brain damage. This means, the virus is already there and the brain is already fatally damaged. | {} | {'log_upvote_score': 6, 'links': ['https://biology.stackexchange.com/questions/1771', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/584/']} | jdg_70154 |
stackexchange | llm_judgeable_groundtruth_similarity | 221407 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
In an early paper, GH Hardy talks about the distribution of "curious" sum: $$ \sum_{\nu \leq n } \{ \nu \theta \}^2 = \tfrac{1}{12} n + O(1)$$ where $\{x\}:=x-\left \lfloor x \right \rfloor -1/2$. With a computer it was not hard to verify the linear growth, the factor of $\frac{1}{12}$ or the constant error term. Here are my experiments: The line is rather easy to prove with Weyl equidistribution theorem - without the $O(1)$ term. $$ \frac{1}{n}\sum_{\nu \leq n } \{ \nu \theta \}^2 \approx\int_{-\frac{1}{2}}^{\frac{1}{2}} x^2 \, dx = \frac{1}{12} $$ Are there any easy ways to understand the noise? It clearly has no limit... in the figure $\theta = \sqrt{7}$ the $O(1)$ error term is distributed between 0.05 and 0.30 with clear bands at indeterminate values. Obviously $\theta \notin \mathbb{Q}$ and even then the uncertainty might be too large. I had computed the Fourier series in order to trace the proof of the Von Neumann ergodic theorem. We can plot the sum of the sawtooth functions. The first 10 and the first 100 terms. The limit of $\sum_{\nu \leq n} \{ \nu \theta \}^2$ is highly oscillatory but does not converge at some points.
Now provide the response and nothing else.
| There are several puzzling things about the question: Firstly of course $\theta$ must be irrational, and it is intended for $\{ x\}$ to denote the Bernoulli polynomial $x-[x]-1/2$ rather than the more usual fractional part. Secondly, where is the result of Hardy from? I did find this statement in the Cambridge ICM paper of Hardy and Littlewood where they write "While engaged on the attempt to elucidate these questions we have found a curious result which seems of sufficient interest to be mentioned separately. It is that $$\sum_{\nu =1}^{n} \{ \nu \theta\}^2 = \frac{n}{12} +O(1)$$ for all irrational values of $\theta$. When we consider the great irregularity and obscurity of $\ldots$, it is not a little surprising that [this] (and presumably the corresponding sums with higher even powers) should behave with such marked regularity." Note that Hardy and Littlewood also use $\{x\}$ to denote the Bernoulli polynomial. This is puzzling since it seems completely false if for example $\theta$ is a Liouville number. Indeed then I found a follow up paper of Hardy and Littlewood where they note (see page 36 there) "We may take this opportunity of correcting a misstatement in our communication to the Cambridge congress $\ldots$. It was stated there that $$ \sum_{\nu =1}^{n} \{ \nu \theta\}^2 = \frac{n}{12} +O(1)$$ for every irrational $\theta$. This is untrue; but the equation holds for very general classes of values of $\theta$, and in particular for any $\theta$ whose partial quotients are bounded." What Hardy and Littlewood had in mind is presumably to write out the Fourier expansion of the Bernoulli polynomial $(x-[x]-1/2)^2 -1/12$ which is $$ \frac{1}{2\pi^2} \sum_{k\neq 0} \frac{e^{2\pi i kx}}{k^2},$$ and then sum this over $x = \nu \theta$. From here one can see that what is needed for their result is that if $\Vert k\theta \Vert$ isn't smaller than $k^{-2+\epsilon}$ then the series will converge nicely, but there will be problems for very well approximable numbers. Using the Fourier expansion, for good irrationals (e.g. $\sqrt{7}$) the Fourier expansion shows that the remainder term will have an almost periodic structure. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/221407', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1358/']} | jdg_70155
stackexchange | llm_judgeable_groundtruth_similarity | 4680 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As an example, consider the polynomial $f(x) = x^3 + x - 2 = (x - 1)(x^2 + x + 2)$ which clearly has a root $x = 1$.But we can also find the roots using Cardano's method, which leads to $$x = \sqrt[3]{\sqrt{28/27} + 1} - \sqrt[3]{\sqrt{28/27} - 1}$$ and two other roots. It's easy to check numerically that this expression is really equal to $1$, but is there a way to derive it algebraically which isn't equivalent to showing that this expression satisfies $f(x) = 0$?
Now provide the response and nothing else.
| Yes. The first thing to try is to guess that $\sqrt[3]{ \left( \sqrt{ \frac{28}{27} } \pm 1 \right) } = \pm \frac{1}{2} + \sqrt{a}$ for some $a$. Cubing both sides then gives $$\frac{2}{9} \sqrt{21} \pm 1 = \pm \frac{1}{8} + \frac{3}{4} \sqrt{a} \pm \frac{3}{2} a + a \sqrt{a}.$$ Setting $1 = \frac{1}{8} + \frac{3a}{2}$ gives $a = \frac{7}{12}$, and we can verify that $$\frac{3}{4} \sqrt{a} + a \sqrt{a} = \frac{1}{8} \sqrt{21} + \frac{7}{72} \sqrt{21} = \frac{2}{9} \sqrt{21}$$ as desired. If this method doesn't work then the problem becomes harder. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4680', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1470/']} | jdg_70156 |
stackexchange | llm_judgeable_groundtruth_similarity | 41265570 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following layout file: <?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent"> <android.support.constraint.ConstraintLayout android:id="@+id/activity_main" android:layout_width="match_parent" android:layout_height="match_parent" android:background="#FF0000" android:paddingBottom="@dimen/activity_vertical_margin" android:paddingLeft="@dimen/activity_horizontal_margin" android:paddingRight="@dimen/activity_horizontal_margin" android:paddingTop="@dimen/activity_vertical_margin"> <ImageView android:layout_width="0dp" android:layout_height="0dp" android:background="#0000FF" android:padding="16dp" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintTop_toTopOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintDimensionRatio="H,3:1" tools:layout_editor_absoluteX="16dp" /> </android.support.constraint.ConstraintLayout></RelativeLayout> I am not sure how the app:layout_constraintDimensionRatio works. My understanding is the ratio will always be width:height. So 3:1 will always make the ImageView appear 3 times wider than height. The prefix H or W tells ConstraintLayout which dimension should respect the ratio. If it is H then it means width will be first computed from other constraints and then height will be adjusted according to the aspect ratio. However this is the result of the layout: The height is 3 times larger than width which is unexpected. Can anyone explain to me how the dimensions are computed with respect to app:layout_constraintDimensionRatio setting?
Now provide the response and nothing else.
| Your understanding for the way app:layout_constraintDimensionRatio works is correct. If you set app:layout_constraintDimensionRatio="H,3:1" then it means width will be first computed from other constraints and then height will be adjusted according to the aspect ratio. The only problem with your implementation is that you added app:layout_constraintBottom_toBottomOf="parent" to the ImageView, so that it caused app:layout_constraintDimensionRatio to be ignored. Here's the layout to size your ImageView in 3:1 aspect ratio: <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent" android:background="#FF0000"> <ImageView android:id="@+id/imageView" android:layout_width="0dp" android:layout_height="0dp" android:layout_marginStart="16dp" android:layout_marginTop="16dp" android:layout_marginEnd="16dp" android:background="#0000FF" app:layout_constraintTop_toTopOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintDimensionRatio="H,3:1" /></android.support.constraint.ConstraintLayout> and here's the result view: | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/41265570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2561007/']} | jdg_70157 |
stackexchange | llm_judgeable_groundtruth_similarity | 368792 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the official Microsoft docs: ✓ DO use a stable, version-independent product name at the second level of a namespace name. But in Reference Source, the following namespaces end with a version: Microsoft.Build.Tasks.v4.0, Microsoft.Build.Utilities.v4.0, Microsoft.Build.Utilities.v3.5. So is Microsoft violating its own guidelines? Is it a good idea to add a version suffix to an API/SDK namespace?
Now provide the response and nothing else.
| When the documents mention second-level, they're referring to the section of the namespace indicated as (<Product>|<Technology>) . The product or technology should be version independent. So no, technically Microsoft isn't violating its own guidelines, however admittedly using versions in namespaces is probably not a great idea. The proper way to update your library is to deprecate methods and classes which will not be available in the next release of the library, with a clear reference to a replacement to be used instead in the documentation and/or in the code itself as a comment. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/368792', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/20274/']} | jdg_70158 |
stackexchange | llm_judgeable_groundtruth_similarity | 1139048 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using XmlSerializer.Serialize , and it produces line breaks and unnecessary spaces. How to avoid it?
Now provide the response and nothing else.
| Perhaps you could use the overload accepting an XmlWriter , and configure the given XmlWriter with an XmlWriterSettings instance? XmlWriterSettings allows you to control the application of line breaks and indentation.

    void Serialize(Object o)
    {
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.Indent = false;
        settings.NewLineHandling = NewLineHandling.None;
        //settings.OtherProperties = values;
        using (XmlWriter writer = XmlWriter.Create(CreateStream(), settings))
        {
            _serializer.Serialize(writer, o);
        }
    } | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1139048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48465/']} | jdg_70159
stackexchange | llm_judgeable_groundtruth_similarity | 2438896 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just wanted to develop a translation app in a Django projects which enables registered users with certain permissions to translate every single message it appears in latest version. My question is, what character set should I use for database tables in this translation app? Looks like some european language characters cannot be stored in UTF-8?
Now provide the response and nothing else.
| Looks like some european language characters cannot be stored in UTF-8? Not true. UTF-8 can store any character set without limitations except maybe for Klingon. UTF-8 is your one stop shop for internationalization. If you have problems with characters, they are most likely to be encoding problems, or missing support for that character range in the font you're using to display the data with (Extremely unlikely for a european language character though, but common e.g. when viewing indian sites on an european computer. See also this question ) If a non-western character set can't be rendered, it could be that the user's built in font does not have that range of UTF-8 covered. Update: Klingon it is indeed not part of official UTF-8 : Some modern invented scripts which have not yet been included in Unicode (e.g., Tengwar) or which do not qualify for inclusion in Unicode due to lack of real-world use (e.g., Klingon) are listed in the ConScript Unicode Registry, along with unofficial but widely-used Private Use Area code assignments. However, there is a volunteer project that has inofficially assigned code points F8D0-F8FF in the private area to Klingon. Gallery of Klingon characters | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2438896', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/192767/']} | jdg_70160 |
stackexchange | llm_judgeable_groundtruth_similarity | 20779746 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I tried replacing @post.update with @post.save as in my code below, it still worked and it returned true, but the values were not updated. def create @post = Post.new(post_params) if @post.save redirect_to posts_path, notice: 'Post was successfully created.' else render action: 'new' end end def update respond_to do |format| if @post.update(post_params) format.html { redirect_to @post, notice: 'Post was successfully updated.' } format.json { head :no_content } else format.html { render action: 'new' } format.json { render json: @post.errors, status: :unprocessable_entity } end end end below are my rake routes: $ rake routes posts GET /posts(.:format) posts#index POST /posts(.:format) posts#create new_post GET /posts/new(.:format) posts#newedit_post GET /posts/:id/edit(.:format) posts#edit post GET /posts/:id(.:format) posts#show PATCH /posts/:id(.:format) posts#update PUT /posts/:id(.:format) posts#update DELETE /posts/:id(.:format) posts#destroy root / welcome#index Why didn't it update or overwrite my record? Will using different http requests for the same methods have any effect on them? Can we use PUT , GET , PATCH and DELETE for save when passed with proper syntax? The question is regarding rails 4 guide, the first guide .
Now provide the response and nothing else.
| Because save will not accept the attributes as parameters; save can only accept parameters like validate: false to skip validation. If you want to use save , then you need to assign or modify individual attributes before save . But if you want mass-assignment, update would be your choice.

    @post.f_name = 'foo'
    @post.l_name = 'bar'

    @post.update # This will not work
    @post.save # This will work

    @post.save({:f_name=>"peter",:l_name=>"parker"}) # This will not work
    @post.update({:f_name=>"peter",:l_name=>"parker"}) # This will work | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20779746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2718348/']} | jdg_70161
stackexchange | llm_judgeable_groundtruth_similarity | 8097121 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why do we need to add this pair of tags within our Facebook application? What's the use of this pair of tags? I created an application which uses an apprequest, and it works well even though I didn't add these tags in front of my scripts. So I really wonder why we need to add them. Thank you.
Now provide the response and nothing else.
| It's the place holder for the Facebook javascript script to attach elements to the DOM. Without this when the referenced Facebook script is ran it has nowhere to attach elements. You can see fb-root gets appended to as part of the initialisation. <script type="text/javascript"> window.fbAsyncInit = function() { FB.init({appId: 'xxxxxx', status: true, cookie: true, xfbml: true}); }; (function() { var e = document.createElement('script'); e.async = true; e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js'; document.getElementById('fb-root').appendChild(e); }()); </script> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8097121', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1042072/']} | jdg_70162 |
stackexchange | llm_judgeable_groundtruth_similarity | 18140902 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there an Underscore.js function that can map one object to another object, based on the other object's properties? (Kind of how AutoMapper works in .NET.) For example: var objectA = { 'name': 'Jonathan', 'city': 'Sydney' };var objectB = { 'name': 'Jonathan Conway', 'city': 'Sydney', 'errors': [] }_.mapperMethod(objectB);=> { 'name': 'Jonathan Conway', 'city': 'Sydney' };
Now provide the response and nothing else.
| Possibly _.extend() :

    _.extend(objectA, objectB);
    console.log(objectA);
    // { 'name': 'Jonathan Conway', 'city': 'Sydney', 'errors': [] }

If you don't want to pick up additional keys, you can use it with _.keys() and _.pick() :

    var keys = _.keys(objectA);
    _.extend(objectA, _.pick(objectB, keys));
    console.log(objectA);
    // { 'name': 'Jonathan Conway', 'city': 'Sydney' } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18140902', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23341/']} | jdg_70163
stackexchange | llm_judgeable_groundtruth_similarity | 26204283 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I remove all margins from boostrap container-fluid class and its rows? .container-fluid { padding: 0;} This does basically what I want, but it adds 20px overflow to body. So should I just do this: body, html { overflow-x: hidden; } Do something with .container-fluid > .row
Now provide the response and nothing else.
| To be specific about your question: The .row has a negative left and right margin equal to the left/right padding value of the col-*-* , that is why there are horizontal scrollbars when you fiddle with the grid without understanding how it works. If you manipulate the column classes with zero padding on the left and right or with some other value, the negative margin on the .row must be equal to the the padding on the left and right of the column classes. The .container also has padding that matches the value of the column classes to prevent the scrollbars. So the answer is: .container-fluid > .row -- make the margin:0 on the left and right if you remove the padding on the left and right of the column classes. If all is zero, then you can just adjust the .container or .container fluid with zero padding on the left and right, but if you use different values > 15px L & R, then it's a different story as the .container/.container-fluid will need to be adjusted if the left and right padding on the columns is greater than 15px. There are no margins on the col-*-* it's padding which is quite different when you use box-sizing:border-box globally as Boostrap 3 does. If you want a tight grid, remove all padding on the left and right of all column classes and then remove the negative margin on the left and right of the .row , and then you can remove the left and right padding on the .container . DEMO: http://jsbin.com/jeqase/2/ Removes all padding and negative margin for a tight grid and full width of the .container with any surrounding element (body, html, whatever) with the class .alt-grid : .alt-grid [class*="col-"] {padding-left:0;padding-right:0}.alt-grid .row {margin-left:0;margin-right:0}/* container adjusted */.alt-grid .container {width:100%;max-width:none;padding:0;} You can also do this with .container-fluid - the only thing to zero out is the left and right padding. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26204283', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1913517/']} | jdg_70164 |
stackexchange | llm_judgeable_groundtruth_similarity | 174083 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If event $A$ and $B$ are events such that $P(A)$ and $P(B)$ are either $0$ or $1$ and $A$ is subset of $B$ , then $A$ and $B$ are dependent events. Proof: Since $A\subset B$ , we have $A\cap B=A$ and so $P(A\cap B)=P(A)$ . $\therefore$ $P(A\cap B)-P(A)P(B)=P(A)-P(A)P(B)=P(A)[1-P(B)]$ Since $P(A)>0$ and $P(B)<1$ , ( original image ) Where did that last step come from, involving $P(A \cap B) - P(A)P(B)$ ? How did the solution come up with that?
Now provide the response and nothing else.
| The result has clearly been misstated or miscopied. The proof is correct for the following result: If event $A$ and $B$ are events such that $P(A)$ and $P(B)$ are neither $0$ nor $1$, and $A$ is subset of $B$, then $A$ and $B$ are dependent events. The reason for looking at $P(A\cap B)-P(A)P(B)$ is that by definition, $A$ and $B$ are independent if and only if $P(A\cap B)=P(A)P(B)$, i.e., if and only if $P(A\cap B)-P(A)P(B)=0$. But the hypothesis that $A\subseteq B$ implies that $P(A)=P(A\cap B)$, so $$P(A\cap B)-P(A)P(B)=P(A)-P(A)P(B)=P(A)\big(1-P(B)\big)\;,$$ which is $0$ if and only if either $P(A)=0$ or $1-P(B)=0$, i.e., if and only if either $P(A)=0$ or $P(B)=1$. These possibilities are ruled out by the corrected version of the hypothesis, so it must be the case that $P(A\cap B)-P(A)P(B)\ne 0$, $P(A\cap B)\ne P(A)P(B)$, and hence by definition $A$ and $B$ are not independent (which of course means that they are dependent). The answer by copper.hat shows why the stated version is wrong. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/174083', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26728/']} | jdg_70165 |
stackexchange | llm_judgeable_groundtruth_similarity | 11218746 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a programming task(not homework.) where I have to find the bridges in a graph. I worked on it a bit myself, but could not come up with anything satisfactory. So i googled it , I did find something but I am unable to understand the algorithm as it is presented. Could someone please take a look at this code and give me an explanation.? public Bridge(Graph G) { low = new int[G.V()]; pre = new int[G.V()]; for (int v = 0; v < G.V(); v++) low[v] = -1; for (int v = 0; v < G.V(); v++) pre[v] = -1; for (int v = 0; v < G.V(); v++) if (pre[v] == -1) dfs(G, v, v);}public int components() { return bridges + 1; }private void dfs(Graph G, int u, int v) { pre[v] = cnt++; low[v] = pre[v]; for (int w : G.adj(v)) { if (pre[w] == -1) { dfs(G, v, w); low[v] = Math.min(low[v], low[w]); if (low[w] == pre[w]) { StdOut.println(v + "-" + w + " is a bridge"); bridges++; } } // update low number - ignore reverse of edge leading to v else if (w != u) low[v] = Math.min(low[v], pre[w]); }}
Now provide the response and nothing else.
| Def: Bridge is an edge, when removed, will disconnect the graph (or increase the number of connected components by 1). One observation regarding bridges in graph; none of the edges that belong to a loop can be a bridge. So in a graph such as A--B--C--A , removing any of the edge A--B , B--C and C--A will not disconnect the graph. But, for an undirected graph, the edge A--B implies B--A ; and this edge could still be a bridge, where the only loop it is in is A--B--A . So, we should consider only those loops formed by a back edge. This is where the parent information you've passed in the function argument helps. It will help you to not use the loops such as A--B--A . Now to identify the back edge (or the loop), A--B--C--A we use the low and pre arrays. The array pre is like the visited array in the dfs algorithm; but instead of just flagging that the vertex as visited, we identify each vertex with a different number (according to its position in the dfs tree). The low array helps to identify if there is a loop. The low array identifies the lowest numbered (from pre array) vertex that the current vertex can reach. Lets work through this graph A--B--C--D--B . Starting at A dfs: ^ ^ ^ ^ ^pre: 0 -1 -1 -1 -1 0--1 -1 -1 1 0--1--2 -1 1 0--1--2--3 1 0--1--2--3--1graph: A--B--C--D--B A--B--C--D--B A--B--C--D--B A--B--C--D--B A--B--C--D--Blow: 0 -1 -1 -1 -1 0--1 -1 -1 1 0--1--2 -1 1 0--1--2--3 1 0--1--2--3->1 At this point, you've encountered a cycle/loop in graph. In your code if (pre[w] == -1) will be false this time. So, you'll enter the else part. The if statement there is checking if B is the parent vertex of D . It is not, so D will absorb B 's pre value into low . Continuing the example, dfs: ^pre: 0--1--2--3graph: A--B--C--Dlow: 0--1--2--1 This low value of D propagates back to C through the code low[v] = Math.min(low[v], low[w]); . dfs: ^ ^ ^pre: 0--1--2--3--1 0--1--2--3--1 0--1--2--3--1graph: A--B--C--D--B A--B--C--D--B A--B--C--D--Blow: 0--1--1--1--1 0--1--1--1--1 0--1--1--1--1 Now, that the cycle/loop is identified, we note that the vertex A is not part of the loop. So, you print out A--B as a bridge. The code low['B'] == pre['B'] means an edge to B will be a bridge. This is because, the lowest vertex we can reach from B is B itself. Hope this explanation helps. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11218746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1242040/']} | jdg_70166 |
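If it helps to see the same low/pre bookkeeping outside Java, here is a sketch of the algorithm in Python (illustrative only; the function name, graph representation and example graph are made up, not taken from the question's code):

from collections import defaultdict

def find_bridges(n, edges):
    # pre[v] = DFS visit order of v; low[v] = smallest pre-number reachable
    # from v's subtree using at most one back edge (same roles as in the Java code).
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    pre = [-1] * n
    low = [-1] * n
    bridges = []
    counter = [0]

    def dfs(parent, v):
        pre[v] = low[v] = counter[0]
        counter[0] += 1
        for w in adj[v]:
            if pre[w] == -1:              # tree edge: recurse
                dfs(v, w)
                low[v] = min(low[v], low[w])
                if low[w] == pre[w]:      # nothing in w's subtree climbs above w
                    bridges.append((v, w))
            elif w != parent:             # back edge (ignore the edge back to the parent)
                low[v] = min(low[v], pre[w])

    for v in range(n):
        if pre[v] == -1:
            dfs(v, v)
    return bridges

# A--B--C--D--B plus a pendant edge D--E: only A--B and D--E are bridges.
print(find_bridges(5, [(0, 1), (1, 2), (2, 3), (3, 1), (3, 4)]))   # [(3, 4), (0, 1)]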
stackexchange | llm_judgeable_groundtruth_similarity | 931463 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a variable with type Graphics , how do I cast it to Graphics2D in Scala?
Now provide the response and nothing else.
| The preferred technique is to use pattern matching. This allows you to gracefully handle the case that the value in question is not of the given type: g match { case g2: Graphics2D => g2 case _ => throw new ClassCastException} This block replicates the semantics of the asInstanceOf[Graphics2D] method, but with greater flexibility. For example, you could provide different branches for various types, effectively performing multiple conditional casts at the same time. Finally, you don't really need to throw an exception in the catch-all area, you could also return null (or preferably, None ), or you could enter some fallback branch which works without Graphics2D . In short, this is really the way to go. It's a little more syntactically bulky than asInstanceOf , but the added flexibility is almost always worth it. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/931463', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3827/']} | jdg_70167 |
stackexchange | llm_judgeable_groundtruth_similarity | 105058 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the (cumbersome) statement: "Every integer greater than 1 can be written as a unique product of integers belonging to a certain subset $S$ of integers." When $S$ is the set of primes, this is the Fundamental Theorem of Arithmetic. My question is this: Are there any other types of numbers for which this is true? EDIT: As the answers show, this obviously cannot be done. What if we relax the integer condition, i.e., can there be any other canonical representation of positive integers using complex numbers?
Now provide the response and nothing else.
| If you mean that every positive integer gets a unique multiplicative factorization, then no, there is no other canonical representation. Why? Because then every prime number $p$ can be factorized, but the only way that's possible is if the components of the factorizations include the primes themselves. Furthermore, you can't add any other number to the list because then the factorization of this number would be non-unique. Alternatively, there are non-multiplicative representations of integers. The $p$-adic representation is just writing $n$ in "base $p$": $n=a_0+a_1p+a_2p^2+\cdots+a_rp^r$. Even though the golden ratio is not a rational number, we can write integers in base golden ratio . Algebraic number theory studies number fields and rings of integers beyond just $\mathbb{Q}$ and $\mathbb{Z}$. Of note, there is not necessarily unique factorization of the elements. For example, in $\mathbb{Z}[\sqrt{-5}]$, we have $$6=2\cdot3=(1+\sqrt{-5})(1-\sqrt{-5}).$$ This led to some headaches (I assume anyway), until mathematicians figured out that even though the numbers don't factor uniquely, the ideals of the integers factor uniquely into products of prime ideals, which has led to other algebraic constructions based off of them designed ultimately to study the structure of numbers. (If you don't understand this section of my answer, don't worry about it. It's for a later time then.) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/105058', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14643/']} | jdg_70168
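To make the $\mathbb{Z}[\sqrt{-5}]$ example a little more concrete (a standard norm argument, added here only as an illustration): the norm $N(a+b\sqrt{-5})=a^2+5b^2$ is multiplicative, and $$N(2)=4,\qquad N(3)=9,\qquad N(1\pm\sqrt{-5})=6.$$ A nontrivial factorization of $2$ would force some factor to have norm $2$, but $a^2+5b^2=2$ has no integer solutions; likewise $a^2+5b^2=3$ is impossible. So $2$, $3$ and $1\pm\sqrt{-5}$ are all irreducible, and since their norms differ, no unit can match up the two factorizations of $6$: uniqueness of factorization genuinely fails there.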
stackexchange | llm_judgeable_groundtruth_similarity | 17032264 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having a problem running glmnet lasso with a wide data set. My data has N=50, but p > 49000, all factors. So to run glmnet I have to create a model.matrix, BUT I just run out of memory when I call model.matrix(formula, data), where formula = Class ~ . As a worked example I will generate a dataset: data <- matrix(rep(0,50*49000), nrow=50)for(i in 1:50) {x = rep(letters[2:8], 7000)y = sample(x=1:49000, size=49000)data[i,] <- x[y]}data <- as.data.frame(data)x = c(rep('A', 20), rep('B', 15), rep('C', 15))y = sample(x=1:50, size=50)class = x[y]data <- cbind(data, class) After that I tried to create a model.matrix to pass to glmnet. formula <- as.formula(class ~ .)X = model.matrix(formula, data) model <- cv.glmnet(X, class, standardize=FALSE, family='multinomial', alpha=1, nfolds=10) In the last step (X = model.matrix ...) I run out of memory. What can I do?
Now provide the response and nothing else.
| I asked Professor Trevor Hastie and received the following advice: "Hello Flavio model.matrix is killing you. You will have 49K factors, and model matrix is trying to represent them as contrasts which will be 6 column matrices, so 49*6 approx 300K columns.Why not make binary dummy variables (7 per factor), and simply construct this directly without using model.matrix. You could save 1/7th the space by storing this via sparseMatrix (glmnet accepts sparse matrix formats)" I did exactly that and it worked perfectly fine. I think that can be useful to others. An article, with code, that came from this problem: http://www.rmining.net/2014/02/25/genetic-data-large-matrices-glmnet/ In order to avoid broken links I will post part of the post here: The problem with the formula approach is that, in general, genomic data has more columns than observations. The data that I worked with in that case had 40,000 columns and only 73 observations. In order to create a small set of test data, run the following code: for(i in 1:50) { x = rep(letters[2:8], 7000) y = sample(x=1:49000, size=49000) data[i,] <- x[y]}data <- as.data.frame(data)x <- c(rep('A', 20), rep('B', 15), rep('C', 15))y <- sample(x=1:50, size=50)class = x[y]data <- cbind(data, class) So, with this data set we will try to fit a model with glmnet (): formula <- as.formula(class ~ .)X <- model.matrix(formula, data)model <- cv.glmnet(X, class, standardize=FALSE, family='multinomial', alpha=1, nfolds=10) And if you do not have a computer with more RAM than mine, you will probably run out of memory and R will crash. The solution? My first idea was to try sparse.model.matrix() that creates a sparse matrix model using the same formula. Unfortunately it did not work, because even with a sparse matrix, the final model is still too big! Interestingly, this dataset occupies only 24MB of RAM, but when you use model.matrix the result is an array with more than 1Gb. The solution I found was to build the matrix by hand. To do this we encode the array with dummy variables, column by column, and store the result in a sparse matrix. Then we will use this matrix as input to the model and see if it will not run out of memory: ## Creates a matrix using the first columnX <- sparse.model.matrix(~data[,1]-1)## Check if the column has more than one levelfor (i in 2:ncol(data)) {## In the case of more than one level apply dummy coding if (nlevels(data[,i])>1) { coluna <- sparse.model.matrix(~data[,i]-1) X <- cBind(X, coluna)}## Transform factor to numericelse { coluna <- as.numeric(as.factor(data[,i])) X <- cBind(X, coluna)} NOTE: Pay attention to how we are using a sparse matrix; the Matrix package is required. Also note that the columns are connected using cBind () instead of cbind (). The matrix thus generated was much smaller: less than 70 Mb when I tested. Fortunately glmnet () supports a sparse matrix and you can run the model: mod.lasso <- cv.glmnet(X, class, standardize=FALSE, family='multinomial', alpha=1, nfolds=10) So you can create models with this type of data without blowing the memory and without using R packages for large datasets like bigmemory and ff . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17032264', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1760525/']} | jdg_70169
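The same idea (encode each factor column by column straight into a sparse matrix, never materializing a dense model matrix) carries over to other ecosystems too. Purely as an illustration, and not part of the original R solution, a rough Python/scipy sketch with made-up names and sizes could look like this:

import numpy as np
from scipy import sparse

def sparse_dummies(columns):
    # One-hot encode a list of categorical columns (each a list of strings)
    # directly into a CSR matrix: one block of dummy columns per factor.
    blocks = []
    n = len(columns[0])
    for col in columns:
        levels = sorted(set(col))
        index = {lvl: j for j, lvl in enumerate(levels)}
        rows = np.arange(n)
        cols = np.array([index[v] for v in col])
        data = np.ones(n)
        blocks.append(sparse.csr_matrix((data, (rows, cols)), shape=(n, len(levels))))
    return sparse.hstack(blocks, format="csr")

rng = np.random.default_rng(0)
cols = [list(rng.choice(list("bcdefgh"), size=50)) for _ in range(1000)]  # 50 obs x 1000 factors
X = sparse_dummies(cols)
print(X.shape, X.nnz)   # roughly (50, 7000), but only 50*1000 stored entries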
stackexchange | llm_judgeable_groundtruth_similarity | 13950676 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I check the target URL for a specific response code like 200 OK without Indy throwing all sorts of exceptions out (ConnectionTimeout, ConnectionClosedGracefully, etc.)? For example, if the URL is not proper or its host cannot be found or cannot be reached, Indy will still raise exceptions even though I tried to ignore them. So my question is how to properly ignore these exceptions.
Now provide the response and nothing else.
| 1. How to ignore all exceptions thrown by the TIdHTTP ? To handle all exceptions and, as you say, ignore them, you may use the code that is almost identical to the code from @Stijn's answer: procedure TForm1.Button1Click(Sender: TObject);var IdHTTP: TIdHTTP;begin IdHTTP := TIdHTTP.Create; try try IdHTTP.Get('http://www.example.com'); except // do just nothing here, if you really want to completely ignore all // exceptions thrown from inside the try..except block execution and // if you don't want to indicate somehow, that the exception occured end; finally IdHTTP.Free; end;end; 2. How to handle specific exceptions thrown by the TIdHTTP ? Maybe one day you'll want to react somehow on certain types of exceptions thrown by TIdHTTP class, e.g. react only on HTTP protocol exceptions. And that's what I'll try to elaborate here. Indy defines many exception classes for different occasions, that may occur when a certain action fails. Here is a list of exception classes, that you might be interested in when you're working with HTTP protocol: EIdException - it is the base exception class used by Indy library. It might be useful for you when you want to distinguish between exceptions raised by Indy and all other exceptions thrown by your application. EIdSocketError - from a HTTP protocol abstraction point of view it's a low level exception class, which covers all exceptions raised when a certain socket operation fails. This can be useful for you to detect, that there is something wrong at your network level. EIdConnClosedGracefully - exceptions raised by this class indicate, that the server side closed the connection with the client in a common way. This can be useful when you'd need to react to this situation, e.g. by reconnecting to the server. EIdHTTPProtocolException - this exception class is used for exceptions thrown, when an error occurs during processing of a HTTP response for a certain request. This generally happens, when an unexpected numeric HTTP response code is received from the HTTP response. It can be useful, when you want to handle HTTP protocol errors specifically. By this exception handling, you can e.g. react on certain HTTP status codes returned by a server response. Here is the code skeleton showing handling of the exceptions listed above. Of course, you don't have to show messages, but do something more useful. And, you don't need to handle all of them; it's upon you which exceptions and how you will handle: uses IdHTTP, IdException, IdStack;procedure TForm1.Button1Click(Sender: TObject);var IdHTTP: TIdHTTP;begin IdHTTP := TIdHTTP.Create; try try IdHTTP.Get('http://www.example.com'); except // this exception class covers the HTTP protocol errors; you may read the // response code using ErrorCode property of the exception object, or the // same you can read from the ResponseCode property of the TIdHTTP object on E: EIdHTTPProtocolException do ShowMessage('Indy raised a protocol error!' + sLineBreak + 'HTTP status code: ' + IntToStr(E.ErrorCode) + sLineBreak + 'Error message' + E.Message); // this exception class covers the cases when the server side closes the // connection with a client in a "peaceful" way on E: EIdConnClosedGracefully do ShowMessage('Indy reports, that connection was closed gracefully!'); // this exception class covers all the low level socket exceptions on E: EIdSocketError do ShowMessage('Indy raised a socket error!' 
+ sLineBreak + 'Error code: ' + IntToStr(E.LastError) + sLineBreak + 'Error message' + E.Message); // this exception class covers all exceptions thrown by Indy library on E: EIdException do ShowMessage('Indy raised an exception!' + sLineBreak + 'Exception class: ' + E.ClassName + sLineBreak + 'Error message: ' + E.Message); // this exception class is a base Delphi exception class and covers here // all exceptions different from those listed above on E: Exception do ShowMessage('A non-Indy related exception has been raised!'); end; finally IdHTTP.Free; end;end; | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13950676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1910022/']} | jdg_70170 |
stackexchange | llm_judgeable_groundtruth_similarity | 60270 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Could anyone help me show that $[0,1]^{\mathbb{N}}$ with respect to the box topology is not compact? Thank you!
Now provide the response and nothing else.
| There’s absolutely nothing wrong with showing non-compactness of $X = \square_{k=0}^\infty [0,1]$ directly by looking at open covers, but there are other ways as well. For instance: If $X = {\Large \square}_{k=0}^\infty [0,1]$ were compact, its closed subspace $ {\Large \square}_{k=0}^\infty \{0,1\}$ would be compact, but it’s not hard to show that $ {\Large \square}_{k=0}^\infty \{0,1\}$ is an infinite, closed, discrete set in $X$ and therefore cannot be compact. Even simpler: For $n\in\mathbb{N}$ let $x_n \in X$ be the point such that $x_n(n) = 1$ and $x_n(k) = 0$ if $k\ne n$. Now consider the set $A = \{x_n:n\in\mathbb{N}\}$. It’s infinite, so if $X$ were compact, $A$ would have a limit point in $X$. But it’s not hard to show that $A$ is a closed, discrete subset of $X$ and therefore has no limit point in $X$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/60270', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2855/']} | jdg_70171 |
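To flesh out one step of the second argument (a sketch, not part of the original answer): each $x_n$ is isolated because the box-open set $$U_n=\prod_{k\in\mathbb{N}}V_k,\qquad V_n=\left(\tfrac12,1\right],\quad V_k=\left[0,\tfrac12\right)\ \text{for }k\neq n,$$ contains $x_n$ but no other $x_m$, since $x_m(n)=0\notin\left(\tfrac12,1\right]$ for $m\neq n$. A similar coordinate-by-coordinate case check shows no point of $X$ is a limit point of $A$, so $A$ is an infinite closed discrete subset and $X$ cannot be compact.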
stackexchange | llm_judgeable_groundtruth_similarity | 15825 |
Below is a question asked on the forum devops.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to understand the underlining architecture of Docker. The diagram that's been shown everywhere claims that in contrast with Virtualisation technologies such as "VirtualBox", Docker uses the OS of the host directly and only ships the applications together with dependencies such as libraries etc. Now, from what I can see in Docker, every image includes an OS. It starts with a FROM <os-image> tag. Isn't this contradictory to what's been claimed? Please advise.
Now provide the response and nothing else.
| First, Docker is just a company. =) There are two methods of isolating things: methods that isolate the kernel, and methods that do not isolate the kernel. For all intents and purposes, methods that do not isolate the kernel are called "containerization", while those that do are called "virtualization". In industry almost 100% of the use cases of "containerization" refer to Linux containerization. It's for the most part correct to say that containers are a Linux thing. One more point of confusion: many non-Linux systems that support "native" containerization do so with a virtual machine, which means that you have the native kernel (like Darwin/BSD) running on the host, and a Linux kernel running in a virtual machine which hosts just the container environment. As a rule of thumb: Containerization is always less secure: vulnerable to kernel-level exploits. Containerization is always faster: less context switching and hypervisor overhead. It's not true that just because something does not virtualize the kernel, it's not isolated from the host. While it's true Linux containers are just processes, and thus are visible from the host and subject to any kernel-level resource optimizations (like memory deduplication), containerized processes must run in different namespaces which, barring a kernel-level exploit, isolate them from other processes on the machine, and (usually) run in isolated cgroups subject to different quotas and limits. As a last point, just to drive it home: containerization typically refers to the implementation in Linux, yet Linux has no native concept of containerization itself, only providing cgroups (resource control) and namespaces (isolation); a container is just a native process. So we tend to say any process on Linux that makes use of namespaces is running in a container, more so if it's using cgroups. As a final point, typically when you hear "Docker Image" people mean an OCI Compliant Image, which is what everyone uses. When you see FROM <os-image> in a Dockerfile, what you're actually saying is that you want to, in git parlance, clone a working set of stuff and build on top of it. This stuff does not include a kernel. But it will include everything else, because you will not have access to the system's stuff. (The container is in a different namespace and isolated.) For example, a container must include a copy of glibc if needed, and a Debian container must include apt and other Debian-based utilities that constitute a "core" system. | {} | {'log_upvote_score': 4, 'links': ['https://devops.stackexchange.com/questions/15825', 'https://devops.stackexchange.com', 'https://devops.stackexchange.com/users/34976/']} | jdg_70172
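As a small aside (an illustration of the "a container is just a process in different namespaces" point, Linux-only and not part of the original answer), you can inspect a process's namespaces straight from /proc; comparing your shell with a PID that is running inside a container shows different namespace ids:

import os

def ns_ids(pid="self"):
    # Map namespace type -> namespace id for a process (Linux only).
    # Two processes share a namespace of a given type iff the ids match.
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name)) for name in os.listdir(ns_dir)}

print(ns_ids())        # e.g. {'mnt': 'mnt:[4026531841]', 'pid': 'pid:[4026531836]', ...}
# print(ns_ids(1234))  # substitute the PID of a containerized process to compare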
stackexchange | llm_judgeable_groundtruth_similarity | 21060632 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to AngularJS and I see this syntax a lot: function someFunc(){ return function(input){ return 'hello' + input; }} The function above is a general syntax I tend to see a lot, but my problem is specifically with this example of a custom filter: angular.module('bookFilters', []) .filter('newBookFilter', function(){ return function(input){ return 'The Book: ' + input.name + 'is new !'; };}); I understand that wrapping the function with another function gives me an opportunity to use dependency injection. Here are my questions about it: Does the filter get the function returned from the wrapping function? Then is it able to use dependency injection to inject the value into the function? Theoretically, this code: {{bookObj | newBookFilter}} Will become: {{ bookObj | function(input){return 'The Book: ' + input.name + 'is new !'; } }} And finally the {{}} will return the final value from the function. Why can't I just inject the input into the first function like: angular.module('bookFilters', []) .filter('newBookFilter', function(input){ return 'The Book: ' + input.name + 'is new !'; }); Why will dependency injection only work on the returned function? I know I'm really confused here. If anyone can help me I will be very thankful. Thank you all and have a nice day.
Now provide the response and nothing else.
| The answer is the opposite of what your question suggests. Angular injects only into the factory function, not into the resulting function: .filter('newBookFilter', function($log, ...){ // <- factory function return function(input, ...){ // <- resulting function }; }) The factory function can have arbitrary injected parameters. The resulting function has fixed parameters. A second reason is that you can do some initialization in the factory function. This is useful, for example, when you define a new directive . Also, the factory returns a closure which can capture variables and arguments from the factory function. See the example below. It uses dependency injection to get the logging object. Here is a full example. .filter('joinBy', function ($log) { // <- injected here return function (input, delimiter) { // <- used here var res = (input || []).join(delimiter || ','); $log.info(res); return res; }; }); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21060632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1451479/']} | jdg_70173
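The factory-plus-closure shape itself is language-agnostic. As an illustration only (this is Python, not Angular, and the names are made up), the same pattern reads: the dependency is injected once into the factory, and the returned function has a fixed signature but keeps access to it:

def make_join_by_filter(log):               # "factory": dependencies injected here, runs once
    def join_by(items, delimiter=","):      # "resulting function": fixed parameters
        result = delimiter.join(items or [])
        log(result)                         # captured from the factory's scope
        return result
    return join_by

join_by = make_join_by_filter(print)        # wiring/injection happens once
join_by(["a", "b", "c"], "|")               # 'a|b|c'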
stackexchange | llm_judgeable_groundtruth_similarity | 509812 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm decently familiar with Gaussian Processes; I understand that GPs form the backbone of BO but I don't want this question to drift in scope towards an explanation of GPs. Rather, I'm curious, how does BO make decisions given a gaussian process as a surrogate function? The GP models the problem parameters (to be optimized) as inputs, X, and the corresponding rewards as outputs, y, learning a smooth function that estimates rewards between adjacent observations (specific combinations of parameters to be optimized.) This is the surrogate function. I can think of a few ways which a system might make parameter adjustments: Pick the parameter configuration corresponding to the absolute highest reward observed (Exploitation.) pick the highest variance region (few observations nearby) and observe what happens (Exploration.) Use reward and variance in some ranking scheme to decide where to move next. Examples Ex 1: A very small reward with extremely low variance- should be avoided. Ex 2: A very high reward with very high variance- probably should be explored more to reduce variance around estimate. Ex 3: A very high reward with very little variance- probably should not be explored further, however, may be the optimal parameter configuration. Ex 4: Very low reward with very high variance- probably should be explored to reduce variance around estimate ( but perhaps less so than in example 2? ) In answering this question, could you elaborate on how variance and reward are used to inform which regions of the surrogate function ought to be explored most vs least- what is this ranking metric/system? Likewise, the term acquisition function appears to be very closely related to what I'm asking, but what is it and how does it guide exploration/exploitation?
Now provide the response and nothing else.
| In Bayesian optimization (BO), one chooses the next sampling point by maximizing the acquisition function $a(x)$ , i.e. \begin{equation}x^* = \arg\max_{x\in\mathcal{X}} a(x).\end{equation} This acquisition function is the key to the balance between exploration and exploitation. There are so many acquisition functions out there but I will list the most common three: the probability of improvement (PI), expected improvement (EI), and upper-confidence bound (UCB). Assuming that you want to solve a maximization problem, let's take UCB acquisition function because it is easy to type: $a(x) = \mu(x) + \kappa \sigma(x)$ . Let's also assume $\kappa$ is a constant for now, e.g. $\kappa = 2$ . Comparing two points $x_1$ and $x_2$ : if their means are the same, then BO will pick the one that has larger $\sigma^2(x)$ . This is called exploration . Comparing two points $x_1$ and $x_2$ : if their variances are the same, then BO will pick the one that has larger $\mu(x)$ . This is called exploitation . | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/509812', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/288172/']} | jdg_70174 |
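A minimal sketch of the UCB step in code (illustrative only; the candidate grid and the $\mu$, $\sigma$, $\kappa$ values are made up, and in practice $\mu$ and $\sigma$ come from the fitted GP surrogate):

import numpy as np

def ucb(mu, sigma, kappa=2.0):
    # Upper-confidence-bound acquisition over a grid of candidate points.
    return mu + kappa * sigma

mu    = np.array([0.2, 0.9, 0.9, 0.1, 0.5])       # posterior means at 5 candidates
sigma = np.array([0.05, 0.30, 0.05, 0.40, 0.10])  # posterior stds at the same candidates
a = ucb(mu, sigma)
x_next = int(np.argmax(a))   # index 1: high mean AND high variance wins the trade-off
print(a, x_next)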
stackexchange | llm_judgeable_groundtruth_similarity | 240668 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
According to Griffiths's Intro to Quantum Mechanics (page 147), if some function $f$ is an eigenfunction of $L^{2}$, then $L_{-}f$ is also an eigenfunction of $L^{2}$. Is $f$ also an eigenfunction of $L_{-}$? In general, what are the eigenfunctions of $L_{-}$ and $L_{+}$?
Now provide the response and nothing else.
| Is $f$ also an eigenfunction of $L_-$? In general, no. Introduce the notation$$L^2 |\ell;m\rangle=\ell(\ell+1)|\ell;m\rangle \tag{1}$$and$$L_z|\ell;m\rangle=m |\ell;m\rangle \tag{2}$$ In general $^1$,$$L_\pm|\ell;m\rangle=\sqrt{\ell(\ell+1)-m(m\pm1)}\;|\ell;m\pm 1\rangle \tag{3}$$which means $|\ell;m\rangle$ is not an eigenvector of $L_\pm$, because when acting with $L_\pm$ on $|\ell;m\rangle$, you dont get a scalar multiple of $|\ell;m\rangle$, but a different vector. In the specific case $|\ell;\ell\rangle$ (i.e., when $m=\ell$), we have$$L_+|\ell;\ell\rangle=0 \tag{4}$$which means $|\ell;\ell\rangle$ is an eigenvector of $L_+$, with eigenvalue 0. We can say the same about $L_-$: $|\ell;-\ell\rangle$ is an eigenvector of $L_-$, with eigenvalue $0$. In general, what are the eigenfunctions of $L_\pm$? To begin with, $L_\pm$ are not hermitian, so there is no guarantee these are diagonalisable, and if they are, the eigenvalues won't be real (i.e., $L_\pm$ is not an observable). In the paragraph above, we argued that $|\ell;\ell\rangle$ is an eigenvector of $L_+$, so that at least there exist one eigenvector. Now we prove that $|\ell;\ell\rangle$ is the only eigenvector of $L_+$. Let us suppose that there exist a set of vectors $|\alpha;\ell\rangle$ such that$$L_+|\alpha;\ell\rangle=\alpha|\alpha;\ell\rangle \tag{5}$$and$$L^2|\alpha;\ell\rangle=\ell(\ell+1)|\alpha;\ell\rangle \tag{6}$$(note that we can do this because $[L_+,L^2]=0$). As the set $\{|\ell;m\rangle\}$ is a basis, we can write $|\alpha;\ell\rangle$ as a linear combination of these vectors:$$|\alpha;\ell\rangle=\sum_{m=-\ell}^{+\ell} c^m_\ell|\ell;m\rangle \tag{7}$$ It's fairly easy to check that $(5)$ can only be satisfied if all the coefficients $c^m_\ell=0$ except for $c^\ell_\ell$, so that$$|\alpha;\ell\rangle=|\ell;\ell\rangle \tag{8}$$easily follows. In the case of the simple harmonic oscillator , where the algebra is similar, the situation is different: there, we have an expression similar to $(7)$, but where the sum is over $n=0,1,\cdots,\infty$. In this case, the conclusion is different, because there is and infinite number of terms in the sum. Now, its easy to prove that there exist a set of non-zero coefficients $c_n$, which means that there exist non-zero eigenvectors of the rising/lowering operators. This are called coherent states , and are quite fun. $^1$ The proof of this expression is pretty standard, and can be found online and in any book on QM. I'm going to reproduce the proof here to make the post more self-contained. The algebra of the angular momentum operators is$$[L_x,L_y]=iL_z\qquad [L_y,L_z]=iL_x \qquad [L_z,L_x]=iL_y \tag{9}$$ If we define $L_\pm=L_x\pm i L_y$, then its easy to check that $(9)$ is equivalent to$$[L_z,L_\pm]=\pm L_\pm \tag{10}$$ For example, $[L_z,L_+]=[L_z,L_x+iL_y]=[L_z,L_x]+i[L_z,Ly]$ which by virtue of $(9)$ equals $=iL_y+L_x=L_+$. With this, we can prove $(3)$. Let$$|\varphi\rangle\equiv L_+|\ell;m\rangle \tag{11}$$by definition. If we act on the left with $L_z$, we get$$L_z|\varphi\rangle=L_z L_+|\ell;m\rangle \tag{12}$$ Next, write $L_zL_+=L_+L_z+[L_z,L_+]$ (this should be rather obviously true: just expand the commutator, and check that it works). 
As we know that $[L_z,L_+]=L_+$, we get$$(12)=(L_+ L_z+L_+)|\ell;m\rangle \tag{13}$$which, using $L_z|\ell;m\rangle=m|\ell;m\rangle$, equals$$(12)=(1+m)L_+|\ell;m\rangle \tag{14}$$ Finally, note that $L_+|\ell;m\rangle$ is, by definition, $|\varphi\rangle$, which means that$$L_z|\varphi\rangle=(m+1)|\varphi\rangle \tag{15}$$ This relation is very important! Try to think about it for a minute. Look at it carefully. This relation means that $|\varphi\rangle$ is an eigenvector of $L_z$, and its eigenvalue is $m+1$. Therefore, we must have $|\varphi\rangle\propto |\ell;m+1\rangle$, for $|\ell;m+1\rangle$ is defined as the eigenvector of $L_z$ with eigenvaule $m+1$. Therefore, we can write$$L_+|\ell;m\rangle=c|\ell;m+1\rangle \tag{16}$$where $c$ is a normalisation constant, which is easy to find, because we know that $L_- L_+=L^2-L_z^2-L_z$:$$|c|^2=\langle\ell;m|L_-L_+|\ell;m\rangle=\langle\ell;m|L^2-L_z^2-L_z|\ell;m\rangle=\ell(\ell+1)-m^2-m \tag{17}$$where I used $L^2|\ell;m\rangle=\ell(\ell+1)|\ell;m\rangle$ and $L_z|\ell;m\rangle=m|\ell;m\rangle$. This completes the proof of $(3)$. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/240668', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/92631/']} | jdg_70175 |
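A quick numerical illustration of these relations (my own check, not from the original answer): for $\ell=1$ the operators have $3\times3$ matrix representations in the basis $|1,1\rangle,|1,0\rangle,|1,-1\rangle$, and a few lines of Python/numpy verify $(3)$, $(4)$ and $(10)$ directly:

import numpy as np

s2 = np.sqrt(2.0)
Lz     = np.diag([1.0, 0.0, -1.0])
Lplus  = np.array([[0, s2, 0],
                   [0, 0, s2],
                   [0, 0, 0]])
Lminus = Lplus.T
L2 = Lminus @ Lplus + Lz @ Lz + Lz              # L^2 = L_- L_+ + L_z^2 + L_z

print(np.allclose(Lz @ Lplus - Lplus @ Lz, Lplus))   # [L_z, L_+] = L_+   (eq. 10)
print(np.allclose(L2, 2 * np.eye(3)))                # l(l+1) = 2 on every |1,m>
top = np.array([1.0, 0.0, 0.0])                      # |1,1>
print(Lplus @ top)                                   # zero vector: L_+|l,l> = 0   (eq. 4)
mid = np.array([0.0, 1.0, 0.0])                      # |1,0>
print(Lplus @ mid)                                   # sqrt(2)|1,1>: not a multiple of |1,0>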
stackexchange | llm_judgeable_groundtruth_similarity | 9311 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This should be an easy exercise: Given a finite odd abelian group $G$, prove that $\prod_{g\in G}g=e$. Indeed, using Lagrange's theorem this is trivial: There is no element of order 2 (since the order must divide the order of $G$, but it is odd), and so every element except $e$ has a unique inverse which is different from it. Hence both the element and its inverse participate in the product and cancel each other. My problem is simple - I need to solve this without Lagrange's theorem. So either there's a smart way to prove the nonexistence of an element of order 2 in an odd abelian group, or I'm missing something even more basic...
Now provide the response and nothing else.
| If there is no element of order 2, then you are done. Suppose $g\in G$ is such that $g^2=e$. Since $\{g_1,\ldots, g_n\}=\{gg_1,\ldots,gg_n\}$, then $\prod g_i = g^n \prod g_i$ and $g^n=e$. Putting $n=2k+1$, $e=g^{2k+1}=(g^{2})^{k}g=g$. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/9311', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1818/']} | jdg_70176
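A brute-force sanity check on small examples (an illustration only; it writes the odd abelian groups $\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k}$ additively, so the "product" of all elements is the componentwise sum):

from itertools import product

def product_is_identity(orders):
    # Sum every element of Z_{n1} x ... x Z_{nk} componentwise and check it is 0.
    total = [0] * len(orders)
    for g in product(*[range(n) for n in orders]):
        total = [(t + gi) % n for t, gi, n in zip(total, g, orders)]
    return all(t == 0 for t in total)

print(product_is_identity([9]))       # True:  |G| = 9 (odd)
print(product_is_identity([3, 5]))    # True:  |G| = 15 (odd)
print(product_is_identity([2]))       # False: |G| = 2, since 0 + 1 = 1 != 0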
stackexchange | llm_judgeable_groundtruth_similarity | 3803621 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
There is a statement in the book I'm reading for the SCJP qualification; it says: "Files with no public classes have no naming restrictions." That made me ask: why would you ever want to do this? If there are no public classes, then how could other classes ever import and use the file? The only purpose I can see is if the file runs standalone in itself, which could also be odd, such as having an entire application in one file.
Now provide the response and nothing else.
| This is valid for package-private classes as well. And you can use package-private classes within the same package. (And in that case you don't have to import it, because it's in the same package.) For example, the JapaneseImperialCalendar class is package-private, because it is only used from Calendar.createCalendar(..) - it is not part of the public API. You can't directly instantiate the japanese calendar, but you can still use it by its interface . Same goes for all unmodifiable collections that are obtained by methods like Collections.unmodifiableList(..) - they are package-private. So the .java file of JapaneseImperialCalendar could've been arbitrary. However, it is advisable not to diverge from the established practice of naming even package-private files after the class name. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3803621', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155695/']} | jdg_70177 |
stackexchange | llm_judgeable_groundtruth_similarity | 2163090 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing a multi-threaded C++ program. I plan on killing threads. However, I am also using a ref-counted GC. I'm wondering if stack allocated objects get destructed when a thread gets killed.
Now provide the response and nothing else.
| The stack does not unwind when you 'kill' a thread. Killing threads is not a robust way to operate - resources they have open, such as files, remain open until the process closes. Furthermore, if they hold open any locks at the time you close them, the lock likely remains locked. Remember, you are likely calling a lot of platform code you do not control and you can't always see these things. The graceful robust way to close a thread is to interrupt it - typically it will poll to see if it's been told to close down periodically, or it's running a message loop and you send it a quit message. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2163090', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/247265/']} | jdg_70178 |
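The question is about C++, but the cooperative-shutdown idea recommended above is language-agnostic. Here is a minimal sketch of it in Python (names, file and timings are illustrative only): the worker polls a flag at safe points and leaves its scope normally, so its resources are released the usual way instead of being orphaned by a kill:

import threading, time

def worker(stop: threading.Event):
    with open("output.log", "w") as fh:          # resource owned by the thread
        while not stop.wait(timeout=0.1):        # poll for the shutdown request
            fh.write("working...\n")
        fh.write("shutting down cleanly\n")      # runs because we were asked, not killed

stop = threading.Event()
t = threading.Thread(target=worker, args=(stop,))
t.start()
time.sleep(0.5)
stop.set()    # ask the thread to finish
t.join()      # wait for it; cleanup (here, closing the file) happens normally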
stackexchange | llm_judgeable_groundtruth_similarity | 5036618 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Modern browsers and environments like Node.js allow you to say {a:1, b:2,} or [1,2,3,]. This has historically been problematic with Internet Explorer. Is this fixed in Internet Explorer 9?
Now provide the response and nothing else.
| There are two different answers to this, one for dangling commas in object initializers and one for dangling commas in array initializers: For object initializers , e.g.: var obj = { a: 1, b: 2, c: 3,}; It's fixed in IE8 and above. Test it here: http://jsbin.com/UXuHopeC/1 ( source ). IE7 and earlier will throw a syntax error on the } after the dangling comma. For array initializers , e.g.: var arr = [ 1, 2, 3,]; It was "fixed" in IE9 and above. Test it here: http://jsbin.com/UXuHopeC/2 ( source ). IE8 and earlier will give that array four entries, the last one having the value undefined . IE9 and above give it three entries. I put "fixed" in quotes because the spec was originally unclear about whether the array should have a final undefined entry or not, so neither behavior was incorrect. It's just that IE went one way and everyone else went the other. :-) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5036618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/621486/']} | jdg_70179 |
stackexchange | llm_judgeable_groundtruth_similarity | 14172574 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to filter CSV files. But the following script is giving an error. How do I specify that I want to run match on each String object? I tried various combinations, but without result. $FileNames = [System.IO.Directory]::GetFiles("C:\Users\anagre\Desktop")$FileNames = $FileNames | Where { -match "*.csv"}
Now provide the response and nothing else.
| The -match operator is both a comparison operator and an array operator, depending on its input object. If it's a scalar, it returns a boolean. If it's an array, it returns all the elements of the array that match the pattern @($Filenames) -match '*.csv' Use the array syntax to ensure that you still get an array if there's only one filename returned by Get-ChildItem . Otherwise, you'll get back $True instead of the filename if it matches. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14172574', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/977981/']} | jdg_70180 |
stackexchange | llm_judgeable_groundtruth_similarity | 9859111 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone please let me know how to get values from several input fields? I have a list with several inputs like this: <li><label>Additional Title: </label><input type='text' name='additionaltitlename' ... /></li><li><label>Additional Title: </label><input type='text' name='additionaltitlename' ... /></li> I have a solution in Javascript (on form submit): ...var extratitles = document.getElementsByName('additionaltitlename'); var str = ''; for (var i = 0; i < extratitles.length; i++) { str = str + '|' + extratitles.item(i).value; } } How do I do the same thing in JQuery?
Now provide the response and nothing else.
| It's not valid to have two inputs of the same name. If you want to do this, you can use <input name="titles[]"> You can try this: <input name="titles[]"><input name="titles[]"><button>submit</button> With this jQuery // click handlerfunction onClick(event) { var titles = $('input[name^=titles]').map(function(idx, elem) { return $(elem).val(); }).get(); console.log(titles); event.preventDefault();}// attach button click listener on dom ready$(function() { $('button').click(onClick);}); See it working here on jsFiddle EDIT This answer gives you the titles in an array instead of a string using a | separator. Personally, I think this is a lot more usable. If you're just submitting the form and you want to support multiple values, use the .serialize method as described in the other answer | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9859111', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/983168/']} | jdg_70181 |
stackexchange | llm_judgeable_groundtruth_similarity | 28454097 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
<select id="country_name" onchange="changeCounty();"><option value="IND"> IND</option><option value="US">US</option><option value="JP">JP</option><option value="UK">UK</option></select> I have the above code. When I select one option, it makes some changes to the page. But when I open the page in a new browser tab, the already selected option resets and the default selected option is set. How do I cope with this?
Now provide the response and nothing else.
| It looks like you'll need to append to the JVM arguments used when launching your tasks/jobs. Try editing conf/spark-defaults.conf as described here spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/apps/spark-1.2.0/conf/log4j.propertiesspark.driver.extraJavaOptions=-Dlog4j.configuration=file:/apps/spark-1.2.0/conf/log4j.properties Alternatively try editing conf/spark-env.sh as described here to add the same JVM argument, although the entries in conf/spark-defaults.conf should work. If you are still not getting any joy, you can explicitly pass the location of your log4j.properties file on the command line along with your spark-submit like this if the file is contained within your JAR file and in the root directory of your classpath spark-submit --class sparky.MyApp --master spark://my.host.com:7077 --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j-executor.properties" myapp.jar If the file is not on your classpath use the file: prefix and full path like this spark-submit ... --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/apps/spark-1.2.0/conf/log4j-executor.properties" ... | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28454097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2717674/']} | jdg_70182 |
stackexchange | llm_judgeable_groundtruth_similarity | 20481 |
Below is a question asked on the forum economics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently, I witnessed two acquaintances of mine engaging in bartering, where one traded his services for the services of the other. Both services had about the same value. By trading them directly, they didn't pay VAT or tax on income, as no money or even goods were involved here. That wasn't the reason for the barter, but it's an interesting aspect. I'm not pushing this behaviour, but it makes me wonder why there's not more people doing this, as it is an easy way to save money by leaving out money and thus taxes. AFAIK, this is legal, or at least a 'grey area'. I suppose in most countries some law exists to prevent this happening on a large scale, but it would be impossible to enforce it everywhere or on a small scale. Especially since goods can still be valued in terms of money, but it is much harder to do so with services. And their presence and traceability is volatile, unlike with goods. I live in a rather small village where locals often (have to) make use of each others service, often requiring the service more than once. Taxes and VAT are high here in the Netherlands, so it would make a significant difference. I'd expect them to barter the service to save money, but they just pay by the bill. I did witness two people bartering once, but never before. I haven't heard it to be common in other modern countries either. Why is it not more common?
Now provide the response and nothing else.
| The main likely reasons why barter is not more common are: The inconvenience of having to find another party who both offers what you want and wants what you offer. Even if such a party can be found, the possible complexity of negotiating a "fair" transaction (eg I'll do your electrical job if you'll clean my windows monthly for the next 3 months). I don't want to encourage it, but for those seeking to evade tax, the comparative ease in many circumstances of doing so via a normal monetary transaction, but with payment in cash. A further point is that most countries with VAT have a significant "VAT threshold", that is, a minimum value of annual sales at which businesses are required to register for VAT and charge VAT (where applicable) on their sales. In the UK, for example, the threshold is currently £85,000. So an individual could provide services not exceeding that amount in any year and would not have to charge VAT. If two such individuals were to enter into a barter arrangement, there would be no VAT saving. The Netherlands is atypical in that its VAT threshold is unusually low: 1,345 Euros (see here , table on p 2). | {} | {'log_upvote_score': 6, 'links': ['https://economics.stackexchange.com/questions/20481', 'https://economics.stackexchange.com', 'https://economics.stackexchange.com/users/16173/']} | jdg_70183 |
stackexchange | llm_judgeable_groundtruth_similarity | 3266639 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Say we have a function $f(x(t),t)$ and we take the partial derivative of $f(x(t),t)$ with respect to $t$ $$\frac{\partial f(x(t),t)}{\partial t}$$ Do we hold $x(t)$ constant? For example, if I had $f(x(t),t) = x(t)^2 + t$ where $x(t)=t^2$ , I believe $\frac{\partial f(x(t),t)}{\partial x(t)}=2x(t)$ but does $\frac{\partial f(x(t),t)}{\partial t}=4t+1 $ or $1$ ? To further my understanding, is the following true: $$ \frac{\partial f(x(t),t)}{\partial t}=\frac{df(x(t),t)}{dt}$$ because $f(x(t),t)=f(t)$ after you simplify it by substituting in $x(t)$ into $f(x(t),t)$ ? But if this is true, in the above example, $\frac{\partial f(x(t),t)}{\partial x(t)}=0$ because $f(x(t),t)=f(t)$ doesn't actually depend on $x(t)$ . I see that my understanding is wrong but I can't figure out where. Any insight appreciated, thanks in advance!
Now provide the response and nothing else.
| This is a long answer, and I do eventually answer your question directly, but first, there are several preliminary matters which need to be addressed. The biggest obstacle you need to overcome is unraveling the true meaning of Leibniz's notation. Leibniz's notation is at times very convenient, but it has to be used with caution, because very very very very often, it introduces irrelevant letters in the denominator, and often times, what people intend to write differs from what they actually write. This is because it is very easy to confuse a function (which is a rule for mapping elements of the domain to the target space), with function values(which are elements of the target space) when the function is evaluated at a particular point of its domain. Let's begin with a simple one variable example. The first disclaimer is that a "real valued function of a real variable" should just be denoted by a single letter such as $f$ , and we should speak of "the function $f: \Bbb{R} \to \Bbb{R}$ " However, I'm assuming you've often been taught/you've been using the terminology: "the (real-valued) function $f(x)$ (of a real variable $x$ ) " This may sound like a good idea when first learning, but you can quickly run into all sorts of notation and terminology related confusions. To see why the second way of phrasing things is bad, ask yourself: "what's so special about $x$ ?" Does $f(x)$ mean one function, whereas $f(y)$ means a completely different function? Does $f(\xi)$ mean a different function? What about $f(\eta)$ ? Clearly the idea that the letter we use inside the brackets $()$ affects the mathematical meaning of a concept is nonsense... the concept of a function shouldn't depend on what your favourite letter is ! So, the proper way to think about this is $f: \Bbb{R} \to \Bbb{R}$ denotes the function, while $f(x)$ is the particular real number you get, when you evaluate the function $f$ on the point $x$ . Usually, if $f: \Bbb{R} \to \Bbb{R}$ is a differentiable function, and $x \in \Bbb{R}$ , then by (I think) Newton's notation, one would write $f'(x)$ for the derivative of $f$ evaluated AT the point $x$ . This is the conceptually clearest notation. In Leibniz's notation you often see something like $\dfrac{df}{dx}$ , or $\dfrac{df}{dx}\bigg|_x$ or heaven forbid, a statement like "let $y=f(x)$ be a function, then the derivative of $y$ at $x$ is $\dfrac{dy}{dx}(x)$ " or any other similar sounding statement. This is all bad (from a conceptual point of view) because if I wanted to evaluate the derivative at the origin $0$ , then I would have to write something like $\dfrac{df}{dx}\bigg|_{x=0}$ , or $\dfrac{df}{dx}(0)$ , or $\dfrac{dy}{dx}\bigg|_{x=0}$ or $\dfrac{df(x)}{dx}\bigg|_{x=0}$ or $\dfrac{dy(x)}{dx}\bigg|_{x=0}$ or some other weird symbol like this. This is bad notation (convenient at times, but pedagogically very misleading), because it introduces an irrelevant letter $x$ in the denominator, which has no meaning at all! One could ask again: "what's so special about $x$ ? what if I like $\xi$ better?" and then one could write $\dfrac{df}{d\xi}\bigg|_{\xi=0}$ . It is only because of hundreds of years of tradition/convention that people use $x$ as the "independent variable" and $y$ as the "dependent variable"... but clearly, the choice of letters SHOULD NOT affect the meaning of a mathematical statement. In newton's notation, when evaluating the derivative at the origin we'd just write $f'(0)$ . This is clear, and doesn't introduce extra irrelevant letters. 
Things get much worse when we start talking about the chain rule. In Newton's formulation, the chain rule would say something like: If $f,g: \Bbb{R} \to \Bbb{R}$ are differentiable functions, then the composition $f \circ g$ is differentiable as well, and for every $x \in \text{domain}(g) = \Bbb{R}$ , we have \begin{equation}(f \circ g)'(x)= f'(g(x)) \cdot g'(x)\end{equation} This may seem abstract when first learning, but it's really the clearest formulation, and it avoids a whole lot of confusions when dealing with more complicated objects. In Leibniz's notation, one might see a statement like: If $f(y)$ and $g(x)$ are real valued functions of a real variable, then \begin{equation}\dfrac{d f(g(x))}{dx}(x) = \dfrac{df}{dy}(g(x)) \cdot \dfrac{dg}{dx}(x)\end{equation} (this is still not too bad, because I purposely included the points of evaluation to make it more accurate; but this is still bad because of the unnecessary use of $x$ and $y$ in the denominators) The worst is a statement like: If $z= z(y)$ and $y=y(x)$ are real valued functions of a real variable, then \begin{equation}\dfrac{d z}{dx} = \dfrac{dz}{dy} \cdot \dfrac{dy}{dx}\end{equation} This last formulation is the easiest to remember because the $dy$ 's "cancel", and everything seems to be nice and dandy. But from a notational/logical point of view, it is utter nonsense, because it introduces irrelevant letters $x$ and $y$ , and completely suppresses where the derivatives are being evaluated. Also, the $z$ 's mean different things on both sides of the equal sign... because how can $z$ be a "function of $x$ " on the LHS, whereas $z$ is a "function of $y$ " on the RHS? This kind of statement is just nonsense. It is only in simple cases whereby novices can directly apply this rule, and get correct answers. The equal sign $=$ appearing in the third formulation really means "it is equal if the reader correctly interprets everything" (and this gets harder and harder to do as you learn more complicated stuff... so until you really know what you're doing, just avoid this). Now that I have been pedantic about notation in the single variable case, let's go to the case when we have a real-valued function of two variables..., more explicitly, we have a function $f: \Bbb{R}^2 \to \Bbb{R}$ . In the single variable case, I made an argument for the use of the $f'(\cdot)$ notation, as opposed to $\dfrac{df}{dx}$ or something else. Here, I'll do the same thing. Often, the partial derivatives are denoted by \begin{equation}\dfrac{\partial f}{\partial x} \quad \text{and} \quad \dfrac{\partial f}{\partial y}\end{equation} or \begin{equation}\dfrac{\partial f(x,y)}{\partial x} \quad \text{and} \quad \dfrac{\partial f(x,y)}{\partial y}\end{equation} or \begin{equation}\dfrac{\partial f}{\partial x} \bigg|_{(x,y)} \quad \text{and} \quad \dfrac{\partial f}{\partial y}\bigg|_{(x,y)}\end{equation} or some combination of the above notations. Once again, this is very bad notation, because, what's so special about the choice of letters $x,y$ ? Why not Greek letters $\xi,\eta$ or even a mix of Greek and English like $\alpha,b$ ? So why not something like \begin{equation}\dfrac{\partial f}{\partial \alpha} \quad \text{and} \quad \dfrac{\partial f}{\partial b}?\end{equation} I'm not saying that you should annoy people and mix up your languages/notation just because "you CAN". Rather I'm just pointing out the logical flaws which are inherent in Leibniz's notation. 
A better notation would be \begin{equation}(\partial_1f)(x,y) \quad \text{and} \quad (\partial_2f)(x,y)\end{equation} to mean the partial derivatives of the function $f$ evaluated at a particular point $(x,y)$ . The subscripts $1$ and $2$ in $\partial_1f$ and $\partial_2f$ are indeed meaningful here because they tell you which argument of the function $f$ you are varying. Just to be explicit, $(\partial_1f)(\alpha,y)$ means "compute the partial derivative when you vary the first argument of $f$ , then evaluate this on the point $(\alpha,y)$ ". So in terms of limits it equals \begin{equation}(\partial_1f)(\alpha,y) = \lim_{h \to 0} \dfrac{f(\alpha+h, y) - f(\alpha,y)}{h}\end{equation} Once again, here, $\partial_1f$ is a function from $\Bbb{R}^2$ into $\Bbb{R}$ ; so we'd write this as $\partial_1f: \Bbb{R}^2 \to \Bbb{R}$ , while $(\partial_1f)(\alpha, y)$ is the particular real number we get if we compute the limit above. I really hope you appreciate the importance of being precise about what a function is vs. what is the value of the function when evaluated on a point of its domain, and the proper notation to use in either case. Now, we can begin to dissect your question and phrase it properly. You started by saying Say we have a function $f(x(t),t)$ ... I understand what you want to say, but to make things explicit, I'll point out some things. You don't just have a single function. You actually have $3$ functions in the game at this point. First, you have a function $f: \Bbb{R}^2 \to \Bbb{R}$ , next, you have a function $x: \Bbb{R} \to \Bbb{R}$ , and lastly, you're defining a new function via composition as follows: you're defining a function $g: \Bbb{R} \to \Bbb{R}$ by the rule \begin{equation}g(t) := f(x(t),t)\end{equation} Hence, you have $3$ functions $f,x,g$ , and you should keep in mind which is which. Next, you say ... and we take the partial derivative of $f(x(t),t)$ with respect to $t$ \begin{equation}\frac{\partial f(x(t),t)}{\partial t}\end{equation} do we hold $x(t)$ constant? The notation you used is very bad, and this is presumably why you're getting confused. As you mentioned in your comment, you can't vary $t$ , and simultaneously keep $x(t)$ fixed (unless $x$ is a constant function). The notation \begin{equation}\frac{\partial f(x(t),t)}{\partial t}\end{equation} is really just nonsense. (I don't mean to be disrespectful, I just want to point out that such an expression makes no sense, because it is unclear what you mean) There are two possible interpretations of what you intended to write above. In proper notation, these are: \begin{equation}(\partial_2f)(x(t),t) \quad \text{and} \quad g'(t).\end{equation} In the first case, this means you compute the partial derivative wrt the second argument of $f$ , i.e compute what the function $\partial_2f: \Bbb{R}^2 \to \Bbb{R}$ is, and evaluate that function on the tuple of numbers $(x(t),t)$ . In the second case, you are computing the derivative of $g$ , i.e compute the function $g': \Bbb{R} \to \Bbb{R}$ , and then evaluate on the particular real number $t$ . These two are NOT the same in general! If we write out explicitly in terms of limits, we have: \begin{equation}(\partial_2f)(x(t),t) := \lim_{h \to 0} \dfrac{f(x(t),t+h) - f(x(t),t)}{h}\end{equation} whereas \begin{align}g'(t) &:= \lim_{h \to 0} \dfrac{g(t+h) - g(t)}{h} \\&:= \lim_{h \to 0} \dfrac{f \left(x(t+h), t+h \right) - f(x(t),t)}{h}\end{align} This should make it clear that in general the two are different. 
Now, the relationship between $g'$ and the partial derivatives $\partial_1f$ and $\partial_2f$ is obtained by the chain rule. We have: \begin{align}g'(t) = \left[ \left( \partial_1f \right)(x(t),t) \right] \cdot x'(t) + \left( \partial_2f \right)(x(t),t)\end{align} This is the notationally precise way of writing things, because it avoids introducing unnecessary letters in the denominator, and it doesn't use the same letter for two different purposes, and also, it explicitly mentions where all the derivatives are being evaluated. One might see the following equation instead: \begin{equation}\dfrac{df(x(t),t)}{dt} \bigg|_{t} = \dfrac{\partial f}{\partial x} \bigg|_{(x(t),t)} \cdot \dfrac{dx}{dt} \bigg|_{t} + \dfrac{\partial f}{\partial t} \bigg|_{(x(t),t)}\end{equation} (this is still imprecise because of the irrelevant usage of $x$ and $t$ in the denominators, but this is as precise as you can get using Leibniz's notation) You may even encounter a statement like \begin{equation}\dfrac{df}{dt} = \dfrac{\partial f}{\partial x} \cdot \dfrac{dx}{dt} + \dfrac{\partial f}{\partial t},\end{equation} which is absolutely terrible notation, because the $f$ on the LHS and the $f$ on the RHS mean completely different things. (This is what I meant in the beginning when I said "what people intend to write differs from what they actually write.") Now, let's get to your specific example; I'll write out the derivatives in precise notation. (By the way, I believe you made some computational mistakes) Recall that as I mentioned above, there isn't just one function involved. There are $3$ different functions. In your particular example, they are: $f: \Bbb{R}^2 \to \Bbb{R}$ defined by $f(\xi, \eta) = \xi^2 + \eta$ $x: \Bbb{R} \to \Bbb{R}$ defined by $x(t) = t^2$ $g: \Bbb{R} \to \Bbb{R}$ defined by $g(t) = f(x(t),t) = f(t^2,t) = (t^2)^2 + t = t^4 + t$ I purposely used $\xi,\eta$ , rather than writing $f(x,t) = x^2 + t$ , because then we would be using $x$ in two places with different meanings; as the first argument of $f$ , and also as a function (technically it is valid to write $f(x,t) = x^2 + t$ , but I used $\xi,\eta$ simply to avoid potential confusion). Using precise notation, the derivatives are as follows: \begin{align}(\partial_1f)(\xi,\eta) &= 2\xi \qquad (\partial_2f)(\xi,\eta) = 1 \\\\x'(t)&= 2t \\\\g'(t) &= 4t^3 + 1\end{align} (again pay close attention to what is the function, and where it is being evaluated). Notice also that: \begin{align}\left[ \left( \partial_1f \right)(x(t),t) \right] \cdot x'(t) + \left( \partial_2f \right)(x(t),t) &= [2x(t)] \cdot 2t + 1 \\&= [2t^2] \cdot 2t + 1 \\&= 4t^3 + 1 \\&= g'(t),\end{align} so we have explicitly verified that the chain rule works in this case. Once again, to directly address your question, you said: I believe $\frac{\partial f(x(t),t)}{\partial x(t)}=2x(t)$ but does $\frac{\partial f(x(t),t)}{\partial t}=4t+1 $ or $1$ ? this is extremely bad notation, putting $x(t)$ in the denominator gives the false idea that you are somehow differentiating $f$ with respect to the "function" $x(t)$ , while keeping $t$ fixed... but then you might be confused how can $t$ be fixed if $x(t)$ is varying... or something like this. My point is that bad notation leads to misconceptions. 
So, in this case, the proper way to write these computations would be: \begin{align}\begin{cases}(\partial_1f)(x(t),t) &= 2 x(t) = 2t^2 \\\\(\partial_2f)(x(t),t) &= 1 \\\\g'(t) &= 4t^3 + 1 \qquad \text{(you wrote $4t$ rather than $4t^3$ which is wrong)}\end{cases}\end{align} You should be able to figure out the answers to your remaining questions if you understood what I've been trying to emphasise, but for the sake of completeness, here it is: You asked: To further my understanding, is the following true: $$ \frac{\partial f(x(t),t)}{\partial t}=\frac{df(x(t),t)}{dt}$$ because $f(x(t),t)=f(t)$ after you simplify it by substituting in $x(t)$ into $f(x(t),t)$ ? The immediate answer is "what you have written is ambiguous". The correct relationship is the chain rule which I wrote above (the three versions which go from precise to sloppy) \begin{align}g'(t) = \left[ \left( \partial_1f \right)(x(t),t) \right] \cdot x'(t) + \left( \partial_2f \right)(x(t),t) \tag{most precise}\end{align} \begin{align}\dfrac{df(x(t),t)}{dt} \bigg|_{t} = \dfrac{\partial f}{\partial x} \bigg|_{(x(t),t)} \cdot \dfrac{dx}{dt} \bigg|_{t} + \dfrac{\partial f}{\partial t} \bigg|_{(x(t),t)} \tag{medium}\end{align} \begin{align}\dfrac{df}{dt}= \dfrac{\partial f}{\partial x} \cdot \dfrac{dx}{dt} + \dfrac{\partial f}{\partial t} \tag{very sloppy}\end{align} Also, please don't ever say things like " $f(x(t),t) = f(t)...$ " this makes no sense. I know that you mean to say after substituting $x(t)$ in the first argument for $f$ , and $t$ in the second argument for $f$ , you are left with something which only depends on $t$ . But writing $f(t)$ for this new function of $t$ is bad notation. Use a different letter like $g$ , as I have done above, to specify what the composition is. Otherwise, this raises the question "why does $f$ have two arguments on the LHS where as it only has one argument on the RHS?" Also, it is this misuse of the letter $f$ on the two sides of the equal sign which confused you, and hence you said But if this is true, in the above example, $\frac{\partial f(x(t),t)}{\partial x(t)}=0$ because $f(x(t),t)=f(t)$ doesn't actually depend on $x(t)$ . Once again, as I mentioned above, we have $(\partial_1f)(x(t),t) = 2x(t) = 2t^2$ (this is $0$ if and only if $t=0$ ). FINAL REMARKS: I hope now you can appreciate why it is absolutely essential to understand what your notation means, and to distinguish between functions, like $f,g,x$ , and $\partial_1f, \partial_2f, g', x'$ , which are RULES that map each element in the domain to a unique element of the target space , and function VALUES like $f(\xi,\eta),f(x(t),t), g(t), x(t)$ , which are elements of the target space of the function. Also, I feel that it is mandatory to say the following. My entire answer has criticized Leibniz's notation for derivatives and partial derivatives, because it is very sloppy, and can cause all kinds of confusion (see any average math/physics book, in particular thermodynamics for all variants of notational/logical nonsense) and I have given several reasons why it should be avoided. However, having said that I do use Leibniz's notation often, but it is only because I (usually) know how to "translate" an imprecise notation into a completely precise form, and I (usually) understand the pitfalls associated with the notation. So, until you are able to perform this "translation" from precise to imprecise notation, I suggest you stick to the precise notation, until all the fundamental concepts are clear, and only then abuse notation. 
This may be tedious in the beginning, but it is well worth it to avoid silly confusions. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3266639', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/646300/']} | jdg_70184 |
stackexchange | llm_judgeable_groundtruth_similarity | 92327 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Given the universal enveloping algebra, $U(\mathfrak{sl}(2))$ the coalgebra structure is defined such that the generators $X,Y$ and $H$ are primitive elements. From this, is there a "nice" way to motivate the coproduct for $U_{q}(\mathfrak{sl}(2))$? Of course, this question can be generalized to: given $U(\mathfrak{g})$ how does one discover what the coproduct for $U_{q}(\mathfrak{g})$ should be based on what the coproduct for $U(\mathfrak{g})$ is? Most texts simply plunk down the structure without really motivating where it comes from or how it was arrived at (or the various ways it could be arrived at). Given that the generators for $U_{q}(\mathfrak{sl}(2))$ are $E,F,K,K^{-1}$, I surmised that since $KK^{-1} = K^{-1}K = 1$ and $\Delta$ is to be an algebra morphism, then $$\Delta(KK^{-1}) = \Delta(K)\Delta(K^{-1})=1\otimes 1 = \Delta(K^{-1})\Delta(K) = \Delta(K^{-1}K)$$seems to "naturally" suggest the assignment $\Delta(K) = K\otimes K$ and $\Delta(K^{-1}) = K^{-1}\otimes K^{-1}$. With this I then looked at$$\Delta(EF-FE) = \Delta\left(\frac{K-K^{-1}}{q-q^{-1}}\right)$$and was able to use some algebraic manipulation to, sort of, "coax out" that $$\Delta(E) = E\otimes K+1\otimes E \qquad \Delta(F) = F\otimes 1+K^{-1}\otimes F$$This all seems well and good, but nevertheless pedestrian (and a tad artificial). Is there or are there some more elegant ways (revealing deeper connections) to motivate the coalgebra structure (or the entire Hopf structure) of $U_{q}(\mathfrak{sl}(2))$ or more generally $U_{q}(\mathfrak{g})$? Thanks in advance.
Now provide the response and nothing else.
| Let me add a few motivations to the nice ones already provided. The first goes through the dual Hopf algebra $O_q(SL_2)$; it is more roundabout, but each step is more naturally motivated. (this approach is explained in, e.g. Kassel's book on Quantum Groups. First, let me admit the "quantum plane", whose algebra of functions is defined as $\mathbb{C}_q[x,y]:=\mathbb{C}\langle x,y\rangle / (yx-qxy).$ If we choose to regard this as an algebra of functions on a quantum $\mathbb{C}^2$, then we should expect that whatever our definition of quantum matrices on $\mathbb{C}^2$, its algebra $O_q(Mat_2)$ should have a co-action, i.e. a map $\Delta:C_q[x,y]\to C_q[x,y]\otimes O_q(Mat_2)$. Let us define elements $a,b,c,d$ by the equations $\Delta(x)=x\otimes a + y\otimes b$, $\Delta(y)=x\otimes c+y\otimes d$. Note that these are the same equations as for the usual coaction of $O(Mat_2)$ on $\mathbb{C}[x,y]$, and are just the formula for matrix multiplication written in a funny way. Now ask that $O_q(Mat_2)$ be generated by those elements, subject to the requirement that $\Delta$ is a map of algebras. You find relations on $a,b,c,d$ as follows: $\Delta(xy)=\Delta(x)\Delta(y)=x^2\otimes ac + xy\otimes ad + yx\otimes cb + y^2\otimes bd$. Setting this equal to $1/q\Delta(yx)$ gives you relations like$ca=qac$, $bc=cb$, $ad-da = (q-q^{1})bc$, and so on, precisely the defining relations of $O_q(Mat_2)$. Direct computation tells you that $det_q=ad-q^{-1}bc$ (if i remember correctly) is the unique central element in degree 2, and so quotienting by $det_q-1$ defines $O_q(SL_2)$. Now, define $U_q(sl_2)$ as the dual Hopf algebra w.r.t to $O_q(SL_2)$, and using this, recover the relations for $U_q(sl_2)$ uniquely. Now let me give another motivation for the relations in $U_q(sl_2)$ coming not from the quantum geometry point of view, but from braided tensor categories. First, for any number $q$, note that there is a braided tensor category structure on the category of $\mathbb{Z}$-graded vector spaces, where $\sigma(v_k\otimes v_l)=q^{kl} v_l\otimes v_k,$ for $v_k, v_l$ in degrees $k$ and $l$. Ranging over $q$ this essentially exhausts the possible braidings on $Z$-graded vector spaces. Let's call this category $C_q$. Let's denote by $V_k$ the one dimensional vector space concentrated in degree $k$. Now, consider the tensor algebra of $V_1$, $T(V_1)$ in this category. As any tensor algebra, it admits the free coproduct $\Delta: V_1\to V_1\otimes V_0 + V_0\otimes V_1$, making $T(V_1)$ into a Hopf algebra in $C_q$. Now if we have modules $M,N$ in $C_q$, we can act on their tensor product $M\otimes N$ by the co-product, but we have to be careful! $v . (m \otimes n) = \Delta(v) m \otimes n$, but now $T(V_1)\otimes T(V_1)$ acts on $m\otimes n$, by first braiding the second factor of $T(V_1)$ past $m$, then acting in the obvious way: $T(V_1)\otimes T(V_1)\otimes m \otimes n \xrightarrow{\sigma} T(V_1)\otimes m \otimes T(V_1)\otimes n$ In particular if $v\in V_1$, then the braiding adds a factor of $q^{|m|}$ to one of the summands of $\Delta(v)$ when you braid past. Now in this story, $T(V_1)$ is the free algebra on $E$, which is $U_q(n_+)$ in this case. A $\mathbb{Z}$-graded vector space is the same as an integral $\mathbb{C}[K,K^{-1}]$-module, since the grading is determined by eigenspaces of $K$, and a $U_q(n_+)$-module in the category $C_q$ is the same as an integral $U_q(b)$-module. 
So in this latter way of phrasing things the extra $q^{|m|}$ we found has to be put in by hand, by putting a $K$ in the second term of the co-product: $\Delta(E)=E\otimes 1 + K\otimes E$. On the one hand, this second explanation probably seems rather artificial; on the other hand it is saying that there is nothing at all mysterious about the appearance of $K$'s in the formula for the co-product, it's just that you were working in the symmetric category of vector spaces, rather than the category of graded vector spaces, where you should have worked. The penalty you pay is putting $K$'s in places to keep track of what the braiding was keeping track of for you. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/92327', 'https://mathoverflow.net', 'https://mathoverflow.net/users/22360/']} | jdg_70185 |
stackexchange | llm_judgeable_groundtruth_similarity | 98867 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Java, C# and many other strongly-typed, statically checked languages, we are used to write code like this: public void m1() { ... }protected void m2() { ... }private void m2() { ... }void m2() { ... } Some dynamically checked languages don't provide keywords to express the level of "privateness" of a given class member and rely on coding conventions instead. Python for example prefixes private members with an underscore: _m(self): pass It can be argued that providing such keywords in dynamically checked languages would add little use since it is only checked at runtime. However, I can't find a good reason to provide these keywords in statically checked languages, either.I find the requirement to fill my code with rather verbose keywords like protected both annoying and distracting. So far, I have not been in a situation where a compiler error caused by these keywords would have saved me from a bug. Quite in contrary, I have been in situations where a mistakenly placed protected prevented me from using a library. With this in mind, my question is: Is information hiding more than a convention between programmers used to define what is part of the official interface of a class? Can it be used to secure a class' secret state from being attacked? Can reflection override this mechanism? What would make it worthwhile for the compiler to enforce information hiding?
Now provide the response and nothing else.
| The "private" access specifier is not about the compiler error it generates the first time you see it. In reality it's about preventing you from accessing something that is still subject to change when the implementation of the class holding the private member changes. In other words, not allowing you to use it when it's still working prevents you from accidentally still using it when it's no longer working. As Delnan remarked below the prefix convention discourages accidental use of members that are subject to change as long as the convention is followed and understood correctly. For a malicious (or ignorant) user it does nothing to stop them from accessing that member with all the possible consequences. In languages with built-in support for access specifiers this does not happen in ignorance (compiler error), and stands out like a sore thumb when malicious (strange constructions to get to the private member). The "protected" access specifier is a different story - don't think of this as simply "not quite public" or "a lot like private". "Protected" means that you will probably want to use that functionality when you derive from the class containing the protected member. The protected members are part of the "extension interface" that you will use to add functionality on top of existing classes without changing those existing classes themselves. So, short recap: public: Use safely on instances of the class, the purpose of the class, will not change. protected: To be used when extending (deriving from) the class - may change if the implementation has to change drastically. private: Do not touch! May change at will to provide a better implementation of the expected interfaces. | {} | {'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/98867', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/21184/']} | jdg_70186 |
stackexchange | llm_judgeable_groundtruth_similarity | 26830624 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to check the SSL certificate thumbprint on iOS? Bonus round: Does the thumbprint change when I extend my certificate? Are there any special considerations when extending the certificate if I verify the thumbprint? Thanks
Now provide the response and nothing else.
| To verify the thumbprint/fingerprint, I use a category on NSURLAuthenticationChallenge . You don't have to use a category or can use a different input but the code to get the fingerprint of a certificate would actually be the same. NSURLAuthenticationChallenge+Fingerprint.h @import Foundation;@interface NSURLAuthenticationChallenge (Fingerprint)- (NSString *)SHA1Fingerprint;- (NSString *)MD5Fingerprint;@end NSURLAuthenticationChallenge+Fingerprint.m #import "NSURLAuthenticationChallenge+Fingerprint.h"#import <CommonCrypto/CommonCrypto.h>typedef NS_ENUM(NSInteger, kFingerprintType) { kFingerprintTypeSHA1, kFingerprintTypeMD5};@implementation NSURLAuthenticationChallenge (Fingerprint)- (NSString *)SHA1Fingerprint{ return [self fingerprintWithType:kFingerprintTypeSHA1];}- (NSString *)MD5Fingerprint{ return [self fingerprintWithType:kFingerprintTypeMD5];}- (NSString *)fingerprintWithType:(kFingerprintType)type{ SecTrustRef serverTrust = [[self protectionSpace] serverTrust]; SecTrustResultType trustResultType; SecTrustEvaluate(serverTrust, &trustResultType); SecCertificateRef certificate = SecTrustGetCertificateAtIndex(serverTrust, (SecTrustGetCertificateCount(serverTrust) - 1)); NSData *data = CFBridgingRelease(SecCertificateCopyData(certificate)); const NSUInteger length = [self lengthWithType:type]; unsigned char buffer[length]; switch (type) { case kFingerprintTypeSHA1: { CC_SHA1(data.bytes, (CC_LONG)data.length, buffer); break; } case kFingerprintTypeMD5: { CC_MD5(data.bytes, (CC_LONG)data.length, buffer); break; } } NSMutableString *fingerprint = [NSMutableString stringWithCapacity:length * 3]; for (int i = 0; i < length; i++) { [fingerprint appendFormat:@"%02x ",buffer[i]]; } return [fingerprint stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]];}- (NSUInteger)lengthWithType:(kFingerprintType)type{ switch (type) { case kFingerprintTypeSHA1: { return CC_SHA1_DIGEST_LENGTH; } case kFingerprintTypeMD5: { return CC_MD5_DIGEST_LENGTH; } }} With the example code: #pragma mark - UIViewController- (void)viewDidLoad{ [super viewDidLoad]; NSURL *url = [NSURL URLWithString:@"YOUR_HTTPS_URL"]; NSURLSessionConfiguration *configuration = [NSURLSessionConfiguration defaultSessionConfiguration]; NSURLSession *session = [NSURLSession sessionWithConfiguration:configuration delegate:self delegateQueue:nil]; NSURLSessionDataTask *task = [session dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) { // Do something meaningful }]; [task resume];}#pragma mark - NSURLSessionDelegate- (void)URLSession:(NSURLSession *)session didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition disposition, NSURLCredential *credential))completionHandler{ if ([challenge.protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust]) { NSLog(@"%@", challenge.SHA1Fingerprint); NSLog(@"%@", challenge.MD5Fingerprint); } // Do something meaningful} I would get the output: 2014-11-17 00:09:10.880 test[48237:2922518] f9 d5 24 c2 08 6b bf 12 6f 48 cd 8a f0 4d ca 3e 7c f0 3f bc2014-11-17 00:09:10.880 test[48237:2922518] bf 30 1a 8d f9 cb 15 bd 51 73 c8 22 a5 54 62 8a Safari can be used to verify the data: Regarding the Extended Validation certificates, they're not a different type of certificates, they have the same mechanisms, but the certificate policies field will use a specific certificate policy identifier. 
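As an aside, if you want to obtain the expected fingerprint to compare against from outside the app, a quick sketch like the following works (shown in Python purely for brevity, with a placeholder hostname; the openssl x509 -fingerprint command gives the same value):

import ssl, hashlib

# Placeholder host; substitute the server whose certificate you are pinning.
# Note: this fetches the leaf certificate; adjust if you pin a different
# certificate in the chain.
pem = ssl.get_server_certificate(("example.com", 443))
der = ssl.PEM_cert_to_DER_cert(pem)

# SHA-1/MD5 over the certificate's DER bytes, i.e. the same kind of data
# SecCertificateCopyData returns for the certificate you choose to pin.
print(hashlib.sha1(der).hexdigest())
print(hashlib.md5(der).hexdigest())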
The fingerprint being the hash of the entire certificate, with any modifications (like using EV certificates), the fingerprint would be different but the process to get the fingerprint would be the same. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26830624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/481983/']} | jdg_70187 |
stackexchange | llm_judgeable_groundtruth_similarity | 4670013 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a fast way to get the height and width of an image in pixels. It should handle at least JPG, PNG and TIFF, but the more the better. I emphasize fast because my images are quite big (up to 250 MB) and it takes soooo long to get the size with ImageMagick's identify because it obviously reads the images as a whole first. Preferably, I'm looking for a way that works well in Ruby, or even in Rails 3. I know the theory stuff (various image formats, their headers and their differences, and so on). What I'm really asking for is some kind of library that can resolve my issue in a fairly generic way. I just found imagesize, which looks promising although development seems to be dead.
Now provide the response and nothing else.
The file command prints the dimensions for several image formats (e.g. PNG, GIF, JPEG; recent versions also PPM, WEBP), and only reads the header. The identify command (from ImageMagick) prints lots of image information for a wide variety of images. It seems to restrain itself to reading the header portion (see comments). It also uses a unified format which file sadly lacks. exiv2 gives you dimensions for many formats, including JPEG, TIFF, PNG, GIF, WEBP, even if no EXIF header is present. It is unclear if it reads the whole data for that though. See the manpage of exiv2 for all supported image formats. head -n1 will give you the dimensions for PPM, PGM formats. For formats popular on the web, both exiv2 and identify will do the job. Depending on the use-case you may need to write your own script that combines/parses outputs of several tools. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4670013', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/220652/']} | jdg_70188
stackexchange | llm_judgeable_groundtruth_similarity | 2868835 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm wondering what the difference is between the following two modules # First Examplemodule Parent module Child endend and # Second Examplemodule Parent::Childend Using the 2nd method, it appears as though the Parent module must be previously defined, otherwise I get an 'uninitialized constant' error Given this, what is the preferred way of defining modules such as this and then adding nested children with regards to syntax and file structure (ie. folders etc). Reference to a Rails way would be greatly appreciated. Are these two examples for all intents and purposes equivalent?
Now provide the response and nothing else.
In the first example, it defines the Parent module and then the Child module. The second example, as you say yourself, must have the Parent module defined beforehand. At the expense of one more line of code, using your first example ensures that the module you're nesting under is always going to be defined. For a Rails example, let's look into the railties/lib/rails/engine.rb file which re-opens the Rails module and then defines an Engine class inside of it. This could have been done with simply: class Rails::Engine But instead, perhaps for the reasons stated above and perhaps also for clarity's sake, the module was defined first, then the class inside. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2868835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/74389/']} | jdg_70189
stackexchange | llm_judgeable_groundtruth_similarity | 842 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is the Milky Way Visible from Nebraska? If so, where is the best place to view it, and also what would be the best time of night to see it? I know this is probably a very novice question, just trying to get a better understanding of what I can see. If it helps, I'm south of Omaha.
Now provide the response and nothing else.
Well, because the axis of the rotation of the Earth is not the same as the axis of rotation of the disk of the Milky Way (and also because we're transforming a 2-dimensional spherical map into a 2-dimensional cartesian map), the path of the disk of the Milky Way galaxy looks something like this: So, there is actually a wide range in declination that the Milky Way can be seen at. The range of declination you can see depends on your latitude (for a review of RA and declination, coordinates used in the celestial coordinate system, see this post). For example, here in Philadelphia (at a latitude of just about $+40^{\circ}$), I'd be able to see from $-50^{\circ}$ to $+90^{\circ}$ in declination. For Nebraska, find the latitude of your location. To calculate the lower limit, add your latitude (which will be positive since you're in the northern hemisphere) to $-90^{\circ}$ (mine was $+40^{\circ}$, so: $-90^{\circ} + 40^{\circ} = -50^{\circ}$). To find the upper limit in declination, it's even easier. Since you're in the northern hemisphere, you can actually see the north celestial pole. This means that the upper limit is simply the maximum it can possibly be, which is $+90^{\circ}$. The larger the latitude, the more circumpolar your night sky gets. The good news is that you should definitely be able to see it in Nebraska. The only question is during what season you will see it at its best. Take a look at the constellations which contain parts of the Milky Way and find the season you can see it in - now you know where to look. The last thing I want to add is that you need nice dark conditions. Local weather conditions or sources of light pollution may very easily hide the Milky Way. From Philadelphia, even on perfectly clear nights we have no chance of seeing it. If you've got a city on one of your horizons, try to plan around that - either go to a darker location or try looking opposite in the sky from any sources of light pollution. | {} | {'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/842', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/360/']} | jdg_70190
stackexchange | llm_judgeable_groundtruth_similarity | 6434925 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's what I want to do: I have a UIBezierPath and I want to pass it to some method for it to be drawn. Or simply draw it from the method in which it is created. I'm not sure how to indicate which view it should be drawn in. Do all methods for drawing have to start with - (void)drawRect:(CGRect)rect { ...} ? Can I do - (void)drawRect:(CGRect)rect withBezierPath:(UIBezierPath*) bezierPath { ... } ? How do I call this function, or method, from another method?
Now provide the response and nothing else.
| drawRect: is something that is invoked automatically when you message setNeedsDisplay or setNeedsDisplayInRect: on a view. You never call drawRect: directly. However you are right in saying that all drawing operations are done within the drawRect: method. Typical implementation would be, - (void)drawRect:(CGRect)rect { CGContextRef context = UIGraphicsGetCurrentContext(); /* Do your drawing on `context` */} Since you are using UIBezierPath s, you will need to maintain an array of bezier paths that you will need to draw and then call setNeedsDisplay when something changes. - (void)drawRect:(CGRect)rect { for ( UIBezierPath * path in bezierPaths ) { /* set stroke color and fill color for the path */ [path fill]; [path stroke]; }} where bezierPaths is an array of UIBezierPath s. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6434925', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/802418/']} | jdg_70191 |
stackexchange | llm_judgeable_groundtruth_similarity | 386798 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently learning about driving a small DC motor (~ 5V). My research so far indicated that an L298N might be a good choice to quickly get something up and running. However, I'm also trying to understand what exactly is happening (i.e. the internal H-bridge) and there is something which isn't really clear to me. The example circuit in the datasheet on page 6 uses four flyback diodes in a configuration that seems to be common for H-bridges (since other sites recommend similar H-bridge circuits). The configuration, neglecting the L298N for a moment, essentially looks like this: Now, if I understand it correctly, these diodes provide a path for the motor to keep the current flowing when the MOSFETs get switched off to prevent large voltage spikes. The path for this current, however, seems to go right through the power source in the reverse direction. That is, reversed relative to the direction of the current that a power source normally supplies. This is indicated in the figure below. Since I'm relatively new to the world of electronics, this seems like a weird thing to do. I get that this works on paper if the power source is an ideal constant voltage source. But is this actually safe in real life? Let's say I'm using a few alkaline batteries to power my project, then this reverse current seems like recharging. And the Wikipedia page about alkaline batteries says: Attempts to recharge may cause rupture, or the leaking of hazardous liquids which will corrode the equipment. Or what if I'm using a lab power supply or even a voltage regulator as a voltage source? How these reverse currents are handled doesn't make a lot of sense to me and I'm worried that I might blow up my equipment. Could somebody enlighten me about why the circuit above is actually safe? And if it isn't safe, then why are a lot of sites recommending it and what circuit should I use instead?
Now provide the response and nothing else.
If the motor is producing mechanical power, the net electrical power into the motor must be positive, so the net current out of the batteries must be in the direction that drains them, so you are fine. If the motor is being regeneratively braked, then power can flow out of the motor and can push the supply voltage up and charge the batteries (this is used to advantage in electric vehicles). It's not something you generally need to worry about with a small motor connected directly to primary cells, but if you have a supply that cannot sink current (e.g. a rectifier + filter) it could cause issues if the capacitor isn't large enough. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/386798', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/192460/']} | jdg_70192
stackexchange | llm_judgeable_groundtruth_similarity | 7911651 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a number that I would like to convert to binary (from decimal) in C. I would like my binary to always be in 5 bits (the decimal will never exceed 31). I already have a function that does it manually by dividing, but it is hard to pad the result to 5 bits. Is there any easier way? Perhaps using a bitwise shift? I would also like the binary to be represented in a char *
Now provide the response and nothing else.
| Here's an elegant solution: void getBin(int num, char *str){ *(str+5) = '\0'; int mask = 0x10 << 1; while(mask >>= 1) *str++ = !!(mask & num) + '0';} Here, we start by making sure the string ends in a null character. Then, we create a mask with a single one in it (its the mask you would expect, shifted to the left once to account for the shift in the first run of the while conditional). Each time through the loop, the mask is shifted one place to the right, and then the corresponding character is set to either a '1' or a '0' (the !! ensure that we are adding either a 0 or a 1 to '0' ). Finally, when the 1 in the mask is shifted out of the number, the while loop ends. To test it, use the following: int main(){ char str[6]; getBin(10, str); printf("%s\n", str); return 0;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7911651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/680441/']} | jdg_70193 |
stackexchange | llm_judgeable_groundtruth_similarity | 2204217 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am attempting to find a formula for $$(\mu * \mu)(n)$$ where * represents the Dirichlet Convolution operator. I know this can be expressed as $$\sum_{d|n} \mu(d)\mu(\frac{n}{d})$$ but I'd like the formula to not include any sums over divisors. I know it will be necessary to include information about the factorization of n, but I'm not sure how. For reference, $$\mu(n)= \begin{cases}0,&\text{if $n$ has one or more repeated prime factors}\\1,&\text{if $n$=1}\\(-1)^k,&\text{if $n$ is a product of $k$ distinct primes}\end{cases}$$Some initial thoughts: the Dirichlet Convolution of two multiplicative functions is multiplicative, and since $$\mu(n)$$ is multiplicative, then so is $$(\mu * \mu)(n)$$ Any information to point me in the right direction on this will be greatly appreciated.
Now provide the response and nothing else.
As you say, this is multiplicative. This means that once you know how to calculate it for numbers of the form $p^a$ you can calculate it for arbitrary $n$ by writing $n$ as a product of powers of different primes $p^aq^b\cdots$, and then multiplying the corresponding values $(\mu*\mu)(n)=((\mu*\mu)(p^a))((\mu*\mu)(q^b))\cdots$. If $a=1$, $\sum_{d\mid p}\mu(d)\mu(p/d)=-2$ (the two factors each contribute $-1$). If $a=2$, $\sum_{d\mid p^2}\mu(d)\mu(p^2/d)=1$, since the only way for $\mu(d)\mu(p^2/d)$ to be non-zero is if $d\leq p$ and $p^2/d\leq p$, which requires $d=p$. If $a>2$ then $\sum_{d\mid p^a}\mu(d)\mu(p^a/d)=0$, since for each term in the sum either $d$ or $p^a/d$ is divisible by $p^2$. So your function is $0$ if $n$ is divisible by the cube of any prime. Otherwise it is $(-2)^k$, where $k$ is the number of primes that divide $n$ exactly once (i.e. their squares do not divide $n$). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2204217', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/429543/']} | jdg_70194
stackexchange | llm_judgeable_groundtruth_similarity | 71691 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm learning network programming in C and there is a question that bothers me a lot: does the destination port change during the TCP three-way handshake? Let's say I have a client application running on port 5000 and a web server running on TCP port 80. We know that port 80 is just a welcoming port; when the web server receives an HTTP request, it creates a new connection port (let's say 5000). So my understanding is: the client initially sends packets to the server's IP address + port 80, and after the server (listening on port 80) accepts the request and creates a new connection port (5000), subsequent packets (containing the data payload) that the client sends to the web server go to the server's IP address + port 5000. So the destination port actually "changes" from 80 to 5000, and if you use Wireshark to capture packets you will see two ports, 80 and 5000, as the destination port in the TCP headers. Is my understanding correct?
Now provide the response and nothing else.
| No, a TCP connection is uniquely identified by both source and destination IP and TCP (port) addresses. Changing any one of those will break the TCP connection (or prevent it from forming in the handshake). What you may be referring to is the fact that a web browser will form, use, and close multiple TCP connections with the web server. Each connection will use a different browser TCP source port. Edit, based on your comment: RFC 793, Transmission Control Protocol defines TCP, and it explains: Multiplexing: To allow for many processes within a single Host to use TCPcommunication facilities simultaneously, the TCP provides a set ofaddresses or ports within each host. Concatenated with the network andhost addresses from the internet communication layer, this forms asocket. A pair of sockets uniquely identifies each connection. Thatis, a socket may be simultaneously used in multiple connections. The binding of ports to processes is handled independently by eachHost. However, it proves useful to attach frequently used processes(e.g., a "logger" or timesharing service) to fixed sockets which aremade known to the public. These services can then be accessed throughthe known addresses. Establishing and learning the port addresses ofother processes may involve more dynamic mechanisms. Connections: The reliability and flow control mechanisms described above requirethat TCPs initialize and maintain certain status information for eachdata stream. The combination of this information, including sockets,sequence numbers, and window sizes, is called a connection. Eachconnection is uniquely specified by a pair of sockets identifying itstwo sides. When two processes wish to communicate, their TCP's must firstestablish a connection (initialize the status information on eachside). When their communication is complete, the connection isterminated or closed to free the resources for other uses. Since connections must be established between unreliable hosts andover the unreliable internet communication system, a handshakemechanism with clock-based sequence numbers is used to avoid erroneousinitialization of connections. TCP creates a bidirectional connection between process/application peers (much like ethernet creates a bidirectional connection between hosts), and the connection can be used by each side to both send and receive. TCP, itself, does not have clients or servers, that is an application-layer concept. The web browser will use the TCP connection to send requests to the web server, and the web server will use the same connection to send responses to the requests back to the web browser. The web browser can send multiple requests and receive the replies to the requests on the same TCP connection. Some web browsers will set up multiple connections to the web server in order to request different web page elements at the same time, but that is an application-layer behavior, not a behavior of TCP, and application behaviors are off-topic here. A server process usually listens on a well-known port number , e.g. TCP port 80 for HTTP. A client process will request TCP to create a connection to the server process at the server's well-known port number, and usually using the reserved port 0 so that TCP will assign the client process an ephemeral port number for that connection. When the TCP connection is terminated (either side can terminate the connection), the ephemeral port is returned to the pool of ephemeral port numbers to be reused for a different connection. 
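For a concrete client-side picture of this, here is a small sketch (Python is used only to keep it short, and the host name is just a placeholder — the same thing is visible from C via getsockname()/getpeername()):

import socket

HOST, PORT = "example.com", 80   # placeholder server; any reachable HTTP server will do

a = socket.create_connection((HOST, PORT))
b = socket.create_connection((HOST, PORT))

print("a:", a.getsockname(), "->", a.getpeername())
print("b:", b.getsockname(), "->", b.getpeername())
# Both connections keep destination port 80 for their whole lifetime; only the
# client-side ephemeral source ports differ, and that difference is what lets
# TCP tell the two connections apart.

a.close()
b.close()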
Some OSes will use the ephemeral port numbers from the available pool in a specific order, and some will randomly choose an ephemeral port number for each connection. The actual establishment of a connection is explained in the RFC: 2.7 . Connection Establishment and Clearing To identify the separate data streams that a TCP may handle, the TCPprovides a port identifier. Since port identifiers are selectedindependently by each TCP they might not be unique. To provide forunique addresses within each TCP, we concatenate an internet addressidentifying the TCP with a port identifier to create a socket whichwill be unique throughout all networks connected together. A connection is fully specified by the pair of sockets at the ends. Alocal socket may participate in many connections to different foreignsockets. A connection can be used to carry data in both directions,that is, it is "full duplex". TCPs are free to associate ports with processes however they choose.However, several basic concepts are necessary in any implementation.There must be well-known sockets which the TCP associates only withthe "appropriate" processes by some means. We envision that processesmay "own" ports, and that processes can initiate connections only onthe ports they own. (Means for implementing ownership is a localissue, but we envision a Request Port user command, or a method ofuniquely allocating a group of ports to a given process, e.g., byassociating the high order bits of a port name with a given process.) A connection is specified in the OPEN call by the local port andforeign socket arguments. In return, the TCP supplies a (short) localconnection name by which the user refers to the connection insubsequent calls. There are several things that must be rememberedabout a connection. To store this information we imagine that there isa data structure called a Transmission Control Block (TCB). Oneimplementation strategy would have the local connection name be apointer to the TCB for this connection. The OPEN call also specifieswhether the connection establishment is to be actively pursued, or tobe passively waited for. A passive OPEN request means that the process wants to accept incomingconnection requests rather than attempting to initiate a connection.Often the process requesting a passive OPEN will accept a connectionrequest from any caller. In this case a foreign socket of all zeros isused to denote an unspecified socket. Unspecified foreign sockets areallowed only on passive OPENs. A service process that wished to provide services for unknown otherprocesses would issue a passive OPEN request with an unspecifiedforeign socket. Then a connection could be made with any process thatrequested a connection to this local socket. It would help if thislocal socket were known to be associated with this service. Well-known sockets are a convenient mechanism for a priori associatinga socket address with a standard service. For instance, the"Telnet-Server" process is permanently assigned to a particularsocket, and other sockets are reserved for File Transfer, Remote JobEntry, Text Generator, Echoer, and Sink processes (the last threebeing for test purposes). A socket address might be reserved foraccess to a "Look-Up" service which would return the specific socketat which a newly created service would be provided. The concept of awell-known socket is part of the TCP specification, but the assignmentof sockets to services is outside this specification. (See [4].) 
Processes can issue passive OPENs and wait for matching active OPENsfrom other processes and be informed by the TCP when connections havebeen established. Two processes which issue active OPENs to eachother at the same time will be correctly connected. This flexibilityis critical for the support of distributed computing in whichcomponents act asynchronously with respect to each other. There are two principal cases for matching the sockets in the localpassive OPENs and an foreign active OPENs. In the first case, thelocal passive OPENs has fully specified the foreign socket. In thiscase, the match must be exact. In the second case, the local passiveOPENs has left the foreign socket unspecified. In this case, anyforeign socket is acceptable as long as the local sockets match. Otherpossibilities include partially restricted matches. If there are several pending passive OPENs (recorded in TCBs) with thesame local socket, an foreign active OPEN will be matched to a TCBwith the specific foreign socket in the foreign active OPEN, if such aTCB exists, before selecting a TCB with an unspecified foreign socket. The procedures to establish connections utilize the synchronize (SYN)control flag and involves an exchange of three messages. This exchangehas been termed a three-way hand shake [3]. A connection is initiated by the rendezvous of an arriving segmentcontaining a SYN and a waiting TCB entry each created by a user OPENcommand. The matching of local and foreign sockets determines when aconnection has been initiated. The connection becomes "established"when sequence numbers have been synchronized in both directions. The clearing of a connection also involves the exchange of segments,in this case carrying the FIN control flag. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/71691', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/60665/']} | jdg_70195 |
stackexchange | llm_judgeable_groundtruth_similarity | 16543573 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems to me like going through the whole process of creating the expression tree and then creating a query from it again is a wasted time when using sqlalchemy. Apart from an occasional dynamic query, almost everything will be exactly the same during the whole life of an application (apart from the parameters of course). Is there any way to just save a query once it's created and reuse it later on with different parameters? Or maybe there's some internal mechanism which already does something similar?
Now provide the response and nothing else.
| It seems to me like going through the whole process of creating the expression tree and then creating a query from it again is a wasted time when using sqlalchemy. Do you have any estimates on how much time is wasted, compared to the rest of the application? Profiling here is extremely important before making your program more complex. As I will often note, Reddit serves well over one billion page views a day, they use the SQLAlchemy Core to query their database, and the last time I looked at their code they make no attempt to optimize this process - they build expression trees on the fly and compile each time. We have had users that have determined that their specific system actually benefits from optimiztions in these areas, however. I've written up some background on profiling here: How can I profile a SQLAlchemy powered application? Is there any way to just save a query once it's created and reuse it later on with different parameters? Or maybe there's some internal mechanism which already does something similar? There are several methods, depending on what APIs you're using and what areas you'd like to optimize. There's two main portions to rendering SQL - there's the construction of the expression tree, so to speak, and then the compilation of the string from the expression tree. The tree itself, which can either be a select() construct if using Core or a Query() if using ORM, can be reused. A select() especially has nothing associated with it that prevents it from being reused as often as you like (same for insert(), delete(), update(), etc.). In the ORM, a Query also can be used with different sessions using the with_session() method. The win here is not as much, as Query() still produces an entire select() each time it is invoked. However as we'll see below there is a recipe that can allow this to be cached. The next level of optimization involves the caching of the actual SQL text rendered. This is an area where a little more care is needed, as the SQL we generate is specific to the target backend; there are also edge cases where various parameterizations change the SQL itself (such as using "TOP N ROWS" with SQL Server, we can't use a bound parameter there). Caching of the SQL strings is provided using the execution_options() method of Connection , which is also available in a few other places, setting the compiled_cache feature by sending it a dictionary or other dict-like object which will cache the string format of statements, keyed to the dialect, the identity of the construct, and the parameters sent. This feature is normally used by the ORM for insert/update/delete statements. There's a recipe I've posted which integrates the compiled_cache feature with the Query , at BakedQuery . By taking any Query and saying query.bake() , you can now run that query with any Session and as long as you don't call any more chained methods on it, it should use a cached form of the SQL string each time: q = s.query(Foo).filter(Foo.data==bindparam('foo')).bake()for i in range(10): result = q.from_session(s).params(foo='data 12').all() It's experimental, and not used very often, but it's exactly what you're asking for here. I'd suggest you tailor it to your needs, keep an eye on memory usage as you use it and make sure you follow how it works. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16543573', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/31667/']} | jdg_70196 |
stackexchange | llm_judgeable_groundtruth_similarity | 484434 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have: a Linux server that I connect to via SSH on IP 203.0.113.0 port 1234 a home computer (behind a router), public IP 198.51.100.17, which is either Debian or Windows+Cygwin What's the easiest way to have a folder /home/inprogress/ synchronized (in both directions), a bit like rsync, but with a filesystem watcher, so that each time a file is modified, it is immediately replicated on the other side? (i.e. no need to manually call a sync program) I'm looking for a command-line / no-GUI solution, as the server is headless. Is there a Linux/Debian built-in solution?
Now provide the response and nothing else.
| Following @Kusalananda's comment, I finally spent a few hours testing Syncthing for this use case and it works great. It automatically detects changes on both sides and the replication is very fast. Example: imagine you're working locally on server.py in your favorite Notepad software, you hit CTRL+S (Save). A few seconds later it's automatically replicated on the distant server (without any popup dialog). One great thing I've noticed is that you don't have to think about the IP of the home computer and server with Syncthing: each "device" (computer, server, phone, etc.) has a unique DeviceID and if you share the ID with another device, it will find out automatically how they should connect to each other. To do: Home computer side (Windows or Linux): Use the normal Syncthing in-browser configuration tool VPS side: First connect the VPS with a port forwarding: ssh <user>@<VPS_IP> -L 8385:localhost:8384 The latter option will redirect the VPS's Syncthing web-configuration tool listening on port 8384 to the home computer's port 8385. Then run this on VPS: wget https://github.com/syncthing/syncthing/releases/download/v0.14.52/syncthing-linux-amd64-v0.14.52.tar.gz tar xvfz syncthing-linux-amd64-v0.14.52.tar.gznohup syncthing-linux-amd64-v0.14.52/syncthing & Then on the home computer's browser, open http://localhost:8385 : this will be the VPS's Syncthing configuration! Other solution I tried: SSHFS using this tutorial . Please note that in this tutorial they don't use sshfs-win but win-sshfs instead (these are two different projects). I tried both, and I couldn't make any of them work (probably a problem with my VPS configuration). Here is an interesting reference too: https://softwarerecs.stackexchange.com/questions/13875/windows-sshfs-sftp-mounting-clients Additional advantages of Syncthing I've just noticed: you can reduce fsWatcherDelayS in the config.xml from 10 to 2 seconds so that after doing CTRL+S, 2 seconds later (+the time to upload, i.e. less than 1 second for a small text file) it's on the other computer if you sync two computers which are in the same local network (by just giving the DeviceID to each other, no need to care about local IP addresses), it will automatically notice that it doesn't need to transit via internet, but it can deal locally. This is great and allows a very fast speed transfer (4 MB/s!) sync of phone <--> computer both connected to the same home router via WiFi... ...whereas it would be stuck at 100 KB/s on ADSL with a Dropbox sync! (my ADSL is limited at 100 KB/s on upload) | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/484434', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/59989/']} | jdg_70197 |
stackexchange | llm_judgeable_groundtruth_similarity | 68523451 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I was building my Angular 12 project, I got this error: Error: initial exceeded maximum budget. Budget 5.00 MB was not met by 197.06 kB with a total of 5.19 MB My angular.json: "budgets": [ { "type": "initial", "maximumWarning": "4mb", "maximumError": "5mb" }, { "type": "anyComponentStyle", "maximumWarning": "2kb", "maximumError": "4kb" }], Yet I still have the error. How do I resolve this? Thanks.
Now provide the response and nothing else.
Your budget is 5MB but your bundle size is greater than that (5.19MB), which is causing this error. You need to increase your maximumError budget in your angular.json as follows: { "type": "initial", "maximumWarning": "4mb", "maximumError": "6mb" }, You can also follow a few techniques to reduce the bundle sizes as much as possible: Use ng build --prod --build-optimizer. For newer versions, this is done by default with ng build --prod or ng build. Use module lazy loading and modularize your application as much as possible. You can also use the Ivy rendering engine; it offers better bundle sizes. Make sure your 3rd party dependencies are tree shakeable. Use 3rd party tools like webpack-bundle-analyzer to see what is causing bloat in your modules. You can also check if your files are gzipped or not. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/68523451', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12004282/']} | jdg_70198
stackexchange | llm_judgeable_groundtruth_similarity | 2916947 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a set $S$ and a field $F$ we can construct the $F$-free vector space over $S$ in the following way. Consider the set of formal sums $$FS:=\left\{\sum_{s\in S} \alpha_s s\,:\, \alpha_s=0\, \text{except for a finite number of}\, s \in S\right\}.$$The structure of an $F$-vector space is given to $FS$ by using addition and multiplication in $F$, ie $$\sum_{s \in S} \alpha_s s+\sum_{s \in S} \beta_s s = \sum_{s \in S}(\alpha_s+\beta_s) s,$$$$\alpha\left(\sum_{s \in S} \alpha_s s\right) :=\sum_{s\in S}(\alpha \alpha_s)s.$$ $FS$ is called free vector space over $S$. The element of $FS$ for wich $\alpha_s=1$ and $\alpha_r=0$ if $r\neq s$ is identified with $s$. This identification embeds $S$ in $FS$ and allow us to consider $S$ as a set of generators for $FS$. In fact, by definition, every element of $FS$ can be written as a linear combination of element of $S$. My question is the following: how can I prove that $S$ is a basis? I mean, how can I prove linear independence? I think we have to add the following condition on $FS$: given $a=\sum_{s \in S} \alpha_s s, b=\sum_{s \in S} \beta_s s$ in $FS$ then $$a=b\,\text{iff}\, \alpha_s=\beta_s \, \text{for all}\, s \in S.$$(in this way, linear independence is trivial). Is this condition necessary or not to prove linear independence for $S$? Thanks a lot in advance.
Now provide the response and nothing else.
You are correct that the condition that $a=b$ if and only if $\alpha_s = \beta_s$ for all $s \in S$ is required. This is wrapped up in the definition of a 'formal sum', so I don't think there's any reason why you'd need to spell it out explicitly. If you wanted to be extremely precise, you could say that the underlying set of $FS$ is the set of functions $a : S \to \mathbb{R}$ such that $a(s) = 0$ for all but finitely many $s \in S$. Writing $\alpha_s = a(s)$ and $\beta_s = b(s)$, the condition that $a=b$ if and only if $\alpha_s=\beta_s$ for all $s \in S$ now follows from the definition of a function, and then you can simply identify the formal sum $a = \sum_{s \in S} \alpha_s s$ with the corresponding function $a : S \to \mathbb{R}$. (If your vector spaces are over a field $k \ne \mathbb{R}$, that's still fine: just replace $\mathbb{R}$ by $k$ in the previous paragraph.) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2916947', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/203771/']} | jdg_70199
stackexchange | llm_judgeable_groundtruth_similarity | 34955158 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I experienced a RuntimeWarning RuntimeWarning: invalid value encountered in less_equal Generated by this line of code of mine: center_dists[j] <= center_dists[i] Both center_dists[j] and center_dists[i] are numpy arrays What might be the cause of this warning ?
Now provide the response and nothing else.
| That's most likely happening because of a np.nan somewhere in the inputs involved. An example of it is shown below - In [1]: A = np.array([4, 2, 1])In [2]: B = np.array([2, 2, np.nan])In [3]: A<=BRuntimeWarning: invalid value encountered in less_equalOut[3]: array([False, True, False], dtype=bool) For all those comparisons involving np.nan , it would output False . Let's confirm it for a broadcasted comparison. Here's a sample - In [1]: A = np.array([4, 2, 1])In [2]: B = np.array([2, 2, np.nan])In [3]: A[:,None] <= BRuntimeWarning: invalid value encountered in less_equalOut[3]: array([[False, False, False], [ True, True, False], [ True, True, False]], dtype=bool) Please notice the third column in the output which corresponds to the comparison involving third element np.nan in B and that results in all False values. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/34955158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1367788/']} | jdg_70200 |
stackexchange | llm_judgeable_groundtruth_similarity | 23206901 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My teacher assigned a question and here it is: Write a program that inputs words into an array. The program will stop when a word is entered twice. Use the following method: static boolean findWord(String s, String[] arr) // return true if s is found in arr, false otherwise When the input is complete, your program will output the list in sorted order. I am kinda lost on this parameter of the function String[] arr What type of parameter input is it searching for. Here is my code so far. public static void main (String[] args) { c = new Console (); String words[] = new String[50000]; for(int i=0;i>-1;i++) { c.print("Input word: "); words[i]=c.readLine(); findWord("apples",What goes here?); } } static boolean findWord(String s,String[] words) { return false; } Any time I try to put in some sort of array or values in the spot, such as findWord("Hello",words[0]) or whatever I get the following error: No applicable overload method named "findWord" was found in type E3_Q5. Perhaps you wanted the overloaded version "boolean findWord(java.lang.String s,java.lang.String[] words);" instead? What does this mean? What does he want me to put there?
Now provide the response and nothing else.
| The trick is you need to resolve certain promises before you can access the properties on those records. Ember.RSVP.all takes an Array of promises. Ember.RSVP.hash takes a hash of promises. Unfortunately you're in the situation where you can't construct your promises until the previous promises have resolved (a la, you don't know which regions to get until the countries are resolved, and you don't know which areas to get until the regions are resolved). That being the case you really have a serial set of promises to fetch (albeit arrays of promises at each level). Ember knows to wait until the deepest promise has resolved and to use that value as the model. Now we need to pretend that regions and area are async, if they aren't, you're telling Ember Data the information will be included in the request with country , or in the request with region and those collections won't be promises so the code I've included below wouldn't work. regions: DS.hasMany('region', {async: true})areas: DS.hasMany('area', {async: true})App.IndexRoute = Ember.Route.extend({ controllerName: 'application', model: function() { return this.store.find('country').then(function(countries){ // get each country promises var regionCollectionPromises = countries.getEach('regions'); // wait for regions to resolve to get the areas return Ember.RSVP.all(regionCollectionPromises).then(function(regionCollections){ var regions = regionCollections.reduce(function(sum, val){ return sum.pushObjects(val.toArray()); }, []); var areaCollectionPromises = regions.getEach('areas'); //wait on the areas to resolve return Ember.RSVP.all(areaCollectionPromises).then(function(areaCollections){ // yay, we have countries, regions, and areas resolved return countries; }); }); }); }}); All this being said, since it appears you're using Ember Data, I'd just return this.store.find('country') and let Ember Data fetch the data when it's used... This template would work without all of that promise code, and would populate as Ember Data fulfill's the promises on its own (it will request the data once it sees you've attempted to use the data, good ol' lazy loading). {{#each country in model}} Country: {{country.name}} {{#each region in country.regions}} Region: {{region.name}} {{#each area in region.areas}} Area: {{area.name}} {{/each}} {{/each}}{{/each}} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23206901', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_70201 |
stackexchange | llm_judgeable_groundtruth_similarity | 637259 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following question: which one of these is better and should be followed, and why? string strMyString = "SampleString"; or string strMyString("SampleString"); Thanks in advance.
Now provide the response and nothing else.
| I answered it here One thing i put into this answer here: Neither is using any assignment operator . Short explanation for the string specific thing though. std::string has a constructor taking one argument that accepts char const* : // simplified to a normal class declaration. std::string actually// is a template instantiation. class string {public: string(char const* str) { // copy over... }}; Now you see that has a constructor taking a pointer to character(s). So that it can accept a string literal. I think the following case is obvious then: string s("hello"); It will call the constructor directly and initialize s thereby. This is called direct initialization . The other way of initializing a variable is called copy initialization . The Standard says for the case of copy initialization where the initializer has not the type of the object it is initializing, the initializer is converted to the proper type. // uses copy initializationstring s = "hello"; First, let's state the types s has type std::string "hello" is an array, which in this case again is handled like a pointer. We will therefor consider it as char const* . The compiler looks for two ways to do the conversion. Is there a conversion constructor in std::string? Does the initializer has a type that has a conversion operator function returning a std::string ? It will create a temporary std::string by one of those ways that is then used to initialize the object s by using std::string 's copy constructor . And it sees std::string has a conversion constructor that accepts the initializer. So it uses it. In the end, it is effectively the same as std::string s(std::string("hello")); Note that the form that is used in your example that triggered all that std::string s = "hello"; defines an implicit conversion . You can mark the constructor taking the char const* as explicit for your types if you wonder about the initialization rules for your stuff, and it will not allow to use the corresponding constructor as a conversion constructor anymore: class string {public: explicit string(char const* str) { // copy over... }}; With that, initializing it using a copy initialization and a char const* actually is forbidden now (and in various other places)! Now, that was if the compiler does not support elision of temporaries at various places. The compiler is allowed to assume that a copy constructor copies in this context, and can eliminate the extra copy of the temporary string, and instead construct the temporary std::string directly into the initialized object. However, the copy constructor must be accessible in particular. So, the copy initialization is invalid if you do this class string {public: explicit string(char const* str) { // copy over... }private: // ugg can't call it. it's private! string(string const&);}; Now actually, only the direct initialization case is valid. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/637259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/378962/']} | jdg_70202 |
stackexchange | llm_judgeable_groundtruth_similarity | 1712440 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Invert $\overline{x+1}$ in $\mathbb Q[x]/(x^2+x+1)$. So I know that the coset representatives for $\mathbb Q[x]/(x^2+x+1)$ are $\{a+bx : a,b \in \mathbb{Q}\}$. But I am unsure as to how to invert this. Any help? Would $\overline{x+1} = \overline{(x^2+x+1) - x^2} = \overline{0} - \overline{x^2} = \overline{-x^2}$ work? I am not sure where to go from here.
Now provide the response and nothing else.
| We need to find some polynomial $P(x)$ so that $P(x)\cdot (x+1) = 1$. In $\Bbb Q[x]/\langle x^2+x+1\rangle$ we have $x^2 = -x-1$, so every element can be represented by a polynomial of degree at most 1. The polynomial $P$ that we seek therefore has the form $q_1x+q_0$. So we want $$(q_1x+q_0)(x+1) = 1\qquad \pmod{x^2+x+1}.$$ Expanding the left-hand side we get $$q_1x^2+(q_1+q_0)x+ q_0 = 1$$ and then because $x^2 = -x-1$ we can reduce this to $$q_1(-x-1)+(q_1+q_0)x+ q_0 = q_0x+ (q_0-q_1) = 1.$$ Equating coefficients on both sides, we find $q_0=0$ and $q_1 = -1$, so $$P(x) = -x.$$ To check, multiply $(x+1)\cdot(-x) = -x^2 - x = (x+1) - x = 1$, which is what we wanted. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1712440', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/271433/']} | jdg_70203
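A quick way to see the same answer without solving for coefficients: since $x^2+x+1=0$ in this quotient, $\overline{x+1} = \overline{-x^2}$, and since $x^3-1=(x-1)(x^2+x+1)$ we also have $\overline{x^3}=\overline{1}$. Hence $$\overline{x+1}\cdot\overline{-x} = \overline{(-x^2)(-x)} = \overline{x^3} = \overline{1},$$ which recovers $P(x) = -x$ directly.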
stackexchange | llm_judgeable_groundtruth_similarity | 23791999 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does the SHA-1 hash of my latest commit change even if I don't make any changes to the commit (message, files) after running git commit --amend ? Say I run the following at the command line. cd ~/Desktopmkdir test_amendcd test_amendgit initecho 'foo' > test.txtgit add test.txtgit commit -m 'initial commit' Then, invoking git log --pretty=oneline --abbrev-commit prints the following message: b96a901 initial commit I then do git commit --amend but I change my mind and decide not to change anything in the last commit. In other words, I leave the files, directories, and message of the last commit untouched (I just save the message file and close my editor). Then, I do git log --pretty=oneline --abbrev-commit one more time, I see that the hash of the commit has changed: 3ce92dc initial commit What causes the hash to change? Does it have to do with the time stamp of the commit?
Now provide the response and nothing else.
| Yes, it's the commit timestamp. Inspecting the contents of the two commits reveals: $ git cat-file commit 82c7363bcfd727fe2d6b0a98412f71a10c8849c9tree d87cbcba0e2ede0752bdafc5938da35546803ba5author Thomas <xxx> 1400700200 +0200committer Thomas <xxx> 1400700200 +0200hello$ git cat-file commit 7432fcf82b65d9d757efd73ef7d6bff4707f99bdtree d87cbcba0e2ede0752bdafc5938da35546803ba5author Thomas <xxx> 1400700200 +0200committer Thomas <xxx> 1400700214 +0200hello If you amended in the same second as the original commit, presumably you'd get the same hash. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23791999', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2541573/']} | jdg_70204 |
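To see why the committer timestamp matters, note that the SHA-1 is computed over the raw commit object, which includes both timestamp lines. A rough sketch of that computation in Python (the commit body below is only an approximation of the real bytes, so the printed hashes will not match the ones above exactly; the point is only that the committer time is part of the hashed content):

import hashlib

def commit_sha1(body: bytes) -> str:
    # Git hashes the object as "commit <size>\0" followed by the raw commit body
    header = b"commit %d\x00" % len(body)
    return hashlib.sha1(header + body).hexdigest()

body = (b"tree d87cbcba0e2ede0752bdafc5938da35546803ba5\n"
        b"author Thomas <xxx> 1400700200 +0200\n"
        b"committer Thomas <xxx> %d +0200\n"
        b"\nhello\n")

print(commit_sha1(body % 1400700200))
print(commit_sha1(body % 1400700214))  # only the committer time differs, yet the SHA-1 changes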
stackexchange | llm_judgeable_groundtruth_similarity | 287335 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to set up Weblogic Server 10.3 (and Portal etc.) to use maven as a build tool. I am trying to find a decent tutorial or documentation on how to do this. There are some tutorials for older versions like 9.0, but there is little info for version 10. I am looking for a way to build weblogic's ear file with maven. Are people actually doing this? Is using maven worth the trouble? I would like to use maven in order to have easier integration with continuous integration tools like Hudson. Edit: There seems to be a way to export maven files directly: http://edocs.bea.com/wlw/docs102/guide/ideuserguide/build/conMavenScript.html . But those files are simple wrappers for ant.
Now provide the response and nothing else.
| I am using maven to build an EAR which I deploy an WebLogic Server 10.3. The tricky parts were: Finding all dependencies of the weblogic-maven-plugin Putting all dependencies in the maven repo (I really recommend Sonatype Nexus ) Setting noExit to true (otherwise you will get problems in hudson!) I use the following directory structure in the EAR project: pom.xmlsrc/ main/ app/ META-INF/ weblogic-application.xml The following is taken from my pom.xml: <build> <plugins> <plugin> <artifactId>maven-ear-plugin</artifactId> <configuration> <displayName>My Project</displayName> <earSourceDirectory>src/main/app</earSourceDirectory> <modules> <webModule> <groupId>com.somecompany</groupId> <artifactId>webapp</artifactId> </webModule> </modules> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>weblogic-maven-plugin</artifactId> <version>2.9.1</version> <executions> <execution> <phase>deploy</phase> <goals> <goal>deploy</goal> <goal>start</goal> </goals> </execution> </executions> <configuration> <name>my-project</name> <adminServerHostName>${wls.adminServerHostName}</adminServerHostName> <adminServerPort>${wls.adminServerPort}</adminServerPort> <adminServerProtocol>t3</adminServerProtocol> <userId>${wls.userId}</userId> <password>${wls.password}</password> <upload>true</upload> <remote>true</remote> <verbose>false</verbose> <debug>false</debug> <targetNames>AdminServer</targetNames> <noExit>true</noExit> </configuration> <dependencies> <dependency> <groupId>com.sun</groupId> <artifactId>tools</artifactId> <version>1.5</version> <scope>system</scope> <systemPath>${java.home}/../lib/tools.jar</systemPath> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>weblogic</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>webservices</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.full</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.i18n</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.rmi.client</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>javax.enterprise.deploy</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>webserviceclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.wls</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.identity</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wlclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> 
<artifactId>com.bea.core.transaction</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.classloaders</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wljmsclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.management.core</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wls-api</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.descriptor</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.logging</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.socket.api</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.digest</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.workmanager</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.lifecycle</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.wrapper</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wlsafclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.management.jmx</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.descriptor.wl</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> </dependencies> </plugin> </plugins></build> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/287335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1431/']} | jdg_70205 |
stackexchange | llm_judgeable_groundtruth_similarity | 29 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Today in conversation with an atheist we ran into trouble over this passage in Matthew: Matthew 18:9 (ESV) And if your eye causes you to sin, tear it out and throw it away. It is better for you to enter life with one eye than with two eyes to be thrown into the hell of fire. He seemed to be of the idea that the Scripture could be read either literally word for word from beginning to end or just as poetry or allegory, but that you couldn't pick and choose. I think a sound understanding of each passage demands some attention to the literary elements it contains, so that we are reading it the way it was meant to be understood. What kinds of clues do experts in hermeneutics look for when approaching a verse to determine whether what was meant in a case like this was hyperbole or a verbatim instruction or something else?
Now provide the response and nothing else.
| A literalistic reading of the aforementioned passage in Matthew teaches that under a certain circumstance, namely "your eye causes you to sin," one must poke out his or her own eye. A literary (non-literal) reading of the passage sees the usage of hyperbole and a vivid visual image to communicate the horrifying and traumatic nature of sin. The former reading is basically didactic, the latter reading sees a teaching that is implied and indirect. “He seemed to be of the idea that the Scripture could be read either literally word for word from beginning to end or just as poetry or allegory” The Bible clearly has didactic content, but it's unreasonable and certainly wrongheaded to reduce the Bible to a book about what a person should do and when. If the Bible was intended as such, it would have been written in legalese. The Bible contains narrative history, poetry, philosophy and law and is fraught with a pervasive tension between the literal meaning of its words and its literary effectiveness. [In real life, many people are unable to appreciate nuance or tolerate complexity and are unable to see the power and effectiveness of tension, ambiguity and mystery in the human experience. No great literature is so simple that it can be packaged as either being literal or allegorical, and that is what makes literature such a powerful and important medium of communication. The culture of bullet-point summaries and bite-sized content is antithetical to a deep appreciation of the Bible or any great art.] Usually intuition is sufficient to determine if a text was intended more for its literal content or for its literary content. For example, it's overwhelmingly evident that Matthew 18:9 should not be taken literally since: (1) a literal rendering of the passage would be grossly inconsistent with much of the Scriptures; (2) the passage fails to be meaningful or informative when taken in its literal sense - "if your eye causes you to sin" is far too vague and ambiguous and “tearing out one's eye” is far too intense to have any practical import; (3) the added “and throw it away” contributes nothing to the literal obligation in the verse but is consistent with and complementary to its vivid and descriptive style; (4) the context in Matthew isn't legalistic at all. Not all cases are so clear-cut (see this question) and these issues are taken seriously and are well debated within religious circles. Nevertheless, a precise and rich understanding of the Bible, like any other important text, can only come through experience, study, and a learned sensitivity to the thematic and philosophical content therein. | {} | {'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/29', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/36/']} | jdg_70206
stackexchange | llm_judgeable_groundtruth_similarity | 1743513 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a non-constant function $f(x)$ , is it possible for it to have no zeroes (neither real nor complex)? Say for example, $f(x)=\cos x-2$ , does a complex solution exist for this because for real $x$ , $\cos x$ belongs to $[-1, 1]$ ? Or, say $f(x)=e^x$ , does a complex zero exist for this because $e^x \gt0$ for real $x$ ?
Now provide the response and nothing else.
| The answer to the question as asked is simply "yes", as others have said. I'd like to give a little more context and explain why (for one interpretation of the question) the answer comes close to being no. So, first of all, "function" is a very broad term. The usual definition in mathematics is that a function is any way of assigning output values to input values, and obviously with this definition it's very easy to have a nonconstant function on $\Bbb{C}$ that's never zero; e.g., let f(z)=1 unless z=9 in which case f(z)=2. I take it the questioner is interested in "nice" functions in some sense, and in particular I'm guessing she has in mind functions that are built in the obvious way out of "standard" functions like addition, sin, exp, etc. There is an important notion in complex analysis, of an analytic function. That means a function that's "complex-differentiable"; that is, for any $z$ there's a complex number $a$ such that $f(z+h)=f(z)+ah+o(|h|)$, that last term denoting something that $\rightarrow0$ faster than $|h|$ does. (We then write $f'(z)=a$.) Now, analyticity turns out to be an extremely stringent condition. For instance, if you know a function is analytic and you know its values at the numbers $1/n$ then that completely determines all its values everywhere! But because analyticity is just "complex differentiability", if you start with some analytic functions (e.g., constants, $f(z)=z$, sin, exp, ...) and combine them with addition, subtraction, multiplication and function composition -- e.g., $f(z)=\cos(\sin(2z-\exp(3z))+7z^5)-\exp(z^2)$ -- the result will still be an analytic function. So we might want to take the original question as being specifically about analytic functions . As, e.g., lhf has said, the answer is still "yes". But now it is only just "yes". Here's why. Picard's theorem says that if you have a non-constant analytic function from all of $\Bbb{C}$ to $\Bbb{C}$, then there is at most one value that it never takes. So, e.g., you can find such a function so that $f(z)=0$ has no solutions; but then $f(z)=w$ will have solutions for every $w\neq0$. And lhf has pointed out another way in which the answer is only just barely "yes": the only entire functions ("entire" is shorthand for "analytic on all of $\Bbb{C}$") that never take the value $0$ are those with the rather special form $f(z)=\exp g(z)$ where $g$ is also an entire function. A couple of words of caution. First: some "nice" functions aren't analytic. For instance, what about taking square roots? Well, even in the real numbers, if we put $f(x)=\sqrt{x}$ then $f'(0)$ is infinite. So: not analytic. Second: if a function becomes infinite anywhere (like, say, $1/z$ or $\tan z$) then it isn't analytic. (But it might be meromorphic , which roughly means it's the ratio of two analytic functions, and the appropriate version of Picard's theorem then says it's allowed to miss at most two points, one of which might be $\infty$.) | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/1743513', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/312853/']} | jdg_70207 |
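To make this concrete for the two functions in the original question: $e^z$ is entire and never zero, since $|e^z| = e^{\operatorname{Re} z} > 0$; it is exactly of the form $\exp g(z)$ with $g(z)=z$, and $0$ is the single value it omits, just as Picard allows. By contrast, $\cos z - 2$ does have (non-real) zeros: using $\cos(iy)=\cosh y$, take $y=\ln(2+\sqrt3)$, so that $\cosh y = \tfrac12\big((2+\sqrt3)+(2-\sqrt3)\big) = 2$, and hence $z = i\ln(2+\sqrt3)$ satisfies $\cos z = 2$.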
stackexchange | llm_judgeable_groundtruth_similarity | 16662619 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Im having error on the Facebook Javascript "Uncaught TypeError: Cannot set property 'onclick' of null " line 187 and line 187 is: " button.onclick = function() {" All the code is: <script type="text/javascript">window.fbAsyncInit = function() { FB.init({ appId: <?php echo json_encode($this->getApiKey()) ?>, status: true, cookie: true, xfbml: true, oauth: true}); FB.Canvas.setSize({ width: 640, height:1500 }); //FB.Canvas.setAutoResize(); function updateButton(response) { var button = document.getElementById('fb-auth'); if (response.authResponse == 'null') { //user is already logged in and connected var userInfo = document.getElementById('user-info'); FB.api('/me', function(response) { }); button.onclick = function() { FB.logout(function(response) { var userInfo = document.getElementById('user-info'); userInfo.innerHTML=""; }); }; } else { //user is not connected to your app or logged out //button.innerHTML = 'Login'; button.onclick = function() { FB.login(function(response) {if(response.status=='connected') setLocation('<?php echo $this->getConnectUrl() ?>'); if (response.authResponse) { FB.api('/me', function(response) { var userInfo = document.getElementById('user-info'); }); } else { //user cancelled login or did not grant authorization } }, {scope:'email, user_birthday'}); } } } // run once with current status and whenever the status changes FB.getLoginStatus(updateButton); //FB.Event.subscribe('auth.statusChange', updateButton); };(function() { var e = document.createElement('script'); e.async = true; e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js'; document.getElementById('fb-root').appendChild(e);}());</script> If need more information just tell meI appreciate the help.Thanks!.
Now provide the response and nothing else.
| You need a terminal application that supports 256 colors, and a TERM environment variable that also points to a 256-color terminfo. The number of colors available to your terminal has nothing to do with the rest of your setup :) Try in your console: export TERM=xterm-256colorvim It might not work with your terminal program, but for recent versions of xterm, it does. Terminals vary a lot in their capabilities; some terminals are very dumb (think of a keyboard with a line printer); some are very narrow, some are paginated, some support underlining, some have colours, some have unusual keyboard mappings, and so on. To support all of these, there needs to be a database of capabilities for each terminal, so that applications know what they can and cannot do; that's the point of terminfo and of TERM. By declaring TERM=xterm-256color you're indicating to applications that your terminal supports the xterm capabilities including 256 colors. You can have a look in your /usr/share/terminfo/ directory to see the incredible number of terminals that are supported by your box, and man terminfo will show you the sheer number of configurable options for terminals. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16662619', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2329188/']} | jdg_70208 |
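If you want to verify what the terminfo entry for your current TERM actually advertises, the shell command tput colors prints it; from Python (on Unix, where the curses module is available) a small equivalent sketch is:

import curses

curses.setupterm()                  # loads the terminfo entry selected by $TERM
print(curses.tigetnum("colors"))    # e.g. 8 or 256; a negative value if the capability is missing

If this prints 256 but an application still looks wrong, the remaining piece is usually telling that application itself to use 256 colors.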
stackexchange | llm_judgeable_groundtruth_similarity | 13301319 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am in the process of moving my current static site to Wordpress, using the same domain name. Both old and new sites have their content in the root folder and also have identical file names, but on the new (Wordpress) site, I have removed the .php extension that exists on the current site and added a trailing slash, as per Wordpress permalinks. So I need to redirect all the old pages (for example):/contact-us.php -> /contact-us/ I have tried the well-documented options such as (and variations on this): RewriteBase /RewriteCond %{REQUEST_FILENAME} !-fRewriteCond %{REQUEST_FILENAME} !-dRewriteRule ^(.*)\.php$ /$1 [R=301,L] But this seems to be conflicting with the Wordpress .htaccess rules: <IfModule mod_rewrite.c>RewriteEngine OnRewriteBase /RewriteRule ^index\.php$ - [L]RewriteCond %{REQUEST_FILENAME} !-fRewriteCond %{REQUEST_FILENAME} !-dRewriteRule . /index.php [L] </IfModule> When I try to access www.site.com/contact-us.php, I am getting either an infinite loop OR a redirect to www.site.com/index (no .php, no trailing slash), which Wordpress displays as a 404 page. I have tried with and without the first set of Rewrite conditions (as I see they are duplicated in the Wordpress rules), and also placed my rule before and after the Wordpress rules. No joy. I've also been Googling for the last few hours but no one seems to have addressed this specific problem. I do usually find what I am looking for by searching, so it's in desperation that I'm actually posting (and which is why it's my first post!) If anyone can help out, I would be very grateful.
Now provide the response and nothing else.
| UPDATE GPU Version __global__ void hash (float *largeFloatingPointArray,int largeFloatingPointArraySize, int *dictionary, int size, int num_blocks){ int x = (threadIdx.x + blockIdx.x * blockDim.x); // Each thread of each block will float y; // compute one (or more) floats int noOfOccurrences = 0; int a; while( x < size ) // While there is work to do each thread will: { dictionary[x] = 0; // Initialize the position in each it will work noOfOccurrences = 0; for(int j = 0 ;j < largeFloatingPointArraySize; j ++) // Search for floats { // that are equal // to it assign float y = largeFloatingPointArray[j]; // Take a candidate from the floats array y *= 10000; // e.g if y = 0.0001f; a = y + 0.5; // a = 1 + 0.5 = 1; if (a == x) noOfOccurrences++; } dictionary[x] += noOfOccurrences; // Update in the dictionary // the number of times that the float appears x += blockDim.x * gridDim.x; // Update the position here the thread will work }} This one I just tested for smaller inputs, because I am testing in my laptop. Nevertheless, it is working, but more tests are needed. UPDATE Sequential Version I just did this naive version that executes your algorithm for an array with 30,000,000 element in less than 20 seconds (including the time taken by function that generates the data). This naive version first sorts your array of floats. Afterward, will go through the sorted array and check the number of times a given value appears in the array and then puts this value in a dictionary along with the number of times it has appeared. You can use sorted map, instead of the unordered_map that I used. Heres the code: #include <stdio.h>#include <stdlib.h>#include "cuda.h"#include <algorithm>#include <string>#include <iostream>#include <tr1/unordered_map>typedef std::tr1::unordered_map<float, int> Mymap;void generator(float *data, long int size){ float LO = 0.0; float HI = 100.0; for(long int i = 0; i < size; i++) data[i] = LO + (float)rand()/((float)RAND_MAX/(HI-LO));}void print_array(float *data, long int size){ for(long int i = 2; i < size; i++) printf("%f\n",data[i]); }std::tr1::unordered_map<float, int> fill_dict(float *data, int size){ float previous = data[0]; int count = 1; std::tr1::unordered_map<float, int> dict; for(long int i = 1; i < size; i++) { if(previous == data[i]) count++; else { dict.insert(Mymap::value_type(previous,count)); previous = data[i]; count = 1; } } dict.insert(Mymap::value_type(previous,count)); // add the last member return dict; }void printMAP(std::tr1::unordered_map<float, int> dict){ for(std::tr1::unordered_map<float, int>::iterator i = dict.begin(); i != dict.end(); i++) { std::cout << "key(string): " << i->first << ", value(int): " << i->second << std::endl; }}int main(int argc, char** argv){ int size = 1000000; if(argc > 1) size = atoi(argv[1]); printf("Size = %d",size); float data[size]; using namespace __gnu_cxx; std::tr1::unordered_map<float, int> dict; generator(data,size); sort(data, data + size); dict = fill_dict(data,size); return 0;} If you have the library thrust installed in you machine your should use this: #include <thrust/sort.h>thrust::sort(data, data + size); instead of this sort(data, data + size); For sure it will be faster. Original Post I'm working on a statistical application which has a large arraycontaining 10 - 30 millions of floating point values. Is it possible (and does it make sense) to utilize a GPU to speed upsuch calculations? Yes, it is. A month ago, I ran an entirely Molecular Dynamic simulation on a GPU. 
One of the kernels, which calculated the force between pairs of particles, received as parameter 6 array each one with 500,000 doubles, for a total of 3 Millions doubles (22 MB) . So if you are planning to put 30 Million floating points, which is about 114 MB of global Memory, it will not be a problem. In your case, can the number of calculations be an issue? Based on my experience with the Molecular Dynamic (MD), I would say no. The sequential MD version takes about 25 hours to complete while the GPU version took 45 Minutes. You said your application took a couple hours, also based in your code example it looks softer than the MD. Here's the force calculation example: __global__ void add(double *fx, double *fy, double *fz, double *x, double *y, double *z,...){ int pos = (threadIdx.x + blockIdx.x * blockDim.x); ... while(pos < particles) { for (i = 0; i < particles; i++) { if(//inside of the same radius) { // calculate force } } pos += blockDim.x * gridDim.x; } } A simple example of a code in CUDA could be the sum of two 2D arrays: In C: for(int i = 0; i < N; i++) c[i] = a[i] + b[i]; In CUDA: __global__ add(int *c, int *a, int*b, int N){ int pos = (threadIdx.x + blockIdx.x) for(; i < N; pos +=blockDim.x) c[pos] = a[pos] + b[pos];} In CUDA you basically took each for iteration and assigned to each thread, 1) threadIdx.x + blockIdx.x*blockDim.x; Each block has an ID from 0 to N-1 (N the number maximum of blocks) and each block has a 'X' number of threads with an ID from 0 to X-1 . Gives you the for loop iteration that each thread will compute based on its ID and the block ID which the thread is in; the blockDim.x is the number of threads that a block has. So if you have 2 blocks each one with 10 threads and N=40 , the: Thread 0 Block 0 will execute pos 0Thread 1 Block 0 will execute pos 1...Thread 9 Block 0 will execute pos 9Thread 0 Block 1 will execute pos 10....Thread 9 Block 1 will execute pos 19Thread 0 Block 0 will execute pos 20...Thread 0 Block 1 will execute pos 30Thread 9 Block 1 will execute pos 39 Looking at your current code, I have made this draft of what your code could look like in CUDA: __global__ hash (float *largeFloatingPointArray, int *dictionary) // You can turn the dictionary in one array of int // here each position will represent the float // Since x = 0f; x < 100f; x += 0.0001f // you can associate each x to different position // in the dictionary: // pos 0 have the same meaning as 0f; // pos 1 means float 0.0001f // pos 2 means float 0.0002f ect. // Then you use the int of each position // to count how many times that "float" had appeared int x = blockIdx.x; // Each block will take a different x to work float y; while( x < 1000000) // x < 100f (for incremental step of 0.0001f){ int noOfOccurrences = 0; float z = converting_int_to_float(x); // This function will convert the x to the // float like you use (x / 0.0001) // each thread of each block // will takes the y from the array of largeFloatingPointArray for(j = threadIdx.x; j < largeFloatingPointArraySize; j += blockDim.x) { y = largeFloatingPointArray[j]; if (z == y) { noOfOccurrences++; } } if(threadIdx.x == 0) // Thread master will update the values atomicAdd(&dictionary[x], noOfOccurrences); __syncthreads();} You have to use atomicAdd because different threads from different blocks may write/read noOfOccurrences concurrently, so you have to ensure mutual exclusion . This is just one approach; you can even assign the iterations of the outer loop to the threads instead of the blocks. 
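As a cross-check for the counting step itself, the same "quantize to steps of 0.0001 and count occurrences" logic fits in a few lines of NumPy on the CPU, which is handy for validating whatever the GPU kernel produces (a sketch; the data here is just randomly generated):

import numpy as np

data = np.random.uniform(0.0, 100.0, size=1_000_000).astype(np.float32)

buckets = (data * 10000 + 0.5).astype(np.int64)   # same rounding as a = y*10000 + 0.5 above
values, counts = np.unique(buckets, return_counts=True)
dictionary = dict(zip(values.tolist(), counts.tolist()))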
Tutorials The Dr Dobbs Journal series CUDA: Supercomputing for the masses by Rob Farmer is excellent and covers just about everything in its fourteen installments. It also starts rather gently and is therefore fairly beginner-friendly. and anothers: Volume I: Introduction to CUDA Programming Getting started with CUDA CUDA Resources List Take a look on the last item, you will find many link to learn CUDA. OpenCL: OpenCL Tutorials | MacResearch | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13301319', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1811022/']} | jdg_70209 |
stackexchange | llm_judgeable_groundtruth_similarity | 18422586 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my C program, I have a line where I am using the '==' operator, and the two operands are casted as char, like so: char top_level_domain[sizeof(char) * 128];...if((char)top_level_domain[i] == ':' || (char)top_level_domain[i] == '/') Is this recommended/safe? If not, how should I go about checking the contents of a certain element in an array? EDIT: added declaration and removed casts to the character literals
Now provide the response and nothing else.
| In general, it is more safe and effective to avoid casting when you can, because it allows the compiler to perform type checking. For example, spot the error: // Let's pretend you forgot or mixed up the type here...char **top_level_domain;// ...if ((char) top_level_domain[i] == (char) ':') ... Or maybe... char top_level_domain[sizeof(char) * 128];...// Whoops! forgot to type [i]if((char)top_level_domain[i] == ':' || (char)top_level_domain == '/') Whoops! You forgot to dereference the pointer, you're getting garbage. The compiler would have given you a diagnostic message, but since you used a cast, the diagnostic message is gone. Note: This will actually cause a diagnostic message on some compilers because char is narrower than char * , but if we were using size_t instead of char then there would be no narrowing, but it would still be an error. Why use casts? There are quite a few situations where the C "integer promotions" and "usual arithmetic conversions" can cause undesired behavior. For example, size_t align_to_16_bytes(size_t x){ // return x & ~15u; // WRONG return x & ~(size_t) 15u; // RIGHT} However, in general it will only cause problems when you are using types wider than int or when you are mixing signed and unsigned types that are at least as wide as int . Newer languages such as Java and C# largely avoid this problem by only allowing widening implicit casts. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18422586', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2714262/']} | jdg_70210 |
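To spell out the align_to_16_bytes example numerically (assuming a 32-bit unsigned int and a 64-bit size_t): ~15u is 0xFFFFFFF0, and when it is converted to size_t for the & it zero-extends to 0x00000000FFFFFFF0, so x & ~15u silently clears the upper 32 bits of x as well as the low four; ~(size_t)15 is 0xFFFFFFFFFFFFFFF0 and clears only the low four bits, which is the intended alignment.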
stackexchange | llm_judgeable_groundtruth_similarity | 1305532 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm searching for an elegant way to get data using attribute access on a dict with some nested dicts and lists (i.e. javascript-style object syntax). For example: >>> d = {'a': 1, 'b': {'c': 2}, 'd': ["hi", {'foo': "bar"}]} Should be accessible in this way: >>> x = dict2obj(d)>>> x.a1>>> x.b.c2>>> x.d[1].foobar I think, this is not possible without recursion, but what would be a nice way to get an object style for dicts?
Now provide the response and nothing else.
| Update: In Python 2.6 and onwards, consider whether the namedtuple data structure suits your needs: >>> from collections import namedtuple>>> MyStruct = namedtuple('MyStruct', 'a b d')>>> s = MyStruct(a=1, b={'c': 2}, d=['hi'])>>> sMyStruct(a=1, b={'c': 2}, d=['hi'])>>> s.a1>>> s.b{'c': 2}>>> s.cTraceback (most recent call last): File "<stdin>", line 1, in <module>AttributeError: 'MyStruct' object has no attribute 'c'>>> s.d['hi'] The alternative (original answer contents) is: class Struct: def __init__(self, **entries): self.__dict__.update(entries) Then, you can use: >>> args = {'a': 1, 'b': 2}>>> s = Struct(**args)>>> s<__main__.Struct instance at 0x01D6A738>>>> s.a1>>> s.b2 | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/1305532', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159966/']} | jdg_70211 |
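Note that neither namedtuple nor the Struct class above recurses into nested dicts and lists, which the example in the question needs (x.b.c, x.d[1].foo). A minimal recursive sketch built on the same Struct idea:

class Struct:
    def __init__(self, **entries):
        self.__dict__.update(entries)

def dict2obj(value):
    if isinstance(value, dict):
        return Struct(**{k: dict2obj(v) for k, v in value.items()})
    if isinstance(value, list):
        return [dict2obj(item) for item in value]
    return value

d = {'a': 1, 'b': {'c': 2}, 'd': ["hi", {'foo': "bar"}]}
x = dict2obj(d)
print(x.a, x.b.c, x.d[1].foo)   # 1 2 bar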
stackexchange | llm_judgeable_groundtruth_similarity | 62930 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I want Ubuntu to treat my login as root. I shouldn't have to sudo. What do I do? For almost anything I need to do, I have to sudo it. This causes problems, especially when I need to install software and all the directories created by it are then owned by 'root'. I recently installed Apache and /var/www was owned by root, and then I had to chmod 777 to make stuff work. This is insane.
Now provide the response and nothing else.
| I recently installed Apache and /var/www was owned by root, and then I had to chmod 777 to make stuff work. This is insane. Yeah, don't do that. Use: chown -R www-data.www-data /var/www instead. Otherwise you are compromising the security of your webpages. The guides that suggest chmod'ing things to 777 are generally writing the tutorial for shared hosting providers, because if they already have it set as 775 or 755 and chowned to the www-data user, there is no reason to ever chmod 777 anything unless you are sharing a file with another user on the system, but even then just cp it to /tmp. Sudo is meant to protect you from doing silly things as root that you have no business doing. If you need a temporary "root" shell, do sudo su - . Be forewarned: this should only be used temporarily. I'd suggest a TMOUT=600 in your root's .bashrc so that root is automatically kicked after 10 minutes of idle time. I would suggest that you read Ubuntu Server Guide (direct link to pdf). Specifically chapter 8 on Security. Reading the whole book should give you a good overview of running, maintaining and enhancing your services/quality/security. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/62930', 'https://serverfault.com', 'https://serverfault.com/users/11016/']} | jdg_70212
stackexchange | llm_judgeable_groundtruth_similarity | 23109323 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 2 Jobs that read and produce data in a Sql Server Database. Every once in a while the jobs crash with a System.Transactions.TransactionInDoubtException. The exact stack trace is: Unhandled Exception: System.Transactions.TransactionInDoubtException: The transaction is in doubt. ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. ---> System.ComponentModel.Win32Exception: The wait operation timed out. Exitcode: -532462766 --- End of inner exception stack trace --- at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParserStateObject.TryReadNetworkPacket() at System.Data.SqlClient.TdsParserStateObject.TryPrepareBuffer() at System.Data.SqlClient.TdsParserStateObject.ReadSniSyncOverAsync() at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.TdsParserStateObject.TryReadByte(Byte& value) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) I googled a bit about it and found something about MSDTC, but I think this can't be the problem because the Transaction should be local since the jobs only work on a single database. The following query: SELECT cntr_value AS NumOfDeadLocks FROM sys.dm_os_performance_counters WHERE object_name = 'SQLServer:Locks' AND counter_name = 'Number of Deadlocks/sec' AND instance_name = '_Total' shows that there have been no deadlocks on the database, so deadlocks can't be the reason. I couldn't find any other resource on the internet which gives exact information about the reason of the exception. So has anybody a idea what the reason could be or how to find the root of this error?
Now provide the response and nothing else.
| Even if the transaction is local, transaction will still escalated to the MSDTC if you open multiple connections within the same transaction scope, according to this article: http://msdn.microsoft.com/en-us/library/ms229978(v=vs.110).aspx An escalation that results in the System.Transactions infrastructure transferring the ownership of the transaction to MSDTC happens when: ... At least two durable resources that support single-phase notifications are enlisted in the transaction. For example, enlisting a single connection with does not cause a transaction to be promoted. However, whenever you open a second connection to a database causing the database to enlist, the System.Transactions infrastructure detects that it is the second durable resource in the transaction, and escalates it to an MSDTC transaction. NOTE: I have read some articles that state that this only applies to SQL 2005, and that SQL 2008+ is smarter about the MSDTC promotion. These state that SQL 2008 will only promote to MSDTC when multiple connections are open at the same time . See: TransactionScope automatically escalating to MSDTC on some machines? Also, your inner exception is a Timeout (System.Data.SqlClient.SqlException: Timeout expired), not a Deadlock . While both are related to blocking, they are not the same thing. A timeout occurs when blocking causes the application to stop waiting on a resource that is blocked by another connection, so that the current statement can obtain locks on that resource. A deadlock occurs when two different connections are competing for the same resources, and they are blocking in a way they will never be able to complete unless one of the connections is terminated (this why the deadlock error messages say "transaction... has been chosen as the deadlock victim"). Since your error was a Timeout, this explains why you deadlock query returned a 0 count. System.Transactions.TransactionInDoubtException from MSDN ( http://msdn.microsoft.com/en-us/library/system.transactions.transactionindoubtexception(v=vs.110).aspx ) states: This exception is thrown when an action is attempted on a transaction that is in doubt. A transaction is in doubt when the state of the transaction cannot be determined. Specifically, the final outcome of the transaction, whether it commits or aborts, is never known for this transaction. This exception is also thrown when an attempt is made to commit the transaction and the transaction becomes InDoubt. The reason: something occurred during the TransactionScope that caused it's state to be unknown at the end of the transaction. The cause: There could be a number of different causes, but it is tough to identify your specific cause without the source code being posted. Things to check: If you are using SQL 2005, and more than one connection is opened, your transaction will be promoted to a MSDTC transaction. If you are using SQL 2008+, AND you have multiple connection open at the same time (i.e. nested connections or multiple ASYNC connections running in parallel), then the transaction will be promoted to a MSDTC transaction. If you have "try/catch{retry if timeout/deadlock}" logic that is running within your code, then this can cause issues when the transaction is within a System.Transactions.TransactionScope , because of the way that SQL Server automatically rolls back transaction when a timeout or deadlock occurs. 
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23109323', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1697246/']} | jdg_70213 |
stackexchange | llm_judgeable_groundtruth_similarity | 3449634 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's what I have so far: namespace Strategy{ interface IWeaponBehavior { void UseWeapon(); }}namespace Strategy{ class Knife : IWeaponBehavior { public void UseWeapon() { Console.WriteLine("You used the knife to slash the enemy! SLASH SLASH!"); } }}namespace Strategy{ class Pan : IWeaponBehavior { public void UseWeapon() { Console.WriteLine("You use the pan! 100% Adamantium power! BONG!"); } }} Now if I have a Character.cs superclass. how can that superclass implement a weaponbehavior so that children classes can be more specific. namespace Strategy{ class Character { public IWeaponBehavior weapon; public Character(IWeaponBehavior specificWeapon) { weapon = specificWeapon; } }}namespace Strategy{ class Thief : Character { }} How can I implement this? I'm very confused on what the actual code needs to be. I know this might be asking too much, but if you could write the actual code so I could study it, that would be very nice of you guys. I learn by seeing code. :P Many people could benefit from this question.
Now provide the response and nothing else.
| Use dependency injection in class Character ? public class Character{ public Character(IWeaponBehavior weapon) { this.weapon = weapon; } public void Attack() { weapon.UseWeapon(); } IWeaponBehavior weapon;}public class Princess: Character{ public Princess() : base(new Pan()) { }}public class Thief: Character{ public Thief() : base(new Knife()) { }}...Princess p = new Princess();Thief t = new Thief();p.Attack(); // pant.Attack(); // knife Edited as requested. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3449634', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/112355/']} | jdg_70214 |
stackexchange | llm_judgeable_groundtruth_similarity | 15360 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I read this answer and found a comment insisting not to send the password by email: passwords should not be able to be retrieved by email, I hate that. It means my password is stored in plain text somewhere. it should be reset only. This raises the question of how to handle the Forgot Password option. In any case, the raw password must be displayed in some UI so that the user will be able to read it. So what would be the way to handle "Forgot Password"?
Now provide the response and nothing else.
| A good application design will not be able to explicitly recover a user's password. This is because it is usually stored after it is run through some sort of hash, which is a one-way operation. The best way to handle a lost password is to perform a reset: email to the user's account a link with a generated parameter tacked on that identifies this as a valid password reset for the account in question. At this point they can set a new password. This does assume you have the user's email address on file. | {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/15360', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/156/']} | jdg_70215
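A rough sketch of the token side of such a reset flow in Python (the storage details and the password_resets table name are just placeholders, not a prescribed schema):

import secrets
from datetime import datetime, timedelta

def create_reset_token():
    token = secrets.token_urlsafe(32)                     # unguessable, URL-safe
    expires_at = datetime.utcnow() + timedelta(hours=1)
    # store a hash of the token, the user id and expires_at server-side
    # (e.g. in a password_resets table), never the raw token in plain text
    return token, expires_at

# The emailed link then looks something like
#   https://example.com/reset-password?token=<token>
# and the handler checks the token and its expiry before letting the user set a new password.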
stackexchange | llm_judgeable_groundtruth_similarity | 52538252 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Summary I am working on a series of add-ons for Anki , an open-source flashcard program. Anki add-ons are shipped as Python packages, with the basic folder structure looking as follows: anki_addons/ addon_name_1/ __init__.py addon_name_2/ __init__.py anki_addons is appended to sys.path by the base app, which then imports each add_on with import <addon_name> . The problem I have been trying to solve is to find a reliable way to ship packages and their dependencies with my add-ons while not polluting global state or falling back to manual edits of the vendored packages . Specifics Specifically, given an add-on structure like this... addon_name_1/ __init__.py _vendor/ __init__.py library1 library2 dependency_of_library2 ... ...I would like to be able to import any arbitrary package that is included in the _vendor directory, e.g.: from ._vendor import library1 The main difficulty with relative imports like this is that they do not work for packages that also depend on other packages imported through absolute references (e.g. import dependency_of_library2 in the source code of library2 ) Solution attempts So far I have explored the following options: Manually updating the third-party packages, so that their import statements point to the fully qualified module path within my python package (e.g. import addon_name_1._vendor.dependency_of_library2 ). But this is tedious work that is not scalable to larger dependency trees and not portable to other packages. Adding _vendor to sys.path via sys.path.insert(1, <path_to_vendor_dir>) in my package init file. This works, but it introduces a global change to the module look-up path which will affect other add-ons and even the base app itself. It just seems like a hack that could result in a pandora's box of issues later down the line (e.g. conflicts between different versions of the same package, etc.). Temporarily modifying sys.path for my imports ; but this fails to work for third-party modules with method-level imports. Writing a PEP302 -style custom importer based off an example I found in setuptools , but I just couldn't make head nor tail of that. I've been stuck on this for quite a few hours now and I'm beginning to think that I'm either completely missing an easy way to do this, or that there is something fundamentally wrong with my entire approach. Is there no way I can ship a dependency tree of third-party packages with my code, without having to resort to sys.path hacks or modifying the packages in question? Edit: Just to clarify: I don't have any control over how add-ons are imported from the anki_addons folder. anki_addons is just the directory provided by the base app where all add-ons are installed into. It is added to the sys path, so the add-on packages therein pretty much just behave like any other python package located in Python's module look-up paths.
Now provide the response and nothing else.
| First of all, I'd advice against vendoring; a few major packages did use vendoring before but have switched away to avoid the pain of having to handle vendoring. One such example is the requests library . If you are relying on people using pip install to install your package, then just use dependencies and tell people about virtual environments. Don't assume you need to shoulder the burden of keeping dependencies untangled or need to stop people from installing dependencies in the global Python site-packages location. At the same time, I appreciate that a plug-in environment of a third-party tool is something different, and if adding dependencies to the Python installation used by that tool is cumbersome or impossible vendorizing may be a viable option. I see that Anki distributes extensions as .zip files without setuptools support, so that's certainly such an environment. So if you choose to vendor dependencies, then use a script to manage your dependencies and update their imports. This is your option #1, but automated . This is the path that the pip project has chosen, see their tasks subdirectory for their automation, which builds on the invoke library . See the pip project vendoring README for their policy and rationale (chief among those is that pip needs to bootstrap itself, e.g. have their dependencies available to be able to install anything). You should not use any of the other options; you already enumerated the issues with #2 and #3. The issue with option #4, using a custom importer, is that you still need to rewrite imports . Put differently, the custom importer hook used by setuptools doesn't solve the vendorized namespace problem at all, it instead makes it possible to dynamically import top-level packages if the vendorized packages are missing (a problem that pip solves with a manual debundling process ). setuptools actually uses option #1, where they rewrite the source code for vendorized packages. See for example these lines in the packaging project in the setuptools vendored subpackage; the setuptools.extern namespace is handled by the custom import hook, which then redirects either to setuptools._vendor or the top-level name if importing from the vendorized package fails. The pip automation to update vendored packages takes the following steps: Delete everything in the _vendor/ subdirectory except the documentation, the __init__.py file and the requirements text file. Use pip to install all vendored dependencies into that directory, using a dedicated requirements file named vendor.txt , avoiding compilation of .pyc bytecache files and ignoring transient dependencies (these are assumed to be listed in vendor.txt already); the command used is pip install -t pip/_vendor -r pip/_vendor/vendor.txt --no-compile --no-deps . Delete everything that was installed by pip but not needed in a vendored environment, i.e. *.dist-info , *.egg-info , the bin directory, and a few things from installed dependencies that pip would never use. Collect all installed directories and added files sans .py extension (so anything not in the whitelist); this is the vendored_libs list. Rewrite imports; this is simply a series of regexes, where every name in vendored_lists is used to replace import <name> occurrences with import pip._vendor.<name> and every from <name>(.*) import occurrence with from pip._vendor.<name>(.*) import . 
Apply a few patches to mop up the remaining changes needed; from a vendoring perspective, only the pip patch for requests is interesting here in that it updates the requests library backwards compatibility layer for the vendored packages that the requests library had removed; this patch is quite meta! So in essence, the most important part of the pip approach, the rewriting of vendored package imports, is quite simple; paraphrased to simplify the logic and with the pip-specific parts removed, it is the following process:

import shutil
import subprocess
import re
from functools import partial
from itertools import chain
from pathlib import Path

WHITELIST = {'README.txt', '__init__.py', 'vendor.txt'}

def delete_all(*paths, whitelist=frozenset()):
    for item in paths:
        if item.is_dir():
            shutil.rmtree(item, ignore_errors=True)
        elif item.is_file() and item.name not in whitelist:
            item.unlink()

def iter_subtree(path):
    """Recursively yield all files in a subtree, depth-first"""
    if not path.is_dir():
        if path.is_file():
            yield path
        return
    for item in path.iterdir():
        if item.is_dir():
            yield from iter_subtree(item)
        elif item.is_file():
            yield item

def patch_vendor_imports(file, replacements):
    text = file.read_text('utf8')
    for replacement in replacements:
        text = replacement(text)
    file.write_text(text, 'utf8')

def find_vendored_libs(vendor_dir, whitelist):
    vendored_libs = []
    paths = []
    for item in vendor_dir.iterdir():
        if item.is_dir():
            vendored_libs.append(item.name)
        elif item.is_file() and item.name not in whitelist:
            vendored_libs.append(item.stem)  # without extension
        else:  # not a dir or a file not in the whitelist
            continue
        paths.append(item)
    return vendored_libs, paths

def vendor(vendor_dir):
    # target package is <parent>.<vendor_dir>; foo/_vendor -> foo._vendor
    pkgname = f'{vendor_dir.parent.name}.{vendor_dir.name}'

    # remove everything
    delete_all(*vendor_dir.iterdir(), whitelist=WHITELIST)

    # install with pip
    subprocess.run([
        'pip', 'install', '-t', str(vendor_dir),
        '-r', str(vendor_dir / 'vendor.txt'),
        '--no-compile', '--no-deps'
    ])

    # delete stuff that's not needed
    delete_all(
        *vendor_dir.glob('*.dist-info'),
        *vendor_dir.glob('*.egg-info'),
        vendor_dir / 'bin')

    vendored_libs, paths = find_vendored_libs(vendor_dir, WHITELIST)

    replacements = []
    for lib in vendored_libs:
        replacements += (
            partial(  # import bar -> import foo._vendor.bar
                re.compile(r'(^\s*)import {}\n'.format(lib), flags=re.M).sub,
                r'\1from {} import {}\n'.format(pkgname, lib)
            ),
            partial(  # from bar -> from foo._vendor.bar
                re.compile(r'(^\s*)from {}(\.|\s+)'.format(lib), flags=re.M).sub,
                r'\1from {}.{}\2'.format(pkgname, lib)
            ),
        )

    for file in chain.from_iterable(map(iter_subtree, paths)):
        patch_vendor_imports(file, replacements)

if __name__ == '__main__':
    # this assumes this is a script in foo next to foo/_vendor
    here = Path(__file__).resolve().parent
    vendor_dir = here / 'foo' / '_vendor'
    assert (vendor_dir / 'vendor.txt').exists(), '_vendor/vendor.txt file not found'
    assert (vendor_dir / '__init__.py').exists(), '_vendor/__init__.py file not found'
    vendor(vendor_dir)

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52538252', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1708932/']} | jdg_70216
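To see the two rewrite regexes from the script above in action, here is a small, self-contained demonstration (the target package foo._vendor matches the example used in the script; the library name six is just a placeholder chosen for this sketch):

import re

pkgname = 'foo._vendor'   # target package, as in the script above
lib = 'six'               # hypothetical vendored library name

source = (
    "import six\n"
    "from six.moves import range\n"
)

# import six -> from foo._vendor import six
source = re.sub(r'(^\s*)import {}\n'.format(lib),
                r'\1from {} import {}\n'.format(pkgname, lib),
                source, flags=re.M)
# from six.moves import ... -> from foo._vendor.six.moves import ...
source = re.sub(r'(^\s*)from {}(\.|\s+)'.format(lib),
                r'\1from {}.{}\2'.format(pkgname, lib),
                source, flags=re.M)

print(source)
# from foo._vendor import six
# from foo._vendor.six.moves import range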
stackexchange | llm_judgeable_groundtruth_similarity | 2287080 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The question asks us to prove $\sqrt[3]{3} \notin \mathbb{Q}(\sqrt[4]{3})$ I realise this is similar to another question - Show that $\sqrt{2}\notin \mathbb{Q}(\sqrt[4]{3})$ - and in this question the answer recommends using the fact that $\mathbb{Q}\subset\mathbb{Q}(\sqrt{3})\subset\mathbb{Q}(\sqrt[4]{3})$, which in turn means that if we show that $\sqrt[3]{3} \notin \mathbb{Q}(\sqrt{3})$, we can then go on to show that it is not in $\mathbb{Q}(\sqrt[4]{3})$. Following these steps, I said that $\sqrt[3]{3} = a+b\sqrt{3}$ and then cubing both sides and subtracting 3, got that $(a^3+9ab^2-3)+3\sqrt{3}(a^2b+b^3)=0$ and this is eventually where I got stuck. Thank you in advance
Now provide the response and nothing else.
| You can prove this using the tower law. Suppose towards a contradiction that $\sqrt[3]{3} \in \mathbb{Q}(\sqrt[4]{3})$. Then we have the tower $$\mathbb{Q}\subset\mathbb{Q}(\sqrt[3]{3})\subset\mathbb{Q}(\sqrt[4]{3}).$$ Note that $\sqrt[3]{3}$ has degree $3$ over $\mathbb{Q}$, while $\sqrt[4]{3}$ has degree $4$ or $2$ over $\mathbb{Q}$ (it doesn't matter which). By the tower law, $3$ would have to divide that degree, but $3$ divides neither $4$ nor $2$. Contradiction. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2287080', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/402253/']} | jdg_70217
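Written out with the tower law (and using that $x^4-3$ is Eisenstein at $3$, hence irreducible over $\mathbb{Q}$): $$[\mathbb{Q}(\sqrt[4]{3}):\mathbb{Q}] = [\mathbb{Q}(\sqrt[4]{3}):\mathbb{Q}(\sqrt[3]{3})]\cdot[\mathbb{Q}(\sqrt[3]{3}):\mathbb{Q}] = 3\,[\mathbb{Q}(\sqrt[4]{3}):\mathbb{Q}(\sqrt[3]{3})],$$ so $3$ would divide $[\mathbb{Q}(\sqrt[4]{3}):\mathbb{Q}] = 4$, which is impossible.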
stackexchange | llm_judgeable_groundtruth_similarity | 53273 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently protesters against the overturning of U.S.abortion law have often shown the “coat hanger” as the symbolic image of illegal abortion. An extract from Wikipedia reads: In a letter to The New York Times, gynecologist Waldo L. Fielding wrote: The familiar symbol of illegal abortion is the infamous "coat hanger" — which may be the symbol, but is in no way a myth. In my years in New York, several women arrived with a hanger still in place. Whoever put it in – perhaps the patient herself – found it trapped in the cervix and could not remove it... Was this practice, I mean the use of a hanger (as dangerous and desperate as it may be) really common in illegal abortion cases or was it more a one-off episode that became emblematic of the risks and dangers of not having a legal support for abortion?
Now provide the response and nothing else.
| Here are some examples of the use of coat hangers for self-induced abortion in the literature, which show that this is not a myth:

Okonofua, F. E., Onwudiegwu, U., & Odunsi, O. A. (1992). Illegal Induced Abortion: A Study of 74 Cases in Ile-Ife, Nigeria. Tropical Doctor, 22(2), 75–78. doi:10.1177/004947559202200209 This study looked at 74 women who were interviewed about the complications that occurred after their induced abortions. Fourteen of the women had self-induced abortions: The methods of self induction of abortion in the 14 women were: self instrumentation with pins, needles and coat hangers (4), insertion of native vaginal pessaries (2), ingestion of 'tablets' (2), 'injections' (2), ingestion of strong alcoholic drinks (2), vaginal instillation of potash (1) and vaginal instillation of gunpowder (1)

Woman Accused of Coat-Hanger Abortion Pleads Guilty to Felony, New York Times, 2017: A Tennessee woman jailed for more than a year after trying to use a coat hanger to abort her 24-week-old fetus pleaded guilty on Monday to one felony count in exchange for her immediate release from jail.

Saultes TA, Devita D, Heiner JD. The back alley revisited: sepsis after attempted self-induced abortion. West J Emerg Med. 2009;10(4):278-280. A transabdominal ultrasound revealed a twin pregnancy at 21 weeks gestation, no obvious evidence of abruption, and a significant amount of abdominal free fluid. She then confessed to attempting to end her pregnancy earlier that day by passing a coat hanger deep into her vagina until she felt a “pop,” [...] | {} | {'log_upvote_score': 6, 'links': ['https://skeptics.stackexchange.com/questions/53273', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/45700/']} | jdg_70218
stackexchange | llm_judgeable_groundtruth_similarity | 34178 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Nitrogen triiodide, on slightest physical contact, explodes to evolve dense purple fumes of iodine. Why does this phenomenon occur? video link - https://www.youtube.com/watch?v=2KlAf936E90
Now provide the response and nothing else.
| Wikipedia states: The instability of $\ce{NI3}$ and $\ce{NI3 · NH3}$ can be attributed to the large steric strain caused by the three large iodine atoms being held in close proximity to each other around the relatively tiny nitrogen atom. This results in a very low activation energy for its decomposition, a reaction made even more favorable due to the great stability of $\ce{N2}$. The steric strain between the atoms can be better visualized by the representations here. The decomposition reaction is $$\ce{2NI3 -> N2 + 3I2}$$ Note that there is an increase in the number of moles (4 versus 2), which means that the reaction is favored by entropy, because there are more possible microstates. More specifically, the standard enthalpy of formation of $\ce{NI3}$ is +154.4 kJ/mol; because the compound stores that much energy relative to its elements, its decomposition is strongly exothermic and therefore highly favorable. Reference: UC Davis ChemWiki | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/34178', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/17368/']} | jdg_70219
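A rough back-of-the-envelope check of that energy argument, taking both products in their standard elemental states (so their enthalpies of formation are zero): $$\Delta H_{\text{rxn}} \approx \sum \Delta H_f^\circ(\text{products}) - \sum \Delta H_f^\circ(\text{reactants}) = 0 - 2\,(+154.4\ \text{kJ/mol}) \approx -309\ \text{kJ}$$ per two moles of $\ce{NI3}$, i.e. about $-154$ kJ per mole decomposed. (The iodine is actually released as vapor, which makes the real figure somewhat less negative, but the decomposition remains strongly exothermic.)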
stackexchange | llm_judgeable_groundtruth_similarity | 5028309 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do i add the program so that it is listed (so i can click on it to uninstall) in windows's add/remove program list?
Now provide the response and nothing else.
| The uninstall registration is stored in the registry; where in the registry you should save it depends on whether your installer installs the program for all users or a single user (i.e. your RequestExecutionLevel setting):

user = HKCU
admin = HKLM
highest = SHCTX (This means you must use SetShellVarContext correctly and also restore it correctly in the uninstaller)

There are only two values that are required: DisplayName and UninstallString.

!define REGUNINSTKEY "MyApplication" ;Using a GUID here is not a bad idea
!define REGHKEY HKLM ;Assuming RequestExecutionLevel admin AKA all user/machine install
!define REGPATH_WINUNINST "Software\Microsoft\Windows\CurrentVersion\Uninstall"

Section
WriteRegStr ${REGHKEY} "${REGPATH_WINUNINST}\${REGUNINSTKEY}" "DisplayName" "My application"
WriteRegStr ${REGHKEY} "${REGPATH_WINUNINST}\${REGUNINSTKEY}" "UninstallString" '"$INSTDIR\uninstaller.exe"'
SectionEnd

There are several optional values you can set; MSDN does not really provide a list of documented values, but the NSIS Wiki has a decent list and this page has an even more complete list... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5028309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/286289/']} | jdg_70220
stackexchange | llm_judgeable_groundtruth_similarity | 147471 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I find the output of the shell command top to be a simple and familiar way to get a rough idea of the health of a machine. I'd like to serve top 's output (or something very similar to it) from a tiny web server on a machine for crude monitoring purposes. Is there a way to get top to write its textual output exactly once , without formatting characters? I've tried this: (sleep 1; echo 'q') | top > output.txt This seems to be close to what I want, except that (1) there's no guarantee that I won't get more or less than one screenful of info and (2) I have to strip out all the terminal formatting characters. Or is there some other top -like command that lists both machine-wide and process-level memory/CPU usage/uptime info? (Ideally, I'd love a strategy that's portable to both Linux and Mac OS X, since our devs use Macs and our prod environment is Linux.)
Now provide the response and nothing else.
| In Linux, you can try this:

top -bn1 > output.txt

From man top:

-b : Batch-mode operation
    Starts top in 'Batch' mode, which could be useful for sending output from top to other programs or to a file. In this mode, top will not accept input and runs until the iterations limit you've set with the '-n' command-line option or until killed.
...
-n : Number-of-iterations limit as: -n number
    Specifies the maximum number of iterations, or frames, top should produce before ending.

With OS X, try:

top -l 1

From the OS X top manpage:

-l <samples> Use logging mode and display <samples> samples, even if standard output is a terminal. 0 is treated as infinity. Rather than redisplaying, output is periodically printed in raw form. Note that the first sample displayed will have an invalid %CPU displayed for each process, as it is calculated using the delta between samples. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/147471', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/28969/']} | jdg_70221
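Since the question mentions serving this output from a tiny web server, here is a minimal sketch of that idea in Python (the port and the exact top flags are illustrative assumptions, not part of the answer above):

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class TopHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # one batch-mode iteration of top, returned as plain text
        output = subprocess.run(['top', '-bn1'], capture_output=True, text=True).stdout
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain; charset=utf-8')
        self.end_headers()
        self.wfile.write(output.encode('utf-8'))

HTTPServer(('', 8080), TopHandler).serve_forever()

On macOS the command would be ['top', '-l', '1'] instead, as described above.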
stackexchange | llm_judgeable_groundtruth_similarity | 50522 |
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a beater car and I'm not worried about fixing it "properly" really. Trying to spend minimal dollars on this. My main concern is keeping rats or bugs out of the car. Basically the car completely rusted through where the spare tire holder is. What can I use to plug or patch this hole with so bugs / rodents don't get in? https://imgur.com/a/jMD9t
Now provide the response and nothing else.
| The following repair is cheap, easy and should last for 3 to 5 or more years depending on your climate, but it's not the "correct way" either. Remove the rust, paint and body protection around the hole with a wire brush wheel. Extend the area until you have clean, non-rusted metal all around the hole. Cut some glass fiber pads that cover the hole and all of the rusted area. Test-fit them and make additional cuts such that they bend nicely around the metal. Put on gloves and mix some polyester resin. Apply it first to all the exposed metal in- and outside, then soak the fiber pads. Put them onto the hole form the inside and outside. Use a hot air gun to make the resin gel quickly while you hold the pads in place. The resin gets quite runny when heated, but gels in about 5 minutes when it cools down, then the glass fibers should stay in place. Let it cure for a day. Apply some body protection. Drill a small hole, such that the water that collects at this point can run off. Or get some new rubber door seals to prevent it from entering the trunk. | {} | {'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/50522', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/34451/']} | jdg_70222 |
stackexchange | llm_judgeable_groundtruth_similarity | 346465 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In $\mathbb{Z}$, the greatest common divisor of $a$ and $b$ is a linear combination of $a$ and $b$. This generalizes to Euclidean domains since Euclid's algorithm works. Moreover this statement generalizes to PIDs, for if ideals $(c)=(a)+(b)$ then $c$ is a linear combination of $a$ and $b$, and $c$ is the gcd of $a$ and $b$. My question is: how far can we generalize the statement above? In the conventional classification of commutative rings with unit, what is the best generalization?
Now provide the response and nothing else.
| Rings in which every two-generated ideal is principal $\rm\:(a,b) = (c)\:$ are called Bezout rings, since they are precisely the rings where gcds exist and have linear (Bezout) form. For suppose that $\rm\:(a,b) = (c).\:$ Then $\rm\:(c)\supseteq (a),(b)\:\Rightarrow\: c\mid a,b,\:$ so $\rm\:c\:$ is a common divisor of $\rm\:a,b.\:$ Conversely $\rm\:(a,b)\supseteq (c)\:\Rightarrow\: c = ja + k b\:$ so $\rm\:d\mid a,b\:\Rightarrow\:d\mid c,\:$ so $\rm\:c\:$ is a greatest common divisor (greatest in terms of divisibility order). Bezout domains lie between PIDs and GCD domains in the following list of domains closely related to GCD domains.

PID:    every ideal is principal
Bezout: every ideal (a,b) is principal
GCD:    (x,y) := gcd(x,y) exists for all x,y
SCH:    Schreier = pre-Schreier & integrally closed
SCH0:   pre-Schreier: a|bc $\,\Rightarrow\,$ a = BC, B|b, C|c
D:      (a,b) = 1 & a|bc $\,\Rightarrow\,$ a|c
PP:     (a,b) = (a,c) = 1 $\,\Rightarrow\,$ (a,bc) = 1
GL:     Gauss Lemma: product of primitive polys is primitive
GL2:    Gauss Lemma holds for all polys of degree 1
AP:     atoms are prime [i.e. PP restricted to atomic a]

Since atomic & AP $\,\Rightarrow\,$ UFD, reversing the above UFD $\,\Rightarrow\,$ AP path shows that in atomic domains all these properties (except PID, Bezout) collapse, becoming all equivalent to UFD. There are also many properties known equivalent to D, e.g.

[a]  (a,b) = 1 $\,\Rightarrow\,$ a|bc $\,\Rightarrow\,$ a|c
[b]  (a,b) = 1 $\,\Rightarrow\,$ a,b|c $\,\Rightarrow\,$ ab|c
[c]  (a,b) = 1 $\,\Rightarrow\,$ (a)/\(b) = (ab)
[d]  (a,b) exists $\,\Rightarrow\,$ lcm(a,b) exists
[e]  a + b X irreducible $\,\Rightarrow\,$ prime for b $\ne$ 0 (deg = 1)

| {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/346465', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26655/']} | jdg_70223
stackexchange | llm_judgeable_groundtruth_similarity | 45266461 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My original config file (web1.config) has no extra line and when viewed in notepad (showing all characters) looks as: <?xml version="1.0"?><configuration> <system.web> <compilation debug="true" targetFramework="4.6" /> <httpRuntime targetFramework="4.6" /> </system.web> <appSettings> <add key="myConnectionString" value="server=localhost;database=myDb;uid=myUser;password=myPass;" /> </appSettings></configuration> Now, I need to apply the script to change my database name to something else which looks like: Move-Item "web1.config" "webtemp.config"Get-Content "webtemp.config" | ForEach-Object {$_ -replace "database=myDb;", "database=newDb;"} |Set-Content "web1.config" -ForceRemove-Item "webtemp.config"Write-Output('Settings Changed') So, the new file (web1.config) generated looks as: Notice the extra line added at the end of the file (which is completely not needed)I tried all other options such as:- using out-file api- using .net IO method System.IO.StreamWriter- using -nonewline flag (it converts all 10 lines into single line)- using different encoding options- tried replacing \r\n to \r (don't work as again set-content generates the crlf always) I'm using PowerShell v5.1.
Now provide the response and nothing else.
| tl;dr ( PSv5+ ; see bottom for older versions): (Get-Content webtemp.config) -replace 'database=myDb;', 'database=newDb;' -join "`n" | Set-Content -NoNewline -Force web1.config Note: Replace "`n" with "`r`n" if you want Windows-style CRLF line endings rather than Unix-style LF-only line endings (PowerShell and many utilities can handle both). In PSv5+ , Set-Content supports the -NoNewline switch , which instructs Set-Content not to add a newline (line break) after each input object . The same applies analogously to the Add-Content and Out-File cmdlets. In other words: Set-Content -NoNewline directly concatenates the string representations of all its input objects : PS> 'one', 'two' | Set-Content -NoNewline tmp.txt; Get-Content tmp.txtonetwo If what you're passing to Set-Content -NoNewline is a single string that already has embedded newlines , you can use it as-is and get the desired result: PS> "one`ntwo" | Set-Content -NoNewline tmp.txt; "$(Get-Content -Raw tmp.txt)?"onetwo? Note that Get-Content -Raw reads the file as a whole, as-is (aside from character decoding) and the fact that the ? appears directly after two implies that the file has no trailing newline. In your case, since you're processing input lines one by one (via Get-Content without -Raw ) and therefore outputting an array of lines (strings), you must first join them with a newline as the separator - between lines only - and pass the result to Set-Content -NoNewline , as shown at the top; here's a simplified example: PS> ('one', 'two') -join "`n" | Set-Content -NoNewline tmp.txt; "$(Get-Content -Raw tmp.txt)?"onetwo? 'one', 'two' is a two-element string array that is a stand-in for your line-by-line processing command. Encoding note: In Windows PowerShell, Set-Content produces "ANSI"-encoded files by default, based on your system's legacy, single-byte code page. To control the encoding explicitly, use the -Encoding parameter. In PSv4- , a solution that uses the .NET Framework is needed: PS> [System.IO.File]::WriteAllText('tmp.txt', ('one', 'two') -join "`n"); "$(Get-Content -Raw tmp.txt)?"onetwo? Note that [System.IO.File]::WriteAllText() , in the absence of an encoding argument, defaults to BOM-less UTF-8. Pass the desired [System.Text.Encoding] encoding instance as the 3rd argument as needed. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45266461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3429331/']} | jdg_70224 |
stackexchange | llm_judgeable_groundtruth_similarity | 28006501 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have controller mappings to /user and /order : @RestController@RequestMapping("/users")public class UserController { ...}@RestController@RequestMapping("/orders")public class OrderController { ...} I want to access these by URL at http://localhost:8080/api/users and http://localhost:8080/api/orders , respectively. How do I achieve this in Spring Boot?
Now provide the response and nothing else.
| You can provide a mapping of the root context path of your Spring Boot application to /api/* in your custom configuration:

import org.springframework.boot.autoconfigure.web.DispatcherServletAutoConfiguration;
import org.springframework.boot.context.embedded.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.DispatcherServlet;

@Configuration
public class DispatcherServletCustomConfiguration {

    @Bean
    public DispatcherServlet dispatcherServlet() {
        return new DispatcherServlet();
    }

    @Bean
    public ServletRegistrationBean dispatcherServletRegistration() {
        ServletRegistrationBean registration = new ServletRegistrationBean(
                dispatcherServlet(), "/api/");
        registration.setName(DispatcherServletAutoConfiguration.DEFAULT_DISPATCHER_SERVLET_REGISTRATION_BEAN_NAME);
        return registration;
    }
}

or add this to your application.properties in the src\main\resources folder:

server.contextPath=/api

EDIT As of Spring Boot 2.x the property has been deprecated and should be replaced with

server.servlet.contextPath=/api

You can find more here: Spring Boot Context Root, and here: Add servlet mapping to DispatcherServlet | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28006501', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2173453/']} | jdg_70225
stackexchange | llm_judgeable_groundtruth_similarity | 10436113 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am making a text adventure in C# , and someone suggested that I use a dispatch table instead of a switch statement. Here's the switch statement code: #region Public Methods public static void Do(string aString) { if(aString == "") return; string verb = ""; string noun = ""; if (aString.IndexOf(" ") > 0) { string[] temp = aString.Split(new char[] {' '}, 2); verb = temp[0].ToLower(); noun = temp[1].ToLower(); } else { verb = aString.ToLower(); } switch(Program.GameState) { case Program.GameStates.Playing: if (IsValidInput(Commands, verb, true)) { switch(verb) //this is the switch statement { case "help": case "?": WriteCommands(); break; case "exit": case "quit": Program.GameState = Program.GameStates.Quit; break; case "move": case "go": MoveTo(noun); break; case "examine": Examine(noun); break; case "take": case "pickup": Pickup(noun); break; case "drop": case "place": Place(noun); break; case "use": Use(noun); break; case "items": case "inventory": case "inv": DisplayInventory(); break; case "attack": //attack command break; } } break; case Program.GameStates.Battle: if(IsValidInput(BattleCommands, verb, true)) { switch(verb) //this is the other switch statement { case "attack": //attack command break; case "flee": case "escape": //flee command break; case "use": //use command break; case "items": case "inventory": case "inv": //items command break; } } break; } } #endregion How do I refactor this to use a dispatch table?
Now provide the response and nothing else.
| The simplest way would be to use a dictionary of delegates. For example:

Dictionary<string, Action> dispatch = new Dictionary<string, Action>();
dispatch["help"] = new Action(() => Console.WriteLine("Hello"));
dispatch["dosomething"] = new Action(() =>
{
    // Do something else
    Console.WriteLine("Do Something");
});

// Call the 'help' command
dispatch["help"]();

For multiple different parameters it might be simplest to use a base Delegate and use DynamicInvoke:

Dictionary<string, Delegate> dispatch = new Dictionary<string, Delegate>();
dispatch["help"] = new Action(() => Console.WriteLine("Hello"));
dispatch["dosomething"] = new Action<string>(s => Console.WriteLine(s));

dispatch["help"].DynamicInvoke();
dispatch["dosomething"].DynamicInvoke("World");

And if using .NET 4 you can also use dynamic types to resolve at run time, which reduces the clutter of DynamicInvoke slightly:

Dictionary<string, dynamic> dispatch = new Dictionary<string, dynamic>();
dispatch["help"] = new Action(() => Console.WriteLine("Hello"));
dispatch["dosomething"] = new Action<string>(s => Console.WriteLine(s));

dispatch["help"]();
dispatch["dosomething"]("World");

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10436113', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1148973/']} | jdg_70226
stackexchange | llm_judgeable_groundtruth_similarity | 42858199 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to have a vimeo video shown full width in web page. This is how it looks now: As you can see the black is full width but not the video. It should be full width, no controls shown, play automatic and play in a loop. My code looks like this now: <iframe src="https://player.vimeo.com/video/208176323?autoplay=1&loop=1&background=1" width="100%" height="500px" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe> The client has vimeo plus but not vimeo pro. Can someone help me with this. UPDATE: I've changed my code to this: <style> .embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; height: auto; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style><div class='embed-container'><iframe src='https://player.vimeo.com/video/208791851?autoplay=1&loop=1&background=1' frameborder='0' webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe></div> But I still have black border on bottom and top. I've created a jsfiddle where you can also see this: https://jsfiddle.net/07fkfwz3/ . And the video that you can see here doesn't have any borders.
Now provide the response and nothing else.
| The magic padding number you create for the container needs to match the aspect ratio of the video. If you inspect the video on vimeo, the res is 1296x540. To get the aspect ratio percentage, divide 540 / 1296 * 100% = 41.66666667% padding. Here's a fiddle since the video doesn't seem to play well in the SO sandbox. https://jsfiddle.net/mcoker/p7q6x4d5/1/ .embed-container { --video--width: 1296; --video--height: 540; position: relative; padding-bottom: calc(var(--video--height) / var(--video--width) * 100%); /* 41.66666667% */ overflow: hidden; max-width: 100%; background: black;}.embed-container iframe,.embed-container object,.embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%;} <div class='embed-container'> <iframe src='https://player.vimeo.com/video/208791851?autoplay=1&loop=1&background=1' frameborder='0' webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe></div> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42858199', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1775531/']} | jdg_70227 |
stackexchange | llm_judgeable_groundtruth_similarity | 43208012 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My Vue component is like this : <template> <div> <div class="panel-group"v-for="item in list"> <div class="col-md-8"> <small> Total: <b>{{ item.total }}</b> </small> </div> </div> </div></template><script> export default { ... computed: { list: function() { return this.$store.state.transaction.list }, ... } }</script> The result of {{ item.total }} is 26000000 But I want format it to be like this : 26.000.000,00 In jquery or javascript, I can do it But, How to do it in vue component?
Now provide the response and nothing else.
| UPDATE: I suggest using a solution with filters, provided by @Jess. I would write a method for that, and then wherever you need to format a price you can just put the method in the template and pass the value down:

methods: {
  formatPrice(value) {
    let val = (value / 1).toFixed(2).replace('.', ',')
    return val.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ".")
  }
}

And then in the template:

<template>
  <div>
    <div class="panel-group" v-for="item in list">
      <div class="col-md-8">
        <small>
          Total: <b>{{ formatPrice(item.total) }}</b>
        </small>
      </div>
    </div>
  </div>
</template>

BTW - I didn't put too much care into the replacement regular expression; it could be improved. A filter version looks like this:

Vue.filter('tableCurrency', num => {
  if (!num) {
    return '0.00';
  }
  const number = (num / 1).toFixed(2).replace(',', '.');
  return number.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ',');
});

| {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/43208012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5839900/']} | jdg_70228
stackexchange | llm_judgeable_groundtruth_similarity | 3663585 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I try to run tests from TextMate in Rails3 i get an error that I trace back to boot.rb. When I try to run boot.rb via textmate I get this error: LoadError: no such file to load — bundlermethod gem_original_require in custom_require.rb at line 31method require in custom_require.rb at line 31at top level in boot.rb at line 4 This is even in a brand new rails project. I am able to run the same boot.rb file from terminal calling ruby <path_to>/boot.rb I can verify that my TM_RUBY variable is the same as when i call which ruby from the command line. Do you have any clue why I might be getting this error?
Now provide the response and nothing else.
| I ran into this same problem with TextMate and RVM. What you need to do: Create a wrapper script for the gemset you want to use, using this RVM command: rvm wrapper ree@rails3 textmate This will add a new alias to your RVM install called textmate_ruby . As you can probably tell this assumes you're using Ruby Enterprise (ree) and a gemset called rails3, but any RVM string will work here. Open TextMate's preferences window, go to Advanced > Shell Variables. You need to create (or update) the TM_RUBY shell variable to the following: /path/to/your/.rvm/bin/textmate_ruby If your RVM is installed in your user dir (like mine), that'll be: /Users/[YOUR USER NAME HERE]/.rvm/bin/textmate_ruby (Optional) You may also need/want to set the RUBYOPT shell variable, to instruct Ruby to load rubygems. This may be necessary to make certain bundles (like RSpec) work. Just set the value to rubygems . This should be all you need. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3663585', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/147390/']} | jdg_70229 |
stackexchange | llm_judgeable_groundtruth_similarity | 5700520 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I dump one user with all his associations (comments, posts etc) from one database (development, sqlite) to insert it another (production, mysql). Should I dump it into yaml or to sql or something else?
Now provide the response and nothing else.
| Ok. God save the YAML. I've used YAML dumping into a file from development and loading it in my production. There was a hack with the ids, which change because they are auto_increment.

development

user = User.find X
posts = user.posts
comments = user.comments
...
File.open("user.yml", "w")     { |f| f << YAML::dump(user) }
File.open("comments.yml", "w") { |f| f << YAML::dump(comments) }
File.open("posts.yml", "w")    { |f| f << YAML::dump(posts) }
...

production

user = YAML::load_file("user.yml")
posts = YAML::load_file("posts.yml")
comments = YAML::load_file("comments.yml")

new_user = user.clone.save # we should clone our object, because it doesn't exist yet
posts.each do |p|
  post = p.clone
  post.user = new_user
  post.save
end
...

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5700520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/298624/']} | jdg_70230
stackexchange | llm_judgeable_groundtruth_similarity | 45464 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What are the implications of not using interfaces when defining policies on fortigate firewalls? i.e Policy would be edit 1 set srcintf "any" set dstintf "any" set srcaddr "ip-192.168.1.10" set dstaddr "ip-172.16.1.254 set status enable set schedule "always" set service "TCP-8080" set logtraffic disablenext Is there any benefits of specifying the interface? My understanding is the fortigate firewalls apply RPF checks against traffic when it enters the firewall. For example if packet from 192.168.1.0/24 via port1, It would expect a route in the routing table to exist for that prefix if not RPF would block it. We are looking at automating how we populate firewall rules and removing the interface logic would make it a a lot simpler to implement. Any thoughts or experiences would be good.
Now provide the response and nothing else.
| No, this would not work, each VLAN is a separate broadcast domain. Each VLAN has its own MAC address table which the switch uses to forward traffic at L2 between the ports that are in the same VLAN. Each VLAN also only contains the ports that are assigned to that VLAN. Firstly the ARP from the PC in VLAN 100 would not be forwarded to the PC in VLAN 200 (separate broadcast domain), so the PC on 192.168.100.2 would not be able to resolve the MAC address of the PC at 192.168.100.3 Secondly, even if you created a static ARP on the first PC for the second PC, VLAN 100 does not contain the MAC address of the second PC. If you tried to create a static MAC entry in the MAC table of VLAN 100 for the second PC you could not as the port the second PC is connected to is not in VLAN 100. In order to communicate between VLANs, you need to configure routing. This can be done on a L3 switch or router. You would of course not be able to route between two networks with the same network address (on the same router) as Cisco routers do not allow you to configure the same network on two ports in the same VRF (Juniper routers may allow two interfaces on the same network in the same VRF, but this is not the norm). The only way to do this would be to physically connect a port from VLAN 100 into a port from VLAN 200, to bridge the two VLANs together. Of course, you would never do any of this in real life. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/45464', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/14132/']} | jdg_70231 |
stackexchange | llm_judgeable_groundtruth_similarity | 32836862 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
// directoryPath is a URL from another VC@IBAction func saveButtonTapped(sender: AnyObject) { let directoryPath = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)[0] as NSURL let urlString : NSURL = directoryPath.URLByAppendingPathComponent("Image1.png") print("Image path : \(urlString)") if !NSFileManager.defaultManager().fileExistsAtPath(directoryPath.absoluteString) { UIImageJPEGRepresentation(self.image, 1.0)!.writeToFile(urlString.absoluteString, atomically: true) displayImageAdded.text = "Image Added Successfully" } else { displayImageAdded.text = "Image Not Added" print("image \(image))") } } I am not getting any error but the Image is not getting saved in the document.
Now provide the response and nothing else.
| The problem there is that you are checking if the folder does not exist, but you should check if the file exists. Another issue in your code is that you need to use url.path instead of url.absoluteString. You are also saving a jpeg image using a "png" file extension; you should use "jpg". edit/update: Swift 4.2 or later

do {
    // get the documents directory url
    let documentsDirectory = try FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
    print("documentsDirectory:", documentsDirectory.path)
    // choose a name for your image
    let fileName = "image.jpg"
    // create the destination file url to save your image
    let fileURL = documentsDirectory.appendingPathComponent(fileName)
    // get your UIImage jpeg data representation and check if the destination file url already exists
    if let data = image.jpegData(compressionQuality: 1),
       !FileManager.default.fileExists(atPath: fileURL.path) {
        // writes the image data to disk
        try data.write(to: fileURL)
        print("file saved")
    }
} catch {
    print("error:", error)
}

To write the image at the destination regardless of whether the image already exists you can use the .atomic option; if you would like to avoid overwriting an existing image you can use withoutOverwriting instead:

try data.write(to: fileURL, options: [.atomic])

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/32836862', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5142314/']} | jdg_70232
stackexchange | llm_judgeable_groundtruth_similarity | 5087549 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an iframe using the jQuery 1.4.2 script. The same iframe is injected into both http and https sites. The jQuery script is included in the main HTML file as a relative path (e.g., /scripts/jquery-1.4.2.min.js). When an AJAX call is made, Internet Explorer denies access. The AJAX is calling on another subdomain, but it's using the right protocol. All other browsers work but Internet Explorer gives the following error: SCRIPT5: Access is denied. jquery-1.4.2.min.js, line 127 character 344 I heard this error is from cross-domain AJAX calls. But why is IE the only one giving me crap? Is there an IE solution? Also, this is my AJAX: $.ajax({ url: thisURL, dataType: "json", data: {cmd : 'getMessage', uurl: urlVar, t: Math.random()}, success: function(ret){ callback(ret) } });
Now provide the response and nothing else.
| IE requires you to use XDomainRequest instead of XHR for cross-site requests; you can try something like...

if ($.browser.msie && window.XDomainRequest) {
    // Use Microsoft XDR
    var xdr = new XDomainRequest();
    xdr.open("get", url);
    xdr.onload = function() {
        // XDomainRequest doesn't provide responseXml, so if you need it:
        var dom = new ActiveXObject("Microsoft.XMLDOM");
        dom.async = false;
        dom.loadXML(xdr.responseText);
    };
    xdr.send();
} else {
    // your ajax request here
    $.ajax({
        url: thisURL,
        dataType: "json",
        data: {cmd : 'getMessage', uurl: urlVar, t: Math.random()},
        success: function(ret){
            callback(ret)
        }
    });
}

Reference: http://forum.jquery.com/topic/cross-domain-ajax-and-ie

Not sure whether it fits your scenario:

xdr = new XDomainRequest();
xdr.onload = function(){
    alert(xdr.responseText);
};
xdr.open("GET", thisUrl); // thisUrl -> your cross domain request URL
// pass your data here
xdr.send([data]);

You can find some more guidance here. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5087549', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/377856/']} | jdg_70233
stackexchange | llm_judgeable_groundtruth_similarity | 264344 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a large file which has special characters in it. There is a multi line code there, that I want to replace with sed . This: text = "\ ------ ------\n\n\ This message was automatically generated by email software\n\ The delivery of your message has not been affected.\n\n\ ------ ------\n\n" Needs to turn into this: text = "" I tried the following code, but no luck: sed -i '/ text = "*/ {N; s/ text = .*affected.\./ text = ""/g}' /etc/exim.conf It does not replace anything and does not display any error messages I have been playing with it, but everything I try does not work.
Now provide the response and nothing else.
| Perl to the rescue: perl -i~ -0777 -pe 's/text = "[^"]+"/text = ""/g' input-file -i~ will edit the file "in place", leaving a backup copy -0777 reads the whole file at once, not line by line The substitution s/// works similarly as in sed (i.e. it matches text = " followed by anything but double quotes many times up to a double quote), but in this case, it works on the whole file. | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/264344', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/54880/']} | jdg_70234 |
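For comparison, the same whole-file, multi-line substitution can be sketched in Python (the file path and backup handling here are illustrative assumptions, not part of the answer above):

import re
from pathlib import Path

path = Path('/etc/exim.conf')              # hypothetical target file
text = path.read_text()

# Mirrors the Perl s/text = "[^"]+"/text = ""/g above: the negated class
# [^"]+ also matches newlines, so the whole multi-line quoted block is replaced.
new_text = re.sub(r'text = "[^"]+"', 'text = ""', text)

path.with_name(path.name + '~').write_text(text)   # keep a backup, like perl -i~
path.write_text(new_text)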
stackexchange | llm_judgeable_groundtruth_similarity | 290347 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
find . -name "*.[hc]|*.cc" The above doesn't work,why? UPDATE How do I find these 3 kinds of files with a single pattern?
Now provide the response and nothing else.
| It doesn't work because -name expects a shell pattern. You can use -regex instead, or just assemble your pattern like so: find . -name '*.c' -o -name '*.h' -o -name '*.cc' Edit To do this with a single pattern you'll want a regex: find . -regextype posix-extended -regex '.*\.(c|h|cc)' You could do it with the default emacs regexes, I'm sure; I don't use them or know the main differences so I picked the one I know. If you really want to use a single shellglob, you're out of luck: the only syntax for multiple strings is {a,b} , and this is not supported by find . But there's nothing wrong with the sort of command building/chaining in my first example. It's how find is intended to be used. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/290347', 'https://serverfault.com', 'https://serverfault.com/users/77159/']} | jdg_70235 |
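If you would rather do the same multi-extension match from a script instead of find, a rough Python equivalent looks like this (the directory and extension set are assumptions for the sketch):

from pathlib import Path

exts = {'.c', '.h', '.cc'}
for p in Path('.').rglob('*'):
    if p.is_file() and p.suffix in exts:
        print(p)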
stackexchange | llm_judgeable_groundtruth_similarity | 6308132 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a simple MVC 3 application. I want the site to automatically redirect the user somewhere else after they have logged out and been on the logout page for a few seconds. I would like this implemented into the View, but I cannot figure out how to work MVC conventions into something to do this. I know I can use this: <META HTTP-EQUIV="Refresh" CONTENT="5;URL=/Index"> But that means I have to specify a URL, [or if its just /Index it will append it to the current URL, meaning it will call the Action of the Controller (info in brackets is incorrect)]. The only problem is this is my Account controller and I don't want to redirect them (the users) to one of its Actions. I want them redirected to an Action in my Home controller, preferably the Index Action. I imagine this can be done with a new Action in my Account controller, I link there and all that Action does is redirect to a new View. But that seems like a waste of code. Can I specify directly the controller and action I want in order to do this? EDIT: Solved it myself. What I said about it appeneding /Index to the current URL was wrong, I can specify the Controller Action there, used: <META HTTP-EQUIV="Refresh" CONTENT="5;URL=/Home/Index"> Worked so far and I didn't have to add the localhost info. This gives me the link localhost:xxxxx/Home/Index What confused me is if you use this: <META HTTP-EQUIV="Refresh" CONTENT="5;URL=~/Home/Index"> The link becomes localhost:xxxxx/Account/~/Home/Index which is really odd since it adds the ~ to the URL link, which normally just means copy the contents beforehand and append to. However it looks like the presence of ~ still means copy the contents beforehand and append everything after, we just also append the ~ this time too.... The Account part of the link is there since the View was called from the Account controller and is in the Account Controller's View folder.
Now provide the response and nothing else.
| You could use the Url.Action helper which will take care of generating the proper url based on your routes setup. Example with Razor: <META HTTP-EQUIV="Refresh" CONTENT="5;URL=@(Url.Action("Index", "Home"))"> and with WebForms: <META HTTP-EQUIV="Refresh" CONTENT="5;URL=<%= Url.Action("Index", "Home") %>"> Alternatively you could use javascript to perform the redirect instead of a meta tag: <script type="text/javascript"> window.setTimeout(function() { window.location.href = '@Url.Action("Index", "Home")'; }, 5000);</script> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6308132', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/768028/']} | jdg_70236 |
stackexchange | llm_judgeable_groundtruth_similarity | 1685746 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I believe I'm overthinking this or otherwise confused but I believe that the method to solve this would be $2^n$ where n is the length of the bytes? So in this particular case it would be $2^8$ equal 256 possible. But then I feel like that isn't right and I'm mixed up. What I thought about is that there is 4 possible ways to have an even number f zeros (i.e. 2 zeros, 4 zeros, 6 zeros, or 8 zeros). Any insight would be awesome as I'm confused...
Now provide the response and nothing else.
| Any $7$-bit word can be completed to an $8$-bit word with an even number of $0$'s in exactly one way by choosing the eighth bit suitably. So the number of $8$-bit words with an even number of $0$'s is the same as the number of $7$-bit words. This is $2^7$. | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/1685746', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/278919/']} | jdg_70237 |
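The same count also follows from the binomial theorem (here $k$ is the number of $0$'s, and $\binom{8}{k}$ counts the words with exactly $k$ of them): $$\sum_{k\ \text{even}} \binom{8}{k} = \frac{(1+1)^8 + (1-1)^8}{2} = \frac{2^8}{2} = 2^7 = 128.$$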
stackexchange | llm_judgeable_groundtruth_similarity | 17966 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can we sum up $\sin$ and $\cos$ series when the angles are in arithmetic progression? For example here is the sum of $\cos$ series: $$\sum_{k=0}^{n-1}\cos (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \cos \biggl( \frac{ 2 a + (n-1)\cdot d}{2}\biggr)$$ There is a slight difference in case of $\sin$, which is:$$\sum_{k=0}^{n-1}\sin (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \sin\biggl( \frac{2 a + (n-1)\cdot d}{2}\biggr)$$ How do we prove the above two identities?
Now provide the response and nothing else.
| Let $$ S = \sin{(a)} + \sin{(a+d)} + \cdots + \sin{(a+nd)}$$ Now multiply both sides by $\sin\frac{d}{2}$ . Then you have $$S \times \sin\Bigl(\frac{d}{2}\Bigr) = \sin{(a)}\sin\Bigl(\frac{d}{2}\Bigr) + \sin{(a+d)}\cdot\sin\Bigl(\frac{d}{2}\Bigr) + \cdots + \sin{(a+nd)}\cdot\sin\Bigl(\frac{d}{2}\Bigr)$$ Now, note that $$\sin(a)\sin\Bigl(\frac{d}{2}\Bigr) = \frac{1}{2} \cdot \biggl[ \cos\Bigl(a-\frac{d}{2}\Bigr) - \cos\Bigl(a+\frac{d}{2}\Bigr)\biggr]$$ and $$\sin(a+d) \cdot \sin\Bigl(\frac{d}{2}\Bigr) = \frac{1}{2} \cdot \biggl[ \cos\Bigl(a + d -\frac{d}{2}\Bigr) - \cos\Bigl(a+d+\frac{d}{2}\Bigr) \biggr]$$ Then by doing the same thing you will have some terms cancelled out. You can easily see which terms are going to get Cancelled. Proceed and you should be able to get the formula. I tried this by seeing this post. This has been worked for the case when $d=a$ . Just take a look here: https://web.archive.org/web/20180927043531/http://mathforum.org/library/drmath/view/72777.html | {} | {'log_upvote_score': 8, 'links': ['https://math.stackexchange.com/questions/17966', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2109/']} | jdg_70238 |
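Carrying the cancellation through to the end: with $S=\sum_{k=0}^{n}\sin(a+kd)$, only the first and last cosines survive, so $$S\,\sin\tfrac{d}{2} = \tfrac{1}{2}\Bigl[\cos\bigl(a-\tfrac{d}{2}\bigr)-\cos\bigl(a+nd+\tfrac{d}{2}\bigr)\Bigr] = \sin\Bigl(\tfrac{(n+1)d}{2}\Bigr)\sin\Bigl(a+\tfrac{nd}{2}\Bigr),$$ and dividing by $\sin\tfrac{d}{2}$ gives the closed form. With $n$ terms (i.e. $k=0,\dots,n-1$) this is exactly the formula stated in the question, and the cosine sum follows in the same way.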
stackexchange | llm_judgeable_groundtruth_similarity | 1662183 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is this the correct (or even a valid way) to use emums in Objective-C? i.e. The menuItem is not used but just defines a list add=1, load=2, list=3 etc. enum menuItems { add = 1 , save , load , list , removeAll , remove , quit };int optionSelect;scanf("%d", &optionSelect);switch (optionSelect) { case add: //... break;} cheers gary
Now provide the response and nothing else.
| If you want to give a semantic meaning to the enumeration, you can consider defining a customized type and declaring the variable "optionSelect" as a variable of that type! In code...

typedef enum menuItems {
    add = 1,
    save,
    load,
    list,
    removeAll,
    remove,
    quit
} MenuItem;

MenuItem optionSelect;
scanf("%d", &optionSelect);

switch (optionSelect) {
    case add:
        ...
        break;
    .
    .
    .
}

That is almost the same thing you have written, but from the side of the developer you give a particular meaning to the variable "optionSelect", not just a simple int! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1662183', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/164216/']} | jdg_70239
stackexchange | llm_judgeable_groundtruth_similarity | 1994676 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to detour EndScene from an arbitrary DirectX 9 application to create a small overlay. As an example, you could take the frame counter overlay of FRAPS, which is shown in games when activated. I know the following methods to do this: Creating a new d3d9.dll , which is then copied to the games path. Since the current folder is searched first, before going to system32 etc., my modified DLL gets loaded, executing my additional code. Downside: You have to put it there before you start the game. Same as the first method, but replacing the DLL in system32 directly. Downside: You cannot add game specific code. You cannot exclude applications where you don't want your DLL to be loaded. Getting the EndScene offset directly from the DLL using tools like IDA Pro 4.9 Free. Since the DLL gets loaded as is, you can just add this offset to the DLL starting address, when it is mapped to the game, to get the actual offset, and then hook it. Downside: The offset is not the same on every system. Hooking Direct3DCreate9 to get the D3D9, then hooking D3D9->CreateDevice to get the device pointer, and then hooking Device->EndScene through the virtual table. Downside: The DLL cannot be injected, when the process is already running. You have to start the process with the CREATE_SUSPENDED flag to hook the initial Direct3DCreate9 . Creating a new Device in a new window, as soon as the DLL gets injected. Then, getting the EndScene offset from this device and hooking it, resulting in a hook for the device which is used by the game. Downside: as of some information I have read, creating a second device may interfere with the existing device, and it may bug with windowed vs. fullscreen mode etc. Same as the third method. However, you'll do a pattern scan to get EndScene . Downside: doesn't look that reliable. How can I hook EndScene from an injected DLL, which may be loaded when the game is already running, without having to deal with different d3d9.dll 's on other systems, and with a method which is reliable? How does FRAPS for example perform it's DirectX hooks?The DLL should not apply to all games, just to specific processes where I inject it via CreateRemoteThread .
Now provide the response and nothing else.
| You install a system-wide hook (SetWindowsHookEx). With this done, you get loaded into every process. Now when the hook is called, you look for a loaded d3d9.dll. If one is loaded, you create a temporary D3D9 object and walk the vtable to get the address of the EndScene method. Then you can patch the EndScene call with your own method. (Replace the first instruction in EndScene by a call to your method. When you are done, you have to patch the call back to call the original EndScene method, and then reinstall your patch.) This is the way FRAPS does it. ( Link ) You can find a function address from the vtable of an interface. So you can do the following (pseudo-code):

IDirect3DDevice9* pTempDev = ...;        // temporary device, only used to read the vtable
const int EndSceneIndex = 26; // (?)
typedef HRESULT (APIENTRY *EndSceneFunc)(IDirect3DDevice9*);

// The first pointer-sized field of a COM object is its vtable pointer;
// dereference it, then index into the table.
void** pVtable = *reinterpret_cast<void***>(pTempDev);
EndSceneFunc pEndScene = reinterpret_cast<EndSceneFunc>(pVtable[EndSceneIndex]);

pEndScene now contains a pointer to the function itself. We can now either patch all call-sites or we can patch the function itself. Beware that this all depends on knowledge of the implementation of COM interfaces in Windows. But this works on all Windows versions (either 32 or 64 bit, not both at the same time). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1994676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151706/']} | jdg_70240