source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 3424696 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to construct queries dynamically, and my next target is add JOIN clauses (I don't know how can I use the API). By now, for example, this code work for me : ...Class baseClass; ...CriteriaBuilder cb = JpaHandle.get().getCriteriaBuilder();CriteriaQuery cq = cb.createQuery(this.baseClass);Root entity_ = cq.from(this.baseClass); Predicate restrictions = null;...restrictions = cb.conjunction();restrictions = cb.and(restrictions, entity_.get("id").in(this.listId));...cq.where(restrictions);...Query qry = JpaHandle.get().createQuery(cq); (Note : JpaHandle is from wicket-JPA implementation) My desire is add JOIN clause (as generical as possible)! I have the particular annotations in the classes (this.baseClass) For example : @ManyToOne(fetch = FetchType.LAZY)@JoinColumn(name = "assay_id", nullable = false) So,Is there a way to something like this in standard JPA ? (Note : this don't compile) Here a practical fail aproaches : ...Join<Experiment,Assay> experimentAssays = entity_.join( entity_.get("assay_id") ); Or like that : ...CriteriaQuery<Customer> q = cb.createQuery(Customer.class);Root<Customer> c = q.from(Customer.class);SetJoin<Customer, PurchaseOrder> o = c.join(Customer_.orders); For me, if it could be more generical as possible it will be great... : ...Join joinClause = entity_join(entity_.get("assay_id"), entity2_.get("id")); Of course, I have the particular annotations in the classes (this.baseClass) Thank you for your time. I'll appreciate all kind of comments!
Now provide the response and nothing else.
| Maybe the following extract from the Chapter 23 - Using the Criteria API to Create Queries of the Java EE 6 tutorial will throw some light (actually, I suggest reading the whole Chapter 23): Querying Relationships Using Joins For queries that navigate to related entity classes, the query must define a join to the related entity by calling one of the From.join methods on the query root object, or another join object. The join methods are similar to the JOIN keyword in JPQL. The target of the join uses the Metamodel class of type EntityType<T> to specify the persistent field or property of the joined entity. The join methods return an object of type Join<X, Y> , where X is the source entity and Y is the target of the join. Example 23-10 Joining a Query CriteriaQuery<Pet> cq = cb.createQuery(Pet.class);Metamodel m = em.getMetamodel();EntityType<Pet> Pet_ = m.entity(Pet.class);Root<Pet> pet = cq.from(Pet.class);Join<Pet, Owner> owner = pet.join(Pet_.owners); Joins can be chained together to navigate to related entities of the target entity without having to create a Join<X, Y> instance for each join. Example 23-11 Chaining Joins Together in a Query CriteriaQuery<Pet> cq = cb.createQuery(Pet.class);Metamodel m = em.getMetamodel();EntityType<Pet> Pet_ = m.entity(Pet.class);EntityType<Owner> Owner_ = m.entity(Owner.class);Root<Pet> pet = cq.from(Pet.class);Join<Owner, Address> address = cq.join(Pet_.owners).join(Owner_.addresses); That being said, I have some additional remarks: First, the following line in your code: Root entity_ = cq.from(this.baseClass); Makes me think that you somehow missed the Static Metamodel Classes part. Metamodel classes such as Pet_ in the quoted example are used to describe the meta information of a persistent class. They are typically generated using an annotation processor ( canonical metamodel classes ) or can be written by the developer ( non-canonical metamodel ). But your syntax looks weird, I think you are trying to mimic something that you missed. Second, I really think you should forget this assay_id foreign key, you're on the wrong path here. You really need to start to think object and association, not tables and columns. Third, I'm not really sure to understand what you mean exactly by adding a JOIN clause as generical as possible and what your object model looks like, since you didn't provide it (see previous point). It's thus just impossible to answer your question more precisely. To sum up, I think you need to read a bit more about JPA 2.0 Criteria and Metamodel API and I warmly recommend the resources below as a starting point. See also the section 6.2.1 Static Metamodel Classes in the JPA 2.0 specification Dynamic, typesafe queries in JPA 2.0 Using the Criteria API and Metamodel API to Create Basic Type-Safe Queries Related question How to generate JPA 2.0 metamodel? | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3424696', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/413145/']} | jdg_70441 |
stackexchange | llm_judgeable_groundtruth_similarity | 64171607 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I set the layout_constraintHorizontal_bias prop on a Composable that is in a Constraint Layout? Here is the XML code: <TextView ... tool:layout_constraintStart_toStartOf="parent" tool:layout_constraintEnd_toEndOf="parent" tool:layout_constraintWidth_max="wrap" tool:layout_constraintHorizontal_bias="0"/> Here is how my Jetpack Compose code looks right now: ConstraintLayout(modifier = modifier.fillMaxSize()) { val (button1, button2) = createRefs() Button( onClick = {}, modifier = Modifier.constrainAs(button1) { top.linkTo(parent.top, margin = 16.dp) } ) { Text(text = "Button 1") } Button( onClick = {}, modifier = Modifier.constrainAs(button2) { top.linkTo(button1.bottom, margin = 4.dp) start.linkTo(button1.end, margin = 20.dp) end.linkTo(parent.end, margin = 20.dp) width = Dimension.preferredWrapContent } ) { Text(text = "Button 2") }} So my question is how do I set the horizontal bias of Button 2 to be 0?
Now provide the response and nothing else.
| You have to use the linkTo function of the ConstrainScope which has more parameters: ConstraintLayout(modifier = modifier.fillMaxSize()) { val (button1, button2) = createRefs() Button( onClick = {}, modifier = Modifier.constrainAs(button1) { top.linkTo(parent.top, margin = 16.dp) } ) { Text(text = "Button 1") } Button( onClick = {}, modifier = Modifier.constrainAs(button2) { top.linkTo(button1.bottom, margin = 4.dp) linkTo(button1.end, parent.end, startMargin = 20.dp, endMargin = 20.dp, bias = 0F) width = Dimension.preferredWrapContent } ) { Text(text = "Button 2") }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/64171607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7091861/']} | jdg_70442 |
stackexchange | llm_judgeable_groundtruth_similarity | 12252826 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on the GPU algorithm which is supposed to do a lot of modular computations. Particularly, various operations on matrices in a finite field which in the long runreduce to primitive operations like: (a*b - c*d) mod m or (a*b + c) mod m where a,b,c and d are residues modulo m and m is a 32-bit prime. Through experimentation I learned that the performance of the algorithm is mostly limited by slow modular arithmetic because integer modulo (%) and division operations are not supported on the GPU in hardware. I appreciate if somebody can give me an idea how to realize efficient modular computations with CUDA ? To see how this is implemented on CUDA, I use the following code snippet: __global__ void mod_kernel(unsigned *gout, const unsigned *gin) {unsigned tid = threadIdx.x;unsigned a = gin[tid], b = gin[tid * 2], m = gin[tid * 3];typedef unsigned long long u64;__syncthreads();unsigned r = (unsigned)(((u64)a * (u64)b) % m);__syncthreads();gout[tid] = r;} This code is not supposed to work, I just wanted to see how modular reduction isimplemented on CUDA. When I disassemble this with cuobjdump --dump-sass (thanks njuffa for advice!), I see the following: /*0098*/ /*0xffffdc0450ee0000*/ BAR.RED.POPC RZ, RZ;/*00a0*/ /*0x1c315c4350000000*/ IMUL.U32.U32.HI R5, R3, R7;/*00a8*/ /*0x1c311c0350000000*/ IMUL.U32.U32 R4, R3, R7;/*00b0*/ /*0xfc01dde428000000*/ MOV R7, RZ;/*00b8*/ /*0xe001000750000000*/ CAL 0xf8;/*00c0*/ /*0x00000007d0000000*/ BPT.DRAIN 0x0;/*00c8*/ /*0xffffdc0450ee0000*/ BAR.RED.POPC RZ, RZ; Note that between the two calls to bar.red.popc there is a call to 0xf8 procedure which implements some sophisticated algorithm (about 50 instructions or even more). Not surpising that mod (%) operation is slow
Now provide the response and nothing else.
| Some time ago I experimented a lot with modular arithmetic on the GPU. On Fermi GPUs you can use double-precision arithmetic to avoid expensive div and mod operations. For example, modular multiplication can be done as follows: // fast truncation of double-precision to integers#define CUMP_D2I_TRUNC (double)(3ll << 51)// computes r = a + b subop c unsigned using extended precision#define VADDx(r, a, b, c, subop) \ asm volatile("vadd.u32.u32.u32." subop " %0, %1, %2, %3;" : \ "=r"(r) : "r"(a) , "r"(b), "r"(c));// computes a * b mod m; invk = (double)(1<<30) / m__device__ __forceinline__ unsigned mul_m(unsigned a, unsigned b, volatile unsigned m, volatile double invk) { unsigned hi = __umulhi(a*2, b*2); // 3 flops // 2 double instructions double rf = __uint2double_rn(hi) * invk + CUMP_D2I_TRUNC; unsigned r = (unsigned)__double2loint(rf); r = a * b - r * m; // 2 flops // can also be replaced by: VADDx(r, r, m, r, "min") // == umin(r, r + m); if((int)r < 0) r += m; return r;} However this only works for 31-bit integer modulos (if 1 bit is not critical for you)and you also need to precompute 'invk' beforehand. This gives absolute minimum of instructions I can achieve, ie.: SHL.W R2, R4, 0x1;SHL.W R8, R6, 0x1;IMUL.U32.U32 R4, R4, R6;IMUL.U32.U32.HI R8, R2, R8;I2F.F64.U32 R8, R8;DFMA R2, R2, R8, R10;IMAD.U32.U32 R4, -R12, R2, R4;ISETP.GE.AND P0, pt, R4, RZ, pt;@!P0 IADD R4, R12, R4; For description of the algorithm, you can have a look at my paper: gpu_resultants . Other operations like (x y - z w) mod m are also explained there. Out of curiosity, I compared the performance of the resultant algorithm using your modular multiplication: unsigned r = (unsigned)(((u64)a * (u64)b) % m); against the optimized version with mul_m. Modular arithmetic with default % operation: low_deg: 11; high_deg: 2481; bits: 10227nmods: 330; n_real_pts: 2482; npts: 2495res time: 5755.357910 ms; mod_inv time: 0.907008 ms; interp time: 856.015015 ms; CRA time: 44.065857 msGPU time elapsed: 6659.405273 ms; Modular arithmetic with mul_m: low_deg: 11; high_deg: 2481; bits: 10227nmods: 330; n_real_pts: 2482; npts: 2495res time: 1100.124756 ms; mod_inv time: 0.192608 ms; interp time: 220.615143 ms; CRA time: 10.376352 msGPU time elapsed: 1334.742310 ms; So on the average it is about 5x faster. Note also that, you might not see a speed-up if you just evaluate raw arithmetic performance using a kernel with a bunch of mul_mod operations (like saxpy example). But in real applications with control logic, synchronization barriers etc. the speed-up is very noticeable. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12252826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_70443 |
stackexchange | llm_judgeable_groundtruth_similarity | 7643391 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to show a user-oriented choice list in a form, but I can't manage to access the container to get the current User. I don't see how to get it in the Repository nor in the Type. Any idea?
Now provide the response and nothing else.
| Let say you created a FormType class. You don't know how to pass the container in this object. Create now your own type extended from FormType and pass the container through the constructor class MyType extends FormType{ private $container; public function __construct(ContainerInterface $container) { $this->container = $container; }} In your config.yml, define your new type mytype: class: ...\MyType arguments: ["@service_container"] tags: - { name: form.type } Now, use MyType instead of FormType in all your controllers | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7643391', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/822459/']} | jdg_70444 |
stackexchange | llm_judgeable_groundtruth_similarity | 1415370 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do we prove that there is always a unique parabola (with equation $y=ax^2+bx+c$) that passes through 3 distinct points $P_1 (p_1,q_1), P_2 (p_2,q_2), P_3(p_3,q_3)$ ? If I choose to use matrices and row reduction, how do I deal with the fact that the coordinates might be zero?
Now provide the response and nothing else.
| Here is a derivation of a formula for $E(X^2)$. The calculation is excessively informal. For "nice" density functions it is not difficult to justify. A similar calculation gives us $E(X)$. Then the variance is $E(X^2)-(E(X))^2$. We find $\int_0^\infty x^2f(x)\,dx$ by integration by parts. Let $u=x^2$ and $dv=f(x)\,dx$. Then $du=2x\,dx$ and we can take $v=F(x)-1$. (Here we are being a little tricky.) Then our integral is$$\left. x^2(1-F(x))\right|_0^\infty +\int_0^\infty 2x(1-F(x))\,dx.$$The first part vanishes at both ends. So we find that $$E(X^2)=\int_0^\infty 2x(1-F(x))\,dx.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1415370', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/265665/']} | jdg_70445 |
stackexchange | llm_judgeable_groundtruth_similarity | 36782687 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone please help me with the below Mock object.I want to write a mock test case for ServiceImpl class. I want to mock OrderIF interface: public interface OrderIF{ List<Order> ordersFor(String type);} The implementation of service is: public class ServiceImpl implements Service { private List <Order> orders ; private OrderIF orderif ; // this is 3rd party interface public int getval(String type) { //some code // this returns a list of objects (orders) orders = orderif.ordersFor(type); // some code return orders.get(0) }} My code give NullPoinerException: public class ServiceImplTest { private List <Order> ll ; private service reqService ; @InjectMocks private orderIF order; @Before public void setUp() throws Exception { ll = new ArrayList<Order> (); ll.add(new Order("Buy" , 11 , "USD" )); ll.add(new Order("Sell" , 22 , "USD" )); reqService = spy(new ServiceImpl()); } @Test public void test() { String type= "USD" ; when(order.ordersFor(type)).thenReturn(ll); q = reqService.getval(type); assertTrue(q.get().ask == 232.75); }}
Now provide the response and nothing else.
| @InjectMocks will not instantiate or mock your class. This annotation is used for injecting mocks into this field. If you want to test serviceImpl you will need to mock in this way: @Mockprivate OrderIF order;@InjectMocksprivate Service reqService = new ServiceImpl(); To make it work you either need to use runner or MockitoAnnotations.initMocks(this); in @Before method. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36782687', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5609463/']} | jdg_70446 |
stackexchange | llm_judgeable_groundtruth_similarity | 36360541 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The question is to develop a context-free grammar for the language containing all strings having more As than Bs. I can't think of a logical solution. Is there a way to approach such problems, and what can help me approach them better? Can someone suggest a logical way to analyse such grammar problems?
Now provide the response and nothing else.
| The following grammar generates all strings over {a,b} that have more a 's than b 's. I denote by eps the empty string. S -> Aa | RS | SRAA -> Aa | epsR -> RR | aRb | bRa | eps It's obvious it always generates more a 's than b 's. It's less obvious it generates all possible strings over {a,b} that have more a 's than b 's The production R -> RR | aRb | bRa | eps generates all balanced strings (this is easy to see), and the production A -> Aa generates the language a* (i.e. strings with zero or more a 's). Here's the logic behind the grammar. Notice that if w=c1,c2,c3,...,cn is a string over {a,b} with more a 's than b 's then we can always decompose it into a concatenation of balanced strings (i.e. equal number of a 's and b 's, which includes the empty string) and strings of the form a+ . For example, ababaaaba = abab (can be generated by R ), aaa (can be generated by A ), ba (can be generated by R ). Now notice that the production S -> Aa | RS | SRA generates precisely strings of this form. It suffices to verify that S covers the following cases (because every other case can be covered by breaking into such subcases, as you should verify): [a][balanced] : use S => SRA => AaR . [balanced][a] : use S => RS => RA => RAa . [balanced][a]balanced] : use S => SRA => RSRA => RAaR . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36360541', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5301167/']} | jdg_70447 |
stackexchange | llm_judgeable_groundtruth_similarity | 4884390 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I try to redirect http://mydomain.com to http://www.mydomain.com I add this to my htaccess file, but it not work : RewriteCond %{HTTP_HOST} ^mydomain\.fr [NC] RewriteRule ^(.*)$ http://www.mydomain.fr/$1 [L,R=301] This is the complete file : Options +FollowSymLinks +ExecCGI<IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{HTTP_HOST} ^mydomain\.fr [NC] RewriteRule ^(.*)$ http://www.mydomain.fr/$1 [L,R=301] # uncomment the following line, if you are having trouble # getting no_script_name to work #RewriteBase / # we skip all files with .something #RewriteCond %{REQUEST_URI} \..+$ #RewriteCond %{REQUEST_URI} !\.html$ #RewriteRule .* - [L] # we check if the .html version is here (caching) RewriteRule ^$ index.html [QSA] RewriteRule ^([^.]+)$ $1.html [QSA] RewriteCond %{REQUEST_FILENAME} !-f # no, so we redirect to our front web controller RewriteRule ^(.*)$ index.php [QSA,L]</IfModule>
Now provide the response and nothing else.
| Simply take off the Namespace = : [XmlRoot("register-account"), XmlType("register-account")]public class RegisterAccountResponse {...} since your xml doesn't seem to be in an xml-namespace. Also, [Serializable] isn't used by XmlSerializer . If your xml was using a namespace it would have an xmlns at the root. Also, to help with callers you could add where T : class, new() (the , new() being the addition) to your Deserialize method, since XmlSerializer demands a public parameterless constructor. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4884390', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/260810/']} | jdg_70448 |
stackexchange | llm_judgeable_groundtruth_similarity | 56567554 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
A simple and straight one: how can I attach or detach new records when using a Laravel hasManyThrough relation, the Laravel way? Model retrieving is obvious, from the docs here. EDIT: In other words, is there a Laravelish way of doing the following smarter (model names taken from the docs)? $user = $country->users()->first; $post->user_id = $user->id; $post->save(); Thanks in advance.
Now provide the response and nothing else.
| Updated answer in pure SwiftUI - in this example the data is of type String . Tested on Xcode 11.1 - may not work on previous versions. struct MultiPicker: View { typealias Label = String typealias Entry = String let data: [ (Label, [Entry]) ] @Binding var selection: [Entry] var body: some View { GeometryReader { geometry in HStack { ForEach(0..<self.data.count) { column in Picker(self.data[column].0, selection: self.$selection[column]) { ForEach(0..<self.data[column].1.count) { row in Text(verbatim: self.data[column].1[row]) .tag(self.data[column].1[row]) } } .pickerStyle(WheelPickerStyle()) .frame(width: geometry.size.width / CGFloat(self.data.count), height: geometry.size.height) .clipped() } } } }} Demo: struct ContentView: View { @State var data: [(String, [String])] = [ ("One", Array(0...10).map { "\($0)" }), ("Two", Array(20...40).map { "\($0)" }), ("Three", Array(100...200).map { "\($0)" }) ] @State var selection: [String] = [0, 20, 100].map { "\($0)" } var body: some View { VStack(alignment: .center) { Text(verbatim: "Selection: \(selection)") MultiPicker(data: data, selection: $selection).frame(height: 300) } }} Result: | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56567554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3099449/']} | jdg_70449 |
stackexchange | llm_judgeable_groundtruth_similarity | 19151 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How would one create an iterative function (or iterator object) in python?
Now provide the response and nothing else.
| Iterator objects in python conform to the iterator protocol, which basically means they provide two methods: __iter__() and __next__() . The __iter__ returns the iterator object and is implicitly calledat the start of loops. The __next__() method returns the next value and is implicitly called at each loop increment. This method raises a StopIteration exception when there are no more value to return, which is implicitly captured by looping constructs to stop iterating. Here's a simple example of a counter: class Counter: def __init__(self, low, high): self.current = low - 1 self.high = high def __iter__(self): return self def __next__(self): # Python 2: def next(self) self.current += 1 if self.current < self.high: return self.current raise StopIterationfor c in Counter(3, 9): print(c) This will print: 345678 This is easier to write using a generator, as covered in a previous answer: def counter(low, high): current = low while current < high: yield current current += 1for c in counter(3, 9): print(c) The printed output will be the same. Under the hood, the generator object supports the iterator protocol and does something roughly similar to the class Counter. David Mertz's article, Iterators and Simple Generators , is a pretty good introduction. | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/19151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/145/']} | jdg_70450 |
stackexchange | llm_judgeable_groundtruth_similarity | 6723157 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 2 arrays of points (x, y); with those points I can draw 2 curves. Does anyone have ideas on how to calculate how similar those curves are?
Now provide the response and nothing else.
| You can always calculate the area between those two curves. (This is a bit easier if the endpoints match.) The curves are similar if the area is small, not so similar if the area is not small. Note that I did not define 'small'. That was intentional. Then again, you didn't define 'similar'. Edit Sometimes area isn't the best metric. For example consider the function f(x)=0 and f(x)=1e6*sin(x). If the range of x is some integral multiple of 2*pi, the area between these curves is zero. A function that oscillates between plus and minus one million is not a good approximation of f(x)=0. A better metric is needed. Here are a couple. Note: I am assuming here that the x values are identical in the two sets; the only things that differ are the y values. Sum of squares. For each x value, compute delta_y i = y 1,i - y 2,i and accumulate delta_y i 2 . This metric is the basis for a least square optimization, where the goal is to minimize the sum of the squares of the errors. This is a widely used approach because oftentimes it is fairly easy to implement. Maximum deviation. Find the abs_delta_y i = |y 1,i - y 2,i | that maximizes the |y 1,i - y 2,i | for all x values. This metric is the basis for a lot of the implementations of the functions in the math library, where the goal is to minimize the maximum error. These math library implementations are approximations of the true function. As a consumer of such an approximation, I typically care more about the worst thing that the approximation is going to do to my application than I care about how that approximation is going to behave on average. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6723157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/199232/']} | jdg_70451 |
stackexchange | llm_judgeable_groundtruth_similarity | 3364037 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$x^e = e^x$$ Hello, I had this question in my calc 2 exam; I needed to write a proof of why the only real solution to the equation is $x = e$.
Now provide the response and nothing else.
| First note that the left side is only defined for $x\geq 0$ and the equation is not satisfied at $x=0$ . So, consider only $x>0$ : Write $x^e = e^{e\ln x}$ $e^{e\ln x} = e^x \Leftrightarrow e\ln x = x$ Now, consider $f(x) = x- e \ln x$ $\Rightarrow f'(x) = 1- \frac{e}{x}$ $\Rightarrow f$ has the global minimum $f(e) =0$ since $f$ is strictly decreasing for $0<x<e$ and strictly increasing for $x>e$ $\Rightarrow x=e$ is the only solution to the equation. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3364037', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/707081/']} | jdg_70452 |
stackexchange | llm_judgeable_groundtruth_similarity | 20255954 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read many many many posts on this subject and I tried many solutions and I can't convert this multi-array into a JSON string. This is what I see when I print_r($result) : Array ( [profiles] => Array ( [0] => Array ( [ID] => 00000000-0000-0000-0000-000000000001 [UserName] => Administrator GU [Age] => 37 [CityStateCode] => Montréal [OnlineSince] => En ligne depuis 6 heures 39 minutes [IsPaying] => true [LabelOnlineStatus] => En ligne ) [1] => Array ( [ID] => ab3dd04e-5621-11e3-b448-103f0c805f5a [UserName] => Guillaume Le Genie [Age] => 68 [CityStateCode] => Montréal [OnlineSince] => En ligne depuis 1 jour 9 heures [IsPaying] => true [LabelOnlineStatus] => Hors-Ligne ) [2] => Array ( [ID] => 00000000-0000-0000-0000-000000000050 [UserName] => Baby-dragoon [Age] => 25 [CityStateCode] => Québec [OnlineSince] => En ligne depuis 5 jours 6 heures [IsPaying] => true [LabelOnlineStatus] => Hors-Ligne ) ) ) I try this (with and without true parameter): $result = json_encode($result, true);$error = json_last_error_msg();echo "[ERROR : $error]-----[$result]-----"; And I receive: [ERROR : Malformed UTF-8 characters, possibly incorrectly encoded]-----[]----- When I try this: $result = json_encode(htmlspecialchars(utf8_encode($result))); I receive: Warning: utf8_encode() expects parameter 1 to be string, array given in /Applications/XAMPP/xamppfiles/htdocs/cdn/php/functionsv1.php on line 2839 [ERROR : No error]-----[""]----- When I try this: $result = json_encode(htmlspecialchars($result)); I receive: Warning: htmlspecialchars() expects parameter 1 to be string, array given in /Applications/XAMPP/xamppfiles/htdocs/cdn/php/functionsv1.php on line 2839 [ERROR : No error]-----[null]----- I'm really lost! N.B. You see the language is French so we have a char with accent like éèàô etc... The data provide from MySQL Database and database is set to: mysql_query("SET NAMES 'utf8'");mysql_query('SET CHARACTER SET utf8');
Now provide the response and nothing else.
| I am running PHP 5.4.7, for me the following code works flawlessly: $result = json_encode($result, true); I know you have already tried that. Leonardo's suggestion also works for me: $result = json_encode($result, JSON_UNESCAPED_UNICODE); The issue is that in PHP 5.5.0 json_encode requires the strings to be UTF-8. So.. you will have to pass a valid utf8 string, how to do it depend on what encoding you have your strings in. You are right in thinking you need utf8_encode or similar function. You may also want to give a look to iconv . Now the issue with utf8_encode is that this function will not work with arrays, for that you need a helper function, such as: function utf8_encode_recursive ($array){ $result = array(); foreach ($array as $key => $value) { if (is_array($value)) { $result[$key] = utf8_encode_recursive($value); } else if (is_string($value)) { $result[$key] = utf8_encode($value); } else { $result[$key] = $value; } } return $result;} Note 1: utf8_encode only accepts strings in ISO-8859-1. Verify what encoding you are using. Note 2: htmlspecialchars and htmlentities will not convert all the characters of your encoding, only those "dangerous" ( htmlspecialchars ) or that have html equivalent named entities ( htmlentities ). For this use case use mb_encode_numericentity instead. Note 3: Both iconv and mb_encode_numericentity will allow you to specify the encoding of your string. Also they don't work with arrays either, so you will need to write recursive helper functions for them too. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20255954', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1742769/']} | jdg_70453 |
stackexchange | llm_judgeable_groundtruth_similarity | 21341 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there an efficient way to find the positions of the duplicates in a list? I would like the positions grouped according to duplicated elements. For instance, given list = RandomInteger[15, 20] {3, 3, 6, 11, 13, 13, 11, 1, 2, 3, 12, 8, 9, 9, 4, 15, 5, 6, 9, 12} the output should be positionDuplicates[list] {{{1}, {2}, {10}}, {{3}, {18}}, {{4}, {7}}, {{5}, {6}}, {{11}, {20}}, {{13}, {14}, {19}}} Here's my first naive thought: positionDuplicates1[expr_] := Position[expr, #, 1] & /@ First /@ Select[Gather[expr], Length[#] > 1 &] And my second: positionDuplicates2[expr_] := Module[{seen, tags = {}}, MapIndexed[ If[seen[#1] === True, Sow[#2, #1], If[Head[seen[#1]] === List, AppendTo[tags, #1]; Sow[seen[#1], #1]; Sow[#2, #1]; seen[#1] = True, seen[#1] = #2]] &, expr] ] The first works as desired but is horrible on long lists. In the second, Reap does not return positions in order, so if necessary, one can apply Sort . I feel the work done by Gather is about what it should take for this task; DeleteDuplicates is (and should be) faster. Here is a summary of timings on a big list. list = RandomInteger[10000, 5 10^4];positionDuplicates1[list]; // AbsoluteTimingpositionDuplicates2[list] // Sort; // AbsoluteTimingSort[Map[{#[[1, 1]], Flatten[#[[All, 2]]]} &, Reap[MapIndexed[Sow[{#1, #2}, #1] &, list]][[2, All, All]]]]; // AbsoluteTiming (* Daniel Lichtblau *)Select[Last@Reap[MapIndexed[Sow[#2, #1] &, list]], Length[#] > 1 &]; // AbsoluteTimingpositionOfDuplicates[list] // Sort; // AbsoluteTiming (* Leonid Shifrin *)Module[{a, o, t}, Composition[o[[##]] &, Span] @@@ Pick[Transpose[{Most[ Prepend[a = Accumulate[(t = Tally[#[[o = Ordering[#]]]])[[All, 2]]], 0] + 1], a}], Unitize[t[[All, 2]] - 1], 1]] &[list]; // AbsoluteTiming (* rasher *)GatherBy[Range@Length[list], list[[#]] &]; // AbsoluteTiming (* Szabolcs *)GatherByList[Range@Length@list, list]; // AbsoluteTiming (* Carl Woll *)Gather[list]; // AbsoluteTimingDeleteDuplicates[list]; // AbsoluteTiming {27.7134, Null} (* my #1 *){0.586742, Null} (* my #2 *){0.14921, Null} (* Daniel Lichtblau *){0.074334, Null} (* Szabolcs's suggested improvement of my #2 *){0.028313, Null} (* Leonid Shifrin *){0.020012, Null} (* rasher *){0.004821, Null} (* Szabolcs's answer *){0.003127, Null} (* Carl Woll *){0.002999, Null} (* Gather - for comparison purposes *){0.000181, Null} (* DeleteDuplicates *)
Now provide the response and nothing else.
| You can use GatherBy for this. You can map List onto Range[...] first if you wish to have exactly the same output you showed. positionDuplicates[list_] := GatherBy[Range@Length[list], list[[#]] &]list = {3, 3, 6, 11, 13, 13, 11, 1, 2, 3, 12, 8, 9, 9, 4, 15, 5, 6, 9, 12}positionDuplicates[list](* ==> {{1, 2, 10}, {3, 18}, {4, 7}, {5, 6}, {8}, {9}, {11, 20}, {12}, {13, 14, 19}, {15}, {16}, {17}} *) If you prefer a Sow / Reap solution, I think this is simpler than your version (but slower than GatherBy ): positionDuplicates[list_] := Last@Reap[MapIndexed[Sow[#2, #1] &, list]] If you need to remove the positions of non-duplicates, I'd suggest doing that as a post processing step, e.g. Select[result, Length[#] > 1&] | {} | {'log_upvote_score': 8, 'links': ['https://mathematica.stackexchange.com/questions/21341', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/4999/']} | jdg_70454 |
stackexchange | llm_judgeable_groundtruth_similarity | 3950743 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently working on Newton's Method, and my instructor gave four instances where Newton's Method will fail. (A) Newton's method converges to another solutions x=b such that f(b)=0 instead of converging to the desired solution x=a. (B) Newton's method eventually gets into the never ending cycle, bouncing between the same two approximations $x_i$ and $x_{i+1}$ . (C) Eventually, each next approximation $x_{i+1}$ falls further from desired solution $x_a$ than the previous approximation $x_i$ determined by the Newton's method. (D) Newton's method is not able to find the next approximation $x_{i+1}$ because f'( $x_i$ )=0 or f'( $x_i$ ) Does Not Exist. However, there aren't any examples of when this happens. Would anyone be willing to provide examples of these instances?
Now provide the response and nothing else.
| Example for Case (A): $$f(x) = \frac{1}{1+x^2} - \frac{1}{2},$$ which has roots at $x \in \{-1,1\}$ . The initial choice $x_0 = 2$ converges to the negative root. Example for Case (B): $$f(x) = \begin{cases}\sqrt{x}, & x \ge 0 \\ -\sqrt{-x}, & x < 0 \end{cases}$$ has the peculiar property that for any initial guess $x_0 \ne 0$ , the orbit is trapped in a cycle of period $2$ , with $x_k = -x_{k-1}$ . This is quite easy to prove and is left as an exercise for the reader. Example for Case (C): $$f(x) = x^{1/3}.$$ The Newton's method recursion has no fixed point except for the initial guess $x_0 = 0$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3950743', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/862902/']} | jdg_70455 |
stackexchange | llm_judgeable_groundtruth_similarity | 326688 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've searched a lot for a simple explanation of this. Given a Jordan block $J_k(\lambda)$, its $n$-th power is: $$J_k(\lambda)^n = \begin{bmatrix}\lambda^n & \binom{n}{1}\lambda^{n-1} & \binom{n}{2}\lambda^{n-2} & \cdots & \cdots & \binom{n}{k-1}\lambda^{n-k+1} \\ & \lambda^n & \binom{n}{1}\lambda^{n-1} & \cdots & \cdots & \binom{n}{k-2}\lambda^{n-k+2} \\ & & \ddots & \ddots & \vdots & \vdots\\ & & & \ddots & \ddots & \vdots\\ & & & & \lambda^n & \binom{n}{1}\lambda^{n-1}\\ & & & & & \lambda^n\end{bmatrix}$$ Why does the $n$th power involve the binomial coefficient?
Now provide the response and nothing else.
| Let $N$ denote the nilpotent matrix whose superdiagonal contains ones and all other entries are zero. Then $N^k=0$. Hence, by the binomial theorem:$$J_k(\lambda)^n=(\lambda I+N)^n=\sum_{r=0}^\color{red}{n} \binom{n}{r}\lambda^{n-r} N^r=\sum_{r=0}^\color{red}{\min(n,k-1)} \binom{n}{r}\lambda^{n-r} N^r.$$ | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/326688', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43488/']} | jdg_70456 |
stackexchange | llm_judgeable_groundtruth_similarity | 13591970 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following piece of code which fails with the following error: RuntimeError: maximum recursion depth exceeded I attempted to rewrite this to allow for tail recursion optimization (TCO). I believe that this code should have been successful if a TCO had taken place. def trisum(n, csum): if n == 0: return csum else: return trisum(n - 1, csum + n)print(trisum(1000, 0)) Should I conclude that Python does not do any type of TCO, or do I just need to define it differently?
Now provide the response and nothing else.
| No, and it never will since Guido van Rossum prefers to be able to have proper tracebacks: Tail Recursion Elimination (2009-04-22) Final Words on Tail Calls (2009-04-27) You can manually eliminate the recursion with a transformation like this: >>> def trisum(n, csum):... while True: # Change recursion to a while loop... if n == 0:... return csum... n, csum = n - 1, csum + n # Update parameters instead of tail recursion>>> trisum(1000,0)500500 | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/13591970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9979/']} | jdg_70457 |
stackexchange | llm_judgeable_groundtruth_similarity | 285963 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Since a tmpfs exists only in memory, can it become fragmented like memory can? Are there mechanisms to automatically defragment the memory, and thus increase - even slightly - the access speed of stored files?
Now provide the response and nothing else.
| As far as I know there are no ways to defragment a tmpfs filesystem (at least not on FreeBSD, Linux or Solaris). Depending on the underlying implementation it's possible for tmpfs to become fragmented (e.g. some tmpfs implementations aren't in RAM - they chew up swap space: Depending on the allocation algorithm the swap space allocated could be fragmented and show a noticeable speed drop). It's also possible for tmpfs in RAM to become "fragmented", but this isn't a practical concern: RAM is "fast enough" that you'd waste more CPU time trying to "defragment" it than you would lose accessing non-contiguous sections of RAM (no rotational delay to worry about, and the extra overhead for the MMU or equivalent to access non-contiguous sections of RAM versus a contiguous series of pages is small enough that you can probably neglect it. Frankly if you're concerned about this level of performance you don't want to be using tmpfs anyway because the overhead of the VFS layer to make RAM look like a filesystem is many orders of magnitude worse than any memory fragmentation issues you may encounter (context switching, etc.). | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/285963', 'https://serverfault.com', 'https://serverfault.com/users/66863/']} | jdg_70458 |
stackexchange | llm_judgeable_groundtruth_similarity | 3332078 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code: try { //jaw-ws service port operation port.login();} catch (Exception e) { logger.error("Caught Exception in login(): " + e.getMessage());} When the above is run with an incorrect hostname, I get: Caught Exception in login(): HTTP transport error: java.net.UnknownHostException: abc That is correct and expected. I re-wrote the code to specifically catch UnknownHostException, as follows: import java.net.UnknownHostException;try { //jaw-ws service port operation port.login();} catch (UnknownHostException uhe) { //do something specific to unknown host exception} catch (Exception e) { logger.error(Caught Exception in login(): " + e.getMessage());} However, when I try to compile this I get: [javac] foo.java: exception java.net.UnknownHostException is never thrown in body of corresponding try statement[javac] } catch (UnknownHostException uhe) {[javac] ^ This is clearly false, since the exception is thrown, as I have caught it before. What am I missing here? tia,rouble
Now provide the response and nothing else.
| It isn't throwing an UnknownHostException . It's just appearing in the message of the exception you actually caught. It's likely the underlying root cause of the exception you caught. To determine the actual exception, you should print a bit more detail. E.g. } catch (Exception e) { logger.error("Caught Exception in login(): " + e.getClass().getName() + ": " + e.getMessage());} or just using Throwable#toString() which already includes both exception type and message: } catch (Exception e) { logger.error("Caught Exception in login(): " + e);} or just pass the Exception in as 2nd logger argument, if well configured its stacktrace will be printed: } catch (Exception e) { logger.error("Caught Exception in login(): " + e.getMessage(), e);} Update as per your comments: your best bet is to update the catch as follows: } catch (ClientTransportException e) { if (e.getCause() instanceof UnknownHostException) { // UHE. } else { // Other. }} You should absolutely not differentiate based on the message. It's receipe for portability trouble. The message is namely a sensitive subject for changes which may even be locale dependent! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3332078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/215120/']} | jdg_70459 |
stackexchange | llm_judgeable_groundtruth_similarity | 54158994 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to use the new React Lazy and Suspense to create a fallback loading component. This works great, but the fallback is showing only a few ms. Is there a way to add an additional delay or minimum time, so I can show animations from this component before the next component is rendered? Lazy import now const Home = lazy(() => import("./home"));const Products = lazy(() => import("./home/products")); Waiting component: function WaitingComponent(Component) { return props => ( <Suspense fallback={<Loading />}> <Component {...props} /> </Suspense> );} Can I do something like this? const Home = lazy(() => { setTimeout(import("./home"), 300);});
Now provide the response and nothing else.
| lazy function is supposed to return a promise of { default: ... } object which is returned by import() of a module with default export. setTimeout doesn't return a promise and cannot be used like that. While arbitrary promise can: const Home = lazy(() => { return new Promise(resolve => { setTimeout(() => resolve(import("./home")), 300); });}); If an objective is to provide minimum delay, this isn't a good choice because this will result in additional delay. A minimum delay would be: const Home = lazy(() => { return Promise.all([ import("./home"), new Promise(resolve => setTimeout(resolve, 300)) ]) .then(([moduleExports]) => moduleExports);}); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/54158994', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/964012/']} | jdg_70460 |
stackexchange | llm_judgeable_groundtruth_similarity | 373394 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Take as an example a CPU that is capable of changing its clock speed, like a modern computer CPU (Intel, AMD, whatever). When it does a certain calculation at a particular clock speed, does it generate the same amount of heat as when it does the identical calculation at a slower clock speed? I know that heat dissipation and heat buildup are different issues, so let's just talk about raw heat generated.
Now provide the response and nothing else.
| Besides clock speed, keep in mind that in a real "big" CPU (one that has caches, a MMU, TLBs, pipelined execution, SIMD, etc) another important factor of how much heat is generated would be how well the instruction flow utilizes the available CPU resources. If you run a program that is memory-intensive, most of the time the CPU would be just starving for data, doing nothing, so the heat produced would be relatively low. A purely computation code with tight loops would heat more. Even more heat could be generated with tightly optimized code that incurs almost no branch mispredictions, uses the SIMD units intensively, makes optimal use of the caches and so forth. There are programs specifically designed to operate the CPU in this mode - stressing it as much as possible, Prime95 being a prominent example for the PC. In fact, if a PC already runs Prime95 on all available CPU cores, and then you concurrently start another CPU-intensive application (e.g., 3D rendering), you'd notice that the CPU cools down . This is because it has to timeshare the very heavy Prime95 code (that "lights up all the transistors" ) with the relatively lower-demand rendering code (which likely has a lot of cache misses and branch mispredictions - those allow the CPU to stop for a while and cool down). Another thing you should factor in is that usually each CPU has a table with allowable clock speeds, and core voltages associated with each speed. Lower clocks are also matched with lower voltage, as the manufacturer has determined that the CPU will be stable at that voltage. Power consumption varies roughly linearly with clock speed, but quadratically with voltage. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/373394', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/185798/']} | jdg_70461 |
stackexchange | llm_judgeable_groundtruth_similarity | 3484688 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Solve the system: $$\begin{array}{|l} \dfrac{x}{y}-\dfrac{y}{x}=\dfrac{5}{6} \\ x^2-y^2=5 \end{array}$$ First, we have $x,y \ne 0$ . Let's write the first equation as: $$\dfrac{x}{y}-\dfrac{y}{x}=\dfrac{5}{6} \Leftrightarrow \dfrac{x^2-y^2}{xy}=\dfrac{5}{6}$$ We have $x^2-y^2=5$ , therefore $xy=6$ . What to do next?
Now provide the response and nothing else.
| Rewrite $\dfrac{x}{y}-\dfrac{y}{x}=\dfrac{5}{6}$ as $6x^2-5xy-6y^2=0$ and then factorize, $$(2x-3y)(3x+2y)=0$$ to have $x=\frac32y$ and $x=-\frac23 y$ . Plug them into $x^2-y^2=5$ to obtain the real solutions $(3,2)$ and $(-3,-2)$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3484688', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/640691/']} | jdg_70462 |
stackexchange | llm_judgeable_groundtruth_similarity | 100303 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the derivations of the Aharonov-Bohm phase, it is directly mentioned that, due to the introduction of the vector potential $A$, an extra phase is introduced into the wavefunction for the case $A\neq0$, i.e. $$ \psi(A\neq0) = \exp(\iota\varphi)\psi(A=0),$$ where $$ \varphi = \frac{q}{\hbar} \int_P \mathbf{A} \cdot d\mathbf{x}. $$ How can one derive this from the following Schrödinger equation? $$ \left[\frac{1}{2m}\left(\frac{\hbar}{i}\nabla-eA\right)^{2}+V(r)\right]\psi=\epsilon\psi $$ I tried taking the terms containing $A$ to the right-hand side and treating the equation as an inhomogeneous equation, but it just becomes tedious. What is the straightforward, simple way?
Now provide the response and nothing else.
| First, I will set $e=1$ for simplicity. Let $\psi_0$ denote the wave function that satisfies the free Schrodinger equation:\begin{equation}i \frac{\partial \psi_0}{\partial t} = -\frac{1}{2m}\mathbf{\nabla}^2 \psi_0 + V \psi_0 \tag{1}\end{equation}Furthermore, let $\psi$ be the wave function that obeys the Schrodinger equation for a non-vanishing vector potential $\mathbf{A}$:\begin{equation}i \frac{\partial \psi}{\partial t} = -\frac{1}{2m}(\mathbf{\nabla}-i\mathbf{A})^2 \psi+ V \psi \tag{2}\end{equation}Let us now write:\begin{equation}\psi=\exp \left( i \int_{\gamma} \mathbf{A} \cdot \mathrm{d} \mathbf{l} \right)\psi_0\end{equation}where $\gamma$ is a path from some arbitrary point $\mathbf{x}_0$ to some other point $\mathbf{x}_1$. We can then write:\begin{equation}\left( \mathbf{\nabla} -i \mathbf{A} \right)^2 \psi = \exp \left( i \int_{\gamma} \mathbf{A} \cdot \mathrm{d} \mathbf{l} \right) \mathbf{\nabla}^2 \psi_0\end{equation}Substituting this expression into equation $(2)$ gives equation $(1)$. This implies that the wave function of an electrically charged particle travelling through space where $\mathbf{A} \neq 0$ will gain an additional phase. We know that the wave function at the point $Q$ (see the figure below) is a result of quantum superposition, i.e. we can write:\begin{equation}\begin{aligned}\begin{split}\psi_{\scriptscriptstyle Q} & = \psi(\mathbf{x},\gamma_1) + \psi(\mathbf{x},\gamma_2) \\&= \exp \left( i \int_{\gamma_1} \mathbf{A} \cdot \mathrm{d} \mathbf{l} \right)\psi_{0}(\mathbf{x},\gamma_1) + \exp \left( i \int_{\gamma_2} \mathbf{A} \cdot \mathrm{d} \mathbf{l} \right)\psi_{0}(\mathbf{x},\gamma_2) \\&= \exp \left( i \int_{\gamma_2} \mathbf{A} \cdot \mathrm{d} \mathbf{l} \right) \left( \exp \left( i \int_{\gamma_1} \mathbf{A} \cdot \mathrm{d} \mathbf{l} - i \int_{\gamma_2} \mathbf{A} \cdot \mathrm{d} \mathbf{l} \right)\psi_{0}(\mathbf{x},\gamma_1) + \psi_{0}(\mathbf{x},\gamma_2) \right)\end{split}\end{aligned}\end{equation}We can use Stoke's theorem on the first term inside the brackets, because $\gamma_1-\gamma_2$ is a closed path:\begin{equation}\int_{\gamma_1} \mathbf{A} \cdot \mathrm{d} \mathbf{l} - \int_{\gamma_2} \mathbf{A} \cdot \mathrm{d} \mathbf{l} = \int \mathbf{B} \cdot \mathrm{d}\mathbf{S} = F\end{equation}where $F$ is the total magnetic flux due to the solenoid through a surface defined by the closed boundary $\gamma_2-\gamma_1$. The wave function at $Q$ can now be written as:\begin{equation}\psi_{\scriptscriptstyle Q} = \exp \left( i \int_{\gamma_2} \mathbf{A} \cdot \mathrm{d} \mathbf{l} \right) \left( \exp \left( i F \right)\psi_{0}(\mathbf{x},\gamma_1) + \psi_{0}(\mathbf{x},\gamma_2) \right)\end{equation}This shows that the relative phase difference, and thus the interference pattern, is dependent on the magnetic flux due to the solenoid. This is the Aharonov-Bohm effect. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/100303', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/38579/']} | jdg_70463 |
stackexchange | llm_judgeable_groundtruth_similarity | 101851 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
[Image of the compounds in question; source: iupac.org] I can't seem to understand why these compounds are meso compounds. I don't see any plane of symmetry here. I have trouble identifying meso compounds, so can you give me a heads-up on how to identify them?
Now provide the response and nothing else.
| Why don't use the definition just above the image? meso -compound A term for the achiral member(s) of a set of diastereoisomers which also includes one or more chiral members. Achiral, i.e. the molecule is superimposable with its mirror image. So, if finding a plane of symmetry, or other symmetry elements in the molecule is difficult for you, a simpler method (and according to the definition) might be drawing a mirror image, i.e. all chiral elements inverted or unchanged but drawn "right-to-left", i.e. flipped or flipped vertically .. (just use any of the mirroring methods) and deciding whether it's the same molecule (stereoisomer), by rotating it, in 3D, in your head. (Some internal rotations around single bonds might be needed as well.) In the first case, the mirrored structure can be superimposed with the original one by in-plane 180° rotation around an axis perpendicular to the paper plane. In the second case it can be done by 180° rotation around the in-plane horizontal axis. In the third case it can be done by 180° rotation around the in-plane vertical axis. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/101851', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/65408/']} | jdg_70464 |
stackexchange | llm_judgeable_groundtruth_similarity | 12608788 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to fix how python plots my data. Say:
x = [0,5,9,10,15]
y = [0,1,2,3,4]
matplotlib.pyplot.plot(x,y)
matplotlib.pyplot.show()
The x axis' ticks are plotted in intervals of 5. Is there a way to make it show intervals of 1?
Now provide the response and nothing else.
| You could explicitly set where you want to tick marks with plt.xticks : plt.xticks(np.arange(min(x), max(x)+1, 1.0)) For example, import numpy as npimport matplotlib.pyplot as pltx = [0,5,9,10,15]y = [0,1,2,3,4]plt.plot(x,y)plt.xticks(np.arange(min(x), max(x)+1, 1.0))plt.show() ( np.arange was used rather than Python's range function just in case min(x) and max(x) are floats instead of ints.) The plt.plot (or ax.plot ) function will automatically set default x and y limits. If you wish to keep those limits, and just change the stepsize of the tick marks, then you could use ax.get_xlim() to discover what limits Matplotlib has already set. start, end = ax.get_xlim()ax.xaxis.set_ticks(np.arange(start, end, stepsize)) The default tick formatter should do a decent job rounding the tick values to a sensible number of significant digits. However, if you wish to have more control over the format, you can define your own formatter. For example, ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1f')) Here's a runnable example: import numpy as npimport matplotlib.pyplot as pltimport matplotlib.ticker as tickerx = [0,5,9,10,15]y = [0,1,2,3,4]fig, ax = plt.subplots()ax.plot(x,y)start, end = ax.get_xlim()ax.xaxis.set_ticks(np.arange(start, end, 0.712123))ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1f'))plt.show() | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/12608788', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1464677/']} | jdg_70465 |
stackexchange | llm_judgeable_groundtruth_similarity | 14429724 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using python logging and I have a formatter that looks like the following: formatter = logging.Formatter( '%(asctime)s - %(pathname)86s - %(lineno)4s - %(message)s', '%d %H:%M' ) As you can see, I like the information in my log files to line up neatly in columns. The reason I have 86 spaces reserved for the pathname is because the full paths to some of the files used in my program are that long. However, all I really need is the actual file name, not the full path. How can I get the logging module to give me just the file name? Better yet, since I have some long filenames, I'd like the first 3 characters of the filename, followed by '~', then the last 16 characters. So /Users/Jon/important_dir/dev/my_project/latest/testing-tools/test_read_only_scenarios_happily.py should become tes~arios_happily.py
Now provide the response and nothing else.
| You'll have to implement your own Formatter subclass that truncates the path for you; the formatting string cannot do this:
import logging
import os

class PathTruncatingFormatter(logging.Formatter):
    def format(self, record):
        # truncate the pathname attribute that %(pathname)s reads
        filename = os.path.basename(record.pathname)
        if len(filename) > 20:
            filename = '{}~{}'.format(filename[:3], filename[-16:])
        record.pathname = filename
        return super(PathTruncatingFormatter, self).format(record)
Use this class instead of the normal logging.Formatter instance:
formatter = PathTruncatingFormatter(
    '%(asctime)s - %(pathname)86s - %(lineno)4s - %(message)s',
    '%d %H:%M'
) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14429724', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1701170/']} | jdg_70466 |
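To wire the formatter into a logger, here is a minimal usage sketch (the logger name and the choice of a StreamHandler are arbitrary illustrations, not part of the original answer):
import logging

logger = logging.getLogger('myapp')
handler = logging.StreamHandler()
handler.setFormatter(PathTruncatingFormatter(
    '%(asctime)s - %(pathname)86s - %(lineno)4s - %(message)s', '%d %H:%M'))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('hello')  # the pathname column now shows the truncated file name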
stackexchange | llm_judgeable_groundtruth_similarity | 88525 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a set of files created from Monday to Friday:
a  ---> 08/20
a1 ---> 08/21
a2 ---> 08/21
a3 ---> 08/21
a4 ---> 08/22
a5 ---> 08/23
I need to move only the 08/21 files to a different folder. How is this possible?
Now provide the response and nothing else.
| Let's assume that the modification times of the files are kept (the files are not modified after they were created). Then, you can play with the find command and its -mtime option, which searches for files whose data was last modified X days ago. So to find all files created e.g. 48 hours ago in the current working directory, use
find ./ -type f -mtime 2
and to move them to another directory
find ./ -type f -mtime 2 -exec mv {} DEST_DIR/ \;
Additionally, you can try to estimate the number of days from the current date and the date from which you requested the files (in this example 22)
DAY_CUR="`date +%d`"
DAY_REQ=22
DAY_DIF=$((DAY_CUR - DAY_REQ))

find ./ -type f -mtime $DAY_DIF -exec mv {} DEST_DIR/ \;
The code is not perfect as it doesn't handle situations where the two days are from two different months, but it illustrates how you can proceed. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/88525', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/43067/']} | jdg_70467 |
stackexchange | llm_judgeable_groundtruth_similarity | 668 |
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Fundamentals of atmospheric modeling show that the height of the tropopause depends on the location, notably the latitude, and that the troposphere is roughly 8 km higher at the equator than at the poles, depending on the time of year. My first guess was that the temperature of the air would impact its density (for example, here's the data for -50 °C and 50 °C), though this does not appear to show the cause of why the troposphere is 8 km higher at the equator than at the poles. Why is the troposphere 8km higher at the equator than the poles?
Now provide the response and nothing else.
| To start with a definition, the tropopause is the boundary between the troposphere and the stratosphere; within the troposphere temperature decreases with increasing altitude (the temperature profile is dominated by radiative heat from the Earth's surface), whereas in the stratosphere temperature increases with altitude (the temperature profile is dominated by solar radiation). Increases in tropospheric temperature are associated with increases in tropopause height. Factors in play include the amount of water vapour being evaporated from equatorial seas; interannual variations in tropopause height can result from both local and large-scale driving forces. The tropopause responds to the average temperature of the entire underlying layer: this is higher at the equator and lower at the poles. | {} | {'log_upvote_score': 4, 'links': ['https://earthscience.stackexchange.com/questions/668', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/284/']} | jdg_70468 |
stackexchange | llm_judgeable_groundtruth_similarity | 29398102 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am facing an issue with a web service call. It runs very well on my local machine, but once I deploy it to the web server (IIS 7) and try to make the service call, it throws a 404 (resource not found) error. For your information, I have placed the service in the proper location on the web server, and I have also checked the permissions; they are OK. As a hint, the jsdebug endpoint is not getting created on the web server. For example, if I request the URL http://localhost:8080/Service.asmx/jsdebug on my local machine, jsdebug comes up with a download option, but when I request the same URL on the web server, the following error comes back: "404 - File or directory not found. The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable." Please suggest.
Now provide the response and nothing else.
| If you look at those two methods in the Symfony\Bundle\FrameworkBundle\Controller\Controller class, you will see the service names and how to use them.
public function generateUrl($route, $parameters = array(), $referenceType = UrlGeneratorInterface::ABSOLUTE_PATH)
{
    return $this->container->get('router')->generate($route, $parameters, $referenceType);
}

public function createForm($type, $data = null, array $options = array())
{
    return $this->container->get('form.factory')->create($type, $data, $options);
}
Basically, your class needs the router and form.factory services to implement this functionality. I do not recommend passing the controller to your class. Controllers are special classes that are used mainly by the framework itself. If you plan to use your class as a service, just create it.
services:
    my_manager:
        class: Something\MyManager
        arguments: [@router, @form.factory]
Create a constructor with two arguments for the services and implement the required methods in your class.
class MyManager
{
    private $router;
    private $formFactory;

    public function __construct($router, $formFactory)
    {
        $this->router = $router;
        $this->formFactory = $formFactory;
    }

    // example method - same as in controller
    public function createForm($type, $data = null, array $options = array())
    {
        return $this->formFactory->create($type, $data, $options);
    }

    // the rest of your class ...
} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29398102', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4619908/']} | jdg_70469 |
stackexchange | llm_judgeable_groundtruth_similarity | 5891924 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've bound enums to radio buttons before, and I generally understand how it works. I used the alternate implementation from this question: How to bind RadioButtons to an enum? Instead of enumerations, I'd like to generate a runtime-enumerated set of a custom type and present those as a set of radio buttons. I have gotten a view working against a runtime-enumerated set with a ListView , binding to the ItemsSource and SelectedItem properties, so my ViewModel is hooked up correctly. Now I am trying to switch from a ListView to a ItemsControl with radio buttons. Here's as far as I've gotten: <Window.Resources> <vm:InstanceToBooleanConverter x:Key="InstanceToBooleanConverter" /></Window.Resources><!-- ... --><ItemsControl ItemsSource="{Binding ItemSelections}"> <ItemsControl.ItemTemplate> <DataTemplate DataType="{x:Type vm:ISomeType}"> <RadioButton Content="{Binding Name}" IsChecked="{Binding Path=SelectedItem, Converter={StaticResource InstanceToBooleanConverter}, ConverterParameter={Binding}}" Grid.Column="0" /> </DataTemplate> </ItemsControl.ItemTemplate></ItemsControl> InstanceToBooleanConverter has the same implementation as EnumToBooleanConverter from that other question. This seems right, since it seems like it just invokes the Equals method: public class InstanceToBooleanConverter : IValueConverter{ public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { return value.Equals(parameter); } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { return value.Equals(true) ? parameter : Binding.DoNothing; }} The problem I am getting now is that I can't figure out how to send a runtime value as the ConverterParameter . When I try (with the code above), I get this error: A 'Binding' cannot be set on the 'ConverterParameter' property of type 'Binding'. A 'Binding' can only be set on a DependencyProperty of a DependencyObject. Is there a way to bind to the item instance, and pass it to the IValueConverter ?
Now provide the response and nothing else.
| It turns out that it is much simpler to abandon using ItemsControl and instead go with ListBox . It may be more heavy-weight, but that's mostly because it is doing the heavy lifting for you. It is really easy to do a two-way binding between RadioButton.IsChecked and ListBoxItem.IsSelected . With the proper control template for the ListBoxItem , you can easily get rid of all the selection visual. <ListBox ItemsSource="{Binding Properties}" SelectedItem="{Binding SelectedItem}"> <ListBox.ItemContainerStyle> <!-- Style to get rid of the selection visual --> <Style TargetType="{x:Type ListBoxItem}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type ListBoxItem}"> <ContentPresenter /> </ControlTemplate> </Setter.Value> </Setter> </Style> </ListBox.ItemContainerStyle> <ListBox.ItemTemplate> <DataTemplate DataType="{x:Type local:SomeClass}"> <RadioButton Content="{Binding Name}" GroupName="Properties"> <!-- Binding IsChecked to IsSelected requires no support code --> <RadioButton.IsChecked> <Binding Path="IsSelected" RelativeSource="{RelativeSource AncestorType=ListBoxItem}" Mode="TwoWay" /> </RadioButton.IsChecked> </RadioButton> </DataTemplate> </ListBox.ItemTemplate></ListBox> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5891924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232593/']} | jdg_70470 |
stackexchange | llm_judgeable_groundtruth_similarity | 412960 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
In Quantum theory, groups and representations, Peter Woit writes: A fundamental principle of modern mathematics is that the way to understand a space $M$ , given as some set of points, is to look at $F(M)$ , the set of functions on this space. I was wondering what some examples of this "fundamental principle" are across different fields in mathematics. Woit, Peter, Quantum theory, groups and representations. An introduction, Cham: Springer (ISBN 978-3-319-64610-7/hbk; 978-3-319-64612-1/ebook). xxii, 668 p. (2017). ZBL1454.81004.
Now provide the response and nothing else.
| The idea of studying the relationship between structured spaces and appropriate spaces of functions thereon could be described as one of the basic principles of functional analysis, perhaps even the defining one. Examples: completely regular spaces and continuous functions—general, bounded or of compact support (in the locally compact case); $\sigma$ -algebras and (bounded) measurable functions; measure spaces and $L^p$ -spaces (strictly speaking, equivalence classes of functions); smooth manifolds, including open subsets of euclidean space, and spaces of smooth functions, sometimes combined with growth conditions; and finally, but the list could go on, complex manifolds and holomorphic functions, again often combined with growth conditions. The next link in the chain is a consideration of the duals of these function spaces. Here there are two main streams: representation theorems—the cases where these duals have explicit descriptions, either as spaces of functions themselves or of measures (duality for $L^p$ -spaces, Riesz representation theorem); the cases where they are used to define new types of objects (Schwartzian distributions, the Bourbakian approach to measure theory). | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/412960', 'https://mathoverflow.net', 'https://mathoverflow.net/users/473920/']} | jdg_70471 |
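To make the "representation theorems" in the first stream concrete, here are two standard statements (added only as an illustration, not taken from the original answer). For a $\sigma$-finite measure $\mu$ and $1 \le p < \infty$, every continuous linear functional on $L^p(\mu)$ arises from integration against a function,
$$\varphi_g(f) = \int f\, g \,\mathrm{d}\mu, \qquad g \in L^q(\mu),\quad \tfrac1p + \tfrac1q = 1,$$
so that $(L^p(\mu))^* \cong L^q(\mu)$; and, by the Riesz–Markov theorem, the dual of $C_0(X)$ for a locally compact Hausdorff space $X$ is the space of regular (signed or complex) Borel measures on $X$ with the total-variation norm.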
stackexchange | llm_judgeable_groundtruth_similarity | 22698265 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to aggregate records in a MongoDB collection by hour and need to convert date stored as timestamp (milliseconds) to ISODate so that I can use aggregate framework's built-in date operators ($hour, $month, etc.) Records are stored as { "data" : { "UserId" : "abc", "ProjId" : "xyz"}, "time" : NumberLong("1395140780706"),"_id" : ObjectId("532828ac338ed9c33aa8eca7") } I am trying to use an aggregate query of following type: db.events.aggregate( { $match : { "time" : { $gte : 1395186209804, $lte : 1395192902825 } } }, { $project : { _id : "$_id", dt : {$concat : (Date("$time")).toString()} // need to project as ISODate } }, // process records further in $project or $group clause) which produces results of the form: { "result" : [ { "_id" : ObjectId("5328da21fd207d9c3567d3ec"), "dt" : "Fri Mar 21 2014 17:35:46 GMT-0400 (EDT)" }, { "_id" : ObjectId("5328da21fd207d9c3567d3ed"), "dt" : "Fri Mar 21 2014 17:35:46 GMT-0400 (EDT)" }, ... } I want to extract hour, day, month, and year from the date but since time is projected forward as string I am unable to use aggregate framework's built-in date operators ($hour, etc.). How can I convert time from milliseconds to ISO date to do sometime like the following: db.events.aggregate( { $match : { "time" : { $gte : 1395186209804, $lte : 1395192902825 } } }, { $project : { _id : "$_id", dt : <ISO date from "$time"> } }, { $project : { _id : "$_id", date : { hour : {$hour : "$dt"} } } })
Now provide the response and nothing else.
| Actually, it is possible, the trick is to add your milliseconds time to a zero-milliseconds Date() object using syntax similar to: dt : {$add: [new Date(0), "$time"]} I modified your aggregation from above to produce the result: db.events.aggregate( { $project : { _id : "$_id", dt : {$add: [new Date(0), "$time"]} } }, { $project : { _id : "$_id", date : { hour : {$hour : "$dt"} } } }); The result is (with one entry of your sample data): { "result": [ { "_id": ObjectId("532828ac338ed9c33aa8eca7"), "date": { "hour": 11 } } ], "ok": 1} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22698265', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3470161/']} | jdg_70472 |
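As a follow-up, once dt is a real date the built-in date operators become available in the next stage. A rough sketch using pymongo (the database and collection names are hypothetical, and the exact pipeline is an illustration rather than part of the original answer):
import datetime
from pymongo import MongoClient

events = MongoClient()['test']['events']          # hypothetical database/collection
epoch = datetime.datetime(1970, 1, 1)             # BSON date equivalent of new Date(0)
pipeline = [
    {'$project': {'dt': {'$add': [epoch, '$time']}}},
    {'$project': {'hour': {'$hour': '$dt'},
                  'day': {'$dayOfMonth': '$dt'},
                  'month': {'$month': '$dt'},
                  'year': {'$year': '$dt'}}},
]
for doc in events.aggregate(pipeline):
    print(doc)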
stackexchange | llm_judgeable_groundtruth_similarity | 12155068 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dataset. Each element of this set consists of numerical and categorical variables. The categorical variables are nominal and ordinal. There is some natural structure in this dataset. Commonly, experts cluster datasets such as mine using their 'expert knowledge', but I want to automate this process of clustering. Most algorithms for clustering use a distance (Euclidean, Mahalanobis and so on) between objects to group them into clusters. But it is hard to find a reasonable metric for mixed data types, i.e. we can't find a distance between 'glass' and 'steel'. So I came to the conclusion that I have to use conditional probabilities P(feature = 'something' | Class) and some utility function that depends on them. This is reasonable for categorical variables, and it works fine with numeric variables assuming they are distributed normally. So it became clear to me that algorithms like K-means will not produce good results. At this time I am trying to work with the COBWEB algorithm, which fully matches my idea of using conditional probabilities. But I faced another obstacle: the results of clustering are really hard to interpret, if not impossible. As a result I wanted to get something like a set of rules that describes each cluster (e.g. if feature1 = 'a' and feature2 in [30, 60], it is cluster1 ), like decision trees for classification. So, my question is: Is there any existing clustering algorithm that works with mixed data types and produces an understandable (and reasonable for humans) description of the clusters? Additional info: As I understand it, my task is in the field of conceptual clustering. I can't define a similarity function as was suggested (it is an ultimate goal of the whole project), because of the field of study - it is very complicated and merciless in terms of formalization. As far as I understand, the most reasonable approach is the one used in COBWEB, but I'm not sure how to adapt it so that I can get an understandable description of the clusters. Decision Tree As was suggested, I tried to train a decision tree on the clustering output, thus getting a description of the clusters as a set of rules. But unfortunately interpretation of these rules is almost as hard as with the raw clustering output. First of all, only the first few levels of rules from the root node make any sense: the closer to the leaf, the less sense we have. Secondly, these rules don't match any expert knowledge. So, I came to the conclusion that clustering is a black box, and it is not worth trying to interpret its results. Also I had an interesting idea to modify a 'decision tree for regression' algorithm in a certain way: instead of calculating an intra-group variance, calculate a category utility function and use it as a split criterion. As a result we should have a decision tree with leaves as clusters and a cluster description out of the box. But I haven't tried to do so, and I am not sure about the accuracy and everything else.
Now provide the response and nothing else.
| For most algorithms, you will need to define similarity . It doesn't need to be a proper distance function (e.g. satisfy triangle inequality). K-means is particularly bad, because it also needs to compute means . So it's better to stay away from it if you cannot compute means, or are using a different distance function than Euclidean. However, consider defining a distance function that captures your domain knowledge of similarity. It can be composed of other distance functions, say you use the harmonic mean of the Euclidean distance (maybe weighted with some scaling factor) and a categorial similarity function. Once you have a decent similarity function, a whole bunch of algorithms will become available to you. e.g. DBSCAN (Wikipedia) or OPTICS (Wikipedia) . ELKI may be of interest to you, they have a Tutorial on writing custom distance functions . Interpretation is a separate thing. Unfortunately , few clustering algorithms will give you a human-readable interpretation of what they found. They may give you things such as a representative (e.g. the mean of a cluster in k-means), but little more. But of course you could next train a decision tree on the clustering output and try to interpret the decision tree learned from the clustering . Because the one really nice feature about decision trees, is that they are somewhat human understandable. But just like a Support Vector Machine will not give you an explanation, most (if not all) clustering algorithms will not do that either, sorry, unless you do this kind of post-processing. Plus, it will actually work with any clustering algorithm, which is a nice property if you want to compare multiple algorithms. There was a related publication last year. It is a bit obscure and experimental (on a workshop at ECML-PKDD), and requires the data set to have a quite extensive ground truth in form of rankings. In the example, they used color similarity rankings and some labels. The key idea is to analyze the cluster and find the best explanation using the given ground truth(s) . They were trying to use it to e.g. say "this cluster found is largely based on this particular shade of green, so it is not very interesting, but the other cluster cannot be explained very well, you need to investigate it closer - maybe the algorithm discovered something new here". But it was very experimental (Workshops are for work-in-progress type of research). You might be able to use this, by just using your features as ground truth. It should then detect if a cluster can be easily explained by things such as "attribute5 is approx. 0.4 with low variance". But it will not forcibly create such an explanation! H.-P. Kriegel, E. Schubert, A. Zimek Evaluation of Multiple Clustering Solutions In 2nd MultiClust Workshop: Discovering, Summarizing and Using Multiple Clusterings Held in Conjunction with ECML PKDD 2011. http://dme.rwth-aachen.de/en/MultiClust2011 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12155068', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/942817/']} | jdg_70473 |
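To make the "compose your own distance" suggestion concrete, here is a minimal Python sketch of a Gower-style dissimilarity for mixed data (the attribute split, ranges, weights and toy records are assumptions for illustration, not part of the original answer):
import numpy as np

def mixed_distance(a, b, num_idx, cat_idx, num_ranges, w_num=0.5, w_cat=0.5):
    """Weighted mix of a range-normalized numeric distance and a categorical mismatch rate."""
    num_a = np.asarray([a[i] for i in num_idx], dtype=float)
    num_b = np.asarray([b[i] for i in num_idx], dtype=float)
    # range-normalized absolute differences, averaged over the numeric attributes
    d_num = np.mean(np.abs(num_a - num_b) / np.asarray(num_ranges, dtype=float))
    # fraction of categorical attributes that disagree
    d_cat = np.mean([a[i] != b[i] for i in cat_idx])
    return w_num * d_num + w_cat * d_cat

# toy records: (weight_kg, length_cm, material, colour)
x = (2.5, 30.0, 'glass', 'green')
y = (3.0, 45.0, 'steel', 'green')
print(mixed_distance(x, y, num_idx=[0, 1], cat_idx=[2, 3], num_ranges=[10.0, 100.0]))
A dissimilarity of this form can then be handed to density-based algorithms such as DBSCAN or OPTICS, which only need pairwise distances rather than means.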
stackexchange | llm_judgeable_groundtruth_similarity | 2293125 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to solve the following task: the sequence $\{a_n\}$ is given by the rule $a_1 = 1,\: a_{n+1} = \sin (a_n)$. Does the series $\sum a_n$ converge? Can you give me any hints on how to solve it? I got totally stuck at the very beginning.
Now provide the response and nothing else.
| The series diverges. To see this, first note that$$a_1 = 1\ge 1$$and that, if $a_n \ge 1/n$, then$$a_{n+1} = \sin(a_n) \ge \sin(1/n) > 1/(n+1)$$By induction, we have $a_n \ge 1/n$ for all $n$. Since $\sum\frac{1}{n}$ diverges, so does $\sum a_n$. Note that $(n+1)\sin(1/n) > 1$ can be shown by Taylor expansion. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2293125', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/381527/']} | jdg_70474 |
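For completeness, the Taylor-expansion step can be spelled out as follows. Since $0 < 1/n \le 1$, the alternating series for sine gives $\sin(1/n) > \frac1n - \frac{1}{6n^3}$, hence
$$(n+1)\sin\left(\tfrac1n\right) > (n+1)\left(\frac1n - \frac{1}{6n^3}\right) = 1 + \frac1n - \frac{1}{6n^2} - \frac{1}{6n^3} > 1,$$
because $\frac1n > \frac{1}{6n^2} + \frac{1}{6n^3}$ is equivalent to $6n^2 > n + 1$, which holds for every $n \ge 1$.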
stackexchange | llm_judgeable_groundtruth_similarity | 81775 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Sexy son hypothesis . It states that females are attracted to certain traits in males because these traits will also be attractive in their sons. But isn't this circular? Like "I'm attracted to that male because he is the son of a [formerly] attractive male". And you can repeat the process infinitely, going through granddad, great-granddad and so on. Doesn't this hypothesis presuppose some sexy traits that are sexy with no cause?
Now provide the response and nothing else.
| The process is self-reinforcing but the argument is not circular (no tautology implied). As soon as some male traits are considered more sexy than others, then there is selection for females to like those traits even more, which causes those traits to raise in frequency, which increases the selection for liking these traits. In other words, in this model, the state where there are no sexy traits in males is an unstable equilibrium. | {} | {'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/81775', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/36486/']} | jdg_70475 |
stackexchange | llm_judgeable_groundtruth_similarity | 11379614 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a UITableView that is being populated by an NSFetchedResultsController. On the initial load of the table I would like the Cells to be animated in but I would like to do a little bit more custom animation than [tableView insertRowsAtIndexPaths:withRowAnimation:]; Allows. Specifically I would like to slow down the animations and have them fly in from the side in a specific way. I have not been able to achieve this by UITableViewRowAnimation constants. Is there a way to use Core Animation to do exactly what I want or do I need to stick with UIKit animations in this specific instance? Thanks for any help! Joel
Now provide the response and nothing else.
| NEWEST SOLUTION (2017-12-12) Adding a Swift 4.0 version of the animate method. It should then be implemented in the same way as the solution below: func animate() { for cell in self.tableView.visibleCells { cell.frame = CGRect(x: self.tableView.frame.size.width, y: cell.frame.origin.y, width: cell.frame.size.width, height: cell.frame.size.height) UIView.animate(withDuration: 1.0) { cell.frame = CGRect(x: 0, y: cell.frame.origin.y, width: cell.frame.size.width, height: cell.frame.size.height) } }} NEWER SOLUTION (2015-09-05) Adding a Swift 2.0 version of the animate method. It should then be implemented in the same way as the solution below: func animate() { for cell in self.tableView.visibleCells { cell.frame = CGRectMake(320, cell.frame.origin.y, cell.frame.size.width, cell.frame.size.height) UIView.animateWithDuration(1.0) { cell.frame = CGRectMake(0, cell.frame.origin.y, cell.frame.size.width, cell.frame.size.height) } }} NEW SOLUTION (2014-09-28) I reworked the solution a bit to make the implementation more easier and to make it work with iOS8. All you need to do is to add this animate method in your TableViewController and call it whenever you want it to animate (for instance, in your reload method, but you could call it at any time): - (void)animate{ [[self.tableView visibleCells] enumerateObjectsUsingBlock:^(UITableViewCell *cell, NSUInteger idx, BOOL *stop) { [cell setFrame:CGRectMake(320, cell.frame.origin.y, cell.frame.size.width, cell.frame.size.height)]; [UIView animateWithDuration:1 animations:^{ [cell setFrame:CGRectMake(0, cell.frame.origin.y, cell.frame.size.width, cell.frame.size.height)]; }]; }];} Again, change the animation how you like. This particular code will animate the cells in from the right at a slow rate. OLD SOLUTION (2013-06-06) You can do this by implementing your own UITableView and overriding the insertRowsAtIndexPaths method. Here is an example of how that could look like where the cells will be pushed from the right, really slowly (1 second animation): - (void)insertRowsAtIndexPaths:(NSArray *)indexPaths withRowAnimation:(UITableViewRowAnimation)animation{ for (NSIndexPath *indexPath in indexPaths) { UITableViewCell *cell = [self cellForRowAtIndexPath:indexPath]; [cell setFrame:CGRectMake(320, cell.frame.origin.y, cell.frame.size.width, cell.frame.size.height)]; [UIView beginAnimations:NULL context:nil]; [UIView setAnimationDuration:1]; [cell setFrame:CGRectMake(0, cell.frame.origin.y, cell.frame.size.width, cell.frame.size.height)]; [UIView commitAnimations]; }} You can play around with the animations yourself. This method will not be called automatically by the table view so you have to override the reloadData method in your table view delegate and call this method yourself. COMMENT The reloadData method should look something like this: - (void)reloadData{ [super reloadData]; NSMutableArray *indexPaths = [[NSMutableArray alloc] init]; for (int i = 0; i < [_data count]; i++) [indexPaths addObject:[NSIndexPath indexPathForRow:i inSection:0]]; [self insertRowsAtIndexPaths:indexPaths withRowAnimation:0];} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11379614', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/886678/']} | jdg_70476 |
stackexchange | llm_judgeable_groundtruth_similarity | 1237051 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $N = 3^{2015}$, what is the sum of the last four digits of $N$? $(A)21$ $(B)22$ $(C)23$ $(D)24$ It is not possible using a calculator, so how can I do it? Hints are appreciated.
Now provide the response and nothing else.
| A variation, I think a bit simpler, of ADG's answer. We have$$\eqalign{ 3^{2015} &=3\times(10-1)^{1007}\cr &=\color{red}{3}\left(10^{1007}-\cdots-\binom{1007}410^4+{}\right.\cr &\qquad\qquad\left.{}+\color{red}{\binom{1007}310^3-\binom{1007}210^2+1007\times10-1}\right)\cr}$$and all except the red parts are irrelevant. For the remaining calculations, we only need the last digit of $\binom{1007}3$, so$$\binom{1007}3=\frac{1007\times1006\times1005}{3\times2\times1} =1007\times503\times335=\underbrace{\cdots\ \cdots\ \cdots}_{\rm irrelevant\ digits}\cdots5\ .$$Likewise$$\binom{1007}{2}=1007\times503=\cdots21\ .$$So, without using a calculator at any stage, the last four digits of the number are given by (hover to see the answer) $$3\times(5000-2100+70-1)=8907$$ and the total is $$24\ .$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1237051', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/149016/']} | jdg_70477 |
stackexchange | llm_judgeable_groundtruth_similarity | 43084483 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to TypeScript, and I tried to play around a little with TypeScript in this playground . I noticed that in TypeScript, a protected member in a base class can be overridden by a public member:
class Base {
    protected name: string = '!'
}
class Derived extends Base {
    public name: string = '?'
}
On the one hand, this makes sense to me since the Liskov Substitution Principle still holds: the base class has stricter requirements than the derived class. But on the other hand, I noticed that a private member cannot be overridden by a protected or public one, which seems inconsistent to me:
class Base {
    private name: string = '!'
}
class Derived extends Base {
    public name: string = '?' // ERROR!
}
Thus I wonder: Is my observation an intended behavior or a bug in TypeScript? If it's intended, why does this inconsistency exist? Why doesn't TypeScript require all overriding members to have the same accessibility as the members in the base class, or allow all derived members with higher accessibility to override members in the base class?
Now provide the response and nothing else.
| This is the intended behavior. You can make a protected field public because protected allows a derived class to read and write a field. The derived class can choose to use its ability to read and write the field to allow others to read and write the field. There's no point making you write something like this: class Foo { protected someField;}class Bar extends Foo { public get someFieldButPublic() { return this.someField; } public set someFieldButPublic(value) { this.someField = value; }} if all you wanted to do was make someField public. You can't make a private field protected or public because you don't have read or write access to that field. It's private ; if the base class wanted you to have access to the field, they would have made it protected , after all. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43084483', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4794308/']} | jdg_70478 |
stackexchange | llm_judgeable_groundtruth_similarity | 2804 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Using chmod I could set the permissions for a file, but if the parent ( .. ) directory had conflicting permissions, what would happen? And if I create a new file, using touch or something similar, how are the initial permissions calculated? Are permissions inherited from .. ? And why can't I do anything in a directory once I have removed the executable permission flag?
$ mkdir temp;
$ chmod -x temp;
$ touch temp/a;
$ ls temp;
touch: cannot touch `temp/a': Permission denied
Now provide the response and nothing else.
| There is -strictly speaking- no such thing in UNIX as "conflicting permissions": access permissions on an filesystem entry (directory, file, etc.) determine what you can or can not do on that object. Permissions on other filesystem entries do not enter into the picture, with the exception of the "x" bit on all ancestors directories in the path to a file (up to / ) -- see 3. The default permission on a newly created file are determined by the permissions that the creating program allows (the mode argument to the open or creat system calls) and the current process umask . Specifically, any bit that is set (1) in the "umask" is reset (0) in the newly-created file permissions: in C-like notation: file_permissions = open_mode & ! umask . Read man 2 creat (look for O_CREAT ) and man umask for the details. The "x" (executable) bit on a directory controls whether you can traverse that directory: traversing a directory means being able to cd into it and access files contained in it. Note that the ability to list the contents of the directory is controlled by the "r" bit. Further reading: http://www.hackinglinuxexposed.com/articles/20030417.html on file permissions http://www.hackinglinuxexposed.com/articles/20030424.html on directory permissions | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/2804', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/1327/']} | jdg_70479 |
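A small Python sketch illustrating the open-mode/umask rule from point 2 (the file name and the particular umask value are arbitrary choices for the demonstration):
import os, stat

old_mask = os.umask(0o022)   # any bit set in the umask is cleared in newly created files
fd = os.open('demo.txt', os.O_CREAT | os.O_WRONLY, 0o666)  # requested mode 666
os.close(fd)
mode = stat.S_IMODE(os.stat('demo.txt').st_mode)
print(oct(mode))             # 0o644, i.e. 0o666 & ~0o022 (assuming demo.txt did not already exist)
os.umask(old_mask)           # restore the previous umask
os.remove('demo.txt')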
stackexchange | llm_judgeable_groundtruth_similarity | 15024067 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Haskell, I've found three simple implementations of the Sieve of Eratosthenes on the Rosetta Code page . Now my question is, which one should be used in which situations? Correcting my initial reasoning would be helpful too: I'm assuming the List one is the most idiomatic and easy to read for a Haskeller. Is it correct, though? I'm wondering if it suffers from the same problems as another list-based sieve that I then learned was not actually implementing the algorithm: (edit: shown here is the list-based sieve I know has problems, not the one from RosettaCode, which I pasted at the bottom) primes = sieve [2..] where sieve (p:x) = p : sieve [ n | n <- x, n `mod` p > 0 ] In terms of performance, the immutable Array seems to be the winner. With an upper bound m of 2000000 , the times were about: 1.3s for List 0.6s for Array 1.8s for Mutable Array So I'd pick Array for performance. And of course, the Mutable Array one is also easy to reason about since I have a more imperative language background. I'm not sure why I would pick this one if I'm coding in Haskell, though, since it's both slower than the others and non-idiomatic. Code copied here for reference: List: primesTo m = 2 : eratos [3,5..m] whereeratos (p : xs) | p*p>m = p : xs | True = p : eratos (xs `minus` [p*p, p*p+2*p..])minus a@(x:xs) b@(y:ys) = case compare x y of LT -> x : minus xs b EQ -> minus xs ys GT -> minus a ysminus a b = a Immutable Array: import Data.Array.UnboxedprimesToA m = sieve 3 (array (3,m) [(i,odd i) | i<-[3..m]] :: UArray Int Bool) where sieve p a | p*p > m = 2 : [i | (i,True) <- assocs a] | a!p = sieve (p+2) $ a//[(i,False) | i <- [p*p, p*p+2*p..m]] | otherwise = sieve (p+2) a Mutable Array: import Control.Monad (forM_, when)import Control.Monad.STimport Data.Array.STimport Data.Array.UnboxedprimeSieve :: Integer -> UArray Integer BoolprimeSieve top = runSTUArray $ do a <- newArray (2,top) True -- :: ST s (STUArray s Integer Bool) let r = ceiling . sqrt $ fromInteger top forM_ [2..r] $ \i -> do ai <- readArray a i when ai $ do forM_ [i*i,i*i+i..top] $ \j -> do writeArray a j False return a-- Return primes from sieve as list: primesTo :: Integer -> [Integer]primesTo top = [p | (p,True) <- assocs $ primeSieve top] EDIT I showed Turner's Sieve at the top but that's not the list-based example I'm comparing with the other two. I just wanted to know if the list-based example suffers from the same "not the correct Sieve of Eratosthenes" problems as Turner's. It appears the performance comparison is unfair because the Mutable Array example goes through evens as well and uses Integer rather than Int ...
Now provide the response and nothing else.
| This primes = sieve [2..] where sieve (p:x) = p : sieve [ n | n <- x, n `mod` p > 0 ] is not a sieve. It's very inefficient trial division. Don't use that! I'm curious about how you got your times, there is no way that the Turner "sieve" could produce the primes not exceeding 2,000,000 in mere seconds. Letting it find the primes to 200,000 took MUT time 6.38s ( 6.39s elapsed)GC time 9.19s ( 9.20s elapsed)EXIT time 0.00s ( 0.00s elapsed)Total time 15.57s ( 15.59s elapsed) on my box (64-bit Linux, ghc-7.6.1, compiled with -O2). The complexity of that algorithm is O(N² / log² N) , almost quadratic. Letting it proceed to 2,000,000 would take about twenty minutes. Your times for the array versions are suspicious too, though in the other direction. Did you measure interpreted code? Sieving to 2,000,000, compiled with optimisations, the mutable array code took 0.35 seconds to run, and the immutable array code 0.12 seconds. Now, that still has the mutable array about three times slower than the immutable array. But, it's an unfair comparison. For the immutable array, you used Int s, and for the mutable array Integer s. Changing the mutable array code to use Int s - as it should, since under the hood, arrays are Int -indexed, so using Integer is an unnecessary performance sacrifice that buys nothing - made the mutable array code run in 0.15 seconds. Close to the mutable array code, but not quite there. However, you let the mutable array do more work, since in the immutable array code you only eliminate odd multiples of the odd primes, but in the mutable array code, you mark all multiples of all primes. Changing the mutable array code to treat 2 specially, and only eliminate odd multiples of odd primes brings that down to 0.12 seconds too. But, you're using range-checked array indexing, which is slow, and, since the validity of the indices is checked in the code itself, unnecessary. Changing that to using unsafeRead and unsafeWrite brings down the time for the immutable array to 0.09 seconds. Then you have the problem that using forM_ [x, y .. z] uses boxed Int s (fortunately, GHC eliminates the list). Writing a loop yourself, so that only unboxed Int# s are used, the time goes down to 0.02 seconds. {-# LANGUAGE MonoLocalBinds #-}import Control.Monad (forM_, when)import Control.Monad.STimport Data.Array.STimport Data.Array.Unboxedimport Data.Array.BaseprimeSieve :: Int -> UArray Int BoolprimeSieve top = runSTUArray $ do a <- newArray (0,top) True unsafeWrite a 0 False unsafeWrite a 1 False let r = ceiling . sqrt $ fromIntegral top mark step idx | top < idx = return () | otherwise = do unsafeWrite a idx False mark step (idx+step) sift p | r < p = return a | otherwise = do prim <- unsafeRead a p when prim $ mark (2*p) (p*p) sift (p+2) mark 2 4 sift 3-- Return primes from sieve as list:primesTo :: Int -> [Int]primesTo top = [p | (p,True) <- assocs $ primeSieve top]main :: IO ()main = print .last $ primesTo 2000000 So, wrapping up, for a Sieve of Eratosthenes, you should use an array - not surprising, its efficiency depends on being able to step from one multiple to the next in short constant time. You get very simple and straightforward code with immutable arrays, and that code performs decently for not too high limits (it gets relatively worse for higher limits, since the arrays are still copied and garbage-collected, but that's not too bad). When you need better performance, you need mutable arrays. 
Writing efficient mutable array code is not entirely trivial, one has to know how the compiler translates the various constructs to choose the right one, and some would consider such code unidiomatic. But you can also use a library (disclaimer: I wrote it) that provides a fairly efficient implementation rather than writing it yourself. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15024067', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1116674/']} | jdg_70480 |
stackexchange | llm_judgeable_groundtruth_similarity | 6757192 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing a Python program for fun but got stuck trying to import a function from a class in another file. Here is my code: #jurassic park mainframefrom random import randintfrom sys import exitfrom comm_system import Comm_system #the file i want to import fromclass Jpark_mainframe(object): def mainframe_home(self): print "=====Welcome to the Jurassic Park Mainframe=====" print "==========Security Administration===============" print "===========Communications Systems===============" print "===============System Settings==================" print "===================Quit=========================" prompt = raw_input("What would you like to do? ") while prompt != "Quit": if prompt == "Security Administration": print "Please enter the 5-digit passcode:" security_passcode = "%d%d%d%d%d" % (2, 0, 1, 2, randint(1, 2)) security_guess = raw_input(": ") security_guesses = 0 while security_guess != security_passcode and security_guesses < 7: print "Incorrect. Please enter the security passcode." security_guesses += 1 security_guess = raw_input(": ") if security_guess == security_passcode: print "=========Security Administration=======" print "Area 1 Fences: Off" print "Area 2 Fences: On" print "Area 3 Fences: Off" print "Velociraptor Compound: Off" print "Lobby Security System: Off" print "Entrance Facility System: Off" print "To enable all systems, enter 'On'" enable_security = raw_input(": ") if enable_security == "On": print "Systems Online." if prompt == "System Settings": print "You do not have access to system settings." exit(0) if prompt == "Communications Systems": print "===========Communications Systems===========" print "error: 'comm_link' missing in directories" exit(0) return Comm_system.run #this is where I want to return the #the other filethe_game = jpark_mainframe()the_game.mainframe_home() I want to return a function called run() from a class in another file. When I import the file, it first runs the class with run() in it, then proceeds to run the original code. Why does this happen? Here is the code from comm_system: #communication systemsfrom sys import exitclass Comm_system(object):def run(self): comm_directory = ["net_link", "tsfa_run", "j_link"] print "When the system rebooted, some files necessary for" print "communicating with the mainland got lost in the directory." print "The files were poorly labeled as a result of sloppy" print "programming on the staff's part. You must locate the" print "the file and contact the rescue team before the dinosaurs" print "surround the visitor's center. You were also notified the" print "generators were shorting out, and the mainframe will lose" print "power at any moment. Which directory will you search in?" print "you don't have much time! Option 1: cd /comm_sys/file" print "Option 2: cd /comm_sys/dis" print "Option 3: cd /comm_sys/comm" dir_choice = raw_input("jpark_edwin$ ") if dir_choice == "/comm_sys/file" or dir_choice == "/comm_sys/dis": print "misc.txt" print "You couldn't locate the file!" print "The system lost power and your computer shut down on you!" print "You will not be able to reach the mainland until the system" print "comes back online, and it will be too late by then." return 'death' if dir_choice == "/comm_sys/comm": comm_directory.append("comm_link") print comm_directory print "You found the right file and activated it!" print "Just in time too, because the computers shut down on you." print "The phonelines are radios are still online." 
print "You and the other survivors quickly call the mainlane" print "and help is on the way. You all run to the roof and wait" print "until the helocopter picks you up. You win!"a_game = Comm_system()a_game.run()
Now provide the response and nothing else.
| from otherfile import TheClass

theclass = TheClass()

# if you want to return the output of run
return theclass.run()

# if you want to return run itself to be used later
return theclass.run
Change the end of comm_system to:
if __name__ == '__main__':
    a_game = Comm_system()
    a_game.run()
It's those lines always being run that are causing it to run when imported as well as when executed. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6757192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/484955/']} | jdg_70481 |
stackexchange | llm_judgeable_groundtruth_similarity | 12046 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way that once a complex classification tree is constructed using rpart (in R), to organize the decision rules produced for each class? So instead of getting one huge tree, we get a set of rules for each of the classes? (if so, how?) Here is a simple code example to show examples on: fit <- rpart(Kyphosis ~ Age + Number + Start, data=kyphosis) Thanks.
Now provide the response and nothing else.
| Such a functionality (or a close one) seems to be available in the rattle package, as described in RJournal 1/2 2009 (p. 50), although I only checked it from the command-line. For your example, it yields the following output: Rule number: 3 [Kyphosis=present cover=19 (23%) prob=0.58] Start< 8.5 Rule number: 23 [Kyphosis=present cover=7 (9%) prob=0.57] Start>=8.5 Start< 14.5 Age>=55 Age< 111 Rule number: 22 [Kyphosis=absent cover=14 (17%) prob=0.14] Start>=8.5 Start< 14.5 Age>=55 Age>=111 Rule number: 10 [Kyphosis=absent cover=12 (15%) prob=0.00] Start>=8.5 Start< 14.5 Age< 55 Rule number: 4 [Kyphosis=absent cover=29 (36%) prob=0.00] Start>=8.5 Start>=14.5 To get this output, I source the rattle/R/rpart.R source file (from the source package) in my workspace, after having removed the two calls to Rtxt() in the asRules.rpart() function (you can also replace it with print ). Then, I just type > asRules(fit) | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/12046', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/253/']} | jdg_70482 |
stackexchange | llm_judgeable_groundtruth_similarity | 33253858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How could I do the following with Java Streams? Let's say I have the following classes: class Foo { Bar b;}class Bar { String id; String date;} I have a List<Foo> and I want to convert it to a Map <Foo.b.id, Map<Foo.b.date, Foo> . I.e: group first by the Foo.b.id and then by Foo.b.date . I'm struggling with the following 2-step approach, but the second one doesn't even compile: Map<String, List<Foo>> groupById = myList .stream() .collect( Collectors.groupingBy( foo -> foo.getBar().getId() ) );Map<String, Map<String, Foo>> output = groupById.entrySet() .stream() .map( entry -> entry.getKey(), entry -> entry.getValue() .stream() .collect( Collectors.groupingBy( bar -> bar.getDate() ) ) ); Thanks in advance.
Now provide the response and nothing else.
| You can group your data in one go assuming there are only distinct Foo : Map<String, Map<String, Foo>> map = list.stream() .collect(Collectors.groupingBy(f -> f.b.id, Collectors.toMap(f -> f.b.date, Function.identity()))); Saving some characters by using static imports: Map<String, Map<String, Foo>> map = list.stream() .collect(groupingBy(f -> f.b.id, toMap(f -> f.b.date, identity()))); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33253858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1665809/']} | jdg_70483 |
stackexchange | llm_judgeable_groundtruth_similarity | 2910469 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am making a plugin to sum up the area of all the material in a Sketch. I have succeeded in getting all the faces and such, but now the Components come into the picture. I'm using the term single- or multi-leveled component as I don't know any better way to explain the occurrence of having a component inside a component and so on. I have noticed that some components also have more to them than just 1 level. So if you go inside one component, there may be components embedded inside this component that also have materials. So what I want is to sum up all of the material of a specific component and get all the "recursive" materials, if any, inside the component. So, how do I count the area of all the material inside a component (single- or multi-leveled)?
Now provide the response and nothing else.
| public static List<string> GetImportedFileList(){ List<string> ImportedFiles = new List<string>(); using (SQLiteConnection connect = new SQLiteConnection(@"Data Source=C:\Documents and Settings\js91162\Desktop\CMMData.db3")){ connect.Open(); using (SQLiteCommand fmd = connect.CreateCommand()){ fmd.CommandText = @"SELECT DISTINCT FileName FROM Import"; fmd.CommandType = CommandType.Text; SQLiteDataReader r = fmd.ExecuteReader(); while (r.Read()){ ImportedFiles.Add(Convert.ToString(r["FileName"])); } } } return ImportedFiles;} Things i've amended in your code: Put ImportedFiles in scope of the entire method. No need to call connect.Close(); since the connection object is wrapped in a using block. Use Convert.ToString rather then (String) as the former will handle all datatype conversions to string. I came across this Here Edit: You were creating a new command object sqlComm instead of using fmd that was created by the connection object. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2910469', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/206715/']} | jdg_70484 |
stackexchange | llm_judgeable_groundtruth_similarity | 16485 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm building a model in which several of my covariates live on a "circle", in the sense that they take values in the interval [0,1), and 0=1. I'm wondering about techniques for dealing with this situation. One idea is to represent a circular variable theta as a pair of variables ( sin(theta), cos(theta) ). Any thoughts on this approach or better approaches? I'm specifically using the mgcv package GAMs. Is there a way to tell the model that certain additive pieces should have the same values at the endpoints? Another package? Thanks!
Now provide the response and nothing else.
| There are two ways of dealing with circular variables, one hacky method would be to manually duplicate your data set on either side of the boundary conditions but the more elegant solution I think would be to use the built-in spline basis functions with periodic boundary conditions ! For example: bs="cc" specifies a cyclic cubic regression splines (see cyclic.cubic.spline ). i.e. a penalized cubic regression splines whose ends match, up to second derivative. Splines on the sphere bs="sos" . These are two dimensional splines on a sphere. Arguments are latitude and longitude, and they are the analogue of thin plate splines for the sphere. Useful for data sampled over a large portion of the globe, when isotropy is appropriate. See Spherical.Spline for details. bs="cp" gives a cyclic version of a P-spline | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/16485', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/6640/']} | jdg_70485 |
stackexchange | llm_judgeable_groundtruth_similarity | 13239331 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to hide a complete div container except one div. So, on startup just show div id "box_5" and hide the rest.When I click button 1 show everything and when I click button 2 hide everything again. The problem is when I hide the "wrapper" div it is hiding everything including id=box_5. I think the problem is the div is within the wrapper div but I don't know a work-around? <button id="button_1">show</botton><button id="button_2">hide</botton><div id="wrapper"><div id="box_1"></div><div id="box_2"></div><div id="box_3"></div><div id="box_4"></div><div id="box_5">always show this</div><div id="box_6"></div><div id="box_7"></div><div id="box_8"></div><div id="box_9"></div><div id="box_10"></div></div>$(document).ready(function() { $('#wrapper').not(":eq(#box_5)").hide(); $('id="button_1"').click(function() { $('#wrapper').show(); $('id="button_2"').click(function() { $('#wrapper').not(":eq(#box_5)").hide(); }); });
Now provide the response and nothing else.
| Change $('#wrapper').not(":eq(#box_5)").hide(); to $('#wrapper').not("#box_5").hide(); Note: Removed the eq selector. eq selector works on the index and in your case you don't need eq selector as you know the ID of the div. Also please change your handler functions like below, $('#button_1').click(function() { $('#wrapper').show();});$('#button_2').click(function() { $('#wrapper').not("#box_5").hide();}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13239331', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1791929/']} | jdg_70486 |
stackexchange | llm_judgeable_groundtruth_similarity | 327471 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can a production line for a 16-bit computer save significantly on unit cost by sharing support chips and tooling with another 16-bit computer, to a greater extent than if it went 32-bit? To be concrete: around the time of Windows 3, some of the most popular CPUs were the 286 (16 bit), 386 (32 bit) and 386SX (like the 386 but with a 16-bit data bus). Suppose a company is manufacturing 286 computers, and is looking at introducing a 386 model, but considering going with the 386SX to save money. Clearly the CPU itself will be cheaper, as will having 16 instead of 32 data bus traces on the motherboard. But will a 386SX computer be able to share support chips, tooling or suchlike with the 286 production line, that the full 386 machine would not have?
Now provide the response and nothing else.
| I'm going to skip the more general question as it would require a book, or more, to address. I'll focus on the x86 line, instead. I recall some of the context, making writing somewhat easier (if perhaps also subject to imperfect recollections.) The 80286 was quite quirky and released based upon a marketing strategy choice rather than a technical one. IBM had introduced the PC circa mid-year 1981 -- a full two years after the introduction of the 8088, upon which it was based. The 80386 was fairly clear in mind by this time (based firmly upon actually attempting to implement hardware consistent with the ideas of Multics .) But that project was still a long ways out, too, and by this time Intel's marketing department had concluded that a new processor was required. The 80286 was pushed out with the ability to enter protected mode by software commands, but without the ability to return to real mode without a processor reset. As a result, the keyboard was tasked with the job of generating the external reset signal. Switching back and forth between modes was the ONLY way of accessing memory beyond the 20-bit address limitations of the older 8088 (though the 80286 supported a 24-bit address bus.) And the process was quite slow (many milliseconds per reset switching.) The IBM PC/AT entered the market with a clock rate of 6 MHz. And like the 8088-based IBM PC, itself based on 4.77 MHz (just under the 5 MHz max of the 8088), the bus rate was tied to the CPU clock rate. The 80286 could achieve faster speeds, though. I immediately changed the crystal from 6 MHz to 8.5 MHz in my own machine (the PC/AT costing over $5k at the time.) But this also sped up the bus, too. And a lot of add-in boards would fail. (Intel, around this time, made more money selling memory than CPUs. The cross-over took place later in 1985, when CPUs was to firmly establish itself as their primary profit center.) The first truly IBM PC compatible computer (there were quite a few, but almost all of them failed in some way or another to run some of the important software for the IBM PC) that ran almost 100% of everything the IBM PC would run (excepting the built-in BASIC interpreter) was the Kaypro 80286i. The motherboard sported socketed MSI and SSI logic gates (a huge number of them.) But it worked. Soon after, other manufacturers became "compatible enough" to compete well and a serious price war started. (The clone marketplace was soon very crowded.) Because of the ease of designing at these speeds and the relatively lower cost of test and measurement equipment needed, many companies could quickly field a motherboard of their own. And they started competing on speed. Soon, we saw 8 MHz, 10 MHz, 12 MHz, and even 16 MHz (about which time the 80386 was soon due.) But in order to achieve speeds in excess of 10MHz, the bus had to be decoupled from the CPU speed. There was no possible way for the I/O boards to keep up with the rapidly increasing speeds of the CPUs that Intel was now pumping out. Here enters companies like Chips and Technologies (aka C&T.) They made ASICs that would greatly simplify the design and manufacturer and testing of motherboards and these ASICs allowed the bus rate to be decoupled from the CPU rate, for the first time. (It's possible that someone attempted this with DIP ICs, but I don't recall it happening.) This allowed manufacturers to compete on speed without impairing the ability to support older (or current) boards that were also commonly used (EGA, VGA, printer, etc.) 
[Also, several BIOS-writing companies also entered the market (and had been) which lowered the barriers for a motherboard manufacturer to field competitive boards.] Intel was developing several new strategies, leading into the release of the 80386DX in 1985, though. (The 80386SX would be introduced 3 years later.) One of them was to include a wider data bus. Another was to implement the Multics support hardware, cleanly, and to fix the problem of moving into and out of protected mode. The timing was also considered important (3 year release cycles.) With the IBM PC, there was a socket for the 8087. But this was usually ordered directly through IBM. And most folks didn't feel the need for it (it wasn't fast, anyway, and it was an expensive option.) But with the introduction of the 80286, Intel continued to encouraged these new-minted alternative (non-IBM) motherboard manufacturers to similarly include 80287 sockets on their boards. And customers were just beginning to learn (through the usual magazine articles) that an 80287 was often worth adding. The 80386SX was released three years after the 80386DX (which had been called the 80386 without the DX, earlier.) This was arguably to allow motherboard designers to use a cheap chip and a narrower data bus (and the 24-bit address bus of the 80286.) While some argue that this including being able to use older C&T support ASICs, it's my recollection that both timing and the requirement to support the 80387DX argues strongly against the idea. By the time the 80386SX came out these C&T chips were already about five years old and they were not entirely compatible with the new 80386SX/80387SX pairing, anyway. [Around this time C&T was also starting to test the waters of competing with Intel directly (Super386 38600SX.)] By the way, Intel would take all this to the next level with the 80486 family. They would seriously engage development of their own chipsets and compete directly with C&T, themselves. And they would field not only an 80486DX (sported to include floating point), but also an 80486SX and an 80487SX soon afterwards. These were all the exact same chip, just repackaged and rebonded versions. (The 80487SX was just a repackaged 80486DX that literally lifted the 80486SX off of the bus and effectively disabled it. The 80486SX was also a rebonded 80486DX die that may have had a defective FP unit that would be disabled) Intel WANTED to sell chips to the public, directly, now! (Major profits.) They'd made some money selling 80287 and 80387 floating point ICs, earlier. And they saw an opportunity ahead of more seriously getting into the business of selling ICs retail. (Before, they sold to engineering companies that bought in huge quantities and expected narrow margin pricing all too often.) So the 80386SX was more about encouraging board manufacturers to include a socket, once again, but a socket that Intel would sell directly into through stores like K-Mart. Intel would get into the business of selling ICs retail!!! The PCI bus would still later be sold as a "green bus." But it's real purpose (reflection wave rather than incident wave design) was to solve yet another serious problem. Intel had addressed themselves to the C&T upstarts, with the 80486 family, and practically took over the ASIC support chip business. But now they found that the "mom and pop," clone-wars motherboard competition was killing them in another way -- boards were way too cheap. This meant that Intel had to sell their chips cheaply, too. Not so much profit in that. 
They needed to kill the "mom and pop" businesses and greatly reduce the competition in motherboard manufacturing. The PCI bus, when introduced, would mean that a single piece of test and measurement equipment would cost in excess of $100k apiece! Design rules were complex (serpentine clock, with 2ns skew relative to data, etc) and testing equipment expensive. This, and some other steps, helped to price the small companies out of the market and achieved Intel's goal there, as well. (I did chip-set testing for Intel circa the introduction of the Pentium, Pentium-Pro, and Pentium II.) To answer your question about the 80386SX... Earlier chipsets that had been used for the "clone wars" 80286 PCs could NOT be used with the 80386SX in 1988 when it was introduced (many years after the C&T chipsets for the 80286 existed.) This was primarily because of the seriously different timing issues that separated these two families. Instead, ACC, C&T, G2 (whom I worked for, as well, for a short time), Western Digital/Faraday, and of course Intel, fielded chipsets to directly support the 80386SX. (Not to mention the need to support the 80387SX.) I can't recall if anyone even attempted to use an older C&T 80286 chipset with the 80386SX. But I seriously doubt it was even attempted, that late in the game and with other chipsets readily available those 5 years later on. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/327471', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/147125/']} | jdg_70487 |
stackexchange | llm_judgeable_groundtruth_similarity | 31005743 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to monitor the status of a puma server? Most specifically, how many busy workers it has and how many threads each worker has provisioned. Best would be something similar to Apache's mod_status.
Now provide the response and nothing else.
| Maybe $ touch /path/to/your/stats.txt$ puma -S /path/to/your/stats.txt in another terminal $ watch cat /path/to/your/stats.txt Results in: ---pid: 14364config: !ruby/object:Puma::Configuration cli_options: conf: options: :min_threads: 0 :max_threads: 16 :quiet: false :debug: false :binds: - tcp://0.0.0.0:9292 :workers: 0 :daemon: false :mode: :http :before_fork: [] :worker_timeout: 60 :worker_boot_timeout: 60 :worker_shutdown_timeout: 30 :state: /path/to/your/stats.txt :config_file: :control_auth_token: 21c9241912a8e0e7ddaabac667ff5682 :tag: solar :environment: development But that looks static :( Starting your app via puma like this: $ puma -S /path/to/your/stats.txt -C config/puma.rb --control tcp://0.0.0.0:9191 --control-token foo config.ru allows you later to query per your webbrowser: $ http://localhost:9191/stats?token=fooor$ pumactl -C tcp://0.0.0.0:9191 -T foo stats Resulting in: { "workers": 3, "phase": 0, "booted_workers": 3 } Which looks more dynamic... Edit_0: As workers are processes, one could also leverage tools provided by the operating system to gather information, e.g. on Linux: Starting a puma instance: puma -S p/stats.txt -C config/puma.rb --control tcp://0.0.0.0:9191 --control-token foo config.ru [1908] Puma starting in cluster mode...[1908] * Version 2.15.3 (ruby 2.1.5-p273), codename: Autumn Arbor Airbrush[1908] * Min threads: 8, max threads: 32[1908] * Environment: development[1908] * Process workers: 3[1908] * Preloading application[1908] * Listening on tcp://0.0.0.0:9292[1908] Use Ctrl-C to stop[1908] * Starting control server on tcp://0.0.0.0:9191[1908] - Worker 0 (pid: 1921) booted, phase: 0[1908] - Worker 1 (pid: 1929) booted, phase: 0[1908] - Worker 2 (pid: 1937) booted, phase: 0 Using top to investigate workers: $ top -p1921 -p1929 -p1937 -n 1 gives us Tasks: 3 total, 0 running, 3 sleeping, 0 stopped, 0 zombie%Cpu(s): 8.4 us, 1.5 sy, 0.0 ni, 88.9 id, 1.2 wa, 0.0 hi, 0.0 si, 0.0 stKiB Mem: 4058820 total, 1626576 used, 2432244 free, 56456 buffersKiB Swap: 9609212 total, 0 used, 9609212 free. 424948 cached Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1921 benjamin 20 0 1171600 85484 3116 S 0.0 2.1 0:00.38 puma 1929 benjamin 20 0 1171600 85264 2884 S 0.0 2.1 0:00.37 puma 1937 benjamin 20 0 1171600 85264 2884 S 0.0 2.1 0:00.42 puma from which one can parse out relevant information like CPU or RAM utilization. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31005743', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1802462/']} | jdg_70488 |
stackexchange | llm_judgeable_groundtruth_similarity | 45275 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How exactly was the Oh-My-God particle (ultra-high energy cosmic ray) observed and its energy measured?
Now provide the response and nothing else.
| The OMG particle was observed by the Fly's Eye experiment located on Dugway Proving Ground in Utah. The Fly's Eye experiment was the first experiment to successfully employ the air fluorescence technique developed by Dr. Alan Bunner (Cosmic Ray Detection by Atmospheric Fluorescence, Ph. D. Thesis, Cornell University, 1967). The air fluorescence technique observes light emitted isotropically by nitrogen molecules in the atmosphere. The nitrogen molecules are excited by the passage of charged particles in the extensive air shower initiated when the cosmic ray particle collides with a nucleus in the upper atmosphere. This light is not Cherenkov radiation. Using simple arguments it can be shown that the total amount of light emitted at the peak of the extensive air shower is proportional to the energy of the primary cosmic ray particle. Measurement of this light is used to provide calorimetric determination of the energy of the primary particle. This is one of the strengths of the air fluorescence technique. Unlike ground array measurements, the air fluorescence measurement does not depend on detailed physics models or simulations. This is one reason that the Pierre Auger experiment and its counterpart in the northern hemisphere, the Telescope Array Project, both depend upon air fluorescence telescopes to calibrate and cross check the measurements of their ground array detectors. In addition, air fluorescence observes the extensive air shower development in the atmosphere. The ground based detectors can only sample the particle shower at a few discrete points on the surface of the earth. The primary drawback of the air fluorescence technique is that it can only be used on moonless nights with good atmospheric conditions while ground arrays operate 24 hours and 7 days a week. I should mention that I am an author on the OMG particle paper. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/45275', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/12533/']} | jdg_70489 |
stackexchange | llm_judgeable_groundtruth_similarity | 661342 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anybody help me with this equation? Solve in $\mathbb{N}$: $$3x^2 - 7y^2 + 1 = 0$$ One solution is the pair $(3,2)$, and I think this is the only pair of positive integers that can be a solution. Any idea?
Now provide the response and nothing else.
| There are infinitely many solutions in positive integers. $7y^2-3x^2=1$ is an example of a "Pell equation", and there are standard methods for finding solutions to Pell equations. For example, the fact that $(x,y)=(3,2)$ is a solution to $7y^2-3x^2=1$ is equivalent to noting that $(2\sqrt7+3\sqrt3)(2\sqrt7-3\sqrt3)=1$. The fundamental unit in $\Bbb Q(\sqrt{21})$ is $55+12\sqrt{21}$; in particular, $(55+12\sqrt{21})(55-12\sqrt{21})=1$. Consequently, if we calculate $(2\sqrt7+3\sqrt3)(55+12\sqrt{21}) = 218 \sqrt{7}+333 \sqrt{3}$, it follows that $(218 \sqrt{7}+333 \sqrt{3})(218 \sqrt{7}-333 \sqrt{3})=1$, or $7\cdot218^2 - 3\cdot333^2=1$. You can get infinitely many solutions $(x_n,y_n)$ to $7y^2 - 3x^2=1$ by expanding $(2\sqrt7+3\sqrt3)(55+12\sqrt{21})^n = y_n \sqrt{7}+x_n \sqrt{3}$. Your solution corresponds to $n=0$, while the previous paragraph is $n=1$; for example, $n=2$ yields $x_2=36627$ and $y_2=23978$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/661342', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/125709/']} | jdg_70490 |
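A minimal numeric sketch of the expansion described in the answer above, assuming Python for illustration; the helper name pell_solutions and the explicit recurrence x' = 55x + 84y, y' = 36x + 55y (obtained by multiplying out (y*sqrt(7) + x*sqrt(3))(55 + 12*sqrt(21))) are not from the original thread:

    def pell_solutions(count):
        # Yield (x_n, y_n) with 7*y**2 - 3*x**2 == 1, starting from the n = 0 solution (3, 2).
        x, y = 3, 2
        for _ in range(count):
            yield x, y
            # Multiply y*sqrt(7) + x*sqrt(3) by the fundamental unit 55 + 12*sqrt(21).
            x, y = 55 * x + 84 * y, 36 * x + 55 * y

    for x, y in pell_solutions(3):
        assert 7 * y * y - 3 * x * x == 1
        print(x, y)   # (3, 2), (333, 218), (36627, 23978)

The printed pairs match the n = 0, 1, 2 solutions quoted in the answer.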
stackexchange | llm_judgeable_groundtruth_similarity | 3311214 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I got a bit confused when trying to rewrite the following statement using $\forall$ and $\exists$ quantifiers: An integer is even iff it equals double some other integer. My attempt: $\exists x\in Z(\forall y \in Z(x=2y \iff even(x)))$ "An integer" sounds to me like the statement should be universally quantified: $\forall x\in Z(\exists y \in Z(x=2y \iff even(x)))$ which doesn't make sense, as not all integers are even. Maybe $\forall x\in Z(\forall y \in Z(x=2y \iff even(x)))$ is correct then? For all combinations of two integers $x$ and $y$, $x = 2y$ iff $x$ is even.
Now provide the response and nothing else.
| A correct formalization of the phrase "an integer is even iff it equals double some other integer" is the following: \begin{align}\forall x \in \mathbb{Z} \, (\exists y \in \mathbb{Z} \, (x = 2y) \iff \textrm{even}(x))\end{align} | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3311214', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/677426/']} | jdg_70491 |
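A small finite sanity check of the two candidate formalizations, sketched in Python over a bounded stand-in for Z; the range and the helper even are assumptions for illustration only, not part of the original answer:

    def even(x):
        return x % 2 == 0

    Z = range(-50, 51)   # finite stand-in for the integers

    # forall x ( (exists y (x = 2y)) <-> even(x) )  -- the formalization given in the answer
    print(all(any(x == 2 * y for y in Z) == even(x) for x in Z))    # True

    # forall x forall y ( (x = 2y) <-> even(x) )    -- the question's last guess
    print(all((x == 2 * y) == even(x) for x in Z for y in Z))       # False, e.g. x = 2, y = 3

The check illustrates why the existential quantifier must sit inside the biconditional rather than outside it.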
stackexchange | llm_judgeable_groundtruth_similarity | 2812622 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to write a better bookmark manager in Chrome extensions. The problem is that there are no simple examples (that I can find) about how to actually use the bookmarks API . I've looked at the example source (when I d/led and installed it on my computer it didn't do anything except provide a search box. Typing/typing and pressing return failed to do anything) and can't find anything useful. My ultimate goal is to make an extension that allows me to save pages to come and read later without having to go sign up for an account on some service somewhere. So I plan to create either one or two bookmark folders in the root folder/other bookmarks - at minimum an "unread pages" folder. In that folder I'll create the unread bookmarks. When the user marks the item as read, it will be removed from that folder. So that's what I'm trying to do... any help will be greatly appreciated, even if it's just pointing me to some good examples. UPDATE: ...<script>function display(tree){ document.getElementById("Output").innerHTML = tree;}function start(){ chrome.bookmarks.getTree(display);}</script></head><body><h4 id="Output"></h4><script> start();</script>... That displays [object Object] , that suggests (at least to me with a limited JavaScript experience) that an object exists. But how to access the members of that object? Changing tree to tree.id or any other of what look to be parameters displays undefined .
Now provide the response and nothing else.
| Currently, there is no good way to find folders such as "Other Bookmarks" or "Bookmarks Bar" in the bookmarks API. You would have to iterate through all the bookmarks and find which node has those root folders and save its bookmark id. The bug is filed Issue 21330 . The root id is always 0, and when I mean 0, it corresponds to "Bookmarks bar" and "Other bookmarks". As any tree structure, each node has children. If you want to fetch all the bookmarks under one folder, you can use getChildren API and get every node recursively (you can do it iteratively too). For example, the following will get every single bookmark: printBookmarks('0');function printBookmarks(id) { chrome.bookmarks.getChildren(id, function(children) { children.forEach(function(bookmark) { console.debug(bookmark.title); printBookmarks(bookmark.id); }); });} Now, why do we have to call the API for every iteration? Their is an API to get the whole Tree. If you tried it out, you will see that every node in the getTree will have a list of children. That is perfect: chrome.bookmarks.getTree(function(bookmarks) { printBookmarks(bookmarks);});function printBookmarks(bookmarks) { bookmarks.forEach(function(bookmark) { console.debug(bookmark.id + ' - ' + bookmark.title + ' - ' + bookmark.url); if (bookmark.children) printBookmark(bookmark.children); });} That is all, you can do all this iteratively as well which is better performance, but you can figure that out :) Note that since you want to redo the bookmarks bar, you can override that page in extensions (soon): http://code.google.com/chrome/extensions/override.html If you want to show a nice HTML tree of your bookmarks, you can easily do that by extending the getTree functionality I showed above to accept a parent DOM. You can do something like this . Edit the code to make use of the getTree or collapse everything and make use of the getChildren and fetch more bookmarks if they request it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2812622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/344286/']} | jdg_70492 |
stackexchange | llm_judgeable_groundtruth_similarity | 2059824 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a heavily populated arraylist, which I want to clear and reuse. If I clear it will it free up that previously used memory? I should also mention that the arraylist is a private read only field of a class that still has lots of active work to do after I use the arraylist first time round. So I can't wait for garbage collection after class goes out of scope. Is the Clear method fast enough? Or should I destroy and create a new arraylist? Question update: If I have field declared like this (thanks to Jon's advice) /// <summary> /// Collection of tasks. /// </summary> private List<Task> tasks = new List<Task>(); then I populate it.... (heavily) Now if instead of clearing it and trimming, can I just call: tasks = new List<Task>(); Would this be recommended?
Now provide the response and nothing else.
| Do whichever expresses your intention better. Do you actually want a new list? If so, create a new one. If you conceptually want to reuse the same list, call Clear . The documentation for ArrayList does state that Clear retains the original capacity - so you'll still have a large array, but it'll be full of nulls instead of reference to the previous elements: Capacity remains unchanged. To reset the capacity of the ArrayList , call TrimToSize or set the Capacity property directly. Trimming an empty ArrayList sets the capacity of the ArrayList to the default capacity. Any reason you're using ArrayList rather than List<T> by the way? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2059824', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41543/']} | jdg_70493 |
stackexchange | llm_judgeable_groundtruth_similarity | 23607 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list which is something like this: {3,4,5,6,7,10,11,12,15,16,17,19,20,21,22,23,24,42,43,44,45,46} What I'd like to do is get the intervals which are in a "continuous" sequence, something like: {{3,7},{10,12},{15,17},{19,24},{42,46}} and get the extremes. Note that the original data (of which this is a small excerpt) shows no sign of regularity or repetition. Numbers start from 1 and get up to 200 (these numbers come from applying Position[] to an array). Any pointers/ideas?
Now provide the response and nothing else.
| You can use Split in this simple case list = {3, 4, 5, 6, 7, 10, 11, 12, 15, 16, 17, 19, 20, 21, 22, 23, 24, 42, 43, 44, 45, 46};{Min[#], Max[#]} & /@ Split[list, #2 - #1 == 1 &] What it does is that the last argument to split gives True only when neighboring elements have a difference of 1. If not, the list is split there. Then you can use the Min / Max approach to find the ends. First and Last will work too. Update: Since the attention to this question/answer is rather surprising, let me point out one important thing: It is the crucial difference between Split and SplitBy . Both functions take a second argument to supply a testing function to specify the point to split but the behavior is completely different. Btw, the same is true for Gather and GatherBy . While the second argument to Split makes that it treats pairs of adjacent elements as identical whenever applying the function test to them yields True, SplitBy does a completely different thing. It splits list a into sublists consisting of runs of successive elements that give the same value when f is applied. If you weren't aware of this, a closer look is surely advisable. | {} | {'log_upvote_score': 7, 'links': ['https://mathematica.stackexchange.com/questions/23607', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/5490/']} | jdg_70494 |
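For readers outside Mathematica, the same idea (split on consecutive runs, then keep each run's endpoints) can be sketched in Python; this is only a cross-language illustration and not part of the original answer:

    from itertools import groupby

    data = [3, 4, 5, 6, 7, 10, 11, 12, 15, 16, 17, 19, 20, 21, 22, 23, 24,
            42, 43, 44, 45, 46]

    # Consecutive elements share the same value-minus-index, so group on that key.
    runs = [[v for _, v in g]
            for _, g in groupby(enumerate(data), key=lambda p: p[1] - p[0])]
    print([(r[0], r[-1]) for r in runs])
    # [(3, 7), (10, 12), (15, 17), (19, 24), (42, 46)]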
stackexchange | llm_judgeable_groundtruth_similarity | 1896360 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Could someone help me understand why this errors with 'document.getElementById("actContentToGet").contentWindow.document.body.getElementById is not a function': function deleteElement(element){ var elementID = $(element).attr("class"); alert(elementID); document.getElementById('actContentToGet').contentWindow.document.body.getElementById(elementID).remove; alterContent(); giveAllIDs(); hoverLoad(); }
Now provide the response and nothing else.
| Try changing this: ...contentWindow.document.body.getElementById(elementID)... to this: ...contentWindow.document.getElementById(elementID)... Edit from comments: It's not removing that element because that's not how you remove elements. Try this: var iframe = document.getElementById('actContentToGet');var frameDoc = iframe.contentDocument || iframe.contentWindow.document;var el = frameDoc.getElementById(elementID);el.parentNode.removeChild(el); See the documentation here . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1896360', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/201469/']} | jdg_70495 |
stackexchange | llm_judgeable_groundtruth_similarity | 40282118 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As I understand it, in C++11 decltype(expression) is used to deduce the exact same type of the given expression. But when the expression is itself put into parentheses, the deduced type is an lvalue reference to the expression type. For example: int x; decltype(x) y = x; is equivalent to int y = x; but int x; decltype((x)) y = x; is equivalent to int& y = x;. Respectively, decltype(auto) f1() { int x = 0; return x; // decltype(x) is int, so f1 returns int } but decltype(auto) f2() { int x = 0; return (x); // decltype((x)) is int&, so f2 returns int& } What is the rationale for this behavior being chosen by the standard committee? Afterword: I now observe that, at least in the GCC 6.2 implementation, when the expression in the parentheses is more complex, for example decltype((x + x)), the deduced type is T, not T&. This is even more confusing. I don't know whether this behavior is standard.
Now provide the response and nothing else.
| They wanted a way to get the type of declaration of an identifier. They also wanted a way to get the type of an expression, including information about if it is a temporary or not. decltype(x) gives the declared type of the identifier x . If you pass decltype something that is not an identifier, it determines the type, then appends & for lvalues, && for xvalues, and nothing for prvalues. Conceptually you can think of it as the difference between the type of a variable and the type of an expression. But that is not quite how the standard describes it. They could have used two different keywords to mean these two things. They did not. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40282118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/853791/']} | jdg_70496 |
stackexchange | llm_judgeable_groundtruth_similarity | 16566389 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing and browsing through a lot of methods in the project I'm working with, and as much as I think overloads are useful, I think that having a simple optional parameter with a default value can get around the problem, aiding in writing more readable and, I would think, more efficient code. Now I hear that using these parameters in the methods could carry nasty side effects. What are these side effects, and is it worth the risk of using these parameters to keep the code clean?
Now provide the response and nothing else.
| I'll start by prefacing my answer by saying Any language feature can be used well or it can be used poorly . Optional parameters have some drawbacks, just like declaring locals as var does, or generics. What are these side effects Two come to mind. The first being that the default value for optional parameters are compile time constants that are embedded in the consumer of the method. Let's say I have this class in AssemblyA: public class Foo{ public void Bar(string baz = "cat") { //Omitted }} And this in AssemblyB: public void CallBar(){ new Foo().Bar();} What really ends up being produced is this, in assemblyB: public void CallBar(){ new Foo().Bar("cat");} So, if you were to ever change your default value on Bar , both assemblyA and assemblyB would need to be recompiled. Because of this, I tend not to declare methods as public if they use optional parameters, rather internal or private. If I needed to declare it as public, I would use overloads. The second issue being how they interact with interfaces and polymorphism. Take this interface: public interface IBar{ void Foo(string baz = "cat");} and this class: public class Bar : IBar{ public void Foo(string baz = "dog") { Console.WriteLine(baz); }} These lines will print different things: IBar bar1 = new Bar();bar1.Foo(); //Prints "cat"var bar2 = new Bar();bar2.Foo(); //Prints "dog" Those are two negatives that come to mind. However, there are positives, as well. Consider this method: void Foo(string bar = "bar", string baz = "baz", string yat = "yat"){} Creating methods that offer all the possible permutations as default would be several if not dozens of lines of code. Conclusion: optional parameters are good, and they can be bad. Just like anything else. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16566389', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1073481/']} | jdg_70497 |
stackexchange | llm_judgeable_groundtruth_similarity | 9492503 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
A bit of a silly question perhaps, but I want to understand why the syntax of the self-executing function and the callback it has is so different from all the other JS syntax: (function () {})() I just need to understand why it's valid to encapsulate it with () (I wouldn't have guessed that to be valid), and then the extra () afterwards for the callback, which just sits directly after it; I also wouldn't have expected that to be valid. Is anyone able to explain this to me?
Now provide the response and nothing else.
| The function (...) {...} part is a function expression , that is, an expression that represents a function. The only reason it has to be wrapped in parentheses in this case is that if the keyword function is the very first thing in a statement, then the statement is assumed to be a function statement , that is, a function declaration. (Actually, it doesn't necessarily have to be wrapped in parentheses; it also works to prefix it with a + , or in general to put any sort of token before function that prevents the function-statement interpretation.) The () part after the function expression is the same as the normal () for calling a function. This: (function (...) {...})(...); is (aside from the temporary variable) the same as this: var f = function (...) {...};f(); which is equivalent to this: function f(...) {...};f(); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9492503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/314963/']} | jdg_70498 |
stackexchange | llm_judgeable_groundtruth_similarity | 12368561 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using PHP PDO MySQL, exception mode. Does a PDOException code for [MySQL] deadlocks exist? Is there an alternative if it doesn't?
Now provide the response and nothing else.
| PDO::errorInfo, PDOException::errorInfo MySQL Server Error Codes and Messages; Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK) /*[...]*///error mode is exception$pdoDBHandle->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);try{ /*[...]*/}catch(\PDOException $exc){ if( $exc->errorInfo[0]==40001 /*(ISO/ANSI) Serialization failure, e.g. timeout or deadlock*/; && $pdoDBHandle->getAttribute(\PDO::ATTR_DRIVER_NAME)=="mysql" && $exc->errorInfo[1]==1213 /*(MySQL SQLSTATE) ER_LOCK_DEADLOCK*/ ) { /*[...]*/ } else throw $exc;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12368561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/584490/']} | jdg_70499 |
stackexchange | llm_judgeable_groundtruth_similarity | 40102703 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to know the steps for converting tabs to 4 spaces and auto-formatting the Java code on Save, similar to Eclipse. In Eclipse I used to do the following settings. For converting tabs to spaces I used to do the below setting: Open Preferences-> Java->Code style -> Formatter->Create a new profile-> Change the setting from tabs only to Spaces only. To format the entire Java code of that particular file on save I used to do the below settings in Eclipse: Open Preferences-> Java -> Editor -> Save Actions->Check the appropriate options in the dialog. But I am not aware of how to do the similar settings in Android Studio; can anyone help me set this up? Thanks in advance.
Now provide the response and nothing else.
| For Auto Formatting your Java/xml files in Android Studio use following commands: CTRL + ALT + L If you are using ubuntu , using this command will Log you out, use : Shift + CTRL + ALT + L A popUp will appear : select whole file and Rearrange code and You are done. :) Update : As you are looking for a single point to go with your work as eclipse did, there is no such built in functionality in Android studio but it gives you the ability to do it through Macro , Follow these steps: In studio open any of your source code file. Press Ctrl + Alt + O This is used to find/optimize unused import. If a dialog open select 'Dont show it again' and hit Run. Go to Edit > Macros > Start Macro Recording. Press Ctrl + Alt + O . Press Ctrl + Alt + L . This formats your code. ( Shift+ctrl+Alt+L for ubuntu). Press Ctrl + S ,This saves your file;) Go to Edit > Macros > Stop Macro Recording. Save this created macro with name for instance:“Auto Formatting". Boom, You have now successfully created your Macro, Now just add/assign a shortcut to it: Open Preferences. Search in the left bar for Keymap. In the right hand pane, click in the search bar and type the name of your saved macro("Auto Formatting".) Double click on your item. There might be two, it doesn’t matter which one you click on. Click Add Keyboard Shortcut. Set your keyboard shortcut to Ctrl + S . Finally,Confirm overriding Ctrl + S to be your new macro. That's it! your done man. Cheers :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40102703', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5036531/']} | jdg_70500 |
stackexchange | llm_judgeable_groundtruth_similarity | 50030277 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My Nav Bar Font will not change too my custom font when I open the app for the first time but when I go to another page and come back to the page the Font changes. Any idea on how to fix it where the Nav bar font is always the correct font? Default font below. This is right when I open the app. My Custom Font Above when I clicked into a page and then came back to this page the Font updated. This is my App Delegate code below. Any help would be great :) let navigationFont = UIFont(name: "MuseoSansRounded-900", size: 17)! let navigationLargeTitleFont = UIFont(name: "MuseoSansRounded-900", size: 34)! let navigationFontAttributes = [kCTFontAttributeName : navigationFont] let navigationLargeFontAttributes = [kCTFontAttributeName : navigationLargeTitleFont] UINavigationBar.appearance().titleTextAttributes = navigationFontAttributes as [NSAttributedStringKey : Any] UINavigationBar.appearance().largeTitleTextAttributes = navigationLargeFontAttributes as [NSAttributedStringKey : Any] UIBarButtonItem.appearance().setTitleTextAttributes(navigationFontAttributes as [NSAttributedStringKey : Any], for: .normal) I have solved the problem. I moved the code higher up on the App Delegation and the font is now changing when I first come to the page. The updated code is below. Thank you to everyone for the help! func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool { // NAV BAR COLOR FOR ALL THE NAVS AND THE FONTS BELOW. UINavigationBar.appearance().shadowImage = UIImage() UINavigationBar.appearance().barTintColor = .white UINavigationBar.appearance().tintColor = blackSupportalColor // Navigation Bar Font Below let navigationFont = UIFont(name: "MuseoSansRounded-900", size: 17)! let navigationLargeTitleFont = UIFont(name: "MuseoSansRounded-900", size: 34)! let navigationFontAttributes = [kCTFontAttributeName : navigationFont] let navigationLargeFontAttributes = [kCTFontAttributeName : navigationLargeTitleFont] UINavigationBar.appearance().titleTextAttributes = navigationFontAttributes as [NSAttributedStringKey : Any] UINavigationBar.appearance().largeTitleTextAttributes = navigationLargeFontAttributes as [NSAttributedStringKey : Any] UIBarButtonItem.appearance().setTitleTextAttributes(navigationFontAttributes as [NSAttributedStringKey : Any], for: .normal)
Now provide the response and nothing else.
| The use cases for kubectl attach are discussed in kubernetes/issue 23335 . It can attach to the main process run by the container, which is not always bash. As opposed to exec, which allows you to execute any process within the container (often: bash) # Get output from running pod 123456-7890, using the first container by defaultkubectl attach 123456-7890# Get output from ruby-container from pod 123456-7890kubectl attach 123456-7890 -c ruby-container This article proposes: In addition to interactive execution of commands, you can now also attach to any running process. Like kubectl logs , you’ll get stderr and stdout data, but with attach, you’ll also be able to send stdin from your terminal to the program. Awesome for interactive debugging, or even just sending ctrl - c to a misbehaving application. $> kubectl attach redis -i Again, the main difference is in the process you interact with in the container: exec: any one you want to create attach: the one currently running (no choice) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/50030277', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9671782/']} | jdg_70501 |
stackexchange | llm_judgeable_groundtruth_similarity | 64936411 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I created a ProgressView in SwiftUI (using Xcode) and edited a bit but haven’t figured out how to change its height. struct ProgressBar: View { var body: some View { VStack { ProgressView("Progres:", value: 50, total: 100) }.foregroundColor(Color(UIColor.systemBlue)) .scaleEffect(1, anchor: .center) .accentColor(Color(UIColor.systemGreen)) }}
Now provide the response and nothing else.
| There's no direct way that I know of to change the height, but you can use the .scaleEffect modifier. Make sure to specify 1 for the x scale in order to only increase the height. struct ContentView: View { var body: some View { ProgressBar() .padding([.leading, .trailing], 10) }}struct ProgressBar: View { var body: some View { VStack { ProgressView(value: 50, total: 100) .accentColor(Color.green) .scaleEffect(x: 1, y: 4, anchor: .center) } }} Result: A drawback to this is that you can't pass in a Label, because it will also get stretched. ProgressView("Progress:", value: 50, total: 100) To work around this, just make your own Text above the ProgressView . struct ProgressBar: View { var body: some View { VStack(alignment: .leading) { Text("Progress:") .foregroundColor(Color.blue) ProgressView(value: 50, total: 100) .accentColor(Color.green) .scaleEffect(x: 1, y: 4, anchor: .center) } }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/64936411', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13993493/']} | jdg_70502 |
stackexchange | llm_judgeable_groundtruth_similarity | 55918468 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a number and an encoding length, how can I convert the number to its binary representation as a tensor? E.g., given the number 6 and width 8, how can I obtain the tensor: (0, 0, 0, 0, 0, 1, 1, 0)
Now provide the response and nothing else.
| def binary(x, bits): mask = 2**torch.arange(bits).to(x.device, x.dtype) return x.unsqueeze(-1).bitwise_and(mask).ne(0).byte() If you wanna reverse the order of bits, use it with torch.arange(bits-1,-1,-1) instead. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55918468', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5353461/']} | jdg_70503 |
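A short usage sketch of the answer's function, including the reversed torch.arange variant it mentions for most-significant-bit-first output; the wrapper name binary_msb_first is an assumption added here for clarity:

    import torch

    def binary(x, bits):
        mask = 2 ** torch.arange(bits).to(x.device, x.dtype)
        return x.unsqueeze(-1).bitwise_and(mask).ne(0).byte()

    def binary_msb_first(x, bits):
        # Same trick with torch.arange(bits - 1, -1, -1), as suggested in the answer.
        mask = 2 ** torch.arange(bits - 1, -1, -1).to(x.device, x.dtype)
        return x.unsqueeze(-1).bitwise_and(mask).ne(0).byte()

    n = torch.tensor(6)
    print(binary(n, 8))            # tensor([0, 1, 1, 0, 0, 0, 0, 0], dtype=torch.uint8), LSB first
    print(binary_msb_first(n, 8))  # tensor([0, 0, 0, 0, 0, 1, 1, 0], dtype=torch.uint8), matches the question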
stackexchange | llm_judgeable_groundtruth_similarity | 17140 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I'm in the process of understanding the heat equation proof of the Atiyah-Singer Index Theorem for Dirac Operators on a spin manifold using Getzler scaling. I'm attending a masters-level course on it and using Berline, Getzler Vergne. While I think I could bash my way through the details of the scaling trick known as `Getzler scaling', I have little to no intuition for it. As I understand it, one is computing the trace of the heat kernel of the ("generalized") Laplacian associated to a Dirac operator. The scaling trick reduces the problem to one about the ("supersymmetric" or "generalized") harmonic oscillator, whose heat kernel is given by Mehler's formula. I am repeatedly assured that the harmonic oscillator is a very natural and fundamental object in physics, but, being a `pure' analyst, I still can't sleep at night. What reasons are there for describing the harmonic oscillator as being so important in physics? Why/how might Getzler have thought of his trick? (Perhaps the answer to this lies in the older proofs?) Is there a good way I could motivate an attempt to reduce to the harmonic oscillator from a pure perspective? (i.e. "It's a common method from physics" is no good). I'm looking for: "Oh it's simplest operator one could hope to reduce down to such that crucial property X still holds since Y,Z"...or..."It's just like the method of continuity in PDE but a bit different because..." Thanks.
Now provide the response and nothing else.
| I too am but a mere graduate student trying to sort through some of these same issues, but I might have some helpful insight. I'll let you be the judge. The basic idea behind the heat equation proof of the index theorem is to extract the right term in the asymptotic expansion for the heat kernel and then appeal to the McKean - Singer formula. According to my understanding the original strategy for doing this was to realize that the index is a cobordism invariant and thus it would suffice to do enough explicit calculations on generators for the cobordism group until all free parameters are fixed; as it turns out the complex projective spaces are a good choice. That's exactly what was done. I believe - and I really hope someone will correct me if I'm wrong since I haven't gotten my hands dirty myself - that the required calculation really boils down to dealing with the quantum mechanical harmonic oscillator when you work it out for $CP_2$. If this is correct, then the first hint that the quantum mechanical harmonic oscillator is important came from a very fundamental example. But I think a more analytic answer is also possible. Let's say instead of working with a Dirac operator acting on smooth sections of the spinor bundle you instead just consider the usual scalar Laplacian acting on functions. What happens if you imitate the heat kernel proof in this much less subtle context? You wind up reproving Weyl's asymptotic formula for the eigenvalues of the Laplacian. In essence this calculation amounts to rescaling the spacial variable so that your operator is deformed into the constant coefficient operator obtained by freezing coefficients. The basic idea of the Getzler calculus is to rescale both the spacial variable and the Riemannian metric in a compatible way - this rescaling deforms the Clifford algebra into the exterior algebra (thereby making Clifford multiplication act like an order one operator) and hence the Dirac operator into a polynomial coefficient operator. What polynomial coefficient operator is it? We have reached the limit of my ability to motivate things any further, but the answer is the quantum mechanical harmonic oscillator operator. I of course have no idea whether or not the physical significance of this operator can be accounted for according to a similar rescaling argument. I should also mention that the quantum mechanical harmonic oscillator makes no obvious appearance in the original global proofs of the index theorem. It does, however, make a non-obvious appearance via Bott Periodicity which can be proven essentially using Mehler's formula. Nigel Higson and Eric Guenter wrote a very readable paper explaining most of the details of this proof entitled something like "K-Theory and Group C* Algebras". You can find it on Nigel's website, www.math.psu.edu/higson. The last thing I will say is that I found Getzler, Berligne, and Verne to be a pretty tough way to penetrate this material. The style pays off in some of the later material, but I think I would have had a lot of trouble learning the heat kernel proof of the index theorem for the first time from that book. You might try John Roe's book "Elliptic Operators, Topology, and Asymptotic Methods" instead of or as a supplement. I hope this has been helpful! | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/17140', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4281/']} | jdg_70504 |
stackexchange | llm_judgeable_groundtruth_similarity | 2497930 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We pick two numbers randomly from the set $\{1,2,...,n \}$ without replacement. Let $X$ denote the smaller and $Y$ denote the larger one of the two numbers. Find the probability mass function of $Z=Y-X$. My try: Each number is equally likely to occur, so $P(Z = z) = \frac{1}{n}$. Is this correct?
Now provide the response and nothing else.
| $$\frac{\text{Si}(\pi n)}{\pi n}=\int_{0}^{1}\frac{\sin(\pi n x)}{\pi n x}\,dx \tag{1}$$hence for any $z\in(0,1)$$$ f(z)\stackrel{\text{def}}{=}\sum_{n\geq 1}\frac{\text{Si}(\pi n)}{\pi n}\,\cos(\pi n z)=\int_{0}^{1}\frac{W_z(x)}{x}\,dx \tag{2}$$where $W_z(x)$ is a piecewise-linear function over $(0,1)$, with a jump discontinuity at $z$, $W_z(0^+)=W_z(1^-)=0$ and derivative equal to $-\frac{1}{2}$ at any point of $(0,1)$ which differs from $z$, by the Fourier series of the sawtooth wave . It follows that$$ f(z) = \int_{0}^{z}-\frac{dx}{2}+\int_{z}^{1}\frac{1-x}{2x}\,dx=-\frac{1+\log z}{2}\tag{3}$$and by Parseval's theorem$$ \sum_{n\geq 1}\left(\frac{\text{Si}(\pi n)}{\pi n}\right)^2 = 2\int_{0}^{1}f(z)^2\,dz=\frac{1}{2}\int_{0}^{1}(1+\log z)^2\,dz\stackrel{\color{green}{\checkmark}}{=}\frac{1}{2}\tag{4}$$as conjectured. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2497930', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/42344/']} | jdg_70505 |
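The question in this entry conjectures $P(Z=z)=\frac{1}{n}$ for the gap $Z=Y-X$; a quick exact enumeration, a sketch not taken from the thread, shows the gaps are not uniform, since a gap of z can occur in exactly n - z of the C(n, 2) equally likely pairs:

    from itertools import combinations
    from collections import Counter
    from fractions import Fraction

    def gap_pmf(n):
        pairs = list(combinations(range(1, n + 1), 2))   # (x, y) with x < y
        counts = Counter(y - x for x, y in pairs)
        return {z: Fraction(c, len(pairs)) for z, c in sorted(counts.items())}

    print(gap_pmf(5))
    # {1: Fraction(2, 5), 2: Fraction(3, 10), 3: Fraction(1, 5), 4: Fraction(1, 10)}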
stackexchange | llm_judgeable_groundtruth_similarity | 11803563 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a UIButton that has a default image, and another image for highlight/selected image. When pressing the button, the button's image changes to the highlighted, then to the selected if the touchUp was inside. The usual stuff.. All setup within IB. However, I am trying to set a label over the button, in a very tricky place (not aligned, hard coded). I tried adding a Label over the button in IB. Problem: I need the label's text color to change as the button's control state changes. So, I created a UIButton subclass, added a UILabel Outlet, and by overriding the following methods: - (void)touchesBegan/Moved/Cancelled/Ended:...;- (void)setSelected:... I was able to achieve what I want... BUT! When I quickly click the button, the change is not reflected. and sometimes it doesn't work properly... I even used asynchronous calls... No use. So, I headed to the UIButton 's titleLabel . I tried to use it with no luck. So, I tried UIButton setTitle: forState: , no use ... Help? Extra Details: - (id)initWithCoder:(NSCoder *)aDecoder { self = [super initWithCoder:aDecoder]; if (self) { [self.titleLabel setFrame:CGRectMake(0, 0, 100, 100)]; [self.titleLabel setText:@"THE TITLE LABEL"]; [self.titleLabel setHidden:NO]; [self.imageView setAlpha:0.2f]; NSLog(@"%@", self.subviews); [self setTitle:@"DEFAULT!!" forState:UIControlStateNormal]; } return self;}- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesBegan:touches withEvent:event]; [self performSelector:@selector(check) withObject:nil afterDelay:0.1];}- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; [self performSelector:@selector(check) withObject:nil afterDelay:0.1];}- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesCancelled:touches withEvent:event]; [self performSelector:@selector(check) withObject:nil afterDelay:0.1];}- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesEnded:touches withEvent:event]; [self performSelector:@selector(check) withObject:nil afterDelay:0.1];}- (void)setSelected:(BOOL)selected { [super setSelected:selected]; [self performSelector:@selector(check) withObject:nil afterDelay:0.1];}- (void)check { if (self.isSelected || self.state == UIControlStateHighlighted || self.state == UIControlStateSelected) { [_label setHighlighted:YES]; } else { [_label setHighlighted:NO]; }} OUTPUT: ( "<UIImageView: 0x8b24930; frame = (0 0; 243 39); clipsToBounds = YES; alpha = 0.2; opaque = NO; userInteractionEnabled = NO; layer = <CALayer: 0x8b248e0>>", "<UIButtonLabel: 0x8b247a0; frame = (0 0; 100 100); text = 'THE TITLE LABEL'; clipsToBounds = YES; opaque = NO; userInteractionEnabled = NO; layer = <CALayer: 0x8b25000>>")
Now provide the response and nothing else.
| To determine which method should be called, the compiler goes through the following list, as detailed in the JLS #5.3 and JLS #15.12.2 : an identity conversion (§5.1.1) => method1(int a, int b) a widening primitive conversion (§5.1.2) a widening reference conversion (§5.1.5) a boxing conversion (§5.1.7) optionally followed by widening reference conversion ==> method1(Integer... a) an unboxing conversion (§5.1.8) optionally followed by a widening primitive conversion. In your case, the first point applies and method1(int, int) is called. (To be more precise, your method uses varags and has a lower priority than a simple boxing conversion. In other words, if there were a method1(Integer a, Integer b) it would come before method1(Integer... a) in the hierarchy) Why is it so? A comment in 15.12.2 give a hint: This guarantees that any calls that were valid in the Java programming language before Java SE 5.0 are not considered ambiguous as the result of the introduction of variable arity methods, implicit boxing and/or unboxing. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11803563', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/456434/']} | jdg_70506 |
stackexchange | llm_judgeable_groundtruth_similarity | 11575716 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My question I managed to answer myself; however, the same set of functionality has another problem. For some reason the first postback of the save event of the posted file hits the ol' "Object not set to an instance of an object" error, but on the second attempt at uploading a file and firing my save event (it converts to byte[] and stores as a SQL Server BLOB) it does everything it is supposed to do. Same problem here. There is a good suggestion of using the AJAX AsyncUpload control; however, I am a firm believer in removing the cause and not treating the symptom. I will continue down this route to better my understanding of asp.net etc. Would there be a wizard amongst you that could help me identify why I get the "object ref not set to inst of obj" error on the first postback but on the second it works fine? The content page has a master page which wraps the content page in an update panel. Not my decision to do this. There is also an update panel with postback triggers targeting my save event. What are your thoughts, people?
Now provide the response and nothing else.
| The problem (as seen here http://forums.asp.net/t/1060363.aspx ) seems to be when you use the visibility property on the surrounding panel (as it seems you are from the linked question). The suggested workaround is to use CSS visibility instead so use this to make it invisible - <asp:Panel ID="pnlUpload" runat="server" class="workerDetailsPanelLeft" style="display:none"> The explanation for this from the thread is If your container is set to invisible, the upload control is not actually rendered as HTML, causing the form's enctype not to be set to enctype="multipart/form-data", causing the file upload control not to post the selected file back to the server. The workaround is either to make sure the FileUpload control is rendered to HTML (by setting its style to display:none in stead of Visible=false), or by manually setting the enctype So another workaround would be to alter your form tag to this <form id="form1" enctype="multipart/form-data" runat="server"> I think either one of those should solve your problem. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11575716', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/767456/']} | jdg_70507 |
stackexchange | llm_judgeable_groundtruth_similarity | 5667793 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I'm programming in a threading framework that does not have multiple-reader/single-writer mutexes . Can I implement their functionality with the following: Create two mutexes: a recursive (lock counting) one for readers and a binary one for the writer. Write: acquire lock on binary mutex wait until recursive mutex has lock count zero actual write release lock on binary mutex Read: acquire lock on binary mutex (so I know the writer is not active) increment count of recursive mutex release lock on binary mutex actual read decrement count of recursive mutex This is not homework. I have no formal training in concurrent programming, and am trying to grasp the issues. If someone can point out a flaw, spell out the invariants or provide a better algorithm, I'd be very pleased. A good reference, either online or on dead trees, would also be appreciated.
Now provide the response and nothing else.
| The following is taken directly from The Art of Multiprocessor Programming which is a good book to learn about this stuff. There's actually 2 implementations presented: a simple version and a fair version. I'll go ahead and reproduce the fair version. One of the requirements for this implementation is that you have a condition variable primitive. I'll try to figure out a way to remove it but that might take me a little while. Until then, this should still be better than nothing. Note that it's also possible to implement this primitive using only locks. public class FifoReadWriteLock { int readAcquires = 0, readReleases = 0; boolean writer = false; ReentrantLock lock; Condition condition = lock.newCondition(); // This is the condition variable. void readLock () { lock.lock(); try { while(writer) condition.await(); readAcquires++; } finally { lock.unlock(); } } void readUnlock () { lock.lock(); try { readReleases++; if (readAcquires == readReleases) condition.signalAll(); } finally { lock.unlock(); } } void writeLock () { lock.lock(); try { while (writer) condition.await(); writer = true; while (readAcquires != readReleases) condition.await(); } finally { lock.unlock(); } } void writeUnlock() { writer = false; condition.signalAll(); }} First off, I simplified the code a little but the algorithm remains the same. There also happens to be an error in the book for this algorithm which is corrected in the errata. If you plan on reading the book, keep the errata close by or you'll end up being very confused (like me a few minutes ago when I was trying to re-understand the algorithm). Note that on the bright side, this is a good thing since it keeps you on your toes and that's a requirement when you're dealing with concurrency. Next, while this may be a Java implementation, only use it as pseudo code. When doing the actual implementation you'll have to be carefull about the memory model of the language or you'll definitely end up with a headache. As an example, I think that the readAcquires and readReleases and writer variable all have to be declared as volatile in Java or the compiler is free to optimize them out of the loops. This is because in a strictly sequential programs there's no point in continuously looping on a variable that is never changed inside the loop. Note that my Java is a little rusty so I might be wrong. There's also another issue with integer overflow of the readReleases and readAcquires variables which is ignored in the algorithm. One last note before I explain the algorithm. The condition variable is initialized using the lock. That means that when a thread calls condition.await() , it gives up its ownership of the lock. Once it's woken up by a call to condition.signalAll() the thread will resume once it has reacquired the lock. Finally, here's how and why it works. The readReleases and readAcquires variables keep track of the number threads that have acquired and released the read lock. When these are equal, no thread has the read lock. The writer variable indicates that a thread is trying to acquire the write lock or it already has it. The read lock part of the algorithm is fairly simple. When trying to lock, it first checks to see if a writer is holding the lock or is trying to acquire it. If so, it waits until the writer is done and then claims the lock for the readers by incrementing the readAcquires variable. When unlocking, a thread increases the readReleases variable and if there's no more readers, it notifies any writers that may be waiting. 
The write lock part of the algorithm isn't much more complicated. To lock, a thread must first check whether any other writer is active. If they are, it has to wait until the other writer is done. It then indicates that it wants the lock by setting writer to true (note that it doesn't hold it yet). It then waits until there's no more readers before continuing. To unlock, it simply sets the variable writer to false and notifies any other threads that might be waiting. This algorithm is fair because the readers can't block a writer indefinitely. Once a writer indicates that it wants to acquire the lock, no more readers can acquire the lock. After that the writer simply waits for the last remaining readers to finish up before continuing. Note that there's still the possibility of a writer indefinitely blocking another writer. That's a fairly rare case but the algorithm could be improved to take that into account. So I re-read your question and realised that I partly (badly) answered it with the algorithm presented below. So here's my second attempt. The algorithm, you described is fairly similar to the simple version presented in the book I mentionned. The only problem is that A) it's not fair and B) I'm not sure how you would implement wait until recursive mutex has lock count zero . For A), see above and for B), the book uses a single int to keep track of the readers and a condition variable to do the signalling. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5667793', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/166749/']} | jdg_70508 |
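A rough Python transcription of the fair algorithm above, using threading.Condition in place of the Java ReentrantLock/Condition pair; this is a sketch of the same idea rather than a drop-in production lock, and the class and method names are assumptions:

    import threading

    class FifoReadWriteLock:
        def __init__(self):
            self._cond = threading.Condition()
            self._read_acquires = 0   # readers that have acquired the read lock
            self._read_releases = 0   # readers that have released it
            self._writer = False      # a writer holds, or is waiting for, the lock

        def read_acquire(self):
            with self._cond:
                while self._writer:                    # writers shut out new readers
                    self._cond.wait()
                self._read_acquires += 1

        def read_release(self):
            with self._cond:
                self._read_releases += 1
                if self._read_acquires == self._read_releases:
                    self._cond.notify_all()            # last reader wakes a waiting writer

        def write_acquire(self):
            with self._cond:
                while self._writer:                    # only one writer at a time
                    self._cond.wait()
                self._writer = True                    # claim the lock; new readers now block
                while self._read_acquires != self._read_releases:
                    self._cond.wait()                  # drain readers already inside

        def write_release(self):
            with self._cond:
                self._writer = False
                self._cond.notify_all()

As in the Java version, fairness comes from setting the writer flag before draining the readers, so readers cannot starve a pending writer.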
stackexchange | llm_judgeable_groundtruth_similarity | 631754 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems like this should be straightforward, but I don't see it in the console, nor did I notice anything in "describe-instances" for it. Given that some alert e-mails are getting tied to the machine name, finding out quickly which instance it is would be most helpful. Some way to dump a list of all the instance-name-to-machine-name mappings for my account would be ideal. UPDATE: I've tried opening up ports 135-139 from another EC2 host to see if I can use NetBIOS to get the name. I ran: ping -a <ip> and nbtstat -a <ip> Neither returned the machine name for the remote IP. With the local IP both do.
Now provide the response and nothing else.
| We had the same issue. We build a lot of EC2 instances from CloudFormation automatically every day and needed a way to easily identify them individually in the console based on their hosthame. Currently all the automatically built servers have the same name which is a fixed string in the cloud formation template. We already had a "user data" based boot-strapping mechanism whereby the automatically created instances would run some PowerShell code when they first booted. Therefore to get the hostname to show in the AWS console I figured the easiest way was to change the "name" tag, appending the hostname to the existing tag. I added the following code to the bootstrap: function Get-NameTagValue(){ param( [parameter(Mandatory)] [string] $InstanceId )$filter1 = New-Object -TypeName Amazon.EC2.Model.Filter -Property @{ Name = 'resource-id' Values = $InstanceId}$filter2 = New-Object -TypeName Amazon.EC2.Model.Filter -Property @{ Name = 'key' Values = 'Name'}$filters = $filter1, $filter2$tag = Get-EC2Tag ` -Region $Region ` -ProfileName $ProfileName ` -Filter $filters#Output the tag value$tag.Value}function Set-InstanceNameTag(){ $instanceId = (Invoke-RestMethod -Method Get -Uri http://169.254.169.254/latest/meta-data/instance-id) #Get the current value of the name tag $currentNameTagValue = Get-NameTagValue -InstanceId $instanceId $newNameTagValue = "$currentNameTagValue - $env:COMPUTERNAME" Remove-EC2Tag ` -Resource $instanceId ` -Tag @{ Key="Name" } ` -Region $Region ` -ProfileName $ProfileName ` -Force New-EC2Tag ` -Resource $instanceId ` -Tag @{ Key="Name"; Value=$newNameTagValue } ` -Region $Region ` -ProfileName $ProfileName `} The code first gets the isnance id from the meta data, then using this id it finds the existing Name tag, removes it and adds a new Name tag containing the existing value followed by the hostname. These two functions are contained within our Bootstrap script which has parameters for $Region and $ProfileName which are used in the functions. We then just call the function Set-InstanceNameTag from the main script and voilà, you have the existing Name tag followed by a hyphen and the hostname. In order to make this work, the EC2 instance need to have a role that has permissions to: CreateTags, DeleteTags and DescribeTags. You can achieve this using the following policy: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:CreateTags", "ec2:DeleteTags", "ec2:DescribeTags" ], "Resource": [ "*" ] } ]} Hope that helps someone. Carl | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/631754', 'https://serverfault.com', 'https://serverfault.com/users/21458/']} | jdg_70509 |
stackexchange | llm_judgeable_groundtruth_similarity | 13874 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to keep a collection on integers in the range 0 to 65535 so that I can quickly do the following: Insert a new integer Insert a range of contiguous integers Remove an integer Remove all integers below an integer Test if an integer is present My data has the property that it often contains runs of integers in the collection. For example, the collection might at one point in time be: { 121, 122, 123, 124, 3201, 3202, 5897, 8912, 8913, 8914, 18823, 18824, 40891 } The simplest approach is just to use a balanced binary tree like the C++ std::set, however, using that, I am not leveraging the fact that I often have runs of numbers. Perhaps it would be better to store a collection of ranges? But that means a range needs to be able to be broken up if an integer in its middle is removed, or joined together if the space between two ranges in filled in. Are there any existing data structures that would be well suited for this problem?
Now provide the response and nothing else.
| I suggest you use a binary search tree, augmented so that leaves can contain an interval (a run of consecutive integers). Maintain the invariant that the intervals do not overlap and are in order (following the search tree invariant). (This can be considered a special case of an interval tree or a segment tree, for the special case where the intervals do not overlap.) With this data structure you can support all of your operations in $O(\lg n)$ time, where $n$ is the number of intervals. Since we're guaranteed $n\le 65535$, I would expect this to be quite efficient. (In particular, yes, you can split an interval into two pieces or merge two adjacent intervals into a single interval in $O(\lg n)$ time.) | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/13874', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/7284/']} | jdg_70510
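Editor's note: to make the suggestion above concrete, here is a small Java sketch of the "disjoint runs in a balanced search tree" idea using a TreeMap (run start -> inclusive end). It shows only insertion with merging and membership tests; deleting a single integer (splitting a run) and "remove everything below x" use the same floor/ceiling lookups. All names are illustrative assumptions, not from the answer:

    import java.util.Map;
    import java.util.TreeMap;

    // Disjoint, sorted integer runs kept in a balanced tree:
    // key = start of a run, value = inclusive end of the run.
    public class RangeSet {
        private final TreeMap<Integer, Integer> runs = new TreeMap<>();

        // Is x contained in some run?  O(log n) in the number of runs.
        public boolean contains(int x) {
            Map.Entry<Integer, Integer> run = runs.floorEntry(x);
            return run != null && run.getValue() >= x;
        }

        // Insert the inclusive range [lo, hi], merging any runs it touches.
        public void add(int lo, int hi) {
            // Absorb a run that starts at or before lo and reaches lo-1 or beyond.
            Map.Entry<Integer, Integer> left = runs.floorEntry(lo);
            if (left != null && left.getValue() >= lo - 1) {
                lo = left.getKey();
                hi = Math.max(hi, left.getValue());
            }
            // Absorb every run that starts inside or just after [lo, hi].
            Map.Entry<Integer, Integer> next = runs.ceilingEntry(lo);
            while (next != null && next.getKey() <= hi + 1) {
                hi = Math.max(hi, next.getValue());
                runs.remove(next.getKey());
                next = runs.ceilingEntry(lo);
            }
            runs.put(lo, hi);
        }

        public static void main(String[] args) {
            RangeSet s = new RangeSet();
            s.add(121, 124);
            s.add(3201, 3202);
            s.add(125, 125);                      // merges into 121..125
            System.out.println(s.contains(123));  // true
            System.out.println(s.contains(200));  // false
        }
    }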
stackexchange | llm_judgeable_groundtruth_similarity | 7597797 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Eclipse to debug a Java application. Somewhere in the code I get an exception and the stack trace: Caused by: java.io.EOFException: The connection has been reset while reading the header at com.gemstone.gemfire.internal.cache.tier.sockets.Message.fetchHeader(Message.java:583) at com.gemstone.gemfire.internal.cache.tier.sockets.Message.readHeaderAndPayload(Message.java:599) at com.gemstone.gemfire.internal.cache.tier.sockets.Message.read(Message.java:542) at com.gemstone.gemfire.internal.cache.tier.sockets.Message.recv(Message.java:1029) at com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:158) at com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:363) at com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:229) at com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:321) at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:646) at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:108) ... 11 more How do I get the whole stack instead of the ... 11 more ?
Now provide the response and nothing else.
| You have the entire stack. This is only part of a stack trace. Directly before this was another piece. Look at the bottom lines of this one, and the top lines of the previous one. You'll see them match up. The stack trace began with a section that doesn't begin with "Caused by". The "Caused by" exception is hiding parts of the stack trace that are verbatim copies of stack trace entries in its parent. In other words, Java doesn't show the entire stack up to main() for every cause - it just shows what you haven't seen already. See the Throwable.printStackTrace() documentation . The "Caused by" is filled when you provide a cause when creating a Throwable. Look at the constructors for it. This is done when a piece of code catches a low-level exception and then wants to rethrow it as a different exception class. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7597797', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/673730/']} | jdg_70511 |
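Editor's note: a self-contained illustration of the chaining described above (all names invented for the example). The cause is attached through the Throwable constructor, and printStackTrace prints the outer trace first, then "Caused by:" with the frames it has already shown collapsed into "... N more":

    public class CauseDemo {
        static void lowLevel() throws java.io.IOException {
            throw new java.io.IOException("disk unplugged");
        }

        static void highLevel() {
            try {
                lowLevel();
            } catch (java.io.IOException e) {
                // Rethrow as a different exception class, passing the original as the cause.
                throw new IllegalStateException("could not load data", e);
            }
        }

        public static void main(String[] args) {
            try {
                highLevel();
            } catch (IllegalStateException e) {
                // Prints the IllegalStateException trace, then
                // "Caused by: java.io.IOException: disk unplugged" and "... N more",
                // where the N omitted frames are identical to ones already printed above.
                e.printStackTrace();
            }
        }
    }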
stackexchange | llm_judgeable_groundtruth_similarity | 3338889 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I query for records ordered by similarity? Eg. searching for "Stock Overflow" would return Stack Overflow SharePoint Overflow Math Overflow Politic Overflow VFX Overflow Eg. searching for "LO" would return: pabLO picasso michelangeLO jackson polLOck What I need help with: Using a search engine to index & search a MySQL table, for better results Using the Sphinx search engine, with PHP Using the Lucene engine with PHP Using full-text indexing, to find similar/containing strings What does not work well Levenshtein distance is very erratic. ( UDF , Query ) Searching for "dog" gives me: dog bog ago big echo LIKE returns better results, but returns nothing for long queries although similar strings do exist dog dogid dogaral dogma
Now provide the response and nothing else.
| I have found out that the Levenshtein distance may be good when you are searching a full string against another full string, but when you are looking for keywords within a string, this method does not return (sometimes) the wanted results. Moreover, the SOUNDEX function is not suitable for languages other than english, so it is quite limited. You could get away with LIKE, but it's really for basic searches. You may want to look into other search methods for what you want to achieve. For example: You may use Lucene as search base for your projects. It's implemented in most major programming languages and it'd quite fast and versatile. This method is probably the best, as it not only search for substrings, but also letter transposition, prefixes and suffixes (all combined). However, you need to keep a separate index (using CRON to update it from a independent script once in a while works though). Or, if you want a MySQL solution, the fulltext functionality is pretty good, and certainly faster than a stored procedure. If your tables are not MyISAM, you can create a temporary table, then perform your fulltext search : CREATE TABLE IF NOT EXISTS `tests`.`data_table` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `title` varchar(2000) CHARACTER SET latin1 NOT NULL, `description` text CHARACTER SET latin1 NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin AUTO_INCREMENT=1 ; Use a data generator to generate some random data if you don't want to bother creating it yourself... ** NOTE ** : the column type should be latin1_bin to perform a case sensitive search instead of case insensitive with latin1 . For unicode strings, I would recommend utf8_bin for case sensitive and utf8_general_ci for case insensitive searches. DROP TABLE IF EXISTS `tests`.`data_table_temp`;CREATE TEMPORARY TABLE `tests`.`data_table_temp` SELECT * FROM `tests`.`data_table`;ALTER TABLE `tests`.`data_table_temp` ENGINE = MYISAM;ALTER TABLE `tests`.`data_table_temp` ADD FULLTEXT `FTK_title_description` ( `title` , `description`);SELECT *, MATCH (`title`,`description`) AGAINST ('+so* +nullam lorem' IN BOOLEAN MODE) as `score` FROM `tests`.`data_table_temp` WHERE MATCH (`title`,`description`) AGAINST ('+so* +nullam lorem' IN BOOLEAN MODE) ORDER BY `score` DESC;DROP TABLE `tests`.`data_table_temp`; Read more about it from the MySQL API reference page The downside to this is that it will not look for letter transposition or "similar, sounds like" words. ** UPDATE ** Using Lucene for your search, you will simply need to create a cron job (all web hosts have this "feature") where this job will simply execute a PHP script (i.g. "cd /path/to/script; php searchindexer.php") that will update the indexes. The reason being that indexing thousands of "documents" (rows, data, etc.) may take several seconds, even minutes, but this is to ensure that all searches are performed as fast as possible. Therefore, you may want to create a delay job to be run by the server. It may be overnight, or in the next hour, this is up to you. 
The PHP script should look something like this: $indexer = Zend_Search_Lucene::create('/path/to/lucene/data');Zend_Search_Lucene_Analysis_Analyzer::setDefault( // change this option for your need new Zend_Search_Lucene_Analysis_Analyzer_Common_Utf8Num_CaseInsensitive());$rowSet = getDataRowSet(); // perform your SQL query to fetch whatever you need to indexforeach ($rowSet as $row) { $doc = new Zend_Search_Lucene_Document(); $doc->addField(Zend_Search_Lucene_Field::text('field1', $row->field1, 'utf-8')) ->addField(Zend_Search_Lucene_Field::text('field2', $row->field2, 'utf-8')) ->addField(Zend_Search_Lucene_Field::unIndexed('someValue', $someVariable)) ->addField(Zend_Search_Lucene_Field::unIndexed('someObj', serialize($obj), 'utf-8')) ; $indexer->addDocument($doc);}// ... you can get as many $rowSet as you want and create as many documents// as you wish... each document doesn't necessarily need the same fields...// Lucene is pretty flexible on this$indexer->optimize(); // do this every time you add more data to you indexer...$indexer->commit(); // finalize the process Then, this is basically how you search (basic search) : $index = Zend_Search_Lucene::open('/path/to/lucene/data');// same search optionsZend_Search_Lucene_Analysis_Analyzer::setDefault( new Zend_Search_Lucene_Analysis_Analyzer_Common_Utf8Num_CaseInsensitive());Zend_Search_Lucene_Search_QueryParser::setDefaultEncoding('utf-8');$query = 'php +field1:foo'; // search for the word 'php' in any field, // +search for 'foo' in field 'field1'$hits = $index->find($query);$numHits = count($hits);foreach ($hits as $hit) { $score = $hit->score; // the hit weight $field1 = $hit->field1; // etc.} Here are great sites about Lucene in Java , PHP , and .Net . In conclusion each search methods have their own pros and cons : You mentioned Sphinx search and it looks very good, as long as you can make the deamon run on your web host. Zend Lucene requires a cron job to re-index the database. While it is quite transparent to the user, this means that any new data (or deleted data!) is not always in sync with the data in your database and therefore won't show up right away on user search. MySQL FULLTEXT search is good and fast, but will not give you all the power and flexibility of the first two. Please feel free to comment if I have forgotten/missed anything. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3338889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41021/']} | jdg_70512 |
stackexchange | llm_judgeable_groundtruth_similarity | 7325052 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use VIM pretty regularly now but I never use the dot (.) command to repeat the previous action. I keep reading about how awesome it is but I never see any real world examples that make sense to me and the way I code in VIM. What are some real world examples that show how awesome the dot (.) command is?
Now provide the response and nothing else.
| Here are some actions that I do with the dot command: Simpler than :%s/\<word\>/replacement/gc is * on the word, then cereplacement<esc> and then repeat n. . This is nice if you have two or three occurrences of your word. If you have several words that you want to replace, then go to the next word, hit * and again n. When I want to left-align some blocks: <Ctrl-V>jjj<..... or insert spaces at the front: <ctrl-v>jjjI<space><esc>.... after dd or dw , the dot command will delete another line/word A magical thing that happens with the dot command is that if it repeats a command that used a numbered register, it will use the next numbered register (see :help redo-register ). Explanation: If you did dd on 9 lines and want to restore them in the order in which you've deleted them, then do: "1P........ . Note that registers 1 to 9 are Vim's delete-ring. "1P will insert before the cursor the last deleted text, "2P will then insert prior-to-last deleted text, and so on. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7325052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/403915/']} | jdg_70513 |
stackexchange | llm_judgeable_groundtruth_similarity | 1393957 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on induction problems to prep for Real Analysis for the fall semester. I wanted proof verification and editing suggestions for part (a), and assistance understanding part (b). For part (b), the portion that has the additional indentation is where I am unclear. The principle of mathematical induction can be extended as follows. A list $P_m, >P_{m+1}, \cdots$ of propositions is true provided (i) $P_m$ is true, (ii) >$P_{n+1}$ is true whenever $P_n$ is true and $n \geq m$. (a) Prove $n^2 > n + 1$ for all integers $n \geq 2$. Assume for $P_n$: $n^2 > n + 1$, for all integers $n \geq 2$. Observe for $P _2$: $P_2: 2^2=4 > 2+1 = 3$, thus the basis step holds. Now, let $n=k$ such that $k^2 > k + 1$, and assume this also holds. We now consider the case $P_{k+1} : (k+1)^2 > (k+1) + 1$. Observe: $(k+1)^2 = k^2 + 2k + 1$ = $k(k+2) + 1$ Clearly, $k(k+2)+1$ must be greater than $(k+1)+1$. Thus, by the principle of mathematical induction, the case holds for all $n \geq 2$. *I am aware that part (a) does not require induction to prove, but the book problem suggests doing so... (b) Prove $n! > n^2$ for all integers $n \geq 4$. Assume for $P_n$: $n!>n^2$ for all integers $n \geq 4$. Observe for $P_4$: $P_4: 24 = 4! > 16 = 4^2$, thus the basis step holds. Let $n=k$ such that $k! > k^2 $, and assume this also holds. We now consider the case $P_{k+1} : (k+1)! > (k+1)^2$. Observe: $(k+1)! = (k+1)k!$ $> (k+1)k^2$ $= k^3 + k^2$ $> k^2 + 2k + 1$ $= (k + 1)^2$ I was able to write some of this on my own, and I used my book + internet to help me figure out how to write this out. First, I am not clear why we are adding $(k+1)$ to the right hand side (the $k^2$) side of the equation (or what that's allowed, really). Also, I'm not clear on the jump from $k^3+k^2>k^2 + 2k + 1$. Additionally, while looking around MSE, I have noticed many people talk about induction with LHS and RHS notation. I have not seen this in any books--would someone be able to explain using that method as a form of bookkeeping, or be able to suggest a site or stack that could do that?
Now provide the response and nothing else.
| Note: Your question should really be two questions since they're completely distinct and separate. You are simply trying to cover too much for one single question on this site, but I'll help with what I can. Before addressing both parts of your question, I would encourage you to read the following three posts because I think they would go a long way in helping you with induction proofs in general. How to write a clear induction proof. The difference between weak and strong induction. Use of LHS/RHS notation or terminology in an induction answer . Now on to your question(s). Part (a): Your write-up here is really struggling in a number of ways. I'm going to be straight with you--your statement of strong induction is sloppy (you don't actually need strong induction even if you do use an inductive argument which is why I included (2) above) and your inductive "proof" never even uses the inductive hypothesis. See (1) above for how to actually write a clear induction proof; this will more or less force you to understand how the proof actually works, where you use the inductive hypothesis, etc. Also, as Hirshy notes, you actually assume what you are trying to prove (you do this again in part (b)). Regardless, the following is one way you could structure the main part of your induction proof for part (a):\begin{align}(k+1)^2&= \color{blue}{k^2}+2k+1\tag{expand}\\[0.5em]&\color{blue}{> (k+1)}+2k+1\tag{by inductive hypothesis}\\[0.5em]&= 3k+2\tag{simplify}\\[0.5em]&> k+2\tag{since $k\geq 2$}\\[0.5em]&= (k+1)+1.\tag{simplify}\end{align}Did you see how that worked? Can you see how the inductive hypothesis is used in the part highlighted with $\color{blue}{\mathrm{blue}}$? Part (b): The part you have indented is actually not too bad at all when you actually write out what is happening and why at each step; nonetheless, you still need to clean up the beginning, as you are again assuming what you are trying to prove. The goal here is to move from the left-hand side (LHS) of the statement $P_{n+1}$ to the right-hand side (RHS) of $P_{n+1}$. To that end, note the following (the indented steps have explanatory notes in the margin now):\begin{align}(k+1)! &= (k+1)k!\tag{by definition}\\[0.5em] &> (k+1)k^2\tag{by induction hypothesis}\\[0.5em] &= k^3+k^2\tag{expand}\\[0.5em] &> k^2+2k+1\tag{since $k\geq 4$}\\[0.5em] &= (k+1)^2.\tag{factor}\end{align}Can you see how all of that worked? | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1393957', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/253808/']} | jdg_70514 |
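Editor's note: a short expansion of the two steps the asker flagged as unclear in part (b). The $(k+1)$ appears on both sides because the inductive hypothesis is multiplied by the positive factor $(k+1)$, and the remaining jump reduces to an elementary inequality: $$k! > k^2 \;\Longrightarrow\; (k+1)\,k! > (k+1)\,k^2 = k^3 + k^2,$$ $$k^3 + k^2 > k^2 + 2k + 1 \iff k^3 > 2k + 1,$$ which holds for every $k \ge 2$ (so certainly for $k \ge 4$) since $k^3 \ge 2k^2 \ge 4k > 2k + 1$.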
stackexchange | llm_judgeable_groundtruth_similarity | 8938574 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm doing something like this Class.hpp: class Class { private: static const unsigned int arraySize; int ar[arraySize+2]; }; Class.cpp: #include <Class.hpp>const unsigned int arraySize = 384; The compiler (q++, a c++ compiler for the QNX OS based on g++) gives me error: array bound is not an integer constant while compiling a unit including Class.hpp (not while compiling Class.cpp). Why isn't that working? I know that a static const member can be used as an array bound, guaranteed by the C++ standard (see this anwser ). But why doesn't the compiler see the result of static const + const as a constant?
Now provide the response and nothing else.
| This is good code which should have been accepted by the compiler: class Class { const static int arraySize = 384; int ar[arraySize+2]; }; and if it isn't, your compiler is broken. However, if you move actual constant out of the header file to selected translation unit, that invalidates the code. // Class.hclass Class { const static int arraySize; int ar[arraySize+2]; // ERROR}; // Class.cppconst int Class::arraySize = 384; This is because the size of your Class object cannot be determined at compile time from the data available in the header alone. This is not exactly right reason, but reasoning along these lines helps to understand compilation errors such as this. To avoid making such mistakes, you can replace static const int with an enum , e.g. class Class { enum { arraySize = 384 }; int ar[arraySize+2]; }; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8938574', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/990968/']} | jdg_70515 |
stackexchange | llm_judgeable_groundtruth_similarity | 14589386 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have following code. It saves file but with empty content. What's wrong with it? public void saveMap() { String sb = "TEST CONTENT"; JFileChooser chooser = new JFileChooser(); chooser.setCurrentDirectory(new File("/home/me/Documents")); int retrival = chooser.showSaveDialog(null); if (retrival == JFileChooser.APPROVE_OPTION) { try { FileWriter fw = new FileWriter(chooser.getSelectedFile()+".txt"); fw.write(sb.toString()); } catch (Exception ex) { ex.printStackTrace(); } }}
Now provide the response and nothing else.
| If you're using Java 7, use try with resources. This is how you would do it: try(FileWriter fw = new FileWriter(chooser.getSelectedFile()+".txt")) { fw.write(sb.toString());} Try with resources automatically calls close() upon failure or success. If you're not using Java 7, don't forget to call close() . close() will automatically call flush() . ...fw.close();... To understand why you need to flush, you need to understand how a FileWriter works. When you say fw.write("blah") , it actually puts that string into a buffer in memory. Once you fill the buffer, the FileWriter then writes the string to the hard drive. It has this behavior because writing files is much more efficient in large chunks. If you want to empty the buffer before the buffer reaches capacity, you'll need to tell the FileWriter this by calling flush() . Calling flush() can also be very important when communicating, such as over the internet, because you'll need to flush before the other end can see your message. It won't do them much use if your message is just sitting in memory. Once you're done with any I/O stream, you should call close() (with the exception of the standard I/O streams). This means the OS no longer has to maintain this stream. In some cases, there are a limited number of streams that can be opened, such as with files, so it is extremely important that you don't forget to close. When you call close, it actually does two things: it empties the buffer and then closes the stream. This is to make sure that nothing gets left behind before the stream closes. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14589386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1949808/']} | jdg_70516 |
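Editor's note: putting the answer's fragments together, a minimal complete version of the save path might look like the following (the fixed file name out.txt is a placeholder; the JFileChooser part of the question is omitted so the example stays self-contained):

    import java.io.FileWriter;
    import java.io.IOException;

    public class SaveDemo {
        public static void main(String[] args) {
            String content = "TEST CONTENT";
            // try-with-resources closes (and therefore flushes) the writer
            // even if write() throws.
            try (FileWriter fw = new FileWriter("out.txt")) {
                fw.write(content);
            } catch (IOException ex) {
                ex.printStackTrace();
            }
            // Without close()/flush(), the text may still be sitting in the in-memory
            // buffer when the program exits, which is why the saved file appeared empty.
        }
    }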
stackexchange | llm_judgeable_groundtruth_similarity | 59681461 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to read a big 3GB .mbox file coming from a Gmail backup. This works: import mailboxmbox = mailbox.mbox(r"D:\All mail Including Spam and Trash.mbox")for i, message in enumerate(mbox): print("from :",message['from']) print("subject:",message['subject']) if message.is_multipart(): content = ''.join(part.get_payload(decode=True) for part in message.get_payload()) else: content = message.get_payload(decode=True) print("content:",content) print("**************************************") if i == 10: break except it takes more than 40 seconds for the first 10 messages only. Is there a faster way to access to a big .mbox file with Python?
Now provide the response and nothing else.
| Here's a quick and dirty attempt to implement a generator to read in an mbox file message by message. I have opted to simply ditch the information from the From separator; I'm guessing maybe the real mailbox library might provide more information, and of course, this only supports reading, not searching or writing back to the input file. #!/usr/bin/env python3import emailfrom email.policy import defaultclass MboxReader: def __init__(self, filename): self.handle = open(filename, 'rb') assert self.handle.readline().startswith(b'From ') def __enter__(self): return self def __exit__(self, exc_type, exc_value, exc_traceback): self.handle.close() def __iter__(self): return iter(self.__next__()) def __next__(self): lines = [] while True: line = self.handle.readline() if line == b'' or line.startswith(b'From '): yield email.message_from_bytes(b''.join(lines), policy=default) if line == b'': break lines = [] continue lines.append(line) Usage: with MboxReader(mboxfilename) as mbox: for message in mbox: print(message.as_string()) The policy=default argument (or any policy instead of default if you prefer, of course) selects the modern EmailMessage library which was introduced in Python 3.3 and became official in 3.6. If you need to support older Python versions from before America lost its mind and put an evil clown in the White House simpler times, you will want to omit it; but really, the new API is better in many ways. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59681461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1422096/']} | jdg_70517 |
stackexchange | llm_judgeable_groundtruth_similarity | 1405638 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The given sequence is $\sqrt{7}$, $\sqrt{7+\sqrt{7}}$, $\sqrt{7+\sqrt{7+\sqrt{7}}}$, ... and so on. The sequence is increasing, so to converge it must be bounded above. Now it looks like the terms would not exceed 7. The given options are $\frac{1+\sqrt{33}}{2}$ $\frac{1+\sqrt{32}}{2}$ $\frac{1+\sqrt{30}}{2}$ $\frac{1+\sqrt{29}}{2}$ How do I proceed now? Thanks for any help.
Now provide the response and nothing else.
| Trick: Let $X = \sqrt{7 + \sqrt{7 + \cdots}}$. We have $X = \sqrt{7 + X}$ and so $X^2 = 7 + X$. Now you solve the quadratic equation. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1405638', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/118494/']} | jdg_70518
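Editor's note: carrying the hint through to the listed options (this only identifies the candidate limit; the monotone-and-bounded argument sketched in the question is what guarantees the limit exists): $$X^2 = 7 + X \;\Longrightarrow\; X^2 - X - 7 = 0 \;\Longrightarrow\; X = \frac{1 \pm \sqrt{1 + 28}}{2} = \frac{1 \pm \sqrt{29}}{2},$$ and since every term of the sequence is positive, $X = \frac{1 + \sqrt{29}}{2}$, i.e. the last of the four options.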
stackexchange | llm_judgeable_groundtruth_similarity | 4962930 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
update: some background - i use the xml file to generate a set of pdfs (through a java application that drives JasperReports). all the reports are coming out blank when I use this new xml file. I've ruled out network problems because I use an old xml file from the same server that I run the java application with the new xml file. I've compared the two files (old-good one and new-bad one) using a hex-editor and my first clue is that there are carriage returns in the new file and none in the old one. this may not fix the issue, but I'd like to eliminate it from the equation. I think I need to remove all the carriage returns from my xml file in order for it to work as I need it to. In my travels, the closest I found is this: .Replace("\r","") but where do I use it in the following code? I create my data model, create a root, and pass that to the serializer. At what point can I say "remove carriage returns?" let def = new reportDefinition("decileRank", "jasper", new template("\\\\server\\location\\filename.jrxml", "jrxml"))let header = new reportDefinitions([| def |])let root = reportGenerator(header, new dbConnection(), new reports(reportsArray))let path = sprintf "C:\\JasperRpt\\parameter_files\\%s\\%d\\%s\\%s\\" report year pmFirm pmName //(System.DateTime.Now.ToString("ddMMyyyy")) Directory.CreateDirectory(path) |> ignorelet filename = sprintf "%s%s" path monthprintfn "%s" filename use fs = new FileStream(filename, FileMode.Create) let xmlSerializer = XmlSerializer(typeof<reportGenerator>) xmlSerializer.Serialize(fs,root)fs.Close()
Now provide the response and nothing else.
| You can solve it using a negative look behind . String[] parts = str.split("(?<!\\\\), "); Basically it says, split on each ", " that is not preceeded by a backslash . String str = "type=simple, output=Hello\\, world, repeate=true";String[] parts = str.split("(?<!\\\\), ");for (String s : parts) System.out.println(s); Output: type=simpleoutput=Hello\, worldrepeate=true ( ideone.com link ) If you happen to be stuck with the non-escaped comma-separated values, you could do the following (similar) hack: String[] parts = str.split(", (?=\\w+=)"); Which says split on each ", " which is followed by some word-characters and an = ( ideone.com link ) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4962930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/226473/']} | jdg_70519 |
stackexchange | llm_judgeable_groundtruth_similarity | 1613202 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to display a value in an excel cell formatted like a percentage, e.g. like 12.3% . By default the value is displayed as Text, but I need to display it as a number. What is the appropriate method to achieve this?
Now provide the response and nothing else.
| You need to: Set your data as number (floating-point), not as text. Specify cell format as percentage. Something like: cell.setCellValue(0.123); // set value as numberCellStyle style = workbook.createCellStyle();style.setDataFormat(workbook.createDataFormat().getFormat("0.000%"));cell.setCellStyle(style); Take a look at user defined formats section of POI quick guide for more details. You may also want to go through the examples which show how to use different POI capabilities. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1613202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/194649/']} | jdg_70520 |
stackexchange | llm_judgeable_groundtruth_similarity | 10381551 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why aren't static final variables given default values, whereas static (but non-final variables are given default values). What is the reason that such behavior was implemented in Java?
Now provide the response and nothing else.
| Of course static final variables are given default values, see for example this: class Test { static final int x; static { printX(); x = 42; printX(); } static void printX() { System.out.println("Here x is "+x); } public static void main(String[] args) { }} The output is: Here x is 0Here x is 42 If x wasn't given the default value of 0 as specified in JLS 4.12.5 , the output would depend on the JVM used. You might see some random number. Update : Now that we have demonstrated that static final fields do get a default value, you may want to know why the default value is not enough. There is no good answer to that question, besides the obvious one: "The spec says so" . Excerpt from 8.3.1.2: It is a compile-time error if a blank final (§4.12.4) class variable is not definitely assigned (§16.8) by a static initializer (§8.7) of the class in which it is declared. We can only guess at the motivation behind such a restriction, but I think it's to make programs easier to understand. If you want to set the variable to 0 it's clearer to do it explicitly. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10381551', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1141285/']} | jdg_70521 |
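Editor's note: a tiny example (mine, not from the answer) of the JLS 8.3.1.2 rule quoted above: a blank static final that is never assigned makes the class fail to compile, while assigning it exactly once in a static initializer is accepted.

    public class BlankFinalDemo {
        // static final int y;          // would not compile: y is never definitely assigned
        static final int x;             // a "blank final": legal only because of the block below

        static {
            x = 42;                     // every blank final must be assigned exactly once
        }

        public static void main(String[] args) {
            System.out.println(x);      // 42
        }
    }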
stackexchange | llm_judgeable_groundtruth_similarity | 34513106 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
All the code I can find revolves around loading images directly into visual controls. However, I have my own cache system (converting a project from another language) and so, as efficiently as possible, I want the following: Load jpg/png images - probably into a bitmap / cgimage. (This can either be from the file system or from images downloaded online) Possibly save the image back as a compressed/resized png/jpg file Supply an image reference for a visual control I am new to Swift and the iOS platform, but as far as I can tell, cgimage is as close as it gets? However, there does not appear to be a way to load an image from the file system when using cgimage... But I have found people discussing ways to do this for e.g. UIImage, so I am now doubting my initial impression that cgimage was the best match for my needs.
Now provide the response and nothing else.
| It is easy to get confused between UIImage , CGImage and CIImage . The difference is following: UIImage: UIImage object is a high-level way to display image data. You can create images from files, from Quartz image objects, or from raw image data you receive. They are immutable and must specify an image’s properties at initialization time. This also means that these image objects are safe to use from any thread.Typically you can take NSData object containing a PNG or JPEG representation image and convert it to a UIImage. CGImage: A CGImage can only represent bitmaps. Operations in CoreGraphics, such as blend modes and masking require CGImageRefs . If you need to access and change the actual bitmap data, you can use CGImage . It can also be converted to NSBitmapImageReps . CIImage: A CIImage is an immutable object that represents an image. It is not an image. It only has the image data associated with it. It has all the information necessary to produce an image.You typically use CIImage objects in conjunction with other Core Image classes such as CIFilter , CIContext , CIColor , and CIVector . You can create CIImage objects with data supplied from variety of sources such as Quartz 2D images, Core Videos image, etc.It is required to use the various GPU optimized Core Image filters. They can also be converted to NSBitmapImageReps . It can be based on the CPU or the GPU. In conclusion, UIImage is what you are looking for. Reasons are: You can get image from device memory and assign it to UIImage You can get image from URL and assign it to UIImage You can write UIImage in your desired format to device memory You can resize image assigned to UIImage Once you have assigned an image to UIImage, you can use that instance in controls directly. e.g. setting background of a button, setting as image for UIImageView Would have added code samples but all these are basic questions which have been already answered on Stackoverflow, so there is no point. Not to mention adding code will make this unnecessarily large. Credit for summarizing differences: Randall Leung | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34513106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/599361/']} | jdg_70522 |
stackexchange | llm_judgeable_groundtruth_similarity | 2885335 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a SQL CLR dll I want to deploy, but have found you can embed the byte stream/varbinary_literal/varbinary_expression/assembly bits into a text file to get around the messy hassle of packaging a DLL and making sure it's accessible for the CREATE ASSEMBLY command. But what I have yet to find is how to get that byte stream/varbinary_literal/varbinary_expression/assembly bits value. I haven't found any consistent terminology, and what I keep finding involves using Load().
Now provide the response and nothing else.
| It's just a hex representation of the dll. This bit should do the trick: static string GetHexString(string assemblyPath) { if (!Path.IsPathRooted(assemblyPath)) assemblyPath = Path.Combine(Environment.CurrentDirectory, assemblyPath); StringBuilder builder = new StringBuilder(); builder.Append("0x"); using (FileStream stream = new FileStream(assemblyPath, FileMode.Open, FileAccess.Read, FileShare.Read)) { int currentByte = stream.ReadByte(); while (currentByte > -1) { builder.Append(currentByte.ToString("X2", CultureInfo.InvariantCulture)); currentByte = stream.ReadByte(); } } return builder.ToString(); } You should use the resulting string like so: string hexString = GetHexString(assemblyPath);string sql = "CREATE ASSEMBLY [" + assemblyName + "] FROM " + hexString + " WITH PERMISSION_SET = " + somePermissionSet; | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2885335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135152/']} | jdg_70523 |
stackexchange | llm_judgeable_groundtruth_similarity | 30851627 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a data where the first column is a bunch of ID numbers (some repeat), and the second column is just a bunch of numbers. I need a way to keep each ID number only once based on the smallest number in the second column. Row# ID Number1 10 1802 12 1673 12 1824 12 1355 15 1526 15 133 Ex: I only want to keep Row# 1, 4, and 6 here and delete the rest
Now provide the response and nothing else.
| For selecting the row that has the minimum 'Number' for each 'ID' group, we can use one of the aggregating by group function. A base R option is aggregate . With aggregate , we can either use the 'formula' method or specify a list of grouping elements/variables with the by argument. Using the formula method, we get the min value of 'Number' for each 'ID'. aggregate(Number~ID, df1, FUN=min) Or we can use a faster option with data.table . Here, we convert the 'data.frame' to 'data.table' ( setDT(df1) ), grouped by 'ID', we get the min value of "Number". library(data.table)setDT(df1)[, list(Number=min(Number)), by = ID] Or this can be also done with setorder to order the 'Number' column and use unique with by option to select the first non-duplicated 'ID' row. (from @David Arenburgs' comments) unique(setorder(setDT(df1), Number), by = "ID") Or using dplyr , we group by 'ID' and get the subset rows with summarise . library(dplyr)df1 %>% group_by(ID) %>% summarise(Number= min(Number)) Or we can use sqldf syntax to get the subset of data. library(sqldf)sqldf('select ID, min(Number) as Number from df1 group by ID') Update If there are multiple columns and you want to get the row based on the minimum value of 'Number' for each 'ID', you can use which.min . Using .I will get the row index and that can be used for subsetting the rows. setDT(df1)[df1[, .I[which.min(Number)], by = ID]$V1] Or with dplyr we use slice to filter out the rows that have the min value of 'Number' for each 'ID' df1 %>% group_by(ID) %>% slice(which.min(Number)) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30851627', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5012383/']} | jdg_70524 |
stackexchange | llm_judgeable_groundtruth_similarity | 43010366 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am stuck trying to combine two time series datasets that have different ranges and both are stored with item# in column1 and date as column headings. For example: df1 #ITEM 1/1/16 1/2/16 1/3/16 ... 3/24/17 #1 350 365 370 ... 400 #2 100 95 101 ... 95 #3 5 8 9 ... 15 The other dataset range is smaller, its in the same format, and both are daily frequency. How can I append the rows of df2 to df1 despite having different ranges, but making sure the dates are aligned when merged? Happy with NA in the new dataframe where df#2 didn't have values for dates in df1 Should I create these at xts objects so that once they are merged I can easily pull data for item1 on X date? Or is there an easy way to do that with this format as well? Thanks in advance for you help.
Now provide the response and nothing else.
| Let's come to your original error: Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304 This is because you adapted your code from code whose original input image size was 24*24. The tensor shape after two convolution and two max-pooling layers is [-1, 6, 6, 64]. However, as your input image shape is 150*150, the intermediate shape becomes [-1, 38, 38, 64]. Try changing w3: w3 = tf.Variable(tf.random_normal([38*38*64, 1024])) You should always keep an eye on your tensor shape flow. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43010366', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7764574/']} | jdg_70525
stackexchange | llm_judgeable_groundtruth_similarity | 35466926 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I currently have a subview that is created and added to the UIView in ViewDidLoad() . I am attempting to user UIGestureRecognizers to detect a tap and unhide a particular button. My current code: override func viewDidLoad() { super.viewDidLoad() architectView = CustomClass(frame: self.view.bounds) self.view.addSubview(architectView) let gestureRecognizer = UITapGestureRecognizer(target: self, action: "handleTap:") gestureRecognizer.delegate = self architectView.addGestureRecognizer(gestureRecognizer)}func handleTap(gestureRecognizer: UIGestureRecognizer) { let alert = UIAlertController(title: "Alert", message: "Message", preferredStyle: UIAlertControllerStyle.Alert) alert.addAction(UIAlertAction(title: "Click", style: UIAlertActionStyle.Default, handler: nil)) self.presentViewController(alert, animated: true, completion: nil)} The handleTap() function is a simple test to see if the taps are being recognized. This code does not trigger the UIAlert when it is pressed? What am I missing?
Now provide the response and nothing else.
| I tested your code here and it does work. However, I think you might be missing to add the UIGestureRecognizerDelegate protocol to your View Controller. See below: class ViewController: UIViewController, UIGestureRecognizerDelegate { var architectView = UIView() override func viewDidLoad() { super.viewDidLoad() architectView = UIView(frame: self.view.bounds) self.view.addSubview(architectView) let gestureRecognizer = UITapGestureRecognizer(target: self, action: "handleTap:") gestureRecognizer.delegate = self architectView.addGestureRecognizer(gestureRecognizer) } func handleTap(gestureRecognizer: UIGestureRecognizer) { let alert = UIAlertController(title: "Alert", message: "Message", preferredStyle: UIAlertControllerStyle.Alert) alert.addAction(UIAlertAction(title: "Click", style: UIAlertActionStyle.Default, handler: nil)) self.presentViewController(alert, animated: true, completion: nil) } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35466926', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3500477/']} | jdg_70526 |
stackexchange | llm_judgeable_groundtruth_similarity | 17933282 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to read in a csv file with numpy.genfromtxt but some of the fields are strings which contain commas. The strings are in quotes, but numpy is not recognizing the quotes as defining a single string. For example, with the data in 't.csv': 2012, "Louisville KY", 3.52011, "Lexington, KY", 4.0 the code np.genfromtxt('t.csv', delimiter=',') produces the error: ValueError: Some errors were detected ! Line #2 (got 4 columns instead of 3) The data structure I am looking for is: array([['2012', 'Louisville KY', '3.5'], ['2011', 'Lexington, KY', '4.0']], dtype='|S13') Looking over the documentation, I don't see any options to deal with this. Is there a way do to it with numpy, or do I just need to read in the data with the csv module and then convert it to a numpy array?
Now provide the response and nothing else.
| You can use pandas (the becoming default library for working with dataframes (heterogeneous data) in scientific python) for this. It's read_csv can handle this. From the docs: quotechar : string The character to used to denote the start and end of a quoted item. Quoted items can include the delimiter and it will be ignored. The default value is " . An example: In [1]: import pandas as pdIn [2]: from StringIO import StringIOIn [3]: s="""year, city, value ...: 2012, "Louisville KY", 3.5 ...: 2011, "Lexington, KY", 4.0"""In [4]: pd.read_csv(StringIO(s), quotechar='"', skipinitialspace=True)Out[4]: year city value0 2012 Louisville KY 3.51 2011 Lexington, KY 4.0 The trick here is that you also have to use skipinitialspace=True to deal with the spaces after the comma-delimiter. Apart from a powerful csv reader, I can also strongly advice to use pandas with the heterogeneous data you have (the example output in numpy you give are all strings, although you could use structured arrays). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17933282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1829066/']} | jdg_70527 |
stackexchange | llm_judgeable_groundtruth_similarity | 11242224 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I had a JUnit test asserting two Double objects with the following: Assert.assertEquals(Double expected, Double result); This was fine, then I decided to change it to use the primitive double instead, which turned out to be deprecated unless you also provide a delta. So what I am wondering is: what is the difference between using the Double object or the primitive type in this assertEquals? Why is using the objects without a delta OK, but using the primitives without a delta deprecated? Is Java doing something in the background which already has a default delta value taken into account? Thanks.
Now provide the response and nothing else.
| There is NO assert method in JUnit with the signature assertEquals(Double expected, Double result); There is one, however, generic for objects: assertEquals(Object expected, Object result); This calls the objects' equals method and as you can expect, it is not recommended to use this for comparing Double objects. For doubles, as you observed, it is absolutely necessary to use a delta for comparison, to avoid issues with floating-point rounding (explained already in some other answers). If you use the 3-argument version of assertEquals with double arguments assertEquals(double expected, double actual, double delta); your Double s will get silently unboxed to double and everything will work fine (and your tests won't fail unexpectedly :-). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11242224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1424675/']} | jdg_70528 |
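Editor's note: a compact JUnit 4 test showing both points from the answer; the commented-out line is the Object-based comparison that would fail here, and the last line is the recommended 3-argument form that unboxes and compares within a tolerance. Names and the delta value are illustrative.

    import org.junit.Assert;
    import org.junit.Test;

    public class DoubleAssertTest {
        @Test
        public void comparesWithDelta() {
            Double expected = 0.3;
            Double actual = 0.1 + 0.2;   // 0.30000000000000004 due to floating point

            // Object overload: delegates to Double.equals, so this would fail here.
            // Assert.assertEquals(expected, actual);

            // Recommended: both Doubles are unboxed to double and compared within a delta.
            Assert.assertEquals(expected, actual, 1e-9);
        }
    }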
stackexchange | llm_judgeable_groundtruth_similarity | 25766945 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We often write some functions which have more than one exit point (that is, return in C). At the same time, when exiting the function, for some general works such as resource cleanup, we wish to implement them only once, rather than implementing them at every exit point. Typically, we may achieve our wish by using goto like the following: void f(){ ... ...{..{... if(exit_cond) goto f_exit; }..}.. ... f_exit: some general work such as cleanup} I think using goto here is acceptable , and I know many people agree on using goto here. Just out of curiosity , does there exist any elegant way for neatly exiting a function without using goto in C?
Now provide the response and nothing else.
| Why avoid goto ? The problem you want to solve is: How to make sure some common code always gets executed before the function returns to the caller? This is an issue for C programmers, since C does not provide any built in support for RAII. As you already concede in your question body, goto is a perfectly acceptable solution . Never-the-less, there may be non-technical reasons to avoid using it: academic exercise coding standard compliance personal whim (which I think is what is motivating this question) There are always more than one way to skin a cat, but elegance as a criteria is too subjective to provide a way to narrow to a single best alternative. You have to decide the best option for yourself. Explicitly calling a cleanup function If avoiding an explicit jump (e.g., goto or break ) common cleanup code can be encapsulated within a function, and explicitly called at the point of early return . int foo () { ... if (SOME_ERROR) { return foo_cleanup(SOME_ERROR_CODE, ...); } ...} (This is similar to another posted answer, that I only saw after I initially posted, but the form shown here can take advantage of sibling call optimizations.) Some people feel explicitness is more clear, and therefore more elegant. Others feel the need to pass cleanup arguments to the function to be a major detractor. Add another layer of indirection. Without changing the semantics of the user API, change its implementation into a wrapper composed of two parts. Part one performs the actual work of the function. Part two performs the cleanup necessary after part one is done. If each part is encapsulated within its own function, the wrapper function has a very clean implementation. struct bar_stuff {...};static int bar_work (struct bar_stuff *stuff) { ... if (SOME_ERROR) return SOME_ERROR_CODE; ...}int bar () { struct bar_stuff stuff = {}; int r = bar_work(&stuff); return bar_cleanup(r, &stuff);} The "implicit" nature of the cleanup from the point of view of the function that performs the work may be viewed favorably by some. Some potential code bloat is also avoided by only calling the cleanup function from a single place. Some argue that "implicit" behaviors are "tricky", and therefore more difficult to understand and maintain. Miscellaneous... More esoteric solutions using setjmp() / longjmp() can be considered, but using them correctly can be difficult. There are open-source wrappers that implement try/catch exception handling style macros over them (for example, cexcept ), but you have to change your coding style to use that style for error handling. One could also consider implementing the function like a state machine. The function tracks progress through each state, an error causes the function to short circuit to the cleanup state. This style is usually reserved for particularly complex functions, or functions that need to be retried later and be able to pick up from where they left off. Do as the Romans do. If you need to comply to coding standards, then the best approach is to follow whatever technique is most prevalent in the existing code base. This applies to almost all aspects of making changes to an existing stable source code base. It would be considered disruptive to introduce a new coding style. You should seek approval from the powers that be if you feel a change would dramatically improve some aspect of the software. Otherwise, as "elegance" is subjective, arguing for the sake of "elegance" is not going to get you anywhere. 
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25766945', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3881143/']} | jdg_70529 |
stackexchange | llm_judgeable_groundtruth_similarity | 836177 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a formula to describe the graph below: range: [0,1] domain: [0,infinity) The formula should be in f(x) = y form, and it should curve from 0 to 1 (doesn't need to be harmonized). I was thinking of using the upper-half ellipse formula, but I couldn't keep the y value from going down, and then I tried using Euler, but it turns out to be more complicated than I thought... Any help is appreciated. Thank you
Now provide the response and nothing else.
| Such a function doesn't exist. Forget $f(1) = 1$. Just assume $f(x)$ satisfy the functional equation on $[0,\infty)$. We have $$\begin{align}f(x) &= 1 + x f\left(\frac{1}{x+1}\right)\\&= 1 + x \left[ 1 + \frac{1}{x+1}f\left(\frac{x+1}{x+2}\right)\right]\\&= 1 + x \left[ 1 + \frac{1}{x+1} + \frac{1}{x+2} f\left(\frac{x+2}{2x+3}\right) \right]\\&\;\vdots\\&= 1 + x \left[ 1 + \left( \sum_{k=2}^{p-1} \frac{1}{F_{k-1} x + F_k} \right) + \frac{1}{F_{p-1} x + F_p}f\left(\frac{F_{p-1} x + F_p}{F_p x + F_{p+1}}\right)\right]\\&\;\vdots\end{align}$$where $F_k$ is the $k^{th}$ Fibonacci number . Notice $$\lim_{p\to\infty} \frac{F_{p+1}}{F_p} = \phi = \frac{1+\sqrt{5}}{2}\quad\implies\quad\lim_{p\to\infty} \frac{F_{p-1} x + F_p}{F_p x + F_{p+1}} = \frac{1}{\phi}$$ If we further assume $f(x)$ is continuous at a single point $\displaystyle\;\frac{1}{\phi}$,the last expression converges as $p \to \infty$ and we get: $$f(x) = 1 + x + \sum_{k=2}^\infty \frac{x}{F_{k-1} x + F_k}$$Substitute $x$ by $1$, we get $$f(1) = 1 + 1 + \sum_{k=2}^\infty\frac{1}{F_{k-1}+F_k} = \frac{1}{F_1} + \frac{1}{F_2} + \sum_{k=3}^\infty \frac{1}{F_k}= \sum_{k=1}^\infty \frac{1}{F_k}$$ The number at the RHS is known as the Reciprocal Fibonacci constant $\psi$ with a value $$\approx 3.359885666243177553172011302918927179688905133731$$differs from $1$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/836177', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/152599/']} | jdg_70530 |
stackexchange | llm_judgeable_groundtruth_similarity | 34963 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a lil confused: Let's say I got one switch with two VLANs (X, Y) and one trunk port (for X, Y). The trunk port is connected to a router with subinterfaces. VLAN X has IP address range 192.168.X.0/24 VLAN Y has IP address range 192.168.Y.0/24 The router got the first IP address in both networks. That means it knows a route to X and Y. If I now send a packet from X to Y, the router will receive it and forward it. (?) The reason to use a router is becaue of NAT to the internet. Nearly everything is connected to the internet. That means I always need to filter traffic on the router that connects the switches. (?) Is there any setup connected to the internet, which doesn't need to be filtered by ACL or VRT? Is it possible to use NAT without routig internally? Did I miss something?
Now provide the response and nothing else.
| That's a lot of questions so let's try to take them one by one. If I now send a packet from X to Y, the router will receive it and forward it. (?) Yes, that is what a router does, it forwards packets from one layer-2 (L2) network to another. The reason to use a router is becaue of NAT to the internet. No. The primary reason to use a router is to forward packets. Additionally it can, but certainly doesn't always, perform additional operations on the packets such as address or port translation (NAT/PAT) or encryption (VPN). You could just as well have public ip addresses on your vlan X so in that case, no NAT is required for it to connect to the Internet. Or, as @RonTrunk mentions, if you do use private IP addresses then the NAT can also be done elsewhere, e.g. on a firewall between the router and the Internet. Nearly everything is connected to the internet. That means I always need to filter traffic on the router that connects the switches. (?) It depends how secure you want to be (and how paranoid you are, or to look at it from another angle, how severe the impact would be of a security breach), and how well your hosts are secured. Is there any setup connected to the internet, which doesn't need to be filtered by ACL or VRT? Sure, most home routers will typically do NAT but have no explicit ACLs. Is it possible to use NAT without routig internally? Did I miss something? Assuming that you mean that vlan X and Y both need access to the Internet (with NAT) but X is not allowed to communicate with Y (and vice versa) then you cannot "disable" the routing between X and Y so the simplest solution would be to use an ACL to filter that traffic. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/34963', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/30532/']} | jdg_70531 |
stackexchange | llm_judgeable_groundtruth_similarity | 63813 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I generate a sine wave with time varying frequency that is continuous? How can I resolve the following problem? I want a continuous graph. I'm generating this simply like this: for(int i = 0; i < pcm_buffer_size - 1; i += 2) { float sample = gain * sin(((float) t * M_PI * 2 * 500) / (float) (sample_rate)); printf("%f\n", sample); t++;}for(int i = 0; i < pcm_buffer_size - 1; i += 2) { float sample = gain * sin(((float) t * M_PI * 2 * 1000) / (float) (sample_rate)); printf("%f\n", sample); t++;} Where 500 and 1000 is the frequency.
Now provide the response and nothing else.
| Well, let's go: @AnonSubmitter85 has given you a nice answer, but let me show my way to do it in MATLAB, which should be easy to port to C. First I create 256 samples of a 500 Hz tone sampled at 44100 Hz. Take a look at how I accumulate the phase; at the end of the first loop I wrap the phase back into the interval 0 to 2*pi. Then the second loop creates 256 more samples at 1000 Hz, and to keep the signal continuous it simply starts from the last accumulated phase :-). Here is the code:

```matlab
m_phase = 0;
signal = [];
f = 500;
fs = 44100;
phaseInc = 2*pi*f/fs;
for i = 1:256
    signal(i) = sin(m_phase);
    m_phase = m_phase + phaseInc;
end
% place the phase in the 0 to 2*pi range
m_phase = mod(m_phase, 2*pi);

f = 1000;
phaseInc = 2*pi*f/fs;
for i = 257:256*2
    signal(i) = sin(m_phase);
    m_phase = m_phase + phaseInc;
end
% place the phase in the 0 to 2*pi range
m_phase = mod(m_phase, 2*pi);
```

Here is the plot of the above code: (A C port of the same phase-accumulator idea is sketched below.) | {} | {'log_upvote_score': 5, 'links': ['https://dsp.stackexchange.com/questions/63813', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/47457/']} | jdg_70532
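Since the answer says this is easy to port to C, here is one possible editorial sketch of such a port (not the original author's code). The buffer size, the two-segment layout, and printing samples to stdout are illustrative assumptions; the essential part is the single phase accumulator that is never reset when the frequency changes.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE 44100.0
#define TWO_PI      (2.0 * M_PI)

int main(void)
{
    double phase = 0.0;                   /* running phase accumulator */
    double freqs[2] = { 500.0, 1000.0 };  /* the two tones from the question */
    float  signal[512];
    int    n = 0;

    for (int seg = 0; seg < 2; seg++) {
        double phase_inc = TWO_PI * freqs[seg] / SAMPLE_RATE;
        for (int i = 0; i < 256; i++) {
            signal[n++] = (float)sin(phase);
            phase += phase_inc;           /* keep accumulating across segments */
        }
        phase = fmod(phase, TWO_PI);      /* keep the accumulator bounded */
    }

    for (int i = 0; i < n; i++)
        printf("%f\n", signal[i]);
    return 0;
}
```

Because the phase carries over from the 500 Hz segment into the 1000 Hz segment, the generated waveform has no discontinuity at the frequency change.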
stackexchange | llm_judgeable_groundtruth_similarity | 378865 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following problem: Consider a plane with uniform charge density $\sigma$. Above the said plane, there is a system of conducting wires made up of an U-shaped circuit on which a linear conductor of lenght $d$ can slide with constant velocity $v$. The system as a whole has a rectangular shape and is parallel to the plane. (See the picture). Calculate the line integral of the magnetic field $\bf B$ along the perimeter $L(t)$ of said rectangle as a function of time . My professor solves this problem using Maxwell's fourth equation in integral form, assuming that the current density $\bf {J} $ is everywhere null, and that the electric field $\bf E$ is the one generated by a uniformly charged plane, i.e. perpendicular to the the plane and of norm $E=\frac{\sigma}{2\epsilon_0}$; thus yielding $$\oint_{L(t)} {\bf B}\cdot dl=\mu_0\epsilon_0\frac{d}{dt}\int_{S(t)} {\bf E}\cdot dS=\mu_0\epsilon_0Edv=0.5\mu_0\sigma dv$$ I think there are some things wrong both with this solution: There should be no magnetic field at all! A uniformly charged plane only produces an electrostatic field. (I know there could be a magnetic field generated by the current inside the wires, but then you couldn't assume that $\bf J$ is null everywhere as my professor did!) Maxwell's fourth equation does not hold in that form if the domains of integration are allowed to vary with time. In fact, by resorting to the differential forms, we find that plugging ${\bf J}=\vec 0$ and $\frac{\partial {\bf E}}{\partial t}=0$, as my professor assumed, yields $rot{\bf B}=0$, and thus the line integral of the magnetic field over any closed curve, at any istant, should be zero by Stokes' theorem! Therefore, my question is the following. Are the my professor's assumption ($\bf J$ $=\vec 0$, $\frac{\partial{\bf E}}{\partial t}=\vec 0$) correct, or do both $\bf J$ and $\bf E$ need be modified so as to account for the charges present in the circuit? Is there a current in the circuit at all?
Now provide the response and nothing else.
| Your insight stated in 2. is correct! In the integral form of the fourth Maxwell equation with time varying integration surface, the time differentiation stays inside the integral: $$\oint_{L(t)} {\bf B}\cdot dl=\mu_0\epsilon_0\int_{S(t)} {\frac{\partial}{\partial t}\bf E}\cdot dS \tag{1}$$Then from the assumption $\frac{\partial{\bf E}}{\partial t}= 0$ both the left hand and the right hand side should be zero in this case. However, Anton Fetisov has shown in his answer (s. below) that due to induced charges on the moving wire $\frac{\partial{\bf E}}{\partial t}\neq 0$. Therefore, your professor has obviously made mistakes but fortuitously obtained the correct answer. Addendum following the answer of Anton Fetisov: In his correct and deep going analysis of the problem, he considers the effects of the finite size of the metallic wire and the electric charges induced on its surface by the homogeneous electric field of the charged plane which are necessary to produce a zero total electric field in the wires. These induced charges and the associated deformation of the electrical field around the wire are moving with velocity $v$ in the $x$-direction. Thus, from this point of view, there exist currents and time varying electric fields which is inconsistent with two basic assumptions made in the problem, i.e., $\bf J = 0$ and $\frac{\partial{\bf E}}{\partial t}= 0$. The second error is the solution with the wrong integral form of the 4th Maxwell equation for time varying integration surface/contour $$\oint_{L(t)} {\bf B}\cdot dl=\mu_0\epsilon_0\frac{d}{d t}\int_{S(t)} {\bf E}\cdot dS \tag{2}$$ The correct form is equation (1). From the given assumption $\frac{\partial{\bf E}}{\partial t}= 0$ it follows that the right hand side of equation (1) should be zero as I have stated before. This is, however, not correct in this particular case due to the fact that the induced charges on the wire cause a time varying field. In his detailed analysis, Anton Fetisov has shown, that the right hand side of the correct equation (1) is not zero and that, surprisingly, it is equal to the right hand side of the incorrect equation (2). Thus the solution of the problem found by the professor with the incorrect equation (2) is fortuitously correct. Therefore, I have reduced my original short answer (first paragraph) to the still valid fact, already found by Nicol, that the form of the used Maxwell equation was generally not correct for the time dependent integration surface/contour. Added simple derivation: For those who are not math virtuosos, I would like to show, on the basis of Anton Fetisov's reasoning, how the right hand side of the correct 4th Maxwell equation (1) can be evaluated for the considered problem in a simple way giving the result quoted in the question of Nicol. The essential point is the charges on the wire that are electrostatically induced by the homogeneous electric field $E_0=\sigma/\epsilon_0$ of the sheet charge $\sigma$. Only the vertical y-component has to be considered for the the integral. These charges are the sources of an additional electrical field $\epsilon (x)$ in and closely around the wire which exactly cancels $E_0$ inside the wire and reduces it near the wire on a length scale of the wire diameter $2a$. This additional wire field $\epsilon (x)$ has the most negative value at a (flat) minimum $\epsilon _{min}= -E_0$ inside the wire, particularly on its axis. 
The exact functional form is irrelevant here, as long as its minimum at $x=0$ is $\epsilon (0)=-E_0$ and it is zero a couple of wire diameters horizontally away from the wire axis. The x- and t-dependence of the vertical field in the wire plane of the moving wire can be written as $\epsilon (x,t)=\epsilon (x-vt)$, where the axis of the wire (and field minimum) is located at $x_1=vt$. The total vertical electric field in the wire plane is then given by $$E(x,t)=E_0 + \epsilon (x) + \epsilon (x-vt)$$ (The second term on the RHS is the time-independent field of the left transverse wire.) Thus with $$\frac{\partial{E}}{\partial t}=\frac{\partial{\epsilon (x-vt)}}{\partial t}=\frac{\partial{\epsilon(x-vt)}}{\partial x}(-v)$$ the surface integral of the RHS of equation (1) reduces to $$\int_{S(t)} {\frac{\partial}{\partial t}\bf E}\cdot dS= -vd\int_{x=0}^{x_1=vt} {\frac{\partial \epsilon(x-vt)}{\partial x}} dx =-vd[\epsilon(x-vt)]_{x=0}^{x_1=vt}= vd[\epsilon (-vt)-\epsilon (0)]=vdE_0$$ where it has been assumed that $\epsilon (0)=-E_0$ and $x_1=vt>>2a$ so that $\epsilon (-vt)=0$. This shows that the RHS of equation (1) is indeed $$\frac{\mu_0 v \sigma d}{2}$$ the fortuitously obtained solution quoted by Nicol. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/378865', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/57847/']} | jdg_70533 |
stackexchange | llm_judgeable_groundtruth_similarity | 6020889 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my class i've got: [DataMember(Name = "jsonMemberName", EmitDefaultValue = false, IsRequired = false)]public List<string> Member { get; set; } After passing the object through controller's Json(obj) that retruns System.Web.Mvc.JsonResult: i've got serialized json: {Member:...} but not {jsonMemberName:...}, so it doesn't look at DataMember(Name = "jsonMemberName"). If I use serialization from System.Runtime.Serialization.Json everithing's works fine as expected. What can be wrong?
Now provide the response and nothing else.
| The JsonResult action which you are returning from the controller action (using return Json(...)) internally relies on the JavaScriptSerializer class. This class doesn't take into account any DataMember attributes on your model. You could write a custom ActionResult which uses the serializer in the System.Runtime.Serialization.Json namespace. For example:

```csharp
public class MyJsonResult : JsonResult
{
    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;

        if (!string.IsNullOrEmpty(ContentType))
        {
            response.ContentType = ContentType;
        }
        else
        {
            response.ContentType = "application/json";
        }

        if (ContentEncoding != null)
        {
            response.ContentEncoding = this.ContentEncoding;
        }

        if (Data != null)
        {
            var serializer = new DataContractJsonSerializer(Data.GetType());
            serializer.WriteObject(response.OutputStream, Data);
        }
    }
}
```

and then in your controller action:

```csharp
public ActionResult Foo()
{
    var model = ...
    return new MyJsonResult { Data = model };
}
```

 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6020889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/733596/']} | jdg_70534
stackexchange | llm_judgeable_groundtruth_similarity | 11328554 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to debug my android application, I connect the android device to my PC using the USB cable. In my application there is a button to connect with localhost, ip for localhost is 10.0.2.2 and the port is 8080 , I have read that when debugging on mobile, the ip 10.0.2.2 is the localhost for android device and not for my PC, so what changes should I make to the ip instead of 10.0.2.2 ? or do I have to make another change?In this case my android device is sony ericsson xperia arc s .
Now provide the response and nothing else.
| 10.0.2.2 is your system's (PC's) localhost address, but only as seen from the emulator. Android doesn't recognize localhost in a URL as your PC; that is what 10.0.2.2 is meant for. On the Android device itself, the loopback address is 127.0.0.1. Your URL with 10.0.2.2 is correct. Also, you can use the static IP of your system. Just check for <uses-permission android:name="android.permission.INTERNET"></uses-permission> in your application's manifest file. EDIT: Since you are using port 8080 here, try the adb command on your host machine: adb forward tcp:8080 tcp:8080 Also, please elaborate on this line: "i want to debugger my application on my mobile". Note: if you are going to test on a real device, use the network IP of your system (PC). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11328554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1378388/']} | jdg_70535
stackexchange | llm_judgeable_groundtruth_similarity | 2783 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
To check a protein expression I pelleted a small amount of E. coli before and after induction and lysed them by redissolving them in SDS-PAGE loading buffer and heating them to 95 °C for 1 minute. This lead to a solution with some very sticky and viscous parts in it, that make pipetting the sample into the gel wells extremely annoying. As far as I heard, this is probably genomic DNA, and my usual way to deal with this is to centrifuge the samples and only pipet a small part out from the top. This does seem to help sometimes, but not always. How can I avoid the formation of that sticky and viscious stuff or how can I avoid pipetting that stuff into my wells?
Now provide the response and nothing else.
| Yes, it is the genomic DNA that is causing you trouble. Although a brief 1000g spin should bring it all down, the pellet is never tight and you are almost always going to pull up some gunk with the clear supernatant. A better solution is to include a quick sonication step (5-10 s) before the 1000g spin. That way the DNA is sheared and it should pellet better. It has worked for me most times. Also, remember to use a reasonable volume of SDS buffer when you lyse the cells. Too little buffer and it's always going to be hard. For comparison, I use about 200 ul per 1 ml of culture pellet, and this volume works well for both pre- and post-induced cells. | {} | {'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/2783', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/6/']} | jdg_70536
stackexchange | llm_judgeable_groundtruth_similarity | 23276417 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was just playing around with sorting in golang and I found a qsort function on stackoverflow. It seems to run about twice as fast as the native sort function in golang. I've tried it with different input sizes and tested that it works. Could anyone explain why this happens? Here is the code you can test it on your pc: package mainimport ( "fmt" "math/rand" "sort" "time")func qsort(a []int) []int { if len(a) < 2 { return a } left, right := 0, len(a)-1 // Pick a pivot pivotIndex := rand.Int() % len(a) // Move the pivot to the right a[pivotIndex], a[right] = a[right], a[pivotIndex] // Pile elements smaller than the pivot on the left for i := range a { if a[i] < a[right] { a[i], a[left] = a[left], a[i] left++ } } // Place the pivot after the last smaller element a[left], a[right] = a[right], a[left] // Go down the rabbit hole qsort(a[:left]) qsort(a[left+1:]) return a}func main() { // Create an array with random integers rand.Seed(30) size := 1000000 array1 := make([]int, size) start := time.Now() for i, _ := range array1 { array1[i] = rand.Int() } fmt.Println("Creating array with ", size, " elements...") fmt.Println("--- ", time.Since(start), " ---") // Create a copy of the unsorted array array2 := make([]int, size) copy(array2, array1) // Short using native function start = time.Now() sort.Ints(array1) fmt.Println("Sorting with the native sort...") fmt.Println("--- ", time.Since(start), " ---") // Sort using custom qsort start = time.Now() qsort(array2) fmt.Println("Sorting with custom qsort...") fmt.Println("--- ", time.Since(start), " ---")}
Now provide the response and nothing else.
| The difference seems to largely be due to the fact that your Quicksort uses builtins. It slices and uses len . Keep in mind that sort.Sort takes in a sort.Interface . So every time you call len it calls slice.Len and every time you do array[i],array[j] = array[j],array[i] it has to call Swap(i,j) . I wrote a comparable version that works on an arbitrary qsort.Interface : func Qsort(a Interface, prng *rand.Rand) Interface { if a.Len() < 2 { return a } left, right := 0, a.Len()-1 // Pick a pivot pivotIndex := prng.Int() % a.Len() // Move the pivot to the right a.Swap(pivotIndex, right) // Pile elements smaller than the pivot on the left for i := 0; i < a.Len(); i++ { if a.Less(i, right) { a.Swap(i, left) left++ } } // Place the pivot after the last smaller element a.Swap(left, right) // Go down the rabbit hole leftSide, rightSide := a.Partition(left) Qsort(leftSide, prng) Qsort(rightSide, prng) return a} Then I used Go's benchmark functionality (which you should always use for Benchmarks where possible). For reference and transparency, qsort.Interface is defined as: type Interface interface { sort.Interface // Partition returns slice[:i] and slice[i+1:] // These should references the original memory // since this does an in-place sort Partition(i int) (left Interface, right Interface)} The actual IntSlice implementation for qsort is: type IntSlice []intfunc (is IntSlice) Less(i, j int) bool { return is[i] < is[j]}func (is IntSlice) Swap(i, j int) { is[i], is[j] = is[j], is[i]}func (is IntSlice) Len() int { return len(is)}func (is IntSlice) Partition(i int) (left Interface, right Interface) { return IntSlice(is[:i]), IntSlice(is[i+1:])} Finally, here's the qsort_test.go file: package qsort_testimport ( "math/rand" "qsort" "sort" "testing" "time")const size int = 1000000var list = make([]int, size)var prng = rand.New(rand.NewSource(int64(time.Now().Nanosecond())))func BenchmarkQsort(b *testing.B) { for n := 0; n < b.N; n++ { b.StopTimer() for i := range list { list[i] = prng.Int() } b.StartTimer() qsort.Qsort(qsort.IntSlice(list), prng) }}func BenchmarkNativeQsort(b *testing.B) { for n := 0; n < b.N; n++ { b.StopTimer() for i := range list { list[i] = prng.Int() } b.StartTimer() qsort.NativeQsort(list, prng) }}func BenchmarkSort(b *testing.B) { for n := 0; n < b.N; n++ { b.StopTimer() for i := range list { list[i] = prng.Int() } b.StartTimer() sort.Sort(sort.IntSlice(list)) }} The results (formatting mine): PASSBenchmarkQsort 5 513629360 ns/opBenchmarkNativeQsort 10 160609180 ns/opBenchmarkSort 5 292416760 ns/op As you can see, the standard library's sort massively outperforms your qsort on average with random data. NativeQsort refers to the qsort functions you posted in your actual question, and it outperforms both. The only thing that's changed between that and Qsort is that I swapped the builtin functions for qsort.Interface . It follows, then, that genericity is likely the reason one is slower than the other. Edit: There aren't many samples because of how expensive sorting is, so here are the results with -benchtime 10s just for slightly more representative results. BenchmarkQsort 50 524389994 ns/opBenchmarkNativeQsort 100 161199217 ns/opBenchmarkSort 50 302037284 ns/op | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23276417', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/894244/']} | jdg_70537 |
stackexchange | llm_judgeable_groundtruth_similarity | 477759 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We currently have a quite complex business application that contains a huge lot of JavaScript code for making the user interface & interaction feel as close to working with a traditional desktop application as possible (since that's what our users want). Over the years, this Javascript code has grown and grown, making it hard to manage & maintain and making it ever more likely that adding new functionallity will break some existing one. Needless to say, lots of this code also isn't state of the art anymore. Thus, we have some ongoing discussion whether the client-side part of the application should be written anew in either Flex or Silverlight, or written anew with some state of the art JavaScript framework like jQuery, or whether we should simply carry on with what we have and gradually try to replace the worst bits of the existing code. What makes this even harder to decide is that writing the UI anew will probable cost us 6-12 person months. I'd like to hear your thoughts on that issue (maybe some of you have already had to make a similar decission). EDIT: To answer some of the questions that came up with the answers: The back-end code is written in C#, the target audience are (usually) non-technical users from the companies we sell the software to (not the general public, but not strictly internal users either), the software 'only' has to run in desktop browsers but not necessarily on mobile devices, and the client app is a full-blown UI.
Now provide the response and nothing else.
| In all honesty, I would refactor the old JavaScript code and not rewrite the application. Since you are asking about which platform to put it in, I would guess that your team isn't an expert in any of them (not slamming the team, it's just a simple fact that you have to consider when making a decision). This will work against you, as you'll have the double duty of rewriting and learning how to do things on the new platform. By keeping it in JavaScript, you can slowly introduce a framework if you choose and do it iteratively (replace one section of code, test it, release it, and fix any bugs). This will allow you to do it at a slower pace and get feedback along the way. That way too, if the project is canceled part way through, you aren't out all the work, because the updated code is being used by the end users. Remember that the waterfall model, which is essentially what a full swap-out will be, almost never works. As much as I hate to admit this, since it is always the most fun for developers, shifting platforms and replacing an entire system at once rarely works. There are countless examples of this, Netscape for one. Here is the post from Spolsky on it. (I would also recommend the book Dreaming in Code. It is an excellent example of a software project that failed, and how and why.) Remember, to rewrite a system from scratch you are essentially going to have to go through every line of code and figure out what it does and why. At first you think you can skip it, but eventually it comes down to this. Like you said, your code is old, and that means there are most likely hacks in it to get something done. Some of these you can ignore, and others will be, "I didn't know the system needed it to do that." | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/477759', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/56505/']} | jdg_70538
stackexchange | llm_judgeable_groundtruth_similarity | 62895948 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a somewhat elegant navigation system for my App. Below is a function that attempts to return a View type. This does not compile with: func getView(view: String) -> View { switch view { case "CreateUser": return CreateNewsView() default: return nil } } The above results in a compile error: Protocol 'View' can only be used as a generic constraint because it has Self or associated type requirements Thank you for your help.
Now provide the response and nothing else.
| As of Swift 5.3, @hồng-phúc's answer is essentially right; it just needs the @ViewBuilder attribute added explicitly.

```swift
@ViewBuilder func getView(view: String) -> some View {
    switch view {
    case "CreateUser":
        Text(view)
    case "Abc":
        Image("Abc")
    default:
        EmptyView()
    }
}
```

Side note: Please avoid using string literals here. Better to use an enum. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/62895948', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5646334/']} | jdg_70539
stackexchange | llm_judgeable_groundtruth_similarity | 10184089 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
```c
unsigned reverse_bits(unsigned input)
{
    // works on a 32-bit machine
    input = (input & 0x55555555) << 1  | (input & 0xAAAAAAAA) >> 1;
    input = (input & 0x33333333) << 2  | (input & 0xCCCCCCCC) >> 2;
    input = (input & 0x0F0F0F0F) << 4  | (input & 0xF0F0F0F0) >> 4;
    input = (input & 0x00FF00FF) << 8  | (input & 0xFF00FF00) >> 8;
    input = (input & 0x0000FFFF) << 16 | (input & 0xFFFF0000) >> 16;
    return input;
}
```

how does this work?
Now provide the response and nothing else.
| Suppose I have a hand of 8 cards:

7 8 9 10 J Q K A

How can we reverse them? One way is to swap adjacent pairs:

8 7 10 9 Q J A K

Then, swap adjacent groups of 2: exchange 8 7 and 10 9, etc.:

10 9 8 7 A K Q J

Finally, exchange groups of four, which is half the size of 8:

A K Q J 10 9 8 7

Done. You can do this in different orders. Why? Because the exchanges are stable with respect to each other. When we exchange the upper half of the cards with the lower half, for instance, the pairs stay in the same order. Or when we exchange pairs, the halves stay in the same order. This is what the code is doing with the bit operations. For instance, to swap pairs we can use the mask 01010101 to pick out the even bits, and 10101010 to pick out the odd bits, using the bitwise AND operation:

```
  ABCDEFGH      ABCDEFGH
& 01010101    & 10101010
----------    ----------
= 0B0D0F0H    = A0C0E0G0
```

Remember, the rule for AND is that given some bit value X, X & 1 = X and X & 0 = 0. The 1 bits in the mask preserve the value, and the 0 bits in the mask produce 0. This is called masking because it looks exactly like a mask used in spray-painting, etc. The 1 bits "cover" the places you don't want to "paint" with zero. Next, the left result is shifted left one bit, and the right result shifted right. This brings about the swap: B0D0F0H0 and 0A0C0E0G. Finally, the two are combined with logical OR. The rule for OR is that X or 0 is X. The two parts each have 0 where the other has nonzero bits, and so the bits simply merge:

```
  B0D0F0H0
| 0A0C0E0G
----------
= BADCFEHG
```

And now the pairs are swapped. (A small C demo that prints each stage of the 32-bit version is sketched below.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10184089', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/981787/']} | jdg_70540
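As an editorial addition (not part of the original answer), here is a small, self-contained C demo of the same function from the question, printing the value after each swap stage so the pair/group/nibble/byte/half exchanges can be watched directly. The starting value 0x00000001 is just an example; any value works.

```c
#include <stdio.h>

int main(void)
{
    unsigned input = 0x00000001u;   /* example input; its bit reversal is 0x80000000 */

    printf("start            : %08X\n", input);
    input = (input & 0x55555555u) << 1  | (input & 0xAAAAAAAAu) >> 1;
    printf("after 1-bit swaps: %08X\n", input);
    input = (input & 0x33333333u) << 2  | (input & 0xCCCCCCCCu) >> 2;
    printf("after 2-bit swaps: %08X\n", input);
    input = (input & 0x0F0F0F0Fu) << 4  | (input & 0xF0F0F0F0u) >> 4;
    printf("after 4-bit swaps: %08X\n", input);
    input = (input & 0x00FF00FFu) << 8  | (input & 0xFF00FF00u) >> 8;
    printf("after 8-bit swaps: %08X\n", input);
    input = (input & 0x0000FFFFu) << 16 | (input & 0xFFFF0000u) >> 16;
    printf("after 16-bit swap: %08X\n", input);
    return 0;
}
```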