Dataset columns:
- source: stringclasses (1 value)
- task_type: stringclasses (1 value)
- in_source_id: stringlengths (1 to 8)
- prompt: stringlengths (209 to 40.4k)
- gold_standard_solution: stringlengths (0 to 56.7k)
- verification_info: stringclasses (1 value)
- metadata: stringlengths (138 to 225)
- problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
31965413
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using g++ 4.8.4 on Ubuntu 14.04 LTS. When trying to compile with '-std=c++14', I get this error: g++: error: unrecognized command line option '-std=c++14' Compiling with '-std=c++11' works fine, so I'm not sure what's going on. Does g++ really have no support for C++14 yet? Am I using a wrong command line option? I used "sudo apt-get install g++", which should automatically retrieve the latest version; is that correct? Now provide the response and nothing else.
For gcc 4.8.4 you need to use -std=c++1y; in later versions, it looks like starting with 5.2, you can use -std=c++14. If we look at the gcc online documents we can find the manuals for each version of gcc, and going to the Dialect Options section of the GCC 4.9.3 manual, it says:

‘c++1y’ The next revision of the ISO C++ standard, tentatively planned for 2014. Support is highly experimental, and will almost certainly change in incompatible ways in future releases.

So up until 4.9.3 you had to use -std=c++1y, while the gcc 5.2 options say:

‘c++14’ ‘c++1y’ The 2014 ISO C++ standard plus amendments. The name ‘c++1y’ is deprecated.

It is not clear to me why this is listed under Options Controlling C Dialect, but that is how the documents are currently organized.
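As a quick sanity check on a stock 14.04 toolchain, the following should behave as described (a minimal sketch; the file name is made up):

g++ --version                  # confirm you are on 4.8.x
g++ -std=c++1y -o demo demo.cpp

If you want the actual -std=c++14 spelling, you need a newer compiler than the one apt installs on 14.04. One common route is installing g++-5 or later from a toolchain PPA and invoking it explicitly (g++-5 -std=c++14 demo.cpp), though treat that package name as an assumption to verify for your distribution.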
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31965413', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1879228/']}
jdg_70941
stackexchange
llm_judgeable_groundtruth_similarity
796824
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I love the syntax of railroad diagrams on json.org which are a graphical representation of the BNF language. I haven't found any tools that can produce results as eloquently. Can anyone identify the tool used to generate these diagrams? Now provide the response and nothing else.
There is an Online Railroad Diagram Generator. It creates SVG syntax diagrams, also known as railroad diagrams, from context-free grammars specified in EBNF. You can copy the SVG code or take screenshots. You have to type in the grammar and it'll make the diagram. For example, to create the first railroad diagram you show, you would use the code:

object ::= '{' ((string ':' value) (',' string ':' value)*)? '}'

Then you could go on to define string and value using string ::= ... and value ::= ... The references are all shown. Check out some of the example diagrams on the page; they have XML and even EBNF itself.
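To sketch what those follow-on rules might look like (an illustrative guess, not the exact grammar from json.org; char, number and array would still need rules of their own):

string ::= '"' char* '"'
value  ::= string | number | object | array | 'true' | 'false' | 'null'

The generator then resolves those names as references between the diagrams, as noted above.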
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/796824', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/50475/']}
jdg_70942
stackexchange
llm_judgeable_groundtruth_similarity
634364
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a file of patterns and I want to return all the line numbers where the pattern was found, but in a wide format and not long/spread. Example:

fileA.txt:
Germany
USA
UK

fileB.txt:
USA
USA
Italy
Germany
UK
UK
Canada
Canada
Germany
Australia
USA

I have done something like this:

grep -nf fileA.txt fileB.txt

which returned me:

1:USA
2:USA
4:Germany
5:UK
6:UK
9:Germany
11:USA

However, I want to have something like:

Germany 4 9
USA 1 2 11
UK 5 6

Now provide the response and nothing else.
Using GNU datamash:

$ grep -n -x -F -f fileA.txt fileB.txt | datamash -s -t : -g 2 collapse 1
Germany:4,9
UK:5,6
USA:1,2,11

This first uses grep to get the lines from fileB.txt that exactly match the lines in fileA.txt, and outputs the matching line numbers along with the lines themselves. I'm using -x and -F in addition to the options that are used in the question. I do this to avoid reading the patterns from fileA.txt as regular expressions (-F), and to match complete lines, not substrings (-x). The datamash utility then parses this as lines of :-delimited fields (-t :), sorts them (-s) on the second field (-g 2; the countries) and collapses the first field (collapse 1; the line numbers) into a list for each country. You could then obviously replace the colons and commas with tabs using tr ':,' '\t\t', or with spaces in a similar way.

$ grep -n -x -F -f fileA.txt fileB.txt | datamash -s -t : -g 2 collapse 1 | tr ':,' '\t\t'
Germany 4 9
UK 5 6
USA 1 2 11
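If datamash isn't available, a rough awk equivalent of the collapse step might look like this (a sketch under the same input format; the alphabetical group order comes from the final sort, not from awk itself):

grep -n -x -F -f fileA.txt fileB.txt |
  awk -F: '{ a[$2] = a[$2] " " $1 } END { for (k in a) print k a[k] }' |
  sort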
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/634364', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/361114/']}
jdg_70943
stackexchange
llm_judgeable_groundtruth_similarity
5172134
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently researching MongoDB as a possible database option, and I'm having trouble dealing with Guid serialization. I thought at first maybe this was a bug in the C# driver's serialization, but now I think it's more likely a naive assumption on my part. To help me convert the Bson base64 representations back and forth to Guids, I wrote a couple of little powershell functions to help:

function base64toguid
{
    param($str);
    $b = [System.Convert]::FromBase64String($str);
    $hex = "";
    foreach ($x in $b)
    {
        $hex += $x.ToString("x2");
    }
    $g = new-object -TypeName System.Guid -ArgumentList $hex;
    return $g;
}

function guidtobase64
{
    param($str);
    $g = new-object -TypeName System.Guid -ArgumentList $str;
    $b64 = [System.Convert]::ToBase64String($g.ToByteArray());
    return $b64;
}

An example of the issue I'm having:

:) guidtobase64("53E32701-9863-DE11-BD66-0015178A5E3C");
ASfjU2OYEd69ZgAVF4pePA==
:) base64toguid("ASfjU2OYEd69ZgAVF4pePA==");
Guid
----
0127e353-6398-11de-bd66-0015178a5e3c

And from the mongo shell:

:) mongo
MongoDB shell version: 1.6.5
connecting to: test
> b = new BinData(3, "ASfjU2OYEd69ZgAVF4pePA==");
BinData(3,"ASfjU2OYEd69ZgAVF4pePA==")
> b.hex();
127e353639811debd66015178a5e3c
>

So as you can see, the Guid I get back doesn't match what I put in. My function and hex() return the same thing. If you compare the original to the result:

53E32701-9863-DE11-BD66-0015178A5E3C
0127e353-6398-11de-bd66-0015178a5e3c

you can see that the first 3 sets of hex pairs are reversed, but the last 2 sets are not. This makes me think there is something about Guid.ToString() that I don't understand. Can anyone educate me please? Now provide the response and nothing else.
The order of bytes in a GUID is not the same as the order in their ToString() representation on little-endian systems. You should use guid.ToByteArray() rather than ToString(), and you should use new Guid(byte[] b) to construct it, rather than $str. To express this in pure C#:

public string GuidToBase64(Guid guid)
{
    return System.Convert.ToBase64String(guid.ToByteArray()); // Very similar to what you have.
}

public Guid Base64ToGuid(string base64)
{
    var bytes = System.Convert.FromBase64String(base64);
    return new Guid(bytes); // Note that I'm not building up a string to represent the GUID.
}

Take a look at the "Basic Structure" section of the GUID article on Wikipedia for more details. You will see that most of the data is stored in "native" endianness, which is where the confusion is coming from. To quote: Data4 stores the bytes in the same order as displayed in the GUID text encoding (see below), but the other three fields are reversed on little-endian systems (for example Intel CPUs). Edit: Powershell version:

function base64toguid
{
    param($str);
    $b = [System.Convert]::FromBase64String($str);
    $g = new-object -TypeName System.Guid -ArgumentList (,$b);
    return $g;
}

As an additional caveat, you can optionally trim the "==" off of the end of your string, since it is just padding (which may help if you are trying to save space).
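A quick round trip makes the symmetry visible (a minimal sketch using the values from the question):

var original = new Guid("53E32701-9863-DE11-BD66-0015178A5E3C");
string encoded = System.Convert.ToBase64String(original.ToByteArray()); // "ASfjU2OYEd69ZgAVF4pePA=="
var decoded = new Guid(System.Convert.FromBase64String(encoded));
// decoded == original, because ToByteArray() and the byte[] constructor agree on byte order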
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5172134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1219/']}
jdg_70944
stackexchange
llm_judgeable_groundtruth_similarity
28603831
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I currently have a model class which contains several properties. A simplified model could look like this:

public class SomeClass
{
    public DateTime ValidFrom { get; set; }
    public DateTime ExpirationDate { get; set; }
}

Now I'm implementing some unit tests by using NUnit and use AutoFixture to create some random data:

[Test]
public void SomeTest()
{
    var fixture = new Fixture();
    var someRandom = fixture.Create<SomeClass>();
}

This works perfectly so far. But there is the requirement that the date of ValidFrom is always before ExpirationDate. I have to ensure this since I'm implementing some positive tests. So is there an easy way to implement this by using AutoFixture? I know I could create a fixed date and add a random date interval to solve this, but it would be great if AutoFixture could handle this requirement itself. I haven't got a lot of experience with AutoFixture, but I know I can get an ICustomizationComposer by calling the Build method:

var fixture = new Fixture();
var someRandom = fixture.Build<SomeClass>()
    .With(some => /* some magic like some.ValidFrom < some.ExpirationDate here... */)
    .Create();

Maybe this is the right way to achieve this? Thanks in advance for any help. Now provide the response and nothing else.
It may be tempting to ask the question how do I make AutoFixture adapt to my design?, but often a more interesting question could be: how do I make my design more robust? You can keep the design and 'fix' AutoFixture, but I don't think it's a particularly good idea. Before I tell you how to do that, depending on your requirements, perhaps all you need to do is the following.

Explicit assignment

Why not simply assign a valid value to ExpirationDate, like this?

var sc = fixture.Create<SomeClass>();
sc.ExpirationDate = sc.ValidFrom + fixture.Create<TimeSpan>();
// Perform test here...

If you're using AutoFixture.Xunit, it can be even simpler:

[Theory, AutoData]
public void ExplicitPostCreationFix_xunit(SomeClass sc, TimeSpan duration)
{
    sc.ExpirationDate = sc.ValidFrom + duration;
    // Perform test here...
}

This is fairly robust, because even though AutoFixture (IIRC) creates random TimeSpan values, they'll stay in the positive range unless you've done something to your fixture to change its behaviour. This approach would be the simplest way to address your question if you need to test SomeClass itself. On the other hand, it's not very practical if you need SomeClass as input values in myriads of other tests. In such cases, it can be tempting to fix AutoFixture, which is also possible:

Changing AutoFixture's behaviour

Now that you've seen how to address the problem as a one-off solution, you can tell AutoFixture about it as a general change to the way SomeClass is generated:

fixture.Customize<SomeClass>(c => c
    .Without(x => x.ValidFrom)
    .Without(x => x.ExpirationDate)
    .Do(x =>
    {
        x.ValidFrom = fixture.Create<DateTime>();
        x.ExpirationDate = x.ValidFrom + fixture.Create<TimeSpan>();
    }));
// All sorts of other things can happen in between, and the
// statements above and below can happen in separate classes, as
// long as the fixture instance is the same...
var sc = fixture.Create<SomeClass>();

You can also package the above call to Customize in an ICustomization implementation, for further reuse. This would also enable you to use a customized Fixture instance with AutoFixture.Xunit.

Change the design of the SUT

While the above solutions describe how to change the behaviour of AutoFixture, AutoFixture was originally written as a TDD tool, and the main point of TDD is to provide feedback about the System Under Test (SUT). AutoFixture tends to amplify that sort of feedback, which is also the case here. Consider the design of SomeClass. Nothing prevents a client from doing something like this:

var sc = new SomeClass
{
    ValidFrom = new DateTime(2015, 2, 20),
    ExpirationDate = new DateTime(1900, 1, 1)
};

This compiles and runs without errors, but is probably not what you want. Thus, AutoFixture is actually not doing anything wrong; SomeClass isn't properly protecting its invariants. This is a common design mistake, where developers tend to put too much trust in the semantic information of the members' names. The thinking seems to be that no-one in their right mind would set ExpirationDate to a value before ValidFrom! The problem with that sort of argument is that it assumes that all developers will always be assigning these values in pairs. However, clients may also get a SomeClass instance passed to them, and want to update one of the values, e.g.:

sc.ExpirationDate = new DateTime(2015, 1, 31);

Is this valid? How can you tell? The client could look at sc.ValidFrom, but why should it? The whole purpose of encapsulation is to relieve clients of such burdens. Instead, you should consider changing the design of SomeClass. The smallest design change I can think of is something like this:

public class SomeClass
{
    public DateTime ValidFrom { get; set; }

    public TimeSpan Duration { get; set; }

    public DateTime ExpirationDate
    {
        get { return this.ValidFrom + this.Duration; }
    }
}

This turns ExpirationDate into a read-only, calculated property. With this change, AutoFixture just works out of the box:

var sc = fixture.Create<SomeClass>();
// Perform test here...

You can also use it with AutoFixture.Xunit:

[Theory, AutoData]
public void ItJustWorksWithAutoFixture_xunit(SomeClass sc)
{
    // Perform test here...
}

This is still a little brittle, because although by default AutoFixture creates positive TimeSpan values, it's possible to change that behaviour as well. Furthermore, the design actually allows clients to assign negative TimeSpan values to the Duration property:

sc.Duration = TimeSpan.FromHours(-1);

Whether or not this should be allowed is up to the Domain Model. Once you begin to consider this possibility, it may actually turn out that defining time periods that move backwards in time is valid in the domain...

Design according to Postel's Law

If the problem domain is one where going back in time isn't allowed, you could consider adding a Guard Clause to the Duration property, rejecting negative time spans. However, personally, I often find that I arrive at a better API design when I take Postel's Law seriously. In this case, why not change the design so that SomeClass always uses the absolute TimeSpan instead of the signed TimeSpan? In that case, I'd prefer an immutable object that doesn't enforce the roles of the two DateTime instances until it knows their values:

public class SomeClass
{
    private readonly DateTime validFrom;
    private readonly DateTime expirationDate;

    public SomeClass(DateTime x, DateTime y)
    {
        if (x < y)
        {
            this.validFrom = x;
            this.expirationDate = y;
        }
        else
        {
            this.validFrom = y;
            this.expirationDate = x;
        }
    }

    public DateTime ValidFrom
    {
        get { return this.validFrom; }
    }

    public DateTime ExpirationDate
    {
        get { return this.expirationDate; }
    }
}

Like the previous redesign, this just works out of the box with AutoFixture:

var sc = fixture.Create<SomeClass>();
// Perform test here...

The situation is the same with AutoFixture.Xunit, but now no clients can misconfigure it. Whether or not you find such a design appropriate is up to you, but I hope at least it's food for thought.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28603831', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/392632/']}
jdg_70945
stackexchange
llm_judgeable_groundtruth_similarity
16188837
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem during low-pass interpolation; I have to fill a 1D array with zeros before processing it. I have something like this: [1 2 3 4 5 6] and I want to have an array like this: [1 0 2 0 3 0 4 0 5 0 6], so there are L-1 zeros in the array, where L is the number of values in the array before zero-stuffing. How do I do this in Python? Now provide the response and nothing else.
You can assign an unpadded list of values into a slice of another list of zeros:

original_list = range(1, 7)                        # [1, 2, 3, 4, 5, 6]
padded_list = [0] * (2 * len(original_list) - 1)   # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
padded_list[::2] = original_list                   # [1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6]

This can translate to numpy too, though as Jaime pointed out in a comment, it's even easier to use numpy.insert:

import numpy as np
arr = np.arange(1, 7)              # array([1, 2, 3, 4, 5, 6])
np.insert(arr, slice(1, None), 0)  # array([1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6])
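The slice-assignment idea also carries over to numpy directly, if you prefer an explicit buffer over numpy.insert (a small sketch):

import numpy as np

arr = np.arange(1, 7)
out = np.zeros(2 * len(arr) - 1, dtype=arr.dtype)  # room for the L-1 interleaved zeros
out[::2] = arr                                     # array([1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6])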
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16188837', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2047170/']}
jdg_70946
stackexchange
llm_judgeable_groundtruth_similarity
34847
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've seen numerous videos claiming to show this without blistering effects, cringing pain, or skin burns, as if it were a unique skill (something anyone can do with practice) and not just trickery:

- Hands In Boiling Oil: Indian Chef Fries Fish With Bare Hands
- Man Fries Food With Bare Hands
- Superhands: The Man That Fries Chicken With His Hands
- Unbelievable Cooking - Fry In Hot Oil With Hand
- Thai Dude Puts His Hands in Boiling Oil

This guy even seems to hold boiling oil:

- Unbelievable Pathan Put His Hand in Hot Boil Oil

Here is an article in the Pakistan Times that makes the claim that it is possible to put your hand in boiling oil. The DailyMail has reported on it in Pakistan. They seem to claim it needs to be practiced: "And Singh now hopes his sons will keep up the eatery's tradition of the hand fryer. He added: 'They can do it too. It is just about practice. If they keep trying and experimenting, they will get there.'" The DailyMail reported on it in India, and Metro.co.uk reported on a chef in Thailand doing this (video above). Now provide the response and nothing else.
This is nothing but trickery. In this particular case, it appears that they have a liquid in there that has a much lower boiling point, is heavier than the oil, has the same appearance as the oil, and is boiling to give the appearance that the oil is boiling. In this particular case, lemon juice is the common liquid used.

Experiment – 22
Effect: The hand is dipped in boiling oil without getting burnt.
Props: One frying pan to hold about two liters of oil, one liter cooking oil, stove and one lemon (big).
Method: Pour the cooking oil in the pot with the lemon juice squeezed in the oil stealthily. Place it on the fire. Before the oil gets hot and boiling, the lime juice settled at the bottom of the pot boils first, and the oil appears to bubble when the steam is released. It seems as if the oil is boiling hot. Immerse your hand in the oil and take it out. The oil will be only warm and not boiling, though it looks like boiling oil to the viewers.

This is a relatively well known phenomenon amongst people who practice scientific skepticism and debunk charlatans. The dismaying thing is the credulity with which the press reports these stories... The other "trick" is also detailed in the same set of experiments. Basically, you put one thing in, and it absorbs heat as you get the other thing out. In this case, it's "poories" instead of chicken, fries, or fish, but the principle is the same.

Experiment – 23
Effect: Frying poories in boiling oil and taking them out by hand. Hot bajjis are taken out of boiling oil with bare fingers.
Props: Wheat flour or maida flour, water, plate, instrument to make poories, cooking oil and a stove.
Method: Prepare poories with the flour (one dozen). Boil the oil, put the poories in, and as soon as one is fried, slip an unbaked poorie underneath the fried one and pick the fried one up with your fingers; it will not burn. Beforehand, apply the same oil on your hand. The science behind this feat is that when you put a fresh poorie under the fried one, the heat of the top oil is absorbed by the fresh one, and your fingers will not get burnt. The oil applied on the hand before dipping also insulates the hand.

I should add, as the web page I pulled this from also states: (Note: Those items involving fire and other dangerous substances should be performed only under proper guidance.) In other words, don't do this yourself, kiddies! It's still dangerous, especially if you mess it up. Remember, this is an area of the world that loves its supernatural explanations as opposed to rational explanations. Sanal Edamaruku faced up to three years of imprisonment for showing that a "weeping" statue was actually due to a backed-up sewer line...
{}
{'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/34847', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/3835/']}
jdg_70947
stackexchange
llm_judgeable_groundtruth_similarity
27604334
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to port a Qt application to plain C++ using the STL. QVariant can store any data type; it is a container that holds heterogeneous objects, i.e. objects of different types. What is the equivalent of QVariant in C++? Now provide the response and nothing else.
What is the equivalent of QVariant in C++? The equivalent in C++ is called QVariant. Semi-joke aside, it is probably closest to a union, but QVariant is so much more than that: meta types, copy-on-write (CoW), etc. Actually, implicit sharing is effectively forbidden in the STL these days, so that is another reason why you would not find anything like this off-hand. I would suggest making notes for yourself about exactly what functionality you need from QVariant, and making a judgement call about whether it is actually worth dropping QtCore. This is just a friendly reminder from the documentation: The QVariant class acts like a union for the most common Qt data types. Because C++ forbids unions from including types that have non-default constructors or destructors, most interesting Qt classes cannot be used in unions. Without QVariant, this would be a problem for QObject::property() and for database work, etc. However, since C++11, types with constructors and destructors are allowed in unions. So you have another question to ask yourself: do I want to support everything that Qt does, or am I happy to require at least C++11? To be fair, you could also look into the boost variants below, but then you end up using boost instead of QtCore after the porting. This is another judgement call for you. They are not replacements for each other, neither technically nor compatibility-wise. Boost in this case is a build-time dependency, while QtCore is a runtime one. Qt guarantees binary (and source) compatibility during the life cycle of the same major version, while boost may not do the same for as long a period as QtCore does. Either way, none of these options are pure STL solutions at the end of the day, the way I think you wanted:

- Boost.Any
- Boost.Variant
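As a taste of the Boost.Variant option mentioned above, a minimal sketch might look like this (assuming Boost is available; error handling omitted):

#include <boost/variant.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::variant<int, std::string> v = 42;          // currently holds an int
    v = std::string("hello");                         // now holds a std::string
    std::cout << boost::get<std::string>(v) << '\n';  // boost::get throws bad_get on a type mismatch
}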
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27604334', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2735434/']}
jdg_70948
stackexchange
llm_judgeable_groundtruth_similarity
7408151
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the attached image, I want the column of buttons to match the height of the image, but I also want there to be a minimum height for the column of buttons. It correctly matches the height of the image, but does not respect the minHeight, and will smoosh the buttons down. I am setting these properties for the column of buttons:

<LinearLayout
    ...
    android:layout_alignTop="@+id/image"
    android:layout_alignBottom="@+id/image"
    android:minHeight="150dp" >

Now provide the response and nothing else.
I don't know all your exact requirements, but it seems you can solve this with another layer, pretty much like in your diagram. Set the minHeight on an outer layout and then just fill_parent / match_parent on the inside. Maybe something like:

<LinearLayout
    android:orientation="horizontal"
    android:layout_height="wrap_content"
    android:layout_width="wrap_content"
    android:minHeight="150dp">

    <LinearLayout
        android:orientation="vertical"
        android:layout_height="fill_parent"
        android:layout_width="wrap_content">
    </LinearLayout>

    <ImageView />
</LinearLayout>
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7408151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/608667/']}
jdg_70949
stackexchange
llm_judgeable_groundtruth_similarity
238753
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Usually event listeners shouldn't outlive the object that registered them. Does that mean that event listeners should be held by weak references by default (stored in weak collections by the object the listeners are registered on)? Are there valid cases where a listener should outlive its creator? Or is a situation like that a mistake that shouldn't be allowed? Now provide the response and nothing else.
Why shouldn't event listeners outlive the object that registered them? It seems like you are assuming event listeners should be registered by methods of controls (if we take the GUI example), or more precisely, by methods of objects of classes that inherit the GUI toolkit's controls. That's not a necessity: you could, for example, use a specialized object for registering event listeners and ditch that object afterwards. Also, if event listeners were weakly referred to, you would have to keep references to them even if you never use those references. Failing to do so would make the listener be collected at a random time. So we get a bug that is:

- Easy to create by mistake (all you have to do is forget to store an object in a reference variable that you'll never use).
- Hard to notice (you'll only get that bug if the GC collects that object).
- Hard to debug (in a debug session, which behaves like a release session as far as the GC is concerned, you'll only encounter the bug if the GC collected the object).

And if avoiding that bug is not a good enough incentive, here are some more:

- You'd have to think of a name for each listener you create.
- Some languages use static analysis that will generate a warning if you have a private member field that never gets written or never gets read. You'd have to have a mechanism for overriding that.
- The event listener does something, and once the object that holds its strong reference is collected, it'll stop doing that something. You now have something that affects the program's state and depends on the GC, which means the GC affects the concrete state of the program. And this is BAD!
- Handling weak references is slower, since you have another level of indirection and you need to check whether the reference was collected. This wouldn't be a problem if holding event listeners in weak references were necessary, but it isn't!
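To make the "silently stops doing its something" failure mode concrete, here is a minimal Java sketch of an event source that (wrongly, per the argument above) holds its listeners only weakly; all names are made up for illustration:

import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

class EventSource {
    private final List<WeakReference<Runnable>> listeners = new ArrayList<>();

    void register(Runnable listener) {
        listeners.add(new WeakReference<>(listener)); // no strong reference kept
    }

    void fire() {
        for (WeakReference<Runnable> ref : listeners) {
            Runnable l = ref.get();
            if (l != null) l.run(); // once collected, the listener is silently skipped
        }
    }
}

A caller that registers a lambda without storing it anywhere, e.g. source.register(() -> System.out.println("ping")), may see it fire for a while and then stop after an arbitrary GC cycle, which is exactly the hard-to-reproduce bug described above.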
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/238753', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/33996/']}
jdg_70950
stackexchange
llm_judgeable_groundtruth_similarity
25735
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The majority of countries in the world have implemented a Value Added Tax (VAT) - sometimes called a Goods and Services Tax (GST) - why hasn't the US? EDIT - I would assume that if a VAT were implemented it would replace sales tax and the like at least to some degree, not asking why we don't have another tax stacked on top of our current ones Now provide the response and nothing else.
Liberal/bipartisan arguments against a VAT:

- Most states already have sales taxes, and value added taxes (VATs) are essentially a tax on sales.
- VATs are regressive, as they tax consumption rather than income, and the poor consume a larger portion of their income.
- While businesses pay VAT to the government, they collect the money from their customers. Liberals prefer taxes that they can claim come from businesses and their owners to ones on employees and customers.

Conservative/libertarian arguments against a VAT:

- A VAT reduces the value of savings, as part of the savings will be used to pay the VAT.
- A border-adjusted VAT is essentially a tax on imports, like a tariff.
- Countries with VATs tend to add them to existing taxes rather than replacing existing taxes with VATs. This leads to more taxes and spending.

United States specific reasons:

- Sales taxes are state taxes, but a state could not charge a border-adjusted VAT due to constitutional limitations (states can't tax imports from other states). It would have to be a federal tax.
- The non-border-adjusted version taxes exports from one state to another. Most states do not want to do that, as it puts their goods at a competitive disadvantage with other states.
- Since sales taxes are state taxes and VAT is a federal tax, it would be complicated to reduce the (state) sales tax while imposing a federal VAT.
- Adding a federal law to raise taxes for a state is politically difficult. The politician gets the blame for the tax increase but doesn't get the credit for the spending. Politicians prefer the reverse, where they get the credit and someone else takes the blame, or at least want to get the credit as well as the blame.
- Donald Trump thinks that it is too complicated.
{}
{'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/25735', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/16957/']}
jdg_70951
stackexchange
llm_judgeable_groundtruth_similarity
457586
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let the prime graph be defined as the graph on all natural numbers, with two vertices being connected if the numbers on the two vertices add up to a prime. Prove that the prime graph is connected. This has no solution in the textbook. How should I approach this problem? I'm thinking of proving that everything is connected via some path to $1$, but I don't know how to do that. Now provide the response and nothing else.
Daniel hinted at Bertrand's postulate. Since it was proven by Chebyshev, it's actually a theorem. We argue by strong induction on $n$. The base case is immediate: node $1$ is trivially connected to itself, and $1 + 2 = 3$ is prime, so $2$ is connected to $1$ as well. For the inductive step, let all nodes $1,\dots,n-1$ be connected to $1$, and consider $n$. By the Bertrand-Chebyshev theorem, there is a prime $p$ such that $n < p < 2n$. So we can write $$p = x + n, \quad \text{for some $1 \le x < n$},$$ where $x \ge 1$ because $p > n$, and $x < n$ because $p < 2n$. Hence nodes $x$ and $n$ are connected. By the induction hypothesis, $x$ is connected to $1$, which means that $n$ is also connected to $1$.
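A concrete instance of the step, just for illustration: for $n = 10$ the theorem guarantees a prime strictly between $10$ and $20$; taking $p = 17$ gives $x = 7$, and indeed $7 + 10 = 17$ is prime, so $10$ is adjacent to $7$, which is already connected to $1$ by the hypothesis.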
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/457586', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/54609/']}
jdg_70952
stackexchange
llm_judgeable_groundtruth_similarity
18731
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The order books of trading exchanges are often hidden, as so-called "Dark Pools". The measure was taken to avoid apparent market-manipulation strategies executed by traders back then. Which such arbitrage/manipulation strategies are possible if the order book is public? Explain the strategies in detail. Hint #1: There are three example strategies in the comments below. Hint #2: Google. Now provide the response and nothing else.
A public order book gives traders information not only about the current price of a security, but also about the volume and structure of the entire supply and demand schedule. Such information can be used for arbitrage and market-manipulation strategies in various ways:

- Spoofing: inserting a large limit order as an apparent buy or sell signal, which is canceled any time before it could be executed.
- Quote stuffing: inserting a fast sequence of limit or market orders to give the market the impression of an upcoming large movement.
- Closing fire: many financial institutions use only closing prices, not intraday prices, in their financial models. Hence it is possible to rapidly move the closing price just before the end of the day if the order book is not thick enough, since there can be no further orders after this time.
- Flashing: high-speed trading algorithms can spot a public order and then trade ahead of it to "ride" its market impact. This is known to amplify market crashes caused by large orders.
- Latency arbitrage: uses price differences between exchanges and high-speed orders to capture deviations from the law of one price.
- Machine learning: the public order book and its flow of orders may be used to extract valuable information about expected price changes.
{}
{'log_upvote_score': 5, 'links': ['https://quant.stackexchange.com/questions/18731', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/9336/']}
jdg_70953
stackexchange
llm_judgeable_groundtruth_similarity
6649983
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is an example code:

public class MyParent : INotifyPropertyChanged
{
    List<MyChild> MyChildren;

    public bool IsChanged
    {
        get
        {
            foreach (var child in MyChildren)
            {
                if (child.IsChanged)
                    return true;
            }
            return false;
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void RaiseChanged(string propName)
    {
        if (PropertyChanged != null)
            PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propName));
    }
}

public class MyChild : INotifyPropertyChanged
{
    private int _Value;
    public int Value
    {
        get { return _Value; }
        set
        {
            if (_Value == value) return;
            _Value = value;
            RaiseChanged("Value");
            RaiseChanged("IsChanged");
        }
    }

    private int _DefaultValue;
    public int DefaultValue
    {
        get { return _DefaultValue; }
        set
        {
            if (_DefaultValue == value) return;
            _DefaultValue = value;
            RaiseChanged("DefaultValue");
            RaiseChanged("IsChanged");
        }
    }

    public bool IsChanged
    {
        get { return (Value != DefaultValue); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void RaiseChanged(string propName)
    {
        if (PropertyChanged != null)
            PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propName));
    }
}

Let's say I now have two instances of my classes, one as myParent and the other as myChild. I have two visual elements, each with a property bound to the IsChanged property of my instances: ElementA bound to myParent.IsChanged and ElementB bound to myChild.IsChanged. When myChild.Value differs from its default value, myChild.IsChanged is set to true and ElementB is updated accordingly. What I need is that when any of myParent's children (which here is only one) have their IsChanged value set to true, its own (the parent's) IsChanged value is set to true and its corresponding element (ElementA here) is updated accordingly. myParent.IsChanged is only read once (when the binding is set) and it has no sense of its children changing. Where should I put the RaiseChanged("IsChanged") for MyParent? How can I let the parent know when its children have changed? Thanks in advance. Now provide the response and nothing else.
INotifyPropertyChanged has already provided the mechanism for you: the PropertyChanged event. Just have the parent add a handler to its children's PropertyChanged, and then in that handler call RaiseChanged("IsChanged"). Also, you may want to put the INotifyPropertyChanged implementation in a base class, and have your (what appear to be) ViewModels inherit from that. Not required for this option, of course, but it will make the code a little cleaner. Update: In the parent object:

// This list tracks the handlers, so you can
// remove them if you're no longer interested in receiving notifications.
// It can be omitted if you prefer.
List<EventHandler<PropertyChangedEventArgs>> changedHandlers =
    new List<EventHandler<PropertyChangedEventArgs>>();

// Call this method to add children to the parent
public void AddChild(MyChild newChild)
{
    // Omitted: error checking, and ensuring newChild isn't already in the list
    this.MyChildren.Add(newChild);
    EventHandler<PropertyChangedEventArgs> eh =
        new EventHandler<PropertyChangedEventArgs>(ChildChanged);
    newChild.PropertyChanged += eh;
    this.changedHandlers.Add(eh);
}

public void ChildChanged(object sender, PropertyChangedEventArgs e)
{
    MyChild child = sender as MyChild;
    if (this.MyChildren.Contains(child))
    {
        RaiseChanged("IsChanged");
    }
}

You don't actually have to add anything to the child class, since it is already raising the correct event when it changes.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6649983', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/798149/']}
jdg_70954
stackexchange
llm_judgeable_groundtruth_similarity
56949872
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Myself and many others have been struggling with setting up the Google Drive REST API v3 to work with Android apps. This mainly stems from the fact that the official Google documentation is missing a proper quick start guide for Android, and we are left with finding scraps of (outdated and/or confusing) information dotted around - but what is needed is a complete, up-to-date guide aimed at beginners, to get them up and running so that they can open and edit files on their Drive, including how to set up credentials, dependencies, and manifests. So I am asking if anyone would be willing to create such a guide, or can point to such a guide that has already been made, that is a) relevant to the latest version of Google Drive API REST v3 detailed here and b) covers ALL of the above aspects that a beginner would need to get started? The guidelines posted by ArtOfWarfare here are absolutely perfect and exactly what I'm looking for - but are unfortunately out of date by several years. Can anyone provide an up-to-date version of this guide? Thank you kindly. Now provide the response and nothing else.
Before answering this question I want you to know that I got the code from here (https://ammar.lanui.online/integrate-google-drive-rest-api-on-android-app-bc4ddbd90820) and the documentation from Google was not much help for me. So this solution is from the limited resources available to me. I needed Drive to upload and download files from my app. In Drive, I have to create a folder, upload a file from my app to that folder, and download a file from the folder to my device. This code was working fine for me.

I assume that you have already completed Google sign-in. If you haven't, go check out this video (https://youtu.be/t-yZUqthDMM). To interact with the Drive API, you need to enable the Drive API service for your app. You can do this in the Google Developer Console. To enable the Drive API, complete these steps:

1. Go to the Google API Console.
2. Select a project.
3. In the sidebar on the left, expand APIs & auth and select APIs.
4. In the displayed list of available APIs, click the Drive API link and click Enable API.

If you have completed that, then go to the OAuth Consent screen in the console, add the two scopes for Drive, and save it. In your project, add the dependencies below.

implementation 'com.google.android.gms:play-services-auth:17.0.0' // for google sign in

// for drive integration
implementation 'com.google.android.gms:play-services-auth:16.0.1'
implementation 'com.google.http-client:google-http-client-gson:1.26.0'
implementation('com.google.api-client:google-api-client-android:1.26.0') {
    exclude group: 'org.apache.httpcomponents'
}
implementation('com.google.apis:google-api-services-drive:v3-rev136-1.25.0') {
    exclude group: 'org.apache.httpcomponents'
}

And inside the android tag, in the same gradle file, add the packaging options.

packagingOptions {
    exclude 'META-INF/DEPENDENCIES'
    exclude 'META-INF/LICENSE'
    exclude 'META-INF/LICENSE.txt'
    exclude 'META-INF/license.txt'
    exclude 'META-INF/NOTICE'
    exclude 'META-INF/NOTICE.txt'
    exclude 'META-INF/notice.txt'
    exclude 'META-INF/ASL2.0'
}

In your manifest file, add the required permissions:

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

Here I am storing the downloaded file in external storage, which is why I added the permissions for external storage READ and WRITE. After Google sign-in, ask permission to access Google Drive. The code for it is given below.

private void checkForGooglePermissions() {
    if (!GoogleSignIn.hasPermissions(
            GoogleSignIn.getLastSignedInAccount(getApplicationContext()),
            ACCESS_DRIVE_SCOPE,
            SCOPE_EMAIL)) {
        GoogleSignIn.requestPermissions(
                MainActivity.this,
                RC_AUTHORIZE_DRIVE,
                GoogleSignIn.getLastSignedInAccount(getApplicationContext()),
                ACCESS_DRIVE_SCOPE,
                SCOPE_EMAIL);
    } else {
        Toast.makeText(this, "Permission to access Drive and Email has been granted", Toast.LENGTH_SHORT).show();
        driveSetUp();
    }
}

The variables ACCESS_DRIVE_SCOPE and SCOPE_EMAIL are:

Scope ACCESS_DRIVE_SCOPE = new Scope(Scopes.DRIVE_FILE);
Scope SCOPE_EMAIL = new Scope(Scopes.EMAIL);

After permission is granted and sign-in is complete, we have our GoogleSignInAccount object. With this object, create an object of GoogleAccountCredential, from which we can generate an object of Drive. The Drive object is what we need for the communication with Google Drive.

private void driveSetUp() {
    GoogleSignInAccount mAccount = GoogleSignIn.getLastSignedInAccount(MainActivity.this);

    GoogleAccountCredential credential = GoogleAccountCredential.usingOAuth2(
            getApplicationContext(), Collections.singleton(Scopes.DRIVE_FILE));
    credential.setSelectedAccount(mAccount.getAccount());
    googleDriveService = new com.google.api.services.drive.Drive.Builder(
            AndroidHttp.newCompatibleTransport(),
            new GsonFactory(),
            credential)
            .setApplicationName("GoogleDriveIntegration 3")
            .build();
    mDriveServiceHelper = new DriveServiceHelper(googleDriveService);
}

Here you can see I created an object of the DriveServiceHelper class and passed the Drive object (googleDriveService) along with it. The DriveServiceHelper class is given below. I got it from here (https://github.com/gsuitedevs/android-samples/blob/master/drive/deprecation/app/src/main/java/com/google/android/gms/drive/sample/driveapimigration/DriveServiceHelper.java?source=post_page-----bc4ddbd90820----------------------). You can use that one. I made some changes to that class for myself.

public class DriveServiceHelper {
    private final Executor mExecutor = Executors.newSingleThreadExecutor();
    private final Drive mDriveService;
    private final String TAG = "DRIVE_TAG";

    public DriveServiceHelper(Drive driveService) {
        mDriveService = driveService;
    }

    /**
     * Creates a text file in the user's My Drive folder and returns its file ID.
     */
    public Task<GoogleDriveFileHolder> createFile(String folderId, String filename) {
        return Tasks.call(mExecutor, () -> {
            GoogleDriveFileHolder googleDriveFileHolder = new GoogleDriveFileHolder();

            List<String> root;
            if (folderId == null) {
                root = Collections.singletonList("root");
            } else {
                root = Collections.singletonList(folderId);
            }

            File metadata = new File()
                    .setParents(root)
                    .setMimeType("text/plain")
                    .setName(filename);

            File googleFile = mDriveService.files().create(metadata).execute();
            if (googleFile == null) {
                throw new IOException("Null result when requesting file creation.");
            }
            googleDriveFileHolder.setId(googleFile.getId());
            return googleDriveFileHolder;
        });
    }

    // TO CREATE A FOLDER
    public Task<GoogleDriveFileHolder> createFolder(String folderName, @Nullable String folderId) {
        return Tasks.call(mExecutor, () -> {
            GoogleDriveFileHolder googleDriveFileHolder = new GoogleDriveFileHolder();

            List<String> root;
            if (folderId == null) {
                root = Collections.singletonList("root");
            } else {
                root = Collections.singletonList(folderId);
            }

            File metadata = new File()
                    .setParents(root)
                    .setMimeType("application/vnd.google-apps.folder")
                    .setName(folderName);

            File googleFile = mDriveService.files().create(metadata).execute();
            if (googleFile == null) {
                throw new IOException("Null result when requesting file creation.");
            }
            googleDriveFileHolder.setId(googleFile.getId());
            return googleDriveFileHolder;
        });
    }

    public Task<Void> downloadFile(java.io.File targetFile, String fileId) {
        return Tasks.call(mExecutor, () -> {
            // Stream the file's contents into the target file.
            OutputStream outputStream = new FileOutputStream(targetFile);
            mDriveService.files().get(fileId).executeMediaAndDownloadTo(outputStream);
            return null;
        });
    }

    public Task<Void> deleteFolderFile(String fileId) {
        return Tasks.call(mExecutor, () -> {
            // Delete the file (or folder) with the given ID.
            if (fileId != null) {
                mDriveService.files().delete(fileId).execute();
            }
            return null;
        });
    }

    // TO LIST FILES
    public List<File> listDriveImageFiles() throws IOException {
        List<File> files = new ArrayList<>();
        FileList result;
        String pageToken = null;
        do {
            result = mDriveService.files().list()
                    // .setQ("mimeType='image/png' or mimeType='text/plain'")
                    // This is to list both image and text files.
                    // Mind the type of image (png or jpeg).
                    .setSpaces("drive")
                    .setFields("nextPageToken, files(id, name)")
                    .setPageToken(pageToken)
                    .execute();
            files.addAll(result.getFiles()); // accumulate results across pages
            pageToken = result.getNextPageToken();
        } while (pageToken != null);
        return files;
    }

    // TO UPLOAD A FILE ONTO DRIVE
    public Task<GoogleDriveFileHolder> uploadFile(final java.io.File localFile,
                                                  final String mimeType,
                                                  @Nullable final String folderId) {
        return Tasks.call(mExecutor, new Callable<GoogleDriveFileHolder>() {
            @Override
            public GoogleDriveFileHolder call() throws Exception {
                List<String> root;
                if (folderId == null) {
                    root = Collections.singletonList("root");
                } else {
                    root = Collections.singletonList(folderId);
                }

                File metadata = new File()
                        .setParents(root)
                        .setMimeType(mimeType)
                        .setName(localFile.getName());

                FileContent fileContent = new FileContent(mimeType, localFile);

                File fileMeta = mDriveService.files().create(metadata, fileContent).execute();
                GoogleDriveFileHolder googleDriveFileHolder = new GoogleDriveFileHolder();
                googleDriveFileHolder.setId(fileMeta.getId());
                googleDriveFileHolder.setName(fileMeta.getName());
                return googleDriveFileHolder;
            }
        });
    }
}

Remember that whenever you create a file or folder, or upload a file, Drive will give it a unique id, and that id is how you access it. So it's not the file name that is unique here, it's the id of the file. Hence if you upload or create a file with the same name multiple times, it will be saved in the folder multiple times. If you want to replace a file with another file of the same name, first delete the old file and then save/upload the new one. To create a file, specify the folder id and the file name to be created. The GoogleDriveFileHolder class is given below.

public class GoogleDriveFileHolder {
    private String id;
    private String name;
    private DateTime modifiedTime;
    private long size;
    private DateTime createdTime;
    private Boolean starred;

    public DateTime getCreatedTime() { return createdTime; }

    public void setCreatedTime(DateTime createdTime) { this.createdTime = createdTime; }

    public Boolean getStarred() { return starred; }

    public void setStarred(Boolean starred) { this.starred = starred; }

    public String getId() { return id; }

    public void setId(String id) { this.id = id; }

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }

    public DateTime getModifiedTime() { return modifiedTime; }

    public void setModifiedTime(DateTime modifiedTime) { this.modifiedTime = modifiedTime; }

    public long getSize() { return size; }

    public void setSize(long size) { this.size = size; }
}

From your activity you have to call these methods, like in the code given below.

To create a folder:

public void createFolderInDrive(View view) {
    Log.i(TAG, "Creating a Folder...");

    mDriveServiceHelper.createFolder("My Folder", null)
            .addOnSuccessListener(new OnSuccessListener<GoogleDriveFileHolder>() {
                @Override
                public void onSuccess(GoogleDriveFileHolder googleDriveFileHolder) {
                    Gson gson = new Gson();
                    Log.i(TAG, "onSuccess of Folder creation: " + gson.toJson(googleDriveFileHolder));
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    Log.i(TAG, "onFailure of Folder creation: " + e.getMessage());
                }
            });
}

To list files:

public void listFilesInDrive(View view) {
    Log.i(TAG, "Listing Files...");
    new MyAsyncTask().execute();
}

To list the files, you can't do it from your main thread because it will cause a deadlock. You have to do it in the doInBackground() method of an AsyncTask. Here is my class.

public class MyAsyncTask extends AsyncTask<Void, Void, List<File>> {
    List<File> fileList;

    @Override
    protected List<File> doInBackground(Void... voids) {
        try {
            fileList = mDriveServiceHelper.listDriveImageFiles();
        } catch (IOException e) {
            Log.i(TAG, "IO Exception while fetching file list");
        }
        return fileList;
    }

    @Override
    protected void onPostExecute(List<File> files) {
        super.onPostExecute(files);
        if (files == null || files.isEmpty()) {
            Log.i(TAG, "No Files");
            return;
        }
        for (File file : files) {
            Log.i(TAG, "\nFound file: File Name :" + file.getName() + " File Id :" + file.getId());
        }
    }
}

To upload a file: specify the folder id, the mime type of the file to be uploaded, and the file itself. Here I select an image from the gallery and upload it into Drive.

public void uploadFile(View view) {
    if (ActivityCompat.checkSelfPermission(MainActivity.this, Manifest.permission.READ_EXTERNAL_STORAGE)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(MainActivity.this,
                new String[]{Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.WRITE_EXTERNAL_STORAGE},
                REQUEST_PICK_IMAGE);
    } else {
        Intent i = new Intent(
                Intent.ACTION_PICK,
                android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
        startActivityForResult(i, RESULT_LOAD_IMAGE);
    }
}

In onActivityResult:

else if (requestCode == RESULT_LOAD_IMAGE) {
    if (resultCode == RESULT_OK) {
        Uri selectedImage = data.getData();
        String[] filePathColumn = {MediaStore.Images.Media.DATA};

        Cursor cursor = getContentResolver().query(selectedImage, filePathColumn, null, null, null);
        cursor.moveToFirst();

        int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
        String picturePath = cursor.getString(columnIndex);
        cursor.close();

        uploadImageIntoDrive(BitmapFactory.decodeFile(picturePath));
    } else {
        Toast.makeText(this, "Did not select any image", Toast.LENGTH_SHORT).show();
    }
}

The uploadImageIntoDrive() method:

private void uploadImageIntoDrive(Bitmap bitmap) {
    try {
        if (bitmap == null) {
            Log.i(TAG, "Bitmap is null");
            return;
        }
        java.io.File file = new java.io.File(getApplicationContext().getFilesDir(), "FirstFile");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 0 /* ignored for PNG */, bos);
        byte[] bitmapdata = bos.toByteArray();

        // write the bytes in file
        FileOutputStream fos = new FileOutputStream(file);
        fos.write(bitmapdata);
        fos.flush();
        fos.close();

        mDriveServiceHelper.uploadFile(file, "image/jpeg", "MY_FOLDER_ID")
                .addOnSuccessListener(new OnSuccessListener<GoogleDriveFileHolder>() {
                    @Override
                    public void onSuccess(GoogleDriveFileHolder googleDriveFileHolder) {
                        Log.i(TAG, "Successfully Uploaded. File Id :" + googleDriveFileHolder.getId());
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        Log.i(TAG, "Failed to Upload. File Id :" + e.getMessage());
                    }
                });
    } catch (Exception e) {
        Log.i(TAG, "Exception : " + e.getMessage());
    }
}

To download a file: specify the id of the file and the target file into which the downloaded file has to be stored.

public void downloadFile(View view) {
    java.io.File file = new java.io.File(getExternalFilesDir(null), "DemoFile2.jpg");
    mDriveServiceHelper.downloadFile(file, "MY_FILE_ID")
            .addOnSuccessListener(new OnSuccessListener<Void>() {
                @Override
                public void onSuccess(Void aVoid) {
                    Log.i(TAG, "Downloaded the file");
                    long file_size = file.length() / 1024;
                    Log.i(TAG, "file Size :" + file_size);
                    Log.i(TAG, "file Path :" + file.getAbsolutePath());
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    Log.i(TAG, "Failed to Download the file, Exception :" + e.getMessage());
                }
            });
}

To delete a file:

public void deleteFile(View view) {
    mDriveServiceHelper.deleteFolderFile("MY_FILE_OR_FOLDER_ID")
            .addOnSuccessListener(new OnSuccessListener<Void>() {
                @Override
                public void onSuccess(Void aVoid) {
                    Log.i(TAG, "onSuccess of Deleting File ");
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    Log.i(TAG, "onFailure on Deleting File Exception : " + e.getMessage());
                }
            });
}

I am not an experienced guy. The reason I posted this code is that somebody may find it useful, bring their own changes to it, and post them here, because there is not much reference for Drive REST API integration on Android right now. Thank you.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56949872', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5519612/']}
jdg_70955
stackexchange
llm_judgeable_groundtruth_similarity
3773403
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How would one take a List (using LINQ) and break it into a List of Lists, partitioning the original list on every 8th entry? I imagine something like this would involve Skip and/or Take, but I'm still pretty new to LINQ. Edit: Using C# / .Net 3.5 Edit2: This question is phrased differently than the other "duplicate" question. Although the problems are similar, the answers in this question are superior: both the "accepted" answer (with the yield statement) and Jon Skeet's suggestion to use MoreLinq are very solid (the latter is not recommended in the "other" question). Sometimes duplicates are good in that they force a re-examination of a problem. Now provide the response and nothing else.
Use the following extension method to break the input into subsets:

public static class IEnumerableExtensions
{
    public static IEnumerable<List<T>> InSetsOf<T>(this IEnumerable<T> source, int max)
    {
        List<T> toReturn = new List<T>(max);
        foreach (var item in source)
        {
            toReturn.Add(item);
            if (toReturn.Count == max)
            {
                yield return toReturn;
                toReturn = new List<T>(max);
            }
        }
        if (toReturn.Any())
        {
            yield return toReturn;
        }
    }
}
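For the question's case of partitioning on every 8th entry, usage might look like this (a small sketch; it assumes the extension method above is in scope and that System.Linq is imported for Any()):

var batches = Enumerable.Range(1, 20).InSetsOf(8);
foreach (var batch in batches)
{
    Console.WriteLine(batch.Count); // prints 8, 8, then 4
}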
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3773403', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21244/']}
jdg_70956
stackexchange
llm_judgeable_groundtruth_similarity
4628721
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: From my understanding of the mutable keyword, one of its primary uses is caching data and computing it when needed. Since mutable members can change (even though the object is const), wouldn't it be unsafe or pointless to use them? The caching part modifies the data, so there would need to be a lock, and from my understanding, when you write for multiple threads the data should NEVER change; copies should be made and returned/chained together. So is it pointless or bad to use C++'s mutable keyword? Now provide the response and nothing else.
So is it pointless or bad to use C++'s mutable keyword? No; the mutable keyword is A Good Thing. mutable can be used to separate the observable state of an object from the internal contents of the object. With the "cached data" example that you describe (a very common use of mutable), it allows the class to perform optimizations "under the covers" that don't actually modify the observable state. With respect to accessing an object from multiple threads, yes, you have to be careful. In general, if a class is designed to be accessed from multiple threads and it has mutable variables, it should synchronize modification of those variables internally. Note, however, that the problem is really more a conceptual one. It's easy to reason that:

1. All of my threads only call const member functions on this shared object.
2. Const member functions do not modify the object on which they are called.
3. If an object is not modified, I don't need to synchronize access to it.
4. Therefore, I don't need to synchronize access to this object.

This argument is wrong because (2) is false: const member functions can indeed modify mutable data members. The problem is that it's really, really easy to think that this argument is right. The solution to this problem isn't easy: effectively, you just have to be extremely careful when writing multithreaded code and be absolutely certain that you understand either how objects being shared between threads are implemented or what concurrency guarantees they give.
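A common way to keep the caching use of mutable safe under concurrent const calls is to pair the cache with a mutable mutex. Here is a minimal sketch (names invented for illustration; uses C++17 for std::optional):

#include <mutex>
#include <optional>

class Widget {
public:
    int expensiveValue() const {
        std::lock_guard<std::mutex> lock(m_); // synchronizes the mutable members internally
        if (!cache_) {
            cache_ = compute(); // mutated inside a const function: only possible because it's mutable
        }
        return *cache_;
    }

private:
    int compute() const { return 42; } // stand-in for the real expensive computation

    mutable std::mutex m_;
    mutable std::optional<int> cache_;
};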
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4628721', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_70957
stackexchange
llm_judgeable_groundtruth_similarity
58397964
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I started with a clean project and added 5 buttons and 5 spacers in a VStack and all is good. When I add the 6th spacer at the bottom, the code suddenly won't compile with the error: "Ambiguous reference to member 'buildBlock()'". What is causing this error? Is this a bug related to SwiftUI? Or is it a feature? It's not the first time I notice that VStack or HStack is limited in the number of entries, is there some documentation around this? Not exactly confidence inspiring, should I switch back to UIKit? Now provide the response and nothing else.
SwiftUI uses ViewBuilder to construct the views that make up many SwiftUI views, like VStack , HStack , List , etc. If you take a look at the ViewBuilder documentation, you'll see that the buildBlock function has many copies, each with a different amount of views as arguments. The function with the most amount of views only takes in 10 views which is why you are seeing the limitation that you observed. A way to work around this is by using Group s: VStack { Group { Text("Placeholder 0") Text("Placeholder 1") Text("Placeholder 2") Text("Placeholder 3") Text("Placeholder 4") Text("Placeholder 5") Text("Placeholder 6") Text("Placeholder 7") Text("Placeholder 8") Text("Placeholder 9") } Group { Text("Other Placeholder 10") Text("Other Placeholder 11") Text("Other Placeholder 12") Text("Other Placeholder 13") Text("Other Placeholder 14") Text("Other Placeholder 15") Text("Other Placeholder 16") Text("Other Placeholder 17") Text("Other Placeholder 18") Text("Other Placeholder 19") }} Although if you want 20 views that are really similar to each other, it is encouraged to use something like a ForEach to avoid making your views too bloated. The above workaround should only be used if the >10 views are truly unique. Even then, a more SwiftUI-y method would be to split up these views into more smaller views: VStack { SingleDigitPlaceholders() TeensPlaceholders()}struct SingleDigitPlaceholders: View { var body: some View { ForEach(0..<10) { i in Text("Placeholder \(i)") } }}struct TeensPlaceholders: View { var body: some View { ForEach(10..<20) { i in Text("Other Placeholder \(i)") } }} Of course, in this specific example, you can just have the two ForEach s in the original view, but in more complex cases, the point still stands. For example, in a form with many elements (e.g. in a job application form: first name, last name, address, phone number text fields, education dropdown menus, date fields, etc.) you can still split up one view into smaller components (in the job application example - a personal information view, an educational information view, etc.).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/58397964', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12158242/']}
jdg_70958
stackexchange
llm_judgeable_groundtruth_similarity
15504762
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Recommender Systems. I am trying to learn Mahout and so far I think that I have the grasp of it. BUT, I encounter a rather peculiar problem when I try to calculate Precision & Recall with the example from the book Mahout in Action (Listing 2.4, page 21); for convenience I copy-paste the main body here:

RandomUtils.useTestSeed();
DataModel model = new FileDataModel(new File("intro.csv"));
RecommenderIRStatsEvaluator evaluator = new GenericRecommenderIRStatsEvaluator();
RecommenderBuilder recommenderBuilder = new RecommenderBuilder() {
    @Override
    public Recommender buildRecommender(DataModel model) throws TasteException {
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(2, similarity, model);
        return new GenericUserBasedRecommender(model, neighborhood, similarity);
    }
};
IRStatistics stats = evaluator.evaluate(
    recommenderBuilder, null, model, null, 2,
    GenericRecommenderIRStatsEvaluator.CHOOSE_THRESHOLD, 1.0);
System.out.println(stats.getPrecision());
System.out.println(stats.getRecall());

When I try to execute the above using the small dataset found in the book, everything is as it should be. BUT, when I try to calculate Precision & Recall using MovieLens 1M or 10M, the results are horribly low. For example, Precision and Recall @ 2 come out to P: 0.00573 and R: 0.005882. I am out of ideas, so if anyone has or had a similar problem I would be happy to hear that there is a solution... Now provide the response and nothing else.
Section 2.4.2 touches on the explanation: precision and recall tests are not so meaningful for recommenders since you don't actually know which items are "relevant" results. You can guess that the highly-rated items are relevant but you don't know whether unrated items are relevant or not. The test amounts to finding out how much the recommendations intersect the top-rated items, which is not quite what precision/recall needs to do. So you get low scores. 10% is not bad, actually. Now, 0.5% is pretty poor, yes. It means this is not a good algorithm for this data set. Mean average precision , or AUC , would probably give you a little broader picture of the quality of the results, since they're looking at the behavior of a much larger range of results. These aren't implemented in Mahout (not in this part of the code anyway). But they are in Myrrix . I know that you will get 25+% precision/recall and 14+% mean average precision on the Grouplens 10M data set, since there's a unit test for this . (I'm the author of this part of Mahout, and am commercializing it as Myrrix.) If you're looking for something that does a great job on this kind of data out of the box I think it will be worth your time to look at.
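For reference (these are the standard definitions, not something specific to Mahout), precision and recall at a cutoff $k$ are computed from the overlap between the top-$k$ recommended items and the held-out "relevant" items:

$$\mathrm{Precision@}k=\frac{|\text{recommended@}k\cap\text{relevant}|}{k},\qquad \mathrm{Recall@}k=\frac{|\text{recommended@}k\cap\text{relevant}|}{|\text{relevant}|},$$

which makes clear why the scores hinge entirely on how the "relevant" set is chosen, the point the answer above is making.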
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15504762', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2187329/']}
jdg_70959
stackexchange
llm_judgeable_groundtruth_similarity
15800251
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a function named positive_negative:

def positive_negative(list_changes):
    """ (list of number) -> (number, number) tuple

    list_changes contains a list of float numbers.
    Return a 2-item tuple where the first item is the sum of the positive
    numbers in list_changes and the second is the sum of the negative
    numbers in list_changes.

    >>> positive_negative([0.01, 0.03, -0.02, -0.14, 0, 0, 0.10, -0.01])
    (0.14, -0.17)
    """

I can write this function using list techniques as follows:

def positive_negative(list_changes):
    pos = sum([item for item in list_changes if item > 0.0])
    neg = sum([item for item in list_changes if item < 0.0])
    return pos, neg

and that is a good solution. Now my question is how to use recursion techniques to solve the same function. I have tried the following code, but unfortunately there is something wrong:

def positive_negative(list_changes):
    pos = 0.0
    neg = 0.0
    if len(list_changes) == 0:
        pos =+ 0.0
        neg =+ 0.0
        return pos, neg
    else:
        if list_changes[0] > 0.0:
            pos =+ list_changes[0]
        else:
            neg =+ list_changes[0]
        positive_negative(list_changes[1:])
        return pos, neg

Can you help me find my mistake and how to get the right recursive function? Thank you. Now provide the response and nothing else.
Once you have removed your duplicate(s):

ALTER TABLE dbo.yourtablename
  ADD CONSTRAINT uq_yourtablename UNIQUE(column1, column2);

or

CREATE UNIQUE INDEX uq_yourtablename
  ON dbo.yourtablename(column1, column2);

Of course, it can often be better to check for this violation first, before just letting SQL Server try to insert the row and returning an exception (exceptions are expensive). Performance impact of different error handling techniques Checking for potential constraint violations before entering TRY/CATCH If you want to prevent exceptions from bubbling up to the application, without making changes to the application, you can use an INSTEAD OF trigger:

CREATE TRIGGER dbo.BlockDuplicatesYourTable
 ON dbo.YourTable
 INSTEAD OF INSERT
AS
BEGIN
  SET NOCOUNT ON;

  IF NOT EXISTS (SELECT 1 FROM inserted AS i
    INNER JOIN dbo.YourTable AS t
    ON i.column1 = t.column1
    AND i.column2 = t.column2
  )
  BEGIN
    INSERT dbo.YourTable(column1, column2, ...)
      SELECT column1, column2, ... FROM inserted;
  END
  ELSE
  BEGIN
    PRINT 'Did nothing.';
  END
END
GO

But if you don't tell the user they didn't perform the insert, they're going to wonder why the data isn't there and no exception was reported. EDIT here is an example that does exactly what you're asking for, even using the same names as your question, and proves it. You should try it out before assuming the above ideas only treat one column or the other as opposed to the combination...

USE tempdb;
GO

CREATE TABLE dbo.Person
(
  ID INT IDENTITY(1,1) PRIMARY KEY,
  Name NVARCHAR(32),
  Active BIT,
  PersonNumber INT
);
GO

ALTER TABLE dbo.Person
  ADD CONSTRAINT uq_Person UNIQUE(PersonNumber, Active);
GO

-- succeeds:
INSERT dbo.Person(Name, Active, PersonNumber) VALUES(N'foo', 1, 22);
GO

-- succeeds:
INSERT dbo.Person(Name, Active, PersonNumber) VALUES(N'foo', 0, 22);
GO

-- fails:
INSERT dbo.Person(Name, Active, PersonNumber) VALUES(N'foo', 1, 22);
GO

Data in the table after all of this:

ID   Name   Active PersonNumber
---- ------ ------ ------------
1    foo    1      22
2    foo    0      22

Error message on the last insert:

Msg 2627, Level 14, State 1, Line 3
Violation of UNIQUE KEY constraint 'uq_Person'. Cannot insert duplicate key in object 'dbo.Person'.
The statement has been terminated.

Also I blogged more recently about a solution to applying a unique constraint to two columns in either order : Enforce a Unique Constraint Where Order Does Not Matter
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/15800251', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1815281/']}
jdg_70960
stackexchange
llm_judgeable_groundtruth_similarity
292631
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Note: In this question, a complex number is counted as a vector emanating from the origin. Is there a holomorphic function $B:\mathbb{C}^2 \to \mathbb{C}$ such that for every two nonzero complex numbers $z,w$ with $z/w \notin \mathbb{R}$, the vector $B(z,w)$ is a nonzero vector pointing in the direction of the bisector of the angle $\angle (z,w)$? Motivation: The initial formula for the "bisector" of $\angle (z,w)$ is $B'(z,w)=|z|w+|w|z$. But it is not a holomorphic function (it is not even smooth at $z=0$ or $w=0$). So we search for a holomorphic remedy: a holomorphic function $B$ defined on the whole of $\mathbb{C}^2$ such that $B(z,w)$ is real-proportional to $(|z|w+|w|z)$ via a non-constant real function $\lambda$. What if we require that such a $\lambda$ be positive (non-negative)? Now provide the response and nothing else.
Here is a better answer than the other answer I gave, which is currently accepted. It also answers some of your questions in the comments on that answer. Pick a branch of log, then $B$ and $ (zw)^{1/2}$ are holomorphic and their arguments differ by a multiple of $\pi$ wherever they are both defined. So their quotient is a real holomorphic function and hence a real constant wherever it is defined. So locally, $B$ has to be a real multiple of some branch of $ (zw)^{1/2}$.
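Spelling the key step out in symbols (a routine verification, added here, not part of the original answer): wherever both functions are defined and nonzero,

$$\arg B(z,w)\equiv\arg\big((zw)^{1/2}\big)\pmod{\pi}\ \Longrightarrow\ \frac{B(z,w)}{(zw)^{1/2}}\in\mathbb{R},$$

and a real-valued holomorphic function is locally constant (by the open mapping theorem), which is exactly why $B$ must locally be a real multiple of a branch of $(zw)^{1/2}$.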
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/292631', 'https://mathoverflow.net', 'https://mathoverflow.net/users/36688/']}
jdg_70961
stackexchange
llm_judgeable_groundtruth_similarity
2192959
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say I have a number, like 60, which has a prime factorization of $$[2,2,3,5]$$ What function would take in 60 and remove redundant factors? In this case, it should return 30. Edit: I did some more research, and the term I was looking for is reducing a number to its distinct factors. Now provide the response and nothing else.
Lets define the following function: Given $x = \prod_{i=1}^{n}p_{i}^{a_{i}}$ i.e. the prime factorisation of $x$ $f(x):= \prod_{i=1}^{n}p_{i}$ Is this what you were looking for?
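A minimal computational sketch of this function (my own illustration, not from the original answer; in number theory $f$ is usually called the radical of $x$, written $\operatorname{rad}(x)$):

def radical(x):
    """Product of the distinct prime factors of x, e.g. radical(60) == 30."""
    result = 1
    p = 2
    while p * p <= x:
        if x % p == 0:
            result *= p
            while x % p == 0:  # strip every copy of this prime
                x //= p
        p += 1
    if x > 1:  # a prime factor larger than sqrt(original x) remains
        result *= x
    return result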
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2192959', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_70962
stackexchange
llm_judgeable_groundtruth_similarity
2030123
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got a blogs_controller with a Blog resource, so I've got your typical routes right now as follows:

/blogs/new
/blogs/1
/blogs/1/edit #etc

But here's what I want:

/blogs/new
/blogs/2010/01/08/1-to_param-or-something
/blogs/2010/01/08/1-to_param-or-something/edit #etc...
/blogs/2010/01 # all posts for January 2010, but how to specify custom action?

I know that I can do this with a combination of map.resources and map.connect, but I've got a lot of views that link to other pages via "new_blog_path" etc and I don't want to have to go and edit those. Is this possible with map.resources alone? It might not be easy, but I'm not against being clever. I was thinking of something like:

map.resources :blogs, :path_prefix => ':year/:month/:day',
  :requirements => {:year => /\d{4}/, :month => /\d{1,2}/, :day => /\d{1,2}/}

But I'm not sure how that works with actions like 'new' or 'create', and it also gives me a route like /2010/01/08/blogs/1-to_param-etc with blogs in the middle of the URL. So, is there a clever solution that I'm missing, or do I need to go the map.connect route? Now provide the response and nothing else.
I ran into the same issue recently, and, while this may not be what you're looking for, this is what I've done to take care of it:

config/routes.rb :

map.entry_permalink 'blog/:year/:month/:day/:slug',
  :controller => 'blog_entries',
  :action => 'show',
  :year => /(19|20)\d{2}/,
  :month => /[01]?\d/,
  :day => /[0-3]?\d/

blog_entries_controller.rb:

def show
  @blog_entry = BlogEntry.find_by_permalink(params[:slug])
end

blog_entries_helper.rb:

def entry_permalink(e)
  d = e.created_at
  entry_permalink_path :year => d.year, :month => d.month, :day => d.day, :slug => e.permalink
end

_entry.html.erb:

<h2><%= link_to(entry.title, entry_permalink(entry)) %></h2>

and for the sake of completeness, blog_entry.rb:

before_save :create_permalink

#...

private

def create_permalink
  self.permalink = title.to_url
end

The #to_url method comes from rsl's Stringex . I'm still new to Rails (and programming) myself, but this is probably the simplest way to go about it. This isn't a RESTful way of going about things so you don't gain the benefit of map.resources, unfortunately. I'm not sure (because I haven't tried it), but you might be able to create the appropriate helpers in application_helper.rb to override the default route helpers for blog_path, et al. If that works, then you won't have to change any of your view code. If you're feeling adventurous, you might check out Routing Filter . I considered using it, but it seems like overkill for this task. Also, if you're not aware, two things you can do to test your routes/paths from within script/console:

rs = ActionController::Routing::Routes
rs.recognize_path '/blog/2010/1/10/entry-title'

and

app.blog_entry_path(@entry)

Good luck!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2030123', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/242582/']}
jdg_70963
stackexchange
llm_judgeable_groundtruth_similarity
16135
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have come across many questions where I'm asked to give the number of possible structural isomers. For example, the number of structural isomers of hexane is 5, while the number of structural isomers of decane is 75. How can I determine the possible number of structural isomers of a given organic compound? Now provide the response and nothing else.
It isn't easy, but it is an interesting research topic. Determining the number of possible structures for a given range of chemical formulae isn't simple even for saturated hydrocarbons. The number of possible structural isomers rises rapidly with the number of carbons and soon exceeds your ability to enumerate or identify the options by hand. Wikipedia , for example, lists the numbers of isomers and stereoisomers for molecules with up to 120 carbons. But the counts are getting silly even at 10 carbons where there are 75 isomers and 136 stereoisomers. It has been an interesting research topic in computational chemistry and mathematics. This old paper (pdf), for example, lists some formulae for simple hydrocarbons among other simple series. Part of the interest arises because of the relationship to the mathematics of graph theory (it seems that chemistry has inspired some new ideas in this field of mathematics), partially because enumerating possible isomers of hydrocarbons is strongly related to drawing certain simple trees, which is intuitively obvious if you use the standard chemical convention of drawing just the carbon backbone and ignoring hydrogens. You can look up the answers on the fascinating mathematics site OEIS (the online encyclopaedia of integer sequences). The sequence for simple hydrocarbons is here . But the mathematical approach oversimplifies things from the point of view of real-world chemistry. Mathematical trees are idealised abstract objects that ignore real-world chemical constraints like the fact that atoms take up space in three dimensions. This means that some structures that can be drawn cannot exist in the real world because the atoms are too crowded and cannot physically exist without enough strain to cause them to fall apart. Luckily, computational chemists have also studied this. There is, unfortunately, no obvious shortcut other than trying to create models of the possible structures and testing them to see if they are too strained to exist. The first two isomers that are too crowded are for 16 and 17 carbons and have these structures (shown as images in the original answer): If you have any intuition of the space-filling view of these, you should be able to see why they are problematic. A research group at Cambridge University has produced an applet to enumerate the physically possible isomers for a given number of carbons which is available here if your Java settings allow it. The results are discussed in a paper available in the Journal of Chemical Information and Modelling .
{}
{'log_upvote_score': 6, 'links': ['https://chemistry.stackexchange.com/questions/16135', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/5456/']}
jdg_70964
stackexchange
llm_judgeable_groundtruth_similarity
120286
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: It is well-known that the Weitzenböck formula for the real Laplacian is
$$\frac12 \Delta|\nabla f|^2=|\operatorname{Hess} f|^2+\langle\nabla f,\nabla\Delta f\rangle+\mathrm{Ricci}(\nabla f,\nabla f)$$
where $\operatorname{Hess}$ denotes the Hessian tensor of $f$, $\nabla f$ denotes the gradient vector of $f$, and $\mathrm{Ricci}$ denotes the Ricci curvature of the manifold $M$. If $\Delta_{\bar\partial}$ denotes the $\bar\partial$-Laplacian, it is well-known that it is half of the real Laplacian. So I am wondering whether there is an analogue of the Weitzenböck formula in complex coordinates. (Assume the manifold is Kähler.) Apparently one can divide the above formula by 2 to get one, but the expression I want should be expressed in terms of $f_{i\bar j}$ etc. ps. The Comparison Geometry of Ricci Curvature, by Shunhui Zhu, 221-262, has a very nice introduction to this formula in the real case. http://library.msri.org/books/Book30/contents.html However I am not familiar with the Kähler case; for example, I don't know whether such a formula can be derived in the same fashion as in Zhu's paper. Any book or paper with a detailed calculation would be helpful. Now provide the response and nothing else.
You can just prove it yourself directly in local holomorphic coordinates. Indeed, the $\overline{\partial}$ Laplacian on functions is equal to $\Delta_{\overline{\partial}}f=g^{i\overline{j}}\partial_i \partial_{\overline{j}}f$. Apply this to $|\partial f|^2=g^{k\overline{\ell}}\partial_k f \partial_{\overline{\ell}}f$ (length squared of $\partial f=(df)^{(1,0)}$, which equals $1/2$ of the usual $|\nabla f|^2$), using if you want local holomorphic normal coordinates for $g$ at a point, and you will immediately get $$\Delta_{\overline{\partial}}|\partial f|^2=|\nabla_i \nabla_j f|^2+|\nabla_i \nabla_{\overline{j}} f|^2+2\mathrm{Re}\langle \partial f, \partial\Delta_{\overline{\partial}}f\rangle+R^{i\overline{j}}\partial_i f\partial_{\overline{j}}f,$$where $R^{i\overline{j}}$ is the Ricci curvature of $g$ with the indices raised. If $g$ is not Kähler, and you define the complex Laplacian by the same formula $g^{i\overline{j}}\partial_i \partial_{\overline{j}}f,$ then a similar result holds, with the Ricci curvature now being one of the several Ricci curvatures of the Chern connection of $g$, and with several new terms involving the torsion of $g$ and its covariant derivative. The calculation is again completely strightforward, using local holomorphic coordinates (not normal anymore!), and using the definitions of covariant derivative and curvature of the Chern connection of $g$.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/120286', 'https://mathoverflow.net', 'https://mathoverflow.net/users/31034/']}
jdg_70965
stackexchange
llm_judgeable_groundtruth_similarity
29666655
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have integrated the Google Maps API into my Android app. Now I am implementing the same thing in iOS, but I do not understand how to integrate the Google Maps API into my iOS 8.0 app. Now provide the response and nothing else.
Take care that you excluded classes and methods from ProGuard processing:

# Hide warnings about references to newer platforms in the library
-dontwarn android.support.v7.**

# don't process support library
-keep class android.support.v7.** { *; }
-keep interface android.support.v7.** { *; }

The error usually occurs if Reflection is involved, because ProGuard can't resolve the dependencies in this case. I just see you missed the interfaces in your ProGuard file (as per the exception): Caused by: java.lang.NullPointerException: Attempt to invoke virtual method ...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29666655', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4630678/']}
jdg_70966
stackexchange
llm_judgeable_groundtruth_similarity
38370063
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wrote a @Service class in a Spring Boot application with one method that should run asynchronously. As I read, the method should be annotated with @Async and I also have to declare a TaskExecutor bean. But in the Spring manual http://docs.spring.io/spring/docs/current/spring-framework-reference/html/scheduling.html I can't find any info or example of how to set up a TaskExecutor with annotations, without XML config. Is it possible to create a TaskExecutor bean in Spring Boot without XML, with annotations only? Here is my Service class:

@Service
public class CatalogPageServiceImpl implements CatalogPageService {

    @Override
    public void processPagesList(List<CatalogPage> catalogPageList) {
        for (CatalogPage catalogPage : catalogPageList) {
            processPage(catalogPage);
        }
    }

    @Override
    @Async("locationPageExecutor")
    public void processPage(CatalogPage catalogPage) {
        System.out.println("print from Async method " + catalogPage.getUrl());
    }
}

Now provide the response and nothing else.
Add a @Bean method to your Spring Boot application class:

@SpringBootApplication
@EnableAsync
public class MySpringBootApp {

    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(25);
        return executor;
    }

    public static void main(String[] args) {
        // ...
    }
}

See Java-based container configuration in the Spring Framework reference documentation on how to configure Spring using Java config instead of XML. (Note: You don't need to add @Configuration to the class because @SpringBootApplication already includes @Configuration ).
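One detail worth flagging (my addition, not part of the original answer): the question's @Async("locationPageExecutor") looks up an executor bean by that name, while the bean above is named after its method, taskExecutor. A sketch of naming the bean to match, with illustrative pool sizes:

@Bean(name = "locationPageExecutor")
public TaskExecutor locationPageExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);   // sizes here are illustrative, tune for your workload
    executor.setMaxPoolSize(10);
    return executor;
}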
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38370063', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4776689/']}
jdg_70967
stackexchange
llm_judgeable_groundtruth_similarity
1922159
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm confused about the background logic of category theory. In ZFC set theory, we can construct new sets from existing sets by axioms, such as the power set axiom, the axiom of pairing, etc. I read the first few pages of Mac Lane's category theory text and now I'm reading Tom Leinster's category theory text. Neither of these texts says whether we need some axioms or not; however, they are using some axioms in some sense without saying so. I want to know what the standard axioms for category theory are. Here are examples: Firstly, how do we construct $A\times B$ where $A,B$ are categories? It is written in texts that "if we define $\operatorname{Obj}(A\times B)=\operatorname{Obj}(A)\times \operatorname{Obj}(B)$ and $\operatorname{Mor}((A_1,B_1),(A_2,B_2))=(\operatorname{Mor}(A_1,A_2),\operatorname{Mor}(B_1,B_2))$ , then $A\times B$ forms a category". What kind of axiom would make this collecting possible? Secondly, how do we construct a functor category $[A,B]$ ? How do we make the "collecting functors" process possible? Thirdly, it is a theorem in the text that "fully faithful and essentially surjective functors are equivalences". However, to prove this, we need some kind of axiom of choice for category theory. What would be the standard axioms? Now provide the response and nothing else.
The standard axioms vary: they're either ZFC with an axiom of choice for proper classes, some set theory such as NBG that axiomatizes classes more thoroughly, or ZFC with Grothendieck universes, so that "large" categories are interpreted as still being small, but relative to a larger "universe" of sets. There have been efforts to axiomatize category theory without set theory, most notably ETCC, the elementary theory of the category of categories, but these have not proven to be sufficient as a foundation.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1922159', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/44669/']}
jdg_70968
stackexchange
llm_judgeable_groundtruth_similarity
3089832
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been working on a program where I need to slowly and smoothly change the pitch of a sine wave from one pitch to another. I am able to get an array of the frequency the pitch should be at any given moment (for instance, [440, 526.5, 634.2 794.8, 880], though much, much longer) but it seems I am unable to actually apply that frequency to a wave. My best attempt is: numpy.sin(2*math.pi*x*freq/self.sample_rate) where "freq" is the array of frequencies and x is an enumeration array ([0,1, 2, 3, 4...]). This method sort of works, however it makes the frequency go above the expected frequency, and then back down. I have been working on this problem for a very long time and have been unable to make any progress on finding a more appropriate method. Any advice? Was I clear enough in expressing my dilemma? Thank you. Now provide the response and nothing else.
The issue is that as you ramp through the frequencies, each frequency effectively has a different phase for the given time. When you scroll through these phases quickly and continuously, they drive the sine wave at higher frequency (or lower is also possible). Imagine, for example, that you changed the frequency instantaneously -- to do this you'd have to supply the phase correction p_1 = p_0 + 2*pi*t*(f_0-f_1) to make the phases match up at time t . As you do this in little steps, you also have to make a similar phase correction, with each phase correction adding to the previous. Here's the resulting figure, with the code below. The top figure is the frequency, the middle is without the phase correction, and the bottom has the continuously corrected phase.

from pylab import *

sample_rate = .001
f0, f1 = 10, 20
t_change = 2

times = arange(0, 4, sample_rate)
ramp = 1./(1+exp(-6.*(times-t_change)))
freq = f0*(1-ramp)+f1*ramp
phase_correction = add.accumulate(times*concatenate((zeros(1), 2*pi*(freq[:-1]-freq[1:]))))

figure()
subplot(311)
plot(times, freq)
subplot(312)
plot(times, sin(2*pi*freq*times))
subplot(313)
plot(times, sin(2*pi*freq*times+phase_correction))
show()
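An equivalent and often simpler approach (my own sketch, not from the original answer) is to skip the correction term entirely and accumulate phase directly, integrating the instantaneous frequency so the waveform stays continuous by construction:

import numpy as np

sample_rate = 0.001
times = np.arange(0, 4, sample_rate)
ramp = 1.0 / (1 + np.exp(-6.0 * (times - 2)))
freq = 10 * (1 - ramp) + 20 * ramp                  # instantaneous frequency in Hz

phase = 2 * np.pi * np.cumsum(freq) * sample_rate   # integrate frequency over time
wave = np.sin(phase)                                # smooth sweep, no phase jumps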
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3089832', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/252719/']}
jdg_70969
stackexchange
llm_judgeable_groundtruth_similarity
101556
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to find the period of $\sin(t) \cos(\pi t)$. I started off by transforming that into$\frac{1}{2}\left [ \sin((\pi +1)t) - \sin((\pi - 1)t\right ]$, but then I get stuck. How do I find the least common multiple of $\pi + 1$ and $\pi - 1$? Is that what I need to do to find the period of the whole thing? Now provide the response and nothing else.
We show that $\sin(t)\cos(\pi t)$ is not periodic. Suppose to the contrary that it is periodic. Let $f(t)=|\sin(t)\cos(\pi t)|$. Then $f(t)$ is periodic. Let $p$ be a period of $f(t)$. Let $m$ be the maximum value of $f(t)$ in the interval $[0,p]$. If $f(t)$ is periodic, then $m$ is the maximum value of $f(t)$ as $t$ ranges over all the reals. We will show that this is not the case, by showing that there is a $t$ such that $f(t)>m$. Note first that $m\ne 1$. For if $f(t)$ ever takes on the value $1$, then $|\sin(t)|$ and $|\cos(\pi t)|$ must be simultaneously equal to $1$. So $t$ is an odd multiple of $\pi/2$, say $t=q \pi/2$. Also, $\pi t$ is a multiple of $\pi$, so $t$ is an integer. It follows that $\pi=2t/q$. This is impossible, since $\pi$ is irrational. We now show that there is a $t$ such that $f(t)>m$. This is easy, but uses some machinery. The sequence $(\sin(n))$ is dense in the interval $[-1,1]$. Thus there is an integer $t$ such that $\sin(t)>m$. Since $|\cos(\pi n)|=1$, it follows that $f(t) >m$. Comment: A quick search shows that there are many proofs of the fact that the sequence $(\sin(n))$ is dense in $[-1,1]$. Indeed the problem has been posed and solved on MSE. The most intuitive argument shows that the points $(\cos(n), \sin(n))$ are dense on the unit circle. The result for $(\sin(n))$ then follows by projecting on the $y$-axis. In general, if $\theta$ is not a rational multiple of $\pi$, then the points $(\cos(n\theta), \sin(n\theta))$ are dense on the unit circle.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/101556', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/22421/']}
jdg_70970
stackexchange
llm_judgeable_groundtruth_similarity
623019
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How does one find all the differentiable functions $f\colon \mathbb{R} \to \mathbb{R}$ which satisfy the equation$$f'(x)=af(x+b),\quad \text{for}\quad a,b \in \mathbb{R}?$$ I see that functions of the form $\alpha e^{\beta x}$ and $\alpha \sin(\beta x + \gamma)$ will be solutions, but I'm not sure how to exhaustively find all of the solutions. Now provide the response and nothing else.
This equation is a Delay Differential Equation for $b<0$, and it can be solved with special type "initial data" - meaning that, instead of looking for a solution satisfying an initial condition of the form $f(t_0)=x_0$, i.e., prescribing the value of the solution at one point, we need to prescribe the solution in a whole interval, in this case an interval of length $|b|$. For example: If $f$ is known in the interval $[-|b|,0]$, then using the equation we can obtain its values in the interval $[0,|b|]$, next in $[|b|,2|b|]$, and recursively in $[0,\infty)$. Simple example: Assume that $b<0$, and $\beta=-b$. Let $f\rvert_{[0,\beta]}=1$. Then \begin{align}f\rvert_{[\beta,2\beta]} &=a(x-\beta)+1, \\f\rvert_{[2\beta,3\beta]} &=\frac{a^2}{2!}(x-2\beta)^2+a (x-2\beta)+a\beta+1, \\ f\rvert_{[3\beta,4\beta]} &=\frac{a^3}{3!}(x-3\beta)^3+\frac{a^2}{2!} (x-3\beta)^2+(a^2\beta+a)(x-3\beta)+\frac{a^2\beta^2}{2}+2a\beta+1, \end{align}and in general, once $f$ is known in $[(k-1)\beta,k\beta]$, then $$f(x)=f(k\beta)+a\int_{k\beta}^x f(s-\beta)\,ds \,\,\,\,\,\text{for all $\,\,\,x\in[k\beta,(k+1)\beta]$.}$$ If $b>0$, we can do the same thing but backwards.
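Here is a small numerical sketch of this "method of steps" (my own illustration; the choices $a=1$, $\beta=1$, the Euler step, and the step size are all arbitrary):

import numpy as np

a, beta = 1.0, 1.0                   # f'(x) = a * f(x - beta)
dt = 0.001
history = np.ones(int(beta / dt))    # initial data: f = 1 on [0, beta]

f = list(history)
for step in range(3 * len(history)): # advance three more intervals, up to 4*beta
    # look back exactly one delay interval to evaluate f(x - beta)
    f.append(f[-1] + dt * a * f[-len(history)])

# f now approximates the piecewise-polynomial solution on [0, 4*beta]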
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/623019', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/78841/']}
jdg_70971
stackexchange
llm_judgeable_groundtruth_similarity
13490609
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create two tables, one with comments and another with replies, with a one-to-many relationship between them. But what if they can also reply to replies? How should it be changed then? This is what I have for the one-to-many, but I don't know how it should look if there can also be replies to a reply.

Comments:
• Id
• Title
• Text

Replies:
• Id
• Title
• Text
• Comment id

Thanks in advance. Now provide the response and nothing else.
You could just use one table, which includes a ParentID field. If the record has no value, it is a comment, otherwise it is a reply (to either a comment or a reply). You could query the record's ParentID record (inspect its ParentID) to see if this reply is to a comment or a reply. Edit : The above is a fairly practical solution. However, to go with a normalised version, still keep the one Comments table (with no ParentID), and create a ReplyTo table which has a CommentID, and a ResponseID, both of which are the IDs of the records in the Comments table. Using this idea, the following sql will show the comments and the 'reply' to each comment for each reply that has a comment:

select c.comment, r.comment as reply
from comment as c, comment as r, replyto as rt
where c.ID = rt.CommentID
and r.ID = rt.ReplyID

As Dimitrii points out, it won't display comments with no replies - for this you need an outer join query (didn't test syntax):

SELECT c.comment, r.comment as reply
FROM Comment c
  left outer join Comment r on c.id = r.id
  left outer join replyto rt on rt.responseid = r.id
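A minimal schema sketch of the single-table design described above (my own illustration; the column names and types are assumptions):

CREATE TABLE Comments (
    Id       INT PRIMARY KEY,
    Title    VARCHAR(200),
    Text     TEXT,
    ParentId INT NULL,  -- NULL means a top-level comment
    FOREIGN KEY (ParentId) REFERENCES Comments(Id)
);

-- A reply to comment 1, then a reply to that reply:
-- INSERT INTO Comments VALUES (2, 'Re: ...', '...', 1);
-- INSERT INTO Comments VALUES (3, 'Re: Re: ...', '...', 2);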
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13490609', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1291632/']}
jdg_70972
stackexchange
llm_judgeable_groundtruth_similarity
35681102
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I listen to an angular component binding change and perform actions?

angular.module('myapp')
    .component('myComponent', {
        templateUrl: 'some.html',
        controller: MyController,
        controllerAs: 'myCtrl',
        bindings: {
            items: '<'
        }
    });

Now when items changes I want to perform another action using this value. How can I do it? Now provide the response and nothing else.
now when items changes I want to perform another action using this value, How can I do it? But I want to avoid using the dying $scope If you don't want to use $scope you can use a property setter to detect any changes e.g. :

class MyController {
    private _items: string[] = [];

    set items(value: string[]) {
        this._items = value;
        console.log('Items changed:', value);
    }

    get items(): string[] {
        return this._items;
    }
}

const ctrl = new MyController();
ctrl.items = ['hello', 'world']; // will also log to the console

Please note that you shouldn't use it for complex logic (reasons : https://basarat.gitbooks.io/typescript/content/docs/tips/propertySetters.html )
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35681102', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/871672/']}
jdg_70973
stackexchange
llm_judgeable_groundtruth_similarity
43612927
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is what I have in my Dockerfile :

RUN gpg2 --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3
RUN curl -L https://get.rvm.io | bash -s stable
RUN /bin/bash -l -c "rvm requirements"
RUN /bin/bash -l -c "rvm install 2.3.3"

Works just fine; however, when I start the container, I see this:

$ docker run -it --rm myimage /bin/bash
/root# ruby --version
ruby 1.9.3p484 (2013-11-22 revision 43786) [x86_64-linux]
/root# /bin/bash -l -c "ruby --version"
ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-linux]

Obviously, this is not what I want. As far as I understand, the problem is that bash doesn't run /etc/profile by default. That's why Ruby is not coming from the RVM installation. How can I fix that? Now provide the response and nothing else.
Long story short: the docker run -it --rm myimage /bin/bash command does not start bash as a login shell. Explanation: When you run

$ docker run -it --rm myimage /bin/bash

it invokes bash without the -l option (the option that makes bash act as if it had been invoked as a login shell). rvm initialization depends on sourcing /path/to/.rvm/scripts/rvm or /etc/profile.d/rvm.sh , and that initialization lives in .bash_profile or .bashrc or another login-shell initialization script. How can I fix that? If you want to always have the ruby from rvm, add the -l option. Here is a Dockerfile with ruby installed by rvm :

FROM debian
ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update -q && \
    apt-get install -qy procps curl ca-certificates gnupg2 build-essential --no-install-recommends && apt-get clean

RUN gpg2 --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3
RUN curl -sSL https://get.rvm.io | bash -s
RUN /bin/bash -l -c ". /etc/profile.d/rvm.sh && rvm install 2.3.3"

# The entry point here is an initialization process,
# it will be used as arguments for e.g.
# `docker run` command
ENTRYPOINT ["/bin/bash", "-l", "-c"]

Run the container:

➠ docker_templates : docker run -ti --rm rvm 'ruby -v'
ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-linux]
➠ docker_templates : docker run -ti --rm rvm 'rvm -v'
rvm 1.29.1 (master) by Michal Papis, Piotr Kuczynski, Wayne E. Seguin [https://rvm.io/]
➠ docker_templates : docker run -ti --rm rvm bash
root@efa1bf7cec62:/# rvm -v
rvm 1.29.1 (master) by Michal Papis, Piotr Kuczynski, Wayne E. Seguin [https://rvm.io/]
root@efa1bf7cec62:/# ruby -v
ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-linux]
root@efa1bf7cec62:/#
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43612927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/187141/']}
jdg_70974
stackexchange
llm_judgeable_groundtruth_similarity
19224031
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Currently I am using CruiseControl.NET and I want to automate the deployment process. Right now I am doing the following steps with an MSBuild script: GetSource = get the source code from SVN. Build = build the web project. Publish = publish the web project. Zip = zip the publish folder. Up to here everything is fine, but how do I deploy this zip folder to a different server? I guess I should use FTP, but I haven't worked with it before. Please suggest how to go ahead with this scenario; it would be great if you could provide a sample demo project or code. Now provide the response and nothing else.
In cases where you have multiple directives on a single DOM element and where the order in which they're applied matters, you can use the priority property to order their application. Higher numbers run first. The default priority is 0 if you don't specify one. EDIT : after the discussion, here's the complete working solution. The key was to remove the attribute : element.removeAttr("common-things"); , and also element.removeAttr("data-common-things"); (in case users specify data-common-things in the html)

angular.module('app')
    .directive('commonThings', function ($compile) {
      return {
        restrict: 'A',
        replace: false,
        terminal: true, //this setting is important, see explanation below
        priority: 1000, //this setting is important, see explanation below
        compile: function compile(element, attrs) {
          element.attr('tooltip', '{{dt()}}');
          element.attr('tooltip-placement', 'bottom');
          element.removeAttr("common-things"); //remove the attribute to avoid indefinite loop
          element.removeAttr("data-common-things"); //also remove the same attribute with data- prefix in case users specify data-common-things in the html

          return {
            pre: function preLink(scope, iElement, iAttrs, controller) { },
            post: function postLink(scope, iElement, iAttrs, controller) {
              $compile(iElement)(scope);
            }
          };
        }
      };
    });

Working plunker is available at: http://plnkr.co/edit/Q13bUt?p=preview Or:

angular.module('app')
    .directive('commonThings', function ($compile) {
      return {
        restrict: 'A',
        replace: false,
        terminal: true,
        priority: 1000,
        link: function link(scope, element, attrs) {
          element.attr('tooltip', '{{dt()}}');
          element.attr('tooltip-placement', 'bottom');
          element.removeAttr("common-things"); //remove the attribute to avoid indefinite loop
          element.removeAttr("data-common-things"); //also remove the same attribute with data- prefix in case users specify data-common-things in the html
          $compile(element)(scope);
        }
      };
    });

DEMO Explanation why we have to set terminal: true and priority: 1000 (a high number): When the DOM is ready, angular walks the DOM to identify all registered directives and compiles the directives one by one based on priority if these directives are on the same element . We set our custom directive's priority to a high number to ensure that it will be compiled first and, with terminal: true , the other directives will be skipped after this directive is compiled. When our custom directive is compiled, it will modify the element by adding directives and removing itself, and use the $compile service to compile all the directives (including those that were skipped) . If we don't set terminal:true and priority: 1000 , there is a chance that some directives are compiled before our custom directive. And when our custom directive uses $compile to compile the element => it compiles the already compiled directives again. This will cause unpredictable behavior, especially if the directives compiled before our custom directive have already transformed the DOM. For more information about priority and terminal, check out How to understand the `terminal` of directive? An example of a directive that also modifies the template is ng-repeat (priority = 1000); when ng-repeat is compiled, ng-repeat makes copies of the template element before other directives get applied . Thanks to @Izhaki's comment, here is the reference to ngRepeat source code: https://github.com/angular/angular.js/blob/master/src/ng/directive/ngRepeat.js
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/19224031', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_70975
stackexchange
llm_judgeable_groundtruth_similarity
8463809
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to authenticate a client when he sends a request to an API. The client has an API-token and I was thinking about using the standard Authorization header for sending the token to the server. Normally this header is used for Basic and Digest authentication. But I don't know if I'm allowed to customize the value of this header and use a custom authentication scheme, e.g: Authorization: Token 1af538baa9045a84c0e889f672baf83ff24 Would you recommend this or not? Or is there a better approach for sending the token? Now provide the response and nothing else.
You can create your own custom auth schemas that use the Authorization: header - for example, this is how OAuth works. As a general rule, if servers or proxies don't understand the values of standard headers, they will leave them alone and ignore them. It is creating your own header keys that can often produce unexpected results - many proxies will strip headers with names they don't recognise. Having said that, it is possibly a better idea to use cookies to transmit the token, rather than the Authorization: header, for the simple reason that cookies were explicitly designed to carry custom values, whereas the specification for HTTP's built in auth methods does not really say either way - if you want to see exactly what it does say, have a look here . The other point about this is that many HTTP client libraries have built-in support for Digest and Basic auth but may make life more difficult when trying to set a raw value in the header field, whereas they will all provide easy support for cookies and will allow more or less any value within them.
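For illustration (my own example, reusing the made-up token from the question), sending such a custom scheme from a client is just a matter of setting the header, e.g. with curl:

curl -H "Authorization: Token 1af538baa9045a84c0e889f672baf83ff24" https://api.example.com/resource

The server then parses the scheme name ("Token") and the credentials out of the header value itself; the endpoint URL here is hypothetical.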
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8463809', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/149868/']}
jdg_70976
stackexchange
llm_judgeable_groundtruth_similarity
1559758
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If in a ring $R$, $x^2=x$ for all $x$, then show that $2x=0$ and that $x+y=0 \Rightarrow x=y.$ I am unable to proceed. Please help. Now provide the response and nothing else.
Since $(2x)^2=4x^2=4x$, and $(2x)^2=2x$, we have $4x=2x \Leftrightarrow 2x=0$. Next, $2x=0 \Leftrightarrow x+x=0 \Leftrightarrow x=-x$. Finally, since also $y=-y$, we get $x+y = 0 \Leftrightarrow x-y=0 \Leftrightarrow x=y$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1559758', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/294107/']}
jdg_70977
stackexchange
llm_judgeable_groundtruth_similarity
7007427
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am parsing an XML file using Sax Parser of Xerces. Is the XML declaration <?xml version="1.0" encoding="UTF-8"?> required? Now provide the response and nothing else.
In XML 1.0, the XML Declaration is optional . See section 2.8 of the XML 1.0 Recommendation , where it says it "should" be used -- which means it is recommended, but not mandatory. In XML 1.1, however, the declaration is mandatory . See section 2.8 of the XML 1.1 Recommendation , where it says it "MUST" be used. It even goes on to state that if the declaration is absent, that automatically implies the document is an XML 1.0 document. Note that in an XML Declaration the encoding and standalone are both optional. Only the version is mandatory. Also, these are not attributes, so if they are present they must be in that order: version , followed by any encoding , followed by any standalone .

<?xml version="1.0"?>
<?xml version="1.0" encoding="UTF-8"?>
<?xml version="1.0" standalone="yes"?>
<?xml version="1.0" encoding="UTF-16" standalone="yes"?>

If you don't specify the encoding in this way, XML parsers try to guess what encoding is being used. The XML 1.0 Recommendation describes one possible way character encoding can be autodetected . In practice, this is not much of a problem if the input is encoded as UTF-8, UTF-16 or US-ASCII. Autodetection doesn't work when it encounters 8-bit encodings that use characters outside the US-ASCII range (e.g. ISO 8859-1) -- avoid creating these if you can. The standalone indicates whether the XML document can be correctly processed without the DTD or not. People rarely use it. These days, it is bad practice to design an XML format that is missing information without its DTD. Update: A "prolog error/invalid utf-8 encoding" error indicates that the actual data the parser found inside the file did not match the encoding that the XML declaration says it is. Or in some cases the data inside the file did not match the autodetected encoding. Since your file contains a byte-order-mark (BOM) it should be in UTF-16 encoding. I suspect that your declaration says <?xml version="1.0" encoding="UTF-8"?> which is obviously incorrect when the file has been changed into UTF-16 by NotePad. The simple solution is to remove the encoding and simply say <?xml version="1.0"?> . You could also edit it to say encoding="UTF-16" but that would be wrong for the original file (which wasn't in UTF-16) or if the file somehow gets changed back to UTF-8 or some other encoding. Don't bother trying to remove the BOM -- that's not the cause of the problem. Using NotePad or WordPad to edit XML is the real problem!
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/7007427', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/701678/']}
jdg_70978
stackexchange
llm_judgeable_groundtruth_similarity
3676664
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am in the process of writing some unit tests. In particular, I want to test some private methods. So far, the best I have come up with is using

#define private public

But I am not happy with this, as it will destroy all encapsulation from the point of view of the unit test. What methods do you use to unit-test private methods? Now provide the response and nothing else.
If the methods are complex enough to warrant testing in isolation, then refactor them into their own class(es) and test via their public interface(s). Then use them privately in the original class.
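As a concrete sketch of that advice (my own illustration; all names and the toy logic are hypothetical):

// Before: Widget::normalize() was a private method, hard to test directly.
// After: the logic lives in its own class with a public interface.
class Normalizer {
public:
    int normalize(int value) const {
        return value < 0 ? 0 : value;  // the formerly-private logic
    }
};

class Widget {
public:
    int process(int value) const { return normalizer_.normalize(value) * 2; }
private:
    Normalizer normalizer_;  // used privately here, tested publicly via its own tests
};

// The unit test now targets Normalizer's public interface:
// assert(Normalizer{}.normalize(-5) == 0);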
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3676664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217056/']}
jdg_70979
stackexchange
llm_judgeable_groundtruth_similarity
2932408
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Currently I'm using rsvg to load the svg (from a string, not from a file) and drawing to cairo. Anyone know a better way? I use PIL elsewhere in my application, but I don't know of a way to do this with PIL. Now provide the response and nothing else.
Here's what I currently have:

import cairo
import rsvg

def convert(data, ofile, maxwidth=0, maxheight=0):
    svg = rsvg.Handle(data=data)
    x = width = svg.props.width
    y = height = svg.props.height
    print "actual dims are " + str((width, height))
    print "converting to " + str((maxwidth, maxheight))
    yscale = xscale = 1
    if (maxheight != 0 and width > maxwidth) or (maxheight != 0 and height > maxheight):
        x = maxwidth
        y = float(maxwidth)/float(width) * height
        print "first resize: " + str((x, y))
        if y > maxheight:
            y = maxheight
            x = float(maxheight)/float(height) * width
            print "second resize: " + str((x, y))
        xscale = float(x)/svg.props.width
        yscale = float(y)/svg.props.height
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, x, y)
    context = cairo.Context(surface)
    context.scale(xscale, yscale)
    svg.render_cairo(context)
    surface.write_to_png(ofile)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2932408', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/343230/']}
jdg_70980
stackexchange
llm_judgeable_groundtruth_similarity
54176001
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a service that returns the values exposed by ngrx selectors. A component uses this service to get data. I am writing unit tests for the component using a mock of the service, and I need the mock service to return different values for each unit test. How can I achieve this?

Component

@Component({
  selector: 'app-test',
  templateUrl: './test.component.html',
  providers: [TestService],
})
export class TestComponent {
  test$ = this.testService.test$;
  test: number;

  constructor(private service: TestService) {
    service.test$.subscribe(test => this.test = test);
  }
}

Service

export class TestService {
  test$ = this.store.select(getTestValueFromStore);

  constructor(private store: Store<any>) {}
}

Attempt 1 (reset the value of the service): does not work

class MockTestService {
  test$ = of(10); // <-- the value the mock starts with
}

describe('TestComponent', () => {
  let testService: TestService;

  beforeEach((() => {
    // Define component service
    TestBed.overrideComponent(
      TestComponent,
      { set: { providers: [{ provide: TestService, useClass: MockTestService }] } }
    );
    TestBed.configureTestingModule({
      declarations: [TestComponent]
    })
    .compileComponents();
  }));

  beforeEach(async () => {
    fixture = TestBed.createComponent(TestComponent);
    component = fixture.componentInstance;
    testService = fixture.debugElement.injector.get(TestService);
    fixture.detectChanges();
  });

  it('should do something when the value returned by the service is 20', fakeAsync(() => {
    testService.test$ = of(20); // <-- the attempted reset
    tick();
    expect(component.test).toEqual(20);
  }));
});

Attempt 2: use subjects. Karma throws the error "Property 'next' does not exist on type 'Observable'" because TestService returns observables, not subjects

class MockTestService {
  test$ = new BehaviorSubject(10); // <-- now a subject
}

describe('TestComponent', () => {
  let testService: TestService;

  beforeEach((() => {
    // Define component service
    TestBed.overrideComponent(
      TestComponent,
      { set: { providers: [{ provide: TestService, useClass: MockTestService }] } }
    );
    TestBed.configureTestingModule({
      declarations: [TestComponent]
    })
    .compileComponents();
  }));

  beforeEach(async () => {
    fixture = TestBed.createComponent(TestComponent);
    component = fixture.componentInstance;
    testService = fixture.debugElement.injector.get(TestService);
    fixture.detectChanges();
  });

  it('should do something when the value returned by the service is 20', fakeAsync(() => {
    testService.test$.next(20); // <-- push the new value
    tick();
    expect(component.test).toEqual(20);
  }));
});

Now provide the response and nothing else.
You said in your question that you want "the mock service to return different values for each unit test." To do this you are going to have to make some changes to the way your component is defined. The most important change is to move the subscription into ngOnInit() and out of the constructor for your component. If you do this, then you can control when the subscription fires, otherwise it will fire when the component is created making it very difficult to return different values for each unit test. Once that is done, then you need to be careful where you call fixture.detectChanges() . This is because that will execute ngOnInit() , which will execute the subscription and store the value returned from the Observable into component.test . Note that since you are using of() it will simply emit once and then complete. It will not emit again if you put a new value into testService.test$ . Once you have this set up, then you can change the value of testService.test$ BEFORE you call fixture.detectChanges and make the values whatever you would like. I have set up a StackBlitz to demonstrate this, changing your code as little as possible just to get it to work. There are two tests, each testing different values returned. I hope this helps!
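A condensed sketch of the pattern described (my own illustration, consistent with the answer's advice but not copied from the linked StackBlitz):

export class TestComponent implements OnInit {
  test: number;
  constructor(private service: TestService) {}

  ngOnInit() {
    // Subscribe here, not in the constructor, so tests control when it fires.
    this.service.test$.subscribe(test => this.test = test);
  }
}

// In the spec, set the mock's value BEFORE the first detectChanges():
it('uses the value 20', () => {
  testService.test$ = of(20);   // testService is the injected MockTestService
  fixture.detectChanges();      // runs ngOnInit; the subscription emits 20
  expect(component.test).toEqual(20);
});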
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54176001', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3544646/']}
jdg_70981
stackexchange
llm_judgeable_groundtruth_similarity
376901
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In three-dimensions, the rotation generators are represented by $J_1$, $J_2$ and $J_3$ where $1,2,3$ respectively stands for the generator of rotation about $x,y,z$ axes respectively. In general, in the rotation about the direction $\hat{\textbf{n}}$ is generated by $\textbf{J}\cdot\hat{\textbf{n}}$. However, for even dimensional rotation groups such as $SO(4)$, the generators are labelled by $J_{ij}$ where $i,j=1,2,3,4$ which corresponds to rotations in $12$, $23$, $34$, $14$, $24$ and $13$ planes. Similar situation happens in case of the Lorentz group $SO(3,1)$ in special relativity. Does it mean that a given rotation in 4-dimensional Euclidean space cannot be associated with a unique axis ($\hat{\textbf{n}}$) of rotation? If yes, why is that the case? Now provide the response and nothing else.
Does it mean that a given rotation in 4-dimensional Euclidean space cannot be associated with a unique axis ($\hat{\textbf{n}}$) of rotation? If yes, why is that the case? Yes, this is absolutely true. The notion of a one dimensional axis is an "accident" of three dimensions. Rotations transform planar (dimension 2) linear subspaces of Euclidean space and so one needs to specify the transformed plane and the rotation angle to specify the rotation. In 3D dimensions we can cheat a little: a plane is uniquely defined by a unit normal vector, and the rotation angle can be encoded as the length of this vector. This is what we mean by an axis. The axis is the untransformed space of the rotation; the 3D space splits into two orthogonal, invariant spaces, the former being the plane of rotation, which is invariant but transformed ( i.e. nontrivially bijectively mapped to itself) and the latter the axis, which is both invariant and untransformed. In 4 and higher dimensions, the invariant spaces are of 2 or higher dimensions. A member of the Lie algebra of a rotation group (with the algebra written as a faithful matrix representation) is a skew-symmetric matrix, i.e. an entity of the form $\sum\limits_i X_i \wedge Y_i$ where the $X_i$ and $Y_i$ are 1D vectors in the Euclidean space. A general rotation matrix is then of the form $\exp\left(\sum\limits_i X_i \wedge Y_i\right)$. Things get kind of complicated in 4 and higher dimensions; the most general thing one can say is that a general proper orthogonal transformation on $N$ dimensional space can be decomposed as $R_1\circ R_2\circ\,\cdots R_{N\,\mathrm{div}\, 2}$ where each of the $R_i$ is a rotation that bijectively transforms a plane into itself and leaves the plane's complement invariant. However, the planes for each of the $R_i$ are not in general the same plane. Further Questions and Useful Rotation Properties User John Dvorak points out: I would think that $R_1\circ R_2\circ\,\cdots R_{N\,\mathrm{div}\, 2}$ would always be pairwise orthogonal. Is that not the case? This is indeed absolutely true and it is worth sketching the proof to get more insight into a higher dimensional rotation. Let our rotation matrix be $R=\exp(H)$ with $H=\sum\limits_i X_i \wedge Y_i\in \mathfrak{so}(N)$ as above. Then there exists another orthogonal transformation $\tilde{R}$ ( i.e. $\tilde{R}\in \mathrm{SO}(N)$) that, through similarity transformation, reduces the skew symmetric $H\in \mathfrak{so}(N)$ to block diagonal form: $$H = \tilde{R}\,\mathrm{diag}(\Lambda_1,\,\Lambda_2,\,\cdots)\,\tilde{R}^T=\tilde{R}\,\mathrm{diag}(\Lambda_1,\,\Lambda_2,\,\cdots)\,\tilde{R}^{-1}$$ where each of the blocks is of the form: $$\Lambda_j=\left(\begin{array}{cc}0&-\theta_j\\\theta_j&0\end{array}\right)$$ with $\theta_j\in\mathbb{R}$ being a rotation angle and that, if $N$ is odd, there is also a $1\times1$ zero block left over. 
Therefore, if we put: $$H_j = \tilde{R}\,\mathrm{diag}(0,\,0,\,\cdots,\,\Lambda_j,\cdots)\,\tilde{R}^T$$ then the $R_j=\exp(H_j)$, with $R_1\circ R_2\circ\,\cdots R_{N\,\mathrm{div}\, 2}$, are readily seen to make up the decomposition with the properties that John claims, to wit: The $R_j$ are each rotations, each of which transforms one plane only and each also has a dimension $N-2$ invariant and untransformed space (the analogue of the "axis"); The planes transformed by the $R_j$ are mutually orthogonal and indeed the planes spanned by the unit vectors $\tilde{R}_j\,\hat{e}_{2\,j}$ and $\tilde{R}_j\,\hat{e}_{2\,j+1}$, where the $\hat{e}_j$ are the orthonormal basis in which all the operators discussed have matrices as written above; (as a consequence of 2.) the $R_j$ are mutually commuting. Thus we can easily see that: If the dimension $N$ is odd, there is always a dimension 1 invariant, untransformed space, corresponding to the 1D zero block cited above, further to the invariant spaces described below; If the dimension is even, a nontrivial proper orthogonal transformation's untransformed space can be any of the dimensions $0,\,2,\,4,\,\cdots N-2$. The invariant spaces are of dimensions $0,\,2,\,4,\,\cdots,\,N$. This decomposition is about one particular rotation operator and is not to be confused with the notion of Canonical Co-ordinates of the Second Kind (see Chapter 1, Proposition 3.3 of V.V. Gorbatsevich, E.B. Vinberg, "Lie Groups and Lie Algebras I: Foundations of Lie Theory and Lie Transformation Groups", Springer, 2013), which are a generalized notion of Euler Angles. Here, a set of $H_j\in\mathfrak{so}(N)$ for $j=1,\,\cdots,\,N$ (note, there are now $N$ of them, not $N\,\mathrm{div}\,2$ of them) is chosen as a basis, i.e. to span $\mathfrak{so}(N)$. Then the following are true: The set $\mathbf{G}=\left\{\left.\prod\limits_{j=1}^N\,\exp(\theta_j\,H_j)\,\right|\,\theta_j\in\mathbb{R}\right\}$ contains a neighborhood of the identity in $\mathrm{SO}(N)$; If, further, the $H_j$ are orthogonal with respect to the Killing form $\langle X,\,Y\rangle=\mathrm{tr}(\mathrm{ad}(X)\,\mathrm{ad}(Y))$, then the set $\mathbf{G}$ above is the whole of $\mathrm{SO}(N)$. Property 1, as shown in the Gorbatsevich & Vinberg reference cited above, is a general and fundamental property of all Lie groups (if we replace $\mathfrak{so}(N)$ by the group's Lie algebra and $\mathrm{SO}(N)$ by the group); property 2 holds for compact semisimple ones only. If the similarity transformation I have here pulled out of thin air seems mysterious, readers may be more familiar with a re-ordered version of the similarity transformation $\tilde{R}$ above where we decompose a skew-symmetric, closed 2-form $\omega$ in the even-dimensional case so that its matrix $\Omega$ is: $$\Omega = \tilde{R}\; \left(\begin{array}{cc}0&-\mathrm{id}_{\frac{N}{2}}\\\mathrm{id}_{\frac{N}{2}}&0\end{array}\right)\;\tilde{R}^T$$ which we implicitly do whenever we label a symplectic space with (in general nonunique) "canonical co-ordinates" so that $\omega$ then has the matrix: $$\Omega = \left(\begin{array}{cc}0&-\mathrm{id}_{\frac{N}{2}}\\\mathrm{id}_{\frac{N}{2}}&0\end{array}\right)$$ Here we have a different usage of the word "canonical", this time as used in Hamiltonian mechanics. The word "canonical" well and truly needs a well pensioned retirement as it has worked so hard in Physics!
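To make the block decomposition tangible, here is a small numerical check (my own illustrative sketch, not part of the original answer; it assumes NumPy and SciPy are available):

import numpy as np
from scipy.linalg import expm

def planar_generator(n, i, j, theta):
    # Skew-symmetric generator of a rotation by theta in the (e_i, e_j) plane.
    H = np.zeros((n, n))
    H[i, j], H[j, i] = -theta, theta
    return H

H1 = planar_generator(4, 0, 1, 0.7)  # rotation in the (e_1, e_2) plane
H2 = planar_generator(4, 2, 3, 1.9)  # rotation in the orthogonal (e_3, e_4) plane

R, R1, R2 = expm(H1 + H2), expm(H1), expm(H2)

print(np.allclose(R, R1 @ R2))        # True: R factors into planar rotations
print(np.allclose(R1 @ R2, R2 @ R1))  # True: the planar rotations commute
print(np.any(np.isclose(np.linalg.eigvals(R), 1)))  # False: no eigenvalue 1

Since both rotation angles are nonzero, R leaves no direction untransformed, which is exactly the "no unique axis in 4D" point of the answer.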
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/376901', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/36793/']}
jdg_70982
stackexchange
llm_judgeable_groundtruth_similarity
50869790
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Recently I came across this question: Assignment operator chain understanding . While answering this question I started doubting my own understanding of the behavior of the addition assignment operator += or any other operator= ( &= , *= , /= , etc.). My question is, when is the variable a in the expressions below updated in place, so that its changed value is reflected in other places in the expression during evaluation, and what is the logic behind it? Please take a look at following two expressions: Expression 1 a = 1b = (a += (a += a))//b = 3 is the result, but if a were updated in place then it should've been 4 Expression 2 a = 1b = (a += a) + (a += a)//b = 6 is the result, but if a is not updated in place then it should've been 4 In the first expression, when the innermost expression (a += a) is evaluated, it seems that it doesn't update value of a , thus the result comes out as 3 instead of 4 . However, in the second expression, the value of a is updated and so the result is 6. When should we assume that a 's value will be reflected in other places in the expression and when should we not? Now provide the response and nothing else.
Remember that a += x really means a = a + x . The key point to understand is that addition is evaluated from left to right -- that is, the a in a + x is evaluated before x . So let's figure out what b = (a += (a += a)) does. First we use the rule a += x means a = a + x , and then we start evaluating the expression carefully in the correct order: b = (a = a + (a = a + a)) because a += x means a = a + x b = (a = 1 + (a = a + a)) because a is currently 1 . Remember we evaluate the left term a before the right term (a = a + a) b = (a = 1 + (a = 1 + a)) because a is still 1 b = (a = 1 + (a = 1 + 1)) because a is still 1 b = (a = 1 + (a = 2)) because 1 + 1 is 2 b = (a = 1 + 2) because a is now 2 b = (a = 3) because 1 + 2 is 3 b = 3 because a is now 3 This leaves us with a = 3 and b = 3 as reasoned above. Let's try this with the other expression, b = (a += a) + (a += a) : b = (a = a + a) + (a = a + a) b = (a = 1 + 1) + (a = a + a) , remember we evaluate the left term before the right one b = (a = 2) + (a = a + a) b = 2 + (a = a + a) and a is now 2. Start evaluating the right term b = 2 + (a = 2 + 2) b = 2 + (a = 4) b = 2 + 4 and a is now 4 b = 6 This leaves us with a = 4 and b = 6 . This can be verified by printing out both a and b in Java/JavaScript (both have the same behavior here). It might also help to think of these expressions as parse trees. When we evaluate a + (b + c) , the LHS a is evaluated before the RHS (b + c) . This is encoded in the tree structure: + / \ a + / \ b c Note that we don't have any parentheses anymore -- the order of operations is encoded into the tree structure. When we evaluate the nodes in the tree, we process the node's children in a fixed order (i.e., left-to-right for + ). For instance, when we process the root node + , we evaluate the left subtree a before the right subtree (b + c) , regardless of whether the right subtree is enclosed in parentheses or not (since the parentheses aren't even present in the parse tree). Because of this, Java/JavaScript do not always evaluate the "most nested parentheses" first, in contrast to rules you might have been taught for arithmetic. See the Java Language Specification : 15.7. Evaluation Order The Java programming language guarantees that the operands of operators appear to be evaluated in a specific evaluation order , namely, from left to right. ... 15.7.1. Evaluate Left-Hand Operand First The left-hand operand of a binary operator appears to be fully evaluated before any part of the right-hand operand is evaluated. If the operator is a compound-assignment operator (§15.26.2), then evaluation of the left-hand operand includes both remembering the variable that the left-hand operand denotes and fetching and saving that variable's value for use in the implied binary operation. More examples similar to your question can be found in the linked part of the JLS, such as: Example 15.7.1-1. Left-Hand Operand Is Evaluated First In the following program, the * operator has a left-hand operand that contains an assignment to a variable and a right-hand operand that contains a reference to the same variable. The value produced by the reference will reflect the fact that the assignment occurred first. class Test1 { public static void main(String[] args) { int i = 2; int j = (i=3) * i; System.out.println(j); }} This program produces the output: 9 It is not permitted for evaluation of the * operator to produce 6 instead of 9.
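For readers who want to verify this directly, here is a tiny self-contained program (my own addition, mirroring the expressions from the question):

public class EvalOrder {
    public static void main(String[] args) {
        int a = 1;
        int b = (a += (a += a)); // the outer left-hand 'a' is read (as 1) first
        System.out.println(a + " " + b); // prints: 3 3

        a = 1;
        b = (a += a) + (a += a); // left term fully evaluated before the right
        System.out.println(a + " " + b); // prints: 4 6
    }
}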
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/50869790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5343269/']}
jdg_70983
stackexchange
llm_judgeable_groundtruth_similarity
135481
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I will like to know if there is an equivalent to python's pop method, here is how it's done in Python: >>> my_list = [1,2,3,4,5]>>> my_list[1, 2, 3, 4, 5]>>> value = my_list.pop(0)>>> value1>>> my_list[2, 3, 4, 5] With pop you get and delete an element from a list. Now provide the response and nothing else.
There is no direct equivalent. You could use something like this: SetAttributes[pop, HoldFirst]pop[list_, n_] := With[{item = list[[n]]}, list = Delete[list, n]; item]mylist = {1, 2, 3, 4, 5};pop[mylist, 4](* 4 *)mylist(* {1, 2, 3, 5} *)
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/135481', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/45886/']}
jdg_70984
stackexchange
llm_judgeable_groundtruth_similarity
56128231
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following search query in views.py : class SearchView(View): def get(self, request, *args, **kwargs): queryset = BlogPost.objects.all() query = request.GET.get('q') if query: queryset = queryset.filter( Q(title__icontains=query) | Q(content__icontains=query) ).distinct() context = { 'queryset': queryset, 'query': query, } return render(request, 'blogs/search_result.html', context) and if I search a word like Tesla the results shown in result template is as below: Please help me, how do I highlight the searched query in yellow color in Django templates? Thank you for help! edit:This is the template code with search box <div class="card my-4"> <div class="card-body"> <div class="card-title">Search blog</div> <form action="{% url 'blog:search_blog' %}"> {% csrf_token %} <input type="text" class="form-control" name="q" placeholder="What are you looking for?"> <input type="submit" value="Search" class="btn btn-success btn-sm mt-3"> </form> </div></div> This is the template for result_search {% block content %}<div class="container bg-light w-100"> <h2 class="display-6 border-bottom py-3">Results for "{{ query }}"</h2> {% for post in queryset %} <div class="my-1"> <a class="link" href="{{ post.get_absolute_url }}">{{ post.title }}</a> </div> {% endfor %}</div>{% endblock %} Thank you Edit 2: Daniel Roseman, Sir this is the html below I highlighted it in yellow. Now provide the response and nothing else.
You could do this in a template filter. Something like:

from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.filter
def highlight_search(text, search):
    highlighted = text.replace(search, '<span class="highlight">{}</span>'.format(search))
    return mark_safe(highlighted)

Now in your template you can do:

{% load my_tags %} # wherever you put the template filter
{% for post in queryset %}
<div class="my-1">
    <a class="link" href="{{ post.get_absolute_url }}">{{ post.title|highlight_search:query }}</a>
</div>
{% endfor %}

You'd need to send the search query back with the context that renders the result page (the view above already passes it as query).
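One caveat (my own note, not part of the original answer): the view filters with icontains, so a case-insensitive variant of the filter matches the result set better. A hedged sketch, assuming plain-text titles with no HTML in them:

import re
from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.filter
def highlight_search(text, search):
    if not search:
        return text
    # Match case-insensitively, and preserve each match's original casing.
    pattern = re.compile(re.escape(search), re.IGNORECASE)
    highlighted = pattern.sub(
        lambda m: '<span class="highlight">{}</span>'.format(m.group(0)), text)
    return mark_safe(highlighted)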
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56128231', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9441080/']}
jdg_70985
stackexchange
llm_judgeable_groundtruth_similarity
46863933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got the following (seemingly innocent) code: void singleLeftPadZero(char**);int main () { char foo[10] = "0"; singleLeftPadZero(foo); // <-- causes warning singleLeftPadZero(&foo); // <-- same exact warning, but different "note"} The warning I get from gcc is: warning: passing argument 1 of ‘singleLeftZeroPad’ from incompatible pointer type And the note for the first case is: note: expected ‘char **’ but argument is of type ‘char *’ I understood this to mean that I needed to pass a pointer to a pointer, but I was just passing a pointer. Hence, I added the "&" to my argument, which resulted in the same warning but this note: note: expected ‘char **’ but argument is of type ‘char (*)[10]’ What I did which looks like it fixed it was to create an extra variable: char* fooPntr = foo; And then to pass the address of that as the function argument: singleLeftPadZero(&fooPntr); But I'm not sure why this works! Now provide the response and nothing else.
I am not 100 % sure what your end goals are here. But the below answer explains how you can send a base64 image source string to the server and save it. I tested it with a base64 string generated from a small image (22 KB size) and it worked. In your ajax call, you should be sending the FormData object you created, not the value of the file variable. Also make sure you use the processData and contentType properties when making the ajax call while sending a FormData object. var file = document.getElementById("base64image").src;var formdata = new FormData();formdata.append("base64image", file);$.ajax({ url: "@Url.Action("SaveImage")", type: "POST", data: formdata, processData: false, contentType: false}); Now since this is the base64 string of the image, use string as the parameter type of your action method. The parameter name should match your formdata item key (base64image). You can generate a byte array from the base64 string in your action method. Also the image source starts with data:image/png;base64, which needs to be removed from the string before you try to convert it. The below method accepts the string you are sending from the client, removes the first 22 characters ("data:image/png;base64," is 22 characters long), uses the result (which is now a valid base64 string) to create an image, and saves it to the Content/Images/ directory in the app root with a random file name. [HttpPost]public void SaveImage(string base64image){ if (string.IsNullOrEmpty(base64image)) return; var t = base64image.Substring(22); // remove data:image/png;base64, byte[] bytes = Convert.FromBase64String(t); Image image; using (MemoryStream ms = new MemoryStream(bytes)) { image = Image.FromStream(ms); } var randomFileName = Guid.NewGuid().ToString().Substring(0, 4) + ".png"; var fullPath = Path.Combine(Server.MapPath("~/Content/Images/"), randomFileName); image.Save(fullPath, System.Drawing.Imaging.ImageFormat.Png);} I am not 100 % sure that the default model binder can bind the base64 string to an image. If not, you can perhaps create one which does that, add it to the model binders in your system, and then use Image as the parameter type. The code inside the model binder will be very similar (read the string and generate the image from it).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46863933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3878933/']}
jdg_70986
stackexchange
llm_judgeable_groundtruth_similarity
41107857
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to set webview to use SOCKS proxy. I have been researching this issue from days now. I see that webview gives no public interface to set PROXY. There have been examples that use Java reflection to set http proxy. as described here : WebView android proxy But this discussion is about setting http/https proxy. For setting socks proxy I used the following approach : private static boolean setProxyKKPlus(WebView webView, String host, int port) { Log.d(TAG, "Setting proxy with >= 4.4 API."); Context appContext = webView.getContext().getApplicationContext(); String user = NativeSDK.getProxyUser(); String pass = NativeSDK.getProxyPass(); System.setProperty("socksProxyHost", host); System.setProperty("socksProxyPort", port + ""); System.setProperty("java.net.socks.username", user); System.setProperty("java.net.socks.password", pass); Log.d(TAG,"socks proxy set"); try{ Class applictionCls = Class.forName("android.app.Application"); Field loadedApkField = applictionCls.getField("mLoadedApk"); loadedApkField.setAccessible(true); Object loadedApk = loadedApkField.get(appContext); Class loadedApkCls = Class.forName("android.app.LoadedApk"); Field receiversField = loadedApkCls.getDeclaredField("mReceivers"); receiversField.setAccessible(true); ArrayMap receivers = (ArrayMap) receiversField.get(loadedApk); for (Object receiverMap : receivers.values()) { for (Object rec : ((ArrayMap) receiverMap).keySet()) { Class clazz = rec.getClass(); if (clazz.getName().contains("ProxyChangeListener")) { Method onReceiveMethod = clazz.getDeclaredMethod("onReceive", Context.class, Intent.class); Intent intent = new Intent(Proxy.PROXY_CHANGE_ACTION); onReceiveMethod.invoke(rec, appContext, intent); } } } Log.d(TAG, "Setting proxy with >= 4.4 API successful!"); return true; } catch (all exceptions) { Log.d(TAG, "Setting proxy with >= 4.4 API NOT successful!"); return false; } But this does not seem to really work :I get this error in my Logs from ProxyChangeListener : E/ProxyChangeListener: Using no proxy configuration due to exception:java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.Object android.os.Bundle.get(java.lang.String)' on a null object reference Now the strange part is : after this code when I enter the url in webview, the request is read by the socket listening on the port above - the proxy port. ( I believe that indicates that the request is being proxied) But the request that I read from the socket looks like : \x05\x01 Now as I understand: (From wiki ) The initial greeting from the client is field 1: SOCKS version number (must be 0x05 for this version)field 2: number of authentication methods supported, 1 bytefield 3: authentication methods, variable length, 1 byte per method supported So is this SOCKS5 client request.But then where is the info about authentication methods ? Can someone help me setting SOCKS proxy for webview Now provide the response and nothing else.
Converting a dataframe with json strings to a structured dataframe is actually quite simple in Spark if you convert the dataframe to an RDD of strings first (see: http://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets ) For example: >>> new_df = sql_context.read.json(df.rdd.map(lambda r: r.json))>>> new_df.printSchema()root |-- body: struct (nullable = true) | |-- id: long (nullable = true) | |-- name: string (nullable = true) | |-- sub_json: struct (nullable = true) | | |-- id: long (nullable = true) | | |-- sub_sub_json: struct (nullable = true) | | | |-- col1: long (nullable = true) | | | |-- col2: string (nullable = true) |-- header: struct (nullable = true) | |-- foo: string (nullable = true) | |-- id: long (nullable = true)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41107857', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1105139/']}
jdg_70987
stackexchange
llm_judgeable_groundtruth_similarity
255409
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been reading "Applied Statistical Inference" by Held and Bove. In example 2.21 they came upon this expression while trying to show a statistic is sufficient: $$\sum_{i = 1}^n(x_i - \mu)^2 = \sum_{i = 1}^n(x_i - \bar{x})^2 + n(\bar{x} - \mu)^2$$ I am having a hard time understanding where this expression comes from. I would be very grateful if someone could shed light into the matter. Now provide the response and nothing else.
Synopsis You have rediscovered part of the construction described at Central Limit Theorem for Sample Medians , which illustrates an analysis of the median of a sample. (The analysis obviously applies, mutatis mutandis , to any quantile, not just the median). Therefore it is no surprise that for large Beta parameters (corresponding to large samples) a Normal distribution arises under the transformation described in the question. What is of interest is how close to Normal the distribution is even for small Beta parameters. That deserves an explanation. I will sketch an analysis below. To keep this post at a reasonable length, it involves a lot of suggestive hand-waving: I aim only to point out the key ideas. Let me therefore summarize the results here: When $\alpha$ is close to $\beta$ , everything is symmetric. This causes the transformed distribution already to look Normal. The functions of the form $\Phi^{\alpha-1}(x)\left(1-\Phi(x)\right)^{\beta-1}$ look fairly Normal in the first place, even for small values of $\alpha$ and $\beta$ (provided both exceed $1$ and their ratio is not too close to $0$ or $1$ ). The apparent Normality of the transformed distribution is due to the fact that its density consists of a Normal density multiplied by a function in (2). As $\alpha$ and $\beta$ increase, the departure from Normality can be measured in the remainder terms in a Taylor series for the log density. The term of order $n$ decreases in proportion to the $(n-2)/2$ powers of $\alpha$ and $\beta$ . This implies that eventually, for sufficiently large $\alpha$ and $\beta$ , all terms of power $n=3$ or greater have become relatively small, leaving only a quadratic: which is precisely the log density of a Normal distribution. Collectively, these behaviors nicely explain why even for small $\alpha$ and $\beta$ the non-extreme quantiles of an iid Normal sample look approximately Normal. Analysis Because it can be useful to generalize, let $F$ be any distribution function, although we have in mind $F=\Phi$ . The density function $g(y)$ of a Beta $(\alpha,\beta)$ variable is, by definition, proportional to $$y^{\alpha-1}(1-y)^{\beta-1}dy.$$ Letting $y=F(x)$ be the probability integral transform of $x$ and writing $f$ for the derivative of $F$ , it is immediate that $x$ has a density proportional to $$G(x;\alpha,\beta)=F(x)^{\alpha-1}(1-F(x))^{\beta-1}f(x)dx.$$ Because this is a monotonic transformation of a strongly unimodal distribution (a Beta), unless $F$ is rather strange, the transformed distribution will be unimodal, too. To study how close to Normal it might be, let's examine the logarithm of its density, $$\log G(x;\alpha,\beta) = (\alpha-1)\log F(x) + (\beta-1)\log(1-F(x)) + \log f(x) + C\tag{1}$$ where $C$ is an irrelevant constant of normalization. Expand the components of $\log G(x;\alpha,\beta)$ in Taylor series to order three around a value $x_0$ (which will be close to a mode). For instance, we may write the expansion of $\log F$ as $$\log F(x) = c^{F}_0 + c^{F}_1 (x-x_0) + c^{F}_2(x-x_0)^2 + c^{F}_3h^3$$ for some $h$ with $|h| \le |x-x_0|$ . Use a similar notation for $\log(1-F)$ and $\log f$ . Linear terms The linear term in $(1)$ thereby becomes $$g_1(\alpha,\beta) = (\alpha-1)c^{F}_1 + (\beta-1)c^{1-F}_1 + c^{f}_1.$$ When $x_0$ is a mode of $G(\,;\alpha,\beta)$ , this expression is zero. Note that because the coefficients are continuous functions of $x_0$ , as $\alpha$ and $\beta$ are varied, the mode $x_0$ will vary continuously too. 
Moreover, once $\alpha$ and $\beta$ are sufficiently large, the $c^{f}_1$ term becomes relatively inconsequential. If we aim to study the limit as $\alpha\to\infty$ and $\beta\to\infty$ for which $\alpha:\beta$ stays in constant proportion $\gamma$, we may therefore once and for all choose a base point $x_0$ for which $$\gamma c^{F}_1 + c^{1-F}_1 = 0.$$ A nice case is where $\gamma=1$, where $\alpha=\beta$ throughout, and $F$ is symmetric about $0$. In that case it is obvious that $x_0=0$ (so that $F(x_0)=1/2$). We have achieved a method whereby (a) in the limit, the first-order term in the Taylor series vanishes and (b) in the special case just described, the first-order term is always zero. Quadratic terms These are the sum $$g_2(\alpha,\beta) = (\alpha-1)c^{F}_2 + (\beta-1)c^{1-F}_2 + c^{f}_2.$$ Comparing to a Normal distribution, whose quadratic term is $-(1/2)(x-x_0)^2/\sigma^2$, we may estimate that $-1/(2g_2(\alpha,\beta))$ is approximately the variance of $G$. Let us standardize $G$ by rescaling $x$ by its square root. We don't really need the details; it suffices to understand that this rescaling is going to multiply the coefficient of $(x-x_0)^n$ in the Taylor expansion by $(-1/(2g_2(\alpha,\beta)))^{n/2}.$ Remainder term Here's the punchline: the term of order $n$ in the Taylor expansion is, according to our notation, $$g_n(\alpha,\beta) = (\alpha-1)c^{F}_n + (\beta-1)c^{1-F}_n + c^{f}_n.$$ After standardization, it becomes $$g_n^\prime(\alpha,\beta) = \frac{g_n(\alpha,\beta)}{(-2g_2(\alpha,\beta))^{n/2}}.$$ Both of the $g_i$ are affine combinations of $\alpha$ and $\beta$. By raising the denominator to the $n/2$ power, the net behavior is of order $-(n-2)/2$ in each of $\alpha$ and $\beta$. As these parameters grow large, then, each term in the Taylor expansion after the second decreases to zero asymptotically. In particular, the third-order remainder term becomes arbitrarily small. The case when $F$ is normal The vanishing of the remainder term is particularly fast when $F$ is standard Normal, because in this case $\log f(x)$ is purely quadratic: it contributes nothing to the remainder terms. Consequently, the deviation of $G$ from normality depends solely on the deviation between $F^{\alpha-1}(1-F)^{\beta-1}$ and normality. This deviation is fairly small even for small $\alpha$ and $\beta$. To illustrate, consider the case $\alpha=\beta$. $G$ is symmetric, whence the order-3 term vanishes altogether. The remainder is of order $4$ in $x-x_0=x$. Here is a plot showing how the standardized fourth order term changes with small values of $\alpha \gt 1$:
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/255409', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/144278/']}
jdg_70988
stackexchange
llm_judgeable_groundtruth_similarity
29927992
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I remove CSS rules with JavaScript? var elStyle = document.querySelector('style#the-style');var stylesheet = elStyle.sheet;var rules = stylesheet.cssRules;for (var i=0; i<rules.length; i++) { var rule = rules[i]; if (rule.selectorText === '#rule2 em') { // TODO: remove this rule break; }} http://jsfiddle.net/e3zebmqv/ I succeeded in removing the style with rule.style.color='' but the rule still exists. Are there any APIs to remove a rule? Or should I use innerHTML? UPDATE In this case, I'd like to remove style rules, not style properties. (I don't know Stack Overflow's rules well. I hope this editing was right.) Now provide the response and nothing else.
Here is an example of how you can do this. Note that the loops run in reverse, because deleteRule/removeRule shift the indices of the rules that follow, so a forward loop would skip the rule immediately after a deleted one:

var styleTag = document.getElementById("the-style");
var sheet = styleTag.sheet ? styleTag.sheet : styleTag.styleSheet;
if (sheet.cssRules) { // all browsers, except IE before version 9
    for (var i = sheet.cssRules.length - 1; i >= 0; i--) {
        if (sheet.cssRules[i].selectorText === '#rule2 em') {
            //console.log(sheet.cssRules[i]);
            sheet.deleteRule(i);
        }
    }
} else { // Internet Explorer before version 9
    for (var i = sheet.rules.length - 1; i >= 0; i--) {
        if (sheet.rules[i].selectorText === '#rule2 em') {
            // console.log(sheet.rules[i]);
            sheet.removeRule(i);
        }
    }
}

And on JSFiddle: http://jsfiddle.net/n53u7cvm/1/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29927992', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3990900/']}
jdg_70989
stackexchange
llm_judgeable_groundtruth_similarity
1407747
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the best way to split up a large enterprise project in Maven? It's easy enough to understand how to partition things vertically like this: you have a DAO project; the DAO project is a dependency of the Service project; the Service project is a dependency of the web project. Does anybody have input on best practices in partitioning/splitting up really large projects in Maven? Now provide the response and nothing else.
Some things that have helped me:

- Use multi-module projects for projects that are related, and only projects that are related. An EJB that exists only in a single EAR is a candidate for this. A BO layer that is used by an EJB and a client app is not.
- One artifact per pom, one deployable per multi-module project. Do Not Waste Time trying to get around this.
- Create dependency poms that include common sets of dependencies. That way you can include your DAO, your JDBC driver and your ORM tools with a single dependency. It also makes upgrading dozens of projects to the newest version of your ORM or DAO that much easier.
- Create builder projects that exist only to run assembly and create deployment sets. This will keep multiple parts of your project in sync. Assembling large complex enterprise apps is often complicated enough that you need a mix of Maven, shell scripts and/or ant:run tasks plus dozens of profiles. Putting the mess in a project far away from your code will contain the mess before it spreads.
- Create tester projects for continuous integration use. Define your web and app servers in those poms as well as the test deployment info. Use of parent projects and common properties files will make testing deployment changes easier.
- Define distributionManagement in a parent pom only if it is possible to make all sub-projects a child (or grand-child) of it.
- Try not to depend on large files (EAR, WAR) being stuffed into your repository on every build. Removing the need for a 175 MB WAR to be pushed to Nexus on each snapshot improved our build times.
- Try to define things as few times as possible. A DRY build is a happy build. Having 30 poms with source-version 1.5 or 30 poms using JUnit 3.8.2 is going to make upgrading to Java 6 or JUnit 4.4 that much harder.

Hope this helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1407747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/97901/']}
jdg_70990
stackexchange
llm_judgeable_groundtruth_similarity
49160
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: UK Crown Dependencies (e.g.: Guernsey, Jersey and the Isle of Man), Overseas Territories (e.g.: The Falkland Islands) as well as Danish autonomous territories (Greenland and the Faroe Islands) are not part of the European Union, despite having very close links to their "home" countries. On the other hand, French Guiana is part of the European Union. Why is this the case? Now provide the response and nothing else.
Each is a special case. At the time the "motherlands" joined the EC, the number of EC members could be counted on the fingers of two hands. Each had specific constitutional relations with its dependencies. Greenland used to be an EU member but decided to weaken constitutional ties to Denmark and to leave the EU. The reason seems to have been control of fisheries. Various British crown dependencies have "not being in the EU" as the core of their business model ( tax avoidance and lack of transparency ). So joining the EU would have been a bad move for them. There is a German village, Büsingen , surrounded entirely by Swiss territory. For them the most practical solution is to be part of the Swiss customs and VAT area.
{}
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/49160', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/23821/']}
jdg_70991
stackexchange
llm_judgeable_groundtruth_similarity
122679
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I notice that on a new CentOS image I just booted up on EC2, the default ulimit is 1024 open files, but /proc/sys/fs/file-max is set at 761,408, and I'm wondering how these two limits work together. I'm guessing that ulimit -n is a per-user limit on the number of file descriptors, while /proc/sys/fs/file-max is system-wide? If that's the case, say I've logged in twice as the same user -- does each logged-in user have a limit of 1024 open files, or is it a limit of 1024 combined open files between those logged-in users? And is there much performance impact to setting your max file descriptors to a very high number, if your system isn't ever opening very many files? Now provide the response and nothing else.
file-max is the maximum number of file descriptors (FD) enforced at the kernel level; it is a ceiling that all processes combined cannot exceed unless it is raised. The ulimit is enforced at the process level, and it can be less than file-max. There is no performance-impact risk in increasing file-max. Modern distributions have the maximum FD count set pretty high, whereas in the past it required kernel recompilation and modification to increase it past 1024. I wouldn't increase it system-wide unless you have a technical need. The per-process configuration often needs to be tuned for serving a particular daemon, be it a database or a web server. If you remove the limit entirely, that daemon could potentially exhaust all available system resources, meaning you would be unable to fix the problem except by pressing the reset button or power cycling. Of course, either of those is likely to result in corruption of any open files.
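As a concrete illustration (my own example, not from the answer; the /proc path assumes Linux), the per-process limit can be read and raised from Python's standard resource module:

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft (what `ulimit -n` shows):", soft, " hard:", hard)

# An unprivileged process may raise its soft limit up to the hard limit:
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

# The kernel-wide ceiling is independent of any single process:
with open("/proc/sys/fs/file-max") as f:
    print("fs.file-max:", f.read().strip())

This mirrors the answer's point: the soft/hard pair is per-process, while fs.file-max caps the whole system.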
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/122679', 'https://serverfault.com', 'https://serverfault.com/users/37767/']}
jdg_70992
stackexchange
llm_judgeable_groundtruth_similarity
2304710
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know the difference between the mean and the median. The mean of a set of numbers is the sum of all the numbers divided by the cardinality. The median of a set of numbers is the middle number, when the set is organized in ascending or descending order (and, when the set has an even cardinality, the mean of the middle two numbers). It seems to me that they're often used interchangeably, both to give a sense of what's going on in the same data. Do they mean (pun intended) different things? When should one be used over the other? Now provide the response and nothing else.
Almost all analytic calculations on sets of data are more natural in terms of the mean than the median. For example, the $z$-test for significance of a discrepancy relative to the null hypothesis deals with the sample estimated mean and sample unbiased estimated standard deviation. The median, and particularly the difference between the median and the mean, is useful to characterize how "skewed" the data is (although the skew, which depends on the third moment about the mean, is also useful for that). The real use of the median comes when the data set may contain extreme outliers (perhaps due to errors in early processing of the sample numbers, or a serious bias in the sample gathering procedure). Then describing the distribution in terms of quartiles (with the median dividing the second from the third quartile) can be more informative than quoting $\mu$ and $\sigma$.
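A tiny illustration of that last point (my own example, using Python's standard library): a single outlier drags the mean while barely moving the median.

import statistics

data = [12, 14, 15, 15, 16, 18, 19]
print(statistics.mean(data), statistics.median(data))  # 15.57... 15

data_with_outlier = data + [450]  # e.g. a data-entry error
print(statistics.mean(data_with_outlier))    # 69.875 -- pulled by the outlier
print(statistics.median(data_with_outlier))  # 15.5   -- barely moves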
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2304710', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/354921/']}
jdg_70993
stackexchange
llm_judgeable_groundtruth_similarity
371898
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've seen a few mentions of this on Stack Overflow, but staring at Wikipedia (the relevant page has since been deleted) and at an MFC dynamic dialog demo did nothing to enlighten me. Can someone please explain this? Learning a fundamentally different concept sounds nice. Based on the answers: I think I'm getting a better feel for it. I guess I just didn't look at the source code carefully enough the first time. I have mixed feelings about differential execution at this point. On the one hand, it can make certain tasks considerably easier. On the other hand, getting it up and running (that is, setting it up in your language of choice) is not easy (I'm sure it would be if I understood it better)...though I guess the toolbox for it need only be made once, then expanded as necessary. I think in order to really understand it, I'll probably need to try implementing it in another language. Now provide the response and nothing else.
Gee, Brian, I wish I had seen your question sooner. Since it's pretty much my "invention" (for better or worse), I might be able to help.

Inserted: The shortest possible explanation I can make is that if normal execution is like throwing a ball in the air and catching it, then differential execution is like juggling.

@windfinder's explanation is different from mine, and that's OK. This technique is not easy to wrap one's head around, and it's taken me some 20 years (off and on) to find explanations that work. Let me give it another shot here:

What is it?

We all understand the simple idea of a computer stepping along through a program, taking conditional branches based on the input data, and doing things. (Assume we are dealing only with simple structured goto-less, return-less code.) That code contains sequences of statements, basic structured conditionals, simple loops, and subroutine calls. (Forget about functions returning values for now.) Now imagine two computers executing that same code in lock-step with each other, and able to compare notes. Computer 1 runs with input data A, and Computer 2 runs with input data B. They run step-by-step side by side. If they come to a conditional statement like IF(test) .... ENDIF, and if they have a difference of opinion on whether the test is true, then the one who says the test is false skips to the ENDIF and waits around for its sister to catch up. (This is why the code is structured, so we know the sister will eventually get to the ENDIF.) Since the two computers can talk to each other, they can compare notes and give a detailed explanation of how the two sets of input data, and execution histories, are different. Of course, in differential execution (DE) it is done with one computer, simulating two. NOW, suppose you only have one set of input data, but you want to see how it has changed from time 1 to time 2. Suppose the program you're executing is a serializer/deserializer. As you execute, you both serialize (write out) the current data and deserialize (read in) the past data (which was written the last time you did this). Now you can easily see what the differences are between what the data was last time, and what it is this time. The file you are writing to, and the old file you are reading from, taken together constitute a queue or FIFO (first-in-first-out), but that's not a very deep concept.

What is it good for?

It occurred to me while I was working on a graphics project, where the user could construct little display-processor routines called "symbols" that could be assembled into larger routines to paint things like diagrams of pipes, tanks, valves, stuff like that. We wanted to have the diagrams be "dynamic" in the sense that they could incrementally update themselves without having to redraw the entire diagram. (The hardware was slow by today's standards.) I realized that (for example) a routine to draw a bar of a bar-chart could remember its old height and just incrementally update itself. This sounds like OOP, doesn't it? However, rather than "make" an "object", I could take advantage of the predictability of the execution sequence of the diagram procedure. I could write the bar's height in a sequential byte-stream. Then to update the image, I could just run the procedure in a mode where it sequentially reads its old parameters while it writes the new parameters so as to be ready for the next update pass.
This seems stupidly obvious and would seem to break as soon as the procedure contains a conditional, because then the new stream and the old stream would get out of sync. But then it dawned on me that if they also serialized the boolean value of the conditional test, they could get back in sync. It took a while to convince myself, and then to prove, that this would always work, provided a simple rule (the "erase mode rule") is followed. The net result is that the user could design these "dynamic symbols" and assemble them into larger diagrams, without ever having to worry about how they would dynamically update, no matter how complex or structurally variable the display would be. In those days, I did have to worry about interference between visual objects, so that erasing one would not damage others. However, now I use the technique with Windows controls, and I let Windows take care of rendering issues.

So what does it achieve?

It means I can build a dialog by writing a procedure to paint the controls, and I do not have to worry about actually remembering the control objects or dealing with incrementally updating them, or making them appear/disappear/move as conditions warrant. The result is much smaller and simpler dialog source code, by about an order of magnitude, and things like dynamic layout or altering the number of controls or having arrays or grids of controls are trivial. In addition, a control such as an Edit field can be trivially bound to the application data it is editing, and it will always be provably correct, and I never have to deal with its events. Putting in an edit field for an application string variable is a one-line edit.

Why is it hard to understand?

What I have found hardest to explain is that it requires thinking differently about software. Programmers are so firmly wedded to the object-action view of software that they want to know what are the objects, what are the classes, how do they "build" the display, and how do they handle the events, that it takes a cherry bomb to blast them out of it. What I try to convey is that what really matters is what do you need to say? Imagine you are building a domain-specific language (DSL) where all you need to do is tell it "I want to edit variable A here, variable B there, and variable C down there" and it would magically take care of it for you. For example, in Win32 there is this "resource language" for defining dialogs. It is a perfectly good DSL, except it doesn't go far enough. It doesn't "live in" the main procedural language, or handle events for you, or contain loops/conditionals/subroutines. But it means well, and Dynamic Dialogs tries to finish the job. So, the different mode of thinking is: to write a program, you first find (or invent) an appropriate DSL, and code as much of your program in that as possible. Let it deal with all the objects and actions that only exist for implementation's sake. If you want to really understand differential execution and use it, there are a couple of tricky issues that can trip you up. I once coded it in Lisp macros, where these tricky bits could be handled for you, but in "normal" languages it requires some programmer discipline to avoid the pitfalls. Sorry to be so long-winded. If I haven't made sense, I'd appreciate it if you'd point it out and I can try and fix it.

Added: In Java Swing, there is an example program called TextInputDemo. It is a static dialog, taking 270 lines (not counting the list of 50 states).
In Dynamic Dialogs (in MFC) it is about 60 lines:

#define NSTATE (sizeof(states)/sizeof(states[0]))
CString sStreet;
CString sCity;
int iState;
CString sZip;
CString sWholeAddress;

void SetAddress(){
    CString sTemp = states[iState];
    int len = sTemp.GetLength();
    sWholeAddress.Format("%s\r\n%s %s %s", sStreet, sCity, sTemp.Mid(len-3, 2), sZip);
}

void ClearAddress(){
    sWholeAddress = sStreet = sCity = sZip = "";
}

void CDDDemoDlg::deContentsTextInputDemo(){
    int gy0 = P(gy);
    P(www = Width()*2/3);
    deStartHorizontal();
    deStatic(100, 20, "Street Address:");
    deEdit(www - 100, 20, &sStreet);
    deEndHorizontal(20);
    deStartHorizontal();
    deStatic(100, 20, "City:");
    deEdit(www - 100, 20, &sCity);
    deEndHorizontal(20);
    deStartHorizontal();
    deStatic(100, 20, "State:");
    deStatic(www - 100 - 20 - 20, 20, states[iState]);
    if (deButton(20, 20, "<")){
        iState = (iState+NSTATE - 1) % NSTATE;
        DD_THROW;
    }
    if (deButton(20, 20, ">")){
        iState = (iState+NSTATE + 1) % NSTATE;
        DD_THROW;
    }
    deEndHorizontal(20);
    deStartHorizontal();
    deStatic(100, 20, "Zip:");
    deEdit(www - 100, 20, &sZip);
    deEndHorizontal(20);
    deStartHorizontal();
    P(gx += 100);
    if (deButton((www-100)/2, 20, "Set Address")){
        SetAddress();
        DD_THROW;
    }
    if (deButton((www-100)/2, 20, "Clear Address")){
        ClearAddress();
        DD_THROW;
    }
    deEndHorizontal(20);
    P((gx = www, gy = gy0));
    deStatic(P(Width() - gx), 20*5, (sWholeAddress != "" ? sWholeAddress : "No address set."));
}

Added: Here's example code to edit an array of hospital patients in about 40 lines of code. Lines 1-6 define the "database". Lines 10-23 define the overall contents of the UI. Lines 30-48 define the controls for editing a single patient's record. Note that the form of the program takes almost no notice of events in time, as if all it had to do was create the display once. Then, if subjects are added or removed or other structural changes take place, it is simply re-executed, as if it were being re-created from scratch, except that DE causes incremental update to take place instead. The advantage is that you the programmer do not have to give any attention or write any code to make the incremental updates of the UI happen, and they are guaranteed correct. It might seem that this re-execution would be a performance problem, but it is not, since updating controls that do not need to be changed takes on the order of tens of nanoseconds.
 1 class Patient {public:
 2     String name;
 3     double age;
 4     bool smoker; // smoker only relevant if age >= 50
 5 };
 6 vector< Patient* > patients;

10 void deContents(){ int i;
11     // First, have a label
12     deLabel(200, 20, "Patient name, age, smoker:");
13     // For each patient, have a row of controls
14     FOR(i=0, i<patients.Count(), i++)
15         deEditOnePatient( P( patients[i] ) );
16     END
17     // Have a button to add a patient
18     if (deButton(50, 20, "Add")){
19         // When the button is clicked add the patient
20         patients.Add(new Patient);
21         DD_THROW;
22     }
23 }

30 void deEditOnePatient(Patient* p){
31     // Determine field widths
32     int w = (Width()-50)/3;
33     // Controls are laid out horizontally
34     deStartHorizontal();
35     // Have a button to remove this patient
36     if (deButton(50, 20, "Remove")){
37         patients.Remove(p);
38         DD_THROW;
39     }
40     // Edit fields for name and age
41     deEdit(w, 20, P(&p->name));
42     deEdit(w, 20, P(&p->age));
43     // If age >= 50 have a checkbox for smoker boolean
44     IF(p->age >= 50)
45         deCheckBox(w, 20, "Smoker?", P(&p->smoker));
46     END
47     deEndHorizontal(20);
48 }

Added: Brian asked a good question, and I thought the answer belonged in the main text here:

@Mike: I'm not clear on what the "if (deButton(50, 20, "Add")){" statement is actually doing. What does the deButton function do? Also, are your FOR/END loops using some sort of macro or something? – Brian.

@Brian: Yes, the FOR/END and IF statements are macros. The SourceForge project has a complete implementation. deButton maintains a button control. When any user input action takes place, the code is run in "control event" mode, in which deButton detects that it was pressed and signifies that it was pressed by returning TRUE. Thus, the "if (deButton(...)){ ... action code ... }" construct is a way of attaching action code to the button, without having to create a closure or write an event handler. The DD_THROW is a way of terminating the pass when the action is taken, because the action may have modified application data, so it is invalid to continue the "control event" pass through the routine. If you compare this to writing event handlers, it saves you writing those, and it lets you have any number of controls.

Added: Sorry, I should explain what I mean by the word "maintains". When the procedure is first executed (in SHOW mode), deButton creates a button control and remembers its id in the FIFO. On subsequent passes (in UPDATE mode), deButton gets the id from the FIFO, modifies it if necessary, and puts it back in the FIFO. In ERASE mode, it reads it from the FIFO, destroys it, and does not put it back, thereby "garbage collecting" it. So the deButton call manages the entire lifetime of the control, keeping it in agreement with application data, which is why I say it "maintains" it. The fourth mode is EVENT (or CONTROL). When the user types a character or clicks a button, that event is caught and recorded, and then the deContents procedure is executed in EVENT mode. deButton gets the id of its button control from the FIFO and asks if this is the control that was clicked. If it was, it returns TRUE so the action code can be executed. If not, it just returns FALSE. On the other hand, deEdit(..., &myStringVar) detects if the event was meant for it, and if so passes it to the edit control, and then copies the contents of the edit control to myStringVar. Between this and normal UPDATE processing, myStringVar always equals the contents of the edit control. That is how "binding" is done.
The same idea applies to scroll bars, list boxes, combo boxes, any kind of control that lets you edit application data. Here's a link to my Wikipedia edit: http://en.wikipedia.org/wiki/User:MikeDunlavey/Difex_Article
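For readers who think in code, here is a toy sketch of the FIFO mechanism (entirely my own illustration, not Dynamic Dialogs itself -- the real implementation is C++/MFC, and all names below are invented):

class DE:
    def __init__(self):
        self.old = []           # stream written on the previous pass
        self.new = []           # stream being written on this pass
        self.first_pass = True  # SHOW mode on the first pass, UPDATE after

    def item(self, value, on_create, on_change):
        # Serialize `value`, comparing against what the last pass wrote.
        if self.first_pass:
            on_create(value)             # SHOW: everything is new
        else:
            prev = self.old.pop(0)       # UPDATE: read old while writing new
            if prev != value:
                on_change(prev, value)
        self.new.append(value)

    def end_pass(self):
        # This pass's output becomes the next pass's input (the FIFO).
        self.old, self.new = self.new, []
        self.first_pass = False

de = DE()
def paint(height):  # the "display procedure": it just says what to show
    de.item(height,
            on_create=lambda v: print("draw bar at", v),
            on_change=lambda old, new: print("redraw bar", old, "->", new))
    de.end_pass()

paint(10)  # draw bar at 10
paint(10)  # (no output -- nothing changed)
paint(14)  # redraw bar 10 -> 14

A faithful implementation would also serialize each conditional's boolean (and obey the erase-mode rule) so the two streams stay aligned when branches change; that bookkeeping is omitted here.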
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/371898', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18192/']}
jdg_70994
stackexchange
llm_judgeable_groundtruth_similarity
109505
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand the notion of Radon measure, but I am a little bit lost with the different conventions used in the literature. More precisely, I have a doubt about the very definition of Borel measure. Suppose that $(X,\mathcal{B},\mu)$ is a measure space, where $X$ is a topological space. I have found two different definitions for "$\mu$ is a Borel measure": -Def 1: $\mu$ is a Borel measure if $\mathcal{B}$ contains the Borel $\sigma$-algebra of $X$, -Def 2: $\mu$ is a Borel measure if $\mathcal{B}$ is exactly the Borel $\sigma$-algebra of $X$. The same thing happens for the notion of Radon measure, as it can be considered as a Borel measure either in the sense of Def 1 or in the sense of Def 2. Of course, Def 1 gives a more general notion of Borel or Radon measure. For example the Lebesgue measure (defined on the Lebesgue $\sigma$-algebra of $\mathbb{R}^n$) is Radon in the sense of Def 1, but not in the sense of Def 2. Are there (other) reasons as to why one may prefer Def 1 to Def 2 or vice versa? Apparently, Def 2 makes it quite difficult to have a "complete Radon measure", which makes me think that it is a little bit artificial or restrictive. But maybe many results hold only for Radon measures in the sense of Def 2, without possible extension to Radon measures in the sense of Def 1? Or maybe there is a trivial way to transfer any result involving a Borel measure in the sense of Def 2 to a result involving a Borel measure in the sense of Def 1? A related question is the following: if $\mu$ is Radon in the sense of Def 2, will its completion be Radon in the sense of Def 1? Same question when you replace "Radon" by "inner regular", "outer regular", and "locally finite". Now provide the response and nothing else.
From a geometric measure theory perspective, it is standard to define Radon measures $\mu$ to be Borel regular measures that give finite measure to any compact set. Of course, their connection with linear functionals is very important, but in all the references I know, they start with a notion of a Radon measure and then prove representation theorems that represent linear functionals by integration against Radon measures. Here are some examples: $\color{blue}{I:}$ Evans and Gariepy's Measure Theory and Fine Properties of Functions states it this way: A [outer] measure $\mu$ on $X$ is regular if for each set $A \subset X$ there exists a $\mu$-measurable set $B$ such that $A\subset B$ and $\mu(A)=\mu(B)$. A measure $\mu$ on $\Bbb{R}^n$ is called Borel if every Borel set is $\mu$-measurable. A measure $\mu$ on $\Bbb{R}^n$ is Borel regular if $\mu$ is Borel and for each $A\subset\Bbb{R}^n$ there exists a Borel set $B$ such that $A\subset B$ and $\mu(A) = \mu(B)$. A measure $\mu$ on $\Bbb{R}^n$ is a Radon measure if $\mu$ is Borel regular and $\mu(K) < \infty$ for each compact set $K\subset \Bbb{R}^n$. $\color{blue}{II:}$ In De Lellis' very nice exposition of Preiss' big paper, he doesn't even define Radon explicitly, but rather talks about Borel regular measures that are also locally finite, by which he means $\mu(K) < \infty$ for all compact $K$. His Borel regular is a bit different in that he only considers measurable sets -- $\mu$ is Borel regular if any measurable set $A$ is contained in a Borel set $B$ such that $\mu(A) = \mu(B)$. (I am referring to Rectifiable Sets, Densities and Tangent Measures by Camillo De Lellis.) $\color{blue}{III:}$ In Leon Simon's Lectures on Geometric Measure Theory, he defines Radon measures on locally compact and separable spaces to be those that are Borel regular and finite on compact subsets. $\color{blue}{IV:}$ Federer 2.2.5 defines a Radon measure to be a measure $\mu$, over a locally compact Hausdorff space, that satisfies the following three properties: If $K\subset X$ is compact, then $\mu(K) < \infty$. If $V\subset X$ is open, then $V$ is $\mu$-measurable and $\hspace{1in} \mu(V) = \sup\{\mu(K): K\text{ is compact, } K\subset V\}$ If $A\subset X$, then $\hspace{1in} \mu(A) = \inf\{\mu(V): V\text{ is open, } A\subset V\}$ Note: it is a theorem (Corollary 1.11 in Mattila's Geometry of Sets and Measures in Euclidean Spaces) that a measure is Radon a la Federer if and only if it is Borel regular and locally finite. I.e. {Federer Radon} $\Leftrightarrow$ {Simon or Evans and Gariepy Radon}. (I am referring of course to Herbert Federer's 1969 text Geometric Measure Theory.) $\color{blue}{V:}$ For comparison, Folland (in his real analysis book) defines things a bit differently. For example, he defines regularity differently than the first, third and fourth texts above. In those, a measure $\mu$ is regular if for any $A\subset X$ there is a $\mu$-measurable set $B$ such that $A\subset B$ and $\mu(A) = \mu(B)$. In Folland, a Borel measure $\mu$ is regular if all Borel sets are approximated from the outside by open sets and from the inside by compact sets. I.e. if $\hspace{1in}\mu(B) = \inf \{\mu(V): V\text{ is open, } B\subset V\}$ and $\hspace{1in}\mu(B) = \sup \{\mu(K): K\text{ is compact, } K\subset B\}$ for all Borel $B\subset X$. Folland's definition of Radon is very similar to Federer's but not quite the same: A measure $\mu$ is Radon if it is a Borel measure that satisfies: If $K\subset X$ is compact, then $\mu(K) < \infty$.
If $V\subset X$ is open, then $\hspace{1in} \mu(V) = \sup\{\mu(K): K\text{ is compact, } K\subset V\}$ If $A\subset X$ and $A$ is Borel, then $\hspace{1in} \mu(A) = \inf\{\mu(V): V\text{ is open, } A\subset V\}$ ... and by Borel measure, Folland means a measure whose measurable sets are exactly the Borel sets. Discussion: Why choose one definition over another? Partly personal preference -- I prefer the typical approach taken in geometric measure theory, starting with an outer measure and progressing to Radon measures à la Evans and Gariepy or Simon or Federer or Mattila. It seems, somehow, more natural and harmonious with the Caratheodory criterion and Caratheodory construction used to generate measures, like the Hausdorff measures. With this approach, for example, sets with an outer measure of 0 are automatically measurable. Another reason not to use the more restrictive definition 2 (in the question above): it makes sense to require that continuous images of Borel sets be measurable. But all we know is that continuous maps map Borel sets to Suslin sets. And there are Suslin sets which are not Borel! If we use the definition of Borel regular, as in I, III and IV above, then Suslin sets are measurable. There is a very nice discussion of this in section 1.7 of Krantz and Parks' Geometric Integration Theory -- see that reference for the definition of Suslin sets. (Krantz and Parks is yet another text I could have added to the above list that agrees with I, III, and IV as far as Radon, Borel regular, etc. go.)
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/109505', 'https://mathoverflow.net', 'https://mathoverflow.net/users/27199/']}
jdg_70995
stackexchange
llm_judgeable_groundtruth_similarity
2848759
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am learning algebra and I am a bit confused. Let's say I have a finitely presented group $G$. Can anyone tell me if it is possible to find out whether $G\cong \mathbb{Z}$? Thanks Now provide the response and nothing else.
No. More strikingly: it is undecidable if a finitely presented group is the trivial group! These facts were proven (independently) by Adyan and Rabin in the 50s. The key idea is that of "Markov properties": A property $\mathcal{P}$ of finitely presentable groups is a Markov property if: the property $\mathcal{P}$ is preserved under group isomorphism. there exists a finitely presentable group (a witness) $K_+$ with property $\mathcal{P}$. there exists a finitely presentable group $K_{-}$ which cannot be embedded as a subgroup in any finitely presentable group with property $\mathcal{P}$. The theorem is as follows: Theorem (Adyan-Rabin). If $\mathcal{P}$ is a Markov property then there does not exist an algorithm with input a finite presentation $G = \langle \mathbf{x} \mid \mathbf{r}\rangle$ and which decides whether or not the group $G$ defined by this presentation has property $\mathcal{P}$. For a reference, see Lyndon and Schupp, Combinatorial group theory, Section IV.4, p192. I tried to set this theorem, and some related results, in the "big picture" of group theory in this old answer. So, for the examples I mentioned above: being infinite cyclic is a Markov property: it is preserved under isomorphism, and take $K_+=\langle a\mid-\rangle$ and $K_-=\langle a\mid a^2\rangle$. being trivial is a Markov property: it is preserved under isomorphism, and take $K_+=\langle a\mid a\rangle$ and $K_-=\langle a\mid a^2\rangle$. Another example: being finite is a Markov property: it is preserved under isomorphism, and take $K_+=\langle a\mid a\rangle$ and $K_-=\langle a\mid -\rangle$. Now, being infinite is not a Markov property (as every finite group embeds in an infinite group). However, this is still undecidable as it is the complement of a Markov property: Suppose I have an algorithm with input $\langle \mathbf{x}\mid\mathbf{r}\rangle$ and which tells me if the associated group is infinite. If it returns "no" then my group is finite. Hence, I can detect finiteness, a contradiction. A third example (hyperbolic groups are standard objects in geometric group theory): being hyperbolic is a Markov property: it is preserved under isomorphism, and take $K_+=\langle a\mid a\rangle$ and $K_-=\langle a, b\mid [a, b]\rangle$. (It is a theorem that $\mathbb{Z}\times\mathbb{Z}$ does not embed into any hyperbolic group.) Derek Holt points out in the comments to the question that the problem is semi-decidable. I thought it would be a good idea to build on this a little: Lemma. If $G=\langle \mathbf{x}\mid\mathbf{r}\rangle$ is (infinite) cyclic then it is possible to prove it. This does not contradict undecidability, as you will never know when to conclude that the input group $G$ is not infinite cyclic. That is, suppose that we input $\langle \mathbf{x}\mid\mathbf{r}\rangle$ into the procedure given by the above lemma, and it doesn't terminate after 1 hour. What can we conclude? Well, we can conclude nothing! It may be the case that the underlying group is infinite cyclic, but we need 100 years of computation to prove that it is so. Proof of Lemma. Write $\mathbf{x}=\{x_1, \ldots, x_n\}$. If $G$ is cyclic then there exists a word $w\in F(\mathbf{x})$ and integers $p_1, \ldots, p_n$ such that $x_i=_Gw^{p_i}$. So, enumerate all consequences of the relators and then check each consequence to see if it has the form $x_i^{-1}w^{p_i}$ for some $i, p_i, w$. Terminate the procedure if we have a "complete" set $\{x_i^{-1}w^{p_i}\mid i=1, \ldots, n\}$ with $w$ fixed.
If we conclude that $G$ is cyclic then we can easily determine if it is infinite cyclic, as required. Sticking with the examples above, we also have the following lemma: Lemma. If $G=\langle \mathbf{x}\mid\mathbf{r}\rangle$ is trivial then it is possible to prove it. Proof. Write $\mathbf{x}=\{x_1, \ldots, x_n\}$. Enumerate all consequences of the relators and then check each consequence to see if it has the form $x_i$. Terminate the procedure if we have a "complete" set $\{x_i\mid i=1, \ldots, n\}$.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2848759', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/574027/']}
jdg_70996
stackexchange
llm_judgeable_groundtruth_similarity
2071710
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The file in the following function is still in use after the using statement. How can I fix this to release the file.... /// <summary> /// Serializes an object to an xml file. /// </summary> /// <param name="obj"> /// The object to serialize. /// </param> /// <param name="type"> /// The class type of the object being passed. /// </param> /// <param name="fileName"> /// The filename where the object should be saved to. /// </param> /// <param name="xsltPath"> /// Pass a null if not required. /// </param> public static void SerializeToXmlFile(object obj, Type type, string fileName, string xsltPath ) { var ns = new XmlSerializerNamespaces(); ns.Add(String.Empty, String.Empty); var serializer = new XmlSerializer(type); var settings = new XmlWriterSettings {Indent = true, IndentChars = "\t"}; using (var w = XmlWriter.Create(File.Create(fileName), settings)) { if (!String.IsNullOrEmpty(xsltPath)) { w.WriteProcessingInstruction("xml-stylesheet", "type=\"text/xsl\" href=\"" + xsltPath + "\""); } serializer.Serialize(w, obj, ns); } } Now provide the response and nothing else.
You only have the XmlWriter as the object in the using statement; just because you call File.Create from code that is inside the using block doesn't mean that the resulting stream will be disposed. Use two using blocks: using (FileStream f = File.Create(fileName)) { using (XmlWriter w = XmlWriter.Create(f, settings)) { ... }}
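If you prefer a single using block, a sketch of an alternative (assuming the XmlWriter.Create overload that takes a file path, available since .NET 2.0) is to let the writer own the file handle; disposing the writer then closes the underlying file:

using (XmlWriter w = XmlWriter.Create(fileName, settings))
{
    // The writer created the file itself, so disposing it releases the handle.
    if (!String.IsNullOrEmpty(xsltPath))
    {
        w.WriteProcessingInstruction("xml-stylesheet", "type=\"text/xsl\" href=\"" + xsltPath + "\"");
    }
    serializer.Serialize(w, obj, ns);
}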
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2071710', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41543/']}
jdg_70997
stackexchange
llm_judgeable_groundtruth_similarity
23740
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Of course, the standard way of testing if a file is empty is with test -s FILE , but one of our clients has received a script containing tests like this: RETVAL=`ls -s ./log/cr_trig.log | awk '{print $1}'`if test $RETVAL -ne 0then echo "Badness: Log not empty" exit 25fi with claims from the supplier that it works in the two environments that they have tested it in. Needless to say, it failed badly at both places where I tested it. So, I got curious. When does ls -s print 0 for empty files? These are my findings so far: GFS on Linux: 4 ext4 on Linux: 0 ZFS on Solaris: 1 UFS on Solaris: 0 jfs on AIX: 0 VxFS on HP-UX: 0 HFS on HP-UX: 0 HFS on Mac OS X: 0 I haven't examined networked file systems yet. Question: How can I elegantly explain to the others that their scripts are wrong? In my opinion, the "correct" version would be: if test ! -s ./log/cr_trig.logthen echo "Badness: Log not empty" exit 25fi Now provide the response and nothing else.
Very interesting finding. Although I've never used ls -s to check whether a file is empty or not, I would have assumed that it reports 0 for empty files, too. To your question: As Mat already commented, show them your test results. To explain the results to them, state that ls -s reports the number of allocated blocks in the filesystem, not the actual size in bytes. Obviously some filesystem implementations allocate blocks even if they don't have to store any data, instead of storing just a NULL pointer in the inode. The explanation for this may be performance-related. Creating empty files that will stay empty is an exception in normal processing (the most common usage I've seen would be the creation of status files, where the existence of a file represents a certain state of the software). But normally a file created will get some data soon, so the designers of a certain FS may have assumed that it pays off to immediately allocate a data block upon file creation, so when the first data arrives this task is already done. A second reason could be that a file has contained data in the past which has been erased. Instead of freeing the last data block it may be worthwhile to keep that data block for reuse by the same file. EDIT: One more reason came to mind: The filesystems where you have found values > 0 are ZFS, the RAID+LVM+FS implementation, and GFS, a cluster filesystem. Both may have to store metadata to maintain file integrity that is not stored in inodes. It could be that ls -s counts the data blocks allocated for this metadata.
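If you need a concrete demonstration for the script authors, a minimal shell session along these lines makes the point; the block counts in the comments are illustrative and vary by filesystem (which is exactly the problem), and mktemp plus GNU stat are assumed:

f=$(mktemp)                          # create an empty file
ls -s "$f"                           # allocated blocks: may print 0, 1 or 4
stat -c '%s bytes, %b blocks' "$f"   # byte size vs. allocated blocks
test -s "$f" && echo "not empty" || echo "empty"   # the portable emptiness check
rm -f "$f"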
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/23740', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/5923/']}
jdg_70998
stackexchange
llm_judgeable_groundtruth_similarity
10671
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: First of all, I found a similar question but it doesn't really solve my problem. I am trying to discover if the USB bus for a device I am using is the bottleneck in my program. How can I monitor a USB bus (similar to how gnome-system-monitor works) to show bus utilization? Basically I want to identify when the bus is being 'maxed' out. I guess what I am looking for is some interface for usbmon, as that appears like it would do what I need. This came about from testing the USRP and GNU Radio. I am running into a situation where it appears that the USB bus could be a limiting factor, so I ask the more general question of USB performance monitoring. Now provide the response and nothing else.
Since usbmon provides the length of each packet transferred, I would approach this by writing a quick program to parse the 0u file (which has data for all USB devices). It would pick out the USB bus and device numbers, then keep a running total of the packet length field in both directions for each device. This will then give you the amount of data transferred per device, in each direction. If you print it once a second you'll get a pretty good idea of each device's throughput. Note that it won't include any USB overhead, but if you compare the figures to a device that is able to saturate the available bandwidth you'll know whether you're getting close to the limit.
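As a starting point, here is a rough Python sketch of such a program. It is a sketch built on assumptions, not a finished tool: it follows the text format described in Documentation/usb/usbmon.txt (URB tag, timestamp, event type, type:bus:device:endpoint address, status, data length), counts only completion ('C') events, and skips any line whose length field is not a plain number:

#!/usr/bin/env python3
# Rough per-device USB throughput monitor built on usbmon's text interface.
import time
from collections import defaultdict

totals = defaultdict(int)                      # (bus, device, direction) -> bytes
last = time.time()
with open('/sys/kernel/debug/usb/usbmon/0u') as mon:
    for line in mon:
        w = line.split()
        if len(w) < 6 or w[2] != 'C':          # count completed URBs only
            continue
        xfer, bus, dev, _ep = w[3].split(':')  # e.g. 'Bi:3:004:1'
        direction = 'IN' if xfer.endswith('i') else 'OUT'
        try:
            totals[(bus, dev, direction)] += int(w[5])
        except ValueError:                     # e.g. control-setup lines
            continue
        now = time.time()
        if now - last >= 1.0:                  # print a summary once a second
            for (b, d, dirn), n in sorted(totals.items()):
                print('bus %s dev %s %s: %d bytes/s' % (b, d, dirn, n))
            totals.clear()
            last = now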
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/10671', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/4856/']}
jdg_70999
stackexchange
llm_judgeable_groundtruth_similarity
20475257
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using Rails 4.0, ActiveAdmin, Rolify and CanCan, adding and removing user roles via the ActiveAdmin panel does not save (commit to database). My ActiveAdmin User and User model look okay, as I can list all the roles that apply to a user using check_boxes. However, when adding or removing roles via the checkboxes, the changes do not get applied. I get a notification that the user was updated successfully, but as I look through the database or render the page, the roles have not been updated. How can I get the roles to update when the form is saved? Edit: Using Devise also. Now provide the response and nothing else.
The solution here is to allow the ActiveAdmin controller to update the role ids related to the user. ActiveAdmin.register User do permit_params :email, :password, :password_confirmation, role_ids: [] Here's a form that shows a check box for each global role. form do |f| f.inputs "User Details" do f.input :email f.input :password f.input :password_confirmation f.input :roles, as: :check_boxes end f.actions end While we're at it, we might as well make it possible to update the user without entering their password: # Allow form to be submitted without a password controller do def update if params[:user][:password].blank? params[:user].delete "password" params[:user].delete "password_confirmation" end super end end Put all of this in the app/admin/user.rb .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20475257', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/703161/']}
jdg_71000
stackexchange
llm_judgeable_groundtruth_similarity
86123
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Suppose $S$ is an algebraic surface (possibly projective) over an algebraically closed field $k$. Suppose $D_i$ are irreducible smooth curves (rational, if you want) with negative self-intersection and simple normal crossings such that there is a morphism $f:S\rightarrow S'$ which collapses them to a point. Can we form an effective divisor $D=\sum d_i D_i$ such that $D\cdot D_i<0$ for all $i$? Note that if I had said $\leq$ rather than $\lt$ this is the content of the Proposition on page 83 of Reid's Park City lectures. I ask for strict inequality. My belief is that this is possible, but I have not managed to give a proof, find a reference, or even find a counter-example... This would be very useful for many arguments in birational geometry of surfaces. Now provide the response and nothing else.
The intersection matrix $(D_i\cdot D_j)$ is negative definite and hence invertible. So the system of equations $\sum_i d_i D_i\cdot D_j=\alpha_j$ is solvable for any set of $\alpha_j$'s, in particular for negative ones. It is an easy consequence of the negative definite property that if $\alpha_j<0$ for all $j$, then $d_i>0$ for all $i$. A really slick way to do this (in any dimension) if $S$ is quasi-projective, $f$ is projective, and $S'$ is $\mathbb Q$-factorial is the following: Let $H$ be an effective (reduced if you want) ample divisor on $S$ and consider $f_*H$ (as a subscheme). Since $S'$ is $\mathbb Q$-factorial, this is a $\mathbb Q$-Cartier divisor, so $mf_*H$ is Cartier for some $m$. Therefore $f^*(mf_*H)-mH$ is effective, supported on the exceptional divisor and (strictly) negative on every curve contracted by $f$.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/86123', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1887/']}
jdg_71001
stackexchange
llm_judgeable_groundtruth_similarity
54502
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $\mathbb{A}^n_k$ be the affine $n$-space over an algebraically closed field $k$. Let $X$ be a variety over $k$. What would be the right definition of an "affine bundle", i.e. a bundle of fiber type $\mathbb{A}^n_k$ over $X$ (I mean local triviality in the Zariski topology, or étale, ...)? When can one get a vector bundle from an "affine bundle"? More precisely (I think!), if I assume the structure group of the affine bundle to be $Aut_{Var_k}(\mathbb{A}^n_k)$, when can one get a reduction of the structure group to $GL_n(k)$? Now provide the response and nothing else.
I think the term "affine bundle" is used for at least two things: (1) A map $p:Y\to X$ such that for some open cover (in your choice of topology) there are isomorphisms $p^{-1}(U)={\Bbb A}^n \times U$ --- just like you said. (2) A torsor for a vector bundle, i.e., like (1) but with the added condition that the transition functions are affine-linear. In the situation of (2), it's a vector bundle exactly when there's a section (like any torsor). A simple non-vector-bundle-example is the complement of the diagonal in ${\Bbb P}^1 \times {\Bbb P}^1$, projecting onto one of the factors. For (1), I don't know any general (non-trivial) criterion for such a thing to be a vector bundle. (Maybe because the group $Aut({\Bbb A}^n)$ is so complicated...) A simple non-example is the 2nd-order jet scheme $\mathrm{Hom}(\mathrm{Spec}(k[t]/(t^3)),{\Bbb P}^1) \to {\Bbb P}^1$. The fibers are ${\Bbb A}^2$, and there's a section, but it's not linear. I suppose one test is whether the sheaf of $O_X$-algebras $p_*O_Y$ admits a grading generated in degree one. (This fails for the jet schemes, though there is a natural grading by scaling $t$.)
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/54502', 'https://mathoverflow.net', 'https://mathoverflow.net/users/12202/']}
jdg_71002
stackexchange
llm_judgeable_groundtruth_similarity
12426691
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that deque is more efficient than vector when insertions are at the front or end, and vector is better if we have to do pointer arithmetic. But which one should we use when we have to perform insertions in the middle? And why? Now provide the response and nothing else.
You might think that a deque would have the advantage, because it stores the data broken up into blocks. However, implementing operator[] in constant time requires all those blocks to be the same size, so inserting or deleting an element in the middle still requires shifting all the values on one side or the other, same as with a vector. Since the vector is simpler and has better cache locality, it should come out ahead.
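If you would rather measure than take this on faith, a quick-and-dirty C++ timing sketch like the following (not a rigorous benchmark; the element count and clock choice are arbitrary) will usually show the vector ahead:

#include <chrono>
#include <deque>
#include <iostream>
#include <vector>

template <typename Container>
long long timeMiddleInserts(int n) {
    Container c;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
        c.insert(c.begin() + c.size() / 2, i);   // shifts roughly half the elements
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main() {
    std::cout << "vector: " << timeMiddleInserts<std::vector<int> >(100000) << " ms\n";
    std::cout << "deque:  " << timeMiddleInserts<std::deque<int> >(100000) << " ms\n";
}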
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12426691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1543957/']}
jdg_71003
stackexchange
llm_judgeable_groundtruth_similarity
177577
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am using the TM4C1230C3PMI controller from Texas Instruments in one of my projects. It has 32KB of internal flash, which is not sufficient for my application. Microcontrollers with more flash are available in the market, but I want to go with this microcontroller only. As far as I know, external EEPROM can be used to increase the total flash size (program memory). Is my thinking correct? If not, please suggest how I can increase the total flash memory size of the controller? Now provide the response and nothing else.
You cannot extend the program memory (flash). TI produces the same chip with double the flash and RAM, but nothing else changed: TM4C1230D5PMI . If you cannot use a chip with larger flash, you will have to reduce your code size: Disable debugging, such as the expensive printf function. A printf that supports floating point output will typically set you back around 5KB-10KB. Make sure you compile with optimization enabled - typically the compiler flag is -Os . Modern compilers can do link time optimization (LTO) . With gcc, you get this with -flto . You have to pass -flto to both compilation and link stages, for all files. This typically reduces the produced code size by 30%-50%.
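For instance, with a GCC-based cross toolchain the build might look roughly like this; the arm-none-eabi- prefix and file names are placeholders for whatever your toolchain actually uses:

arm-none-eabi-gcc -Os -flto -c main.c -o main.o      # optimize for size, enable LTO
arm-none-eabi-gcc -Os -flto main.o -o app.elf        # -flto must also be passed when linking
arm-none-eabi-size app.elf                           # check that text + data fit in 32KB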
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/177577', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/35350/']}
jdg_71004
stackexchange
llm_judgeable_groundtruth_similarity
4552769
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I return a specific range of ROWNUM values? I'm trying the following: select * from maps006 where rownum >49 and rownum <101 This returns only rows matching the < operator. Now provide the response and nothing else.
SELECT * from ( select m.*, rownum r from maps006 m ) where r > 49 and r < 101
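One caveat worth adding: ROWNUM is assigned before any ORDER BY in the same query block, so if you need rows 50 to 100 of an ordered result you have to sort in the innermost query first. A sketch (the id column is just an illustration):

SELECT *
  FROM ( SELECT m.*, ROWNUM r
           FROM ( SELECT * FROM maps006 ORDER BY id ) m )
 WHERE r > 49 AND r < 101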
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4552769', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/521180/']}
jdg_71005
stackexchange
llm_judgeable_groundtruth_similarity
47187107
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: With the release of Java 9, numerous methods have been added to many classes, most (if not all) of which contain the following in their documentation: Since: 9 Is there an easy way to find any new methods added in an arbitrary class without having to scour through its documentation? Example : ByteBuffer.alignedSlice Now provide the response and nothing else.
You're probably looking for something like jdkapidiff which uses japicmp to generate reports similar to the one hosted here by the author - jdk8-jdk9-api-diff. You can clone the project and execute mvn clean install to get a similar report locally. Provide a file ~/.m2/toolchains.xml like this: <?xml version="1.0" encoding="UTF-8"?><toolchains> <toolchain> <type>jdk</type> <provides> <version>1.8</version> <vendor>oracle</vendor> </provides> <configuration> <jdkHome>/path/to/jdk-1.8</jdkHome> </configuration> </toolchain> <toolchain> <type>jdk</type> <provides> <version>9</version> <vendor>oracle</vendor> </provides> <configuration> <jdkHome>/path/to/jdk-9</jdkHome> </configuration> </toolchain></toolchains>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47187107', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7294647/']}
jdg_71006
stackexchange
llm_judgeable_groundtruth_similarity
43239078
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this script: use 5.014;use warnings;use utf8; binmode STDOUT, ':utf8';my $str = "XYZ ΦΨΩ zyz φψω";my @greek = ($str =~ /\p{Greek}/g);say "Greek: @greek";my @upper = ($str =~ /\p{Upper}/g);say "Upper: @upper";#my @upper_greek = ($str =~ /\p{Upper+Greek}/); #wrong.#say "Upper+Greek: @upper_greek"; Is it possible to combine multiple Unicode properties? E.g. how can I select only Upper and Greek, and get the wanted output: Greek: Φ Ψ Ω φ ψ ω Upper: X Y Z Φ Ψ Ω Upper+Greek: Φ Ψ Ω #<-- how to get this? Now provide the response and nothing else.
We want to perform an AND operation, so we can't use /(?:\p{Greek}|\p{Upper})/ # Greek OR Upper or /[\p{Greek}\p{Upper}]/ # Greek OR Upper Since 5.18, one can use regex sets. /(?[ \p{Greek} & \p{Upper} ])/ # Greek AND Upper This requires use experimental qw( regex_sets ); before 5.36. But it's safe to add this and use the feature as far back as its introduction as an experimental feature in 5.18, since no change has been made to the feature since then. There are some other approaches that can be used in older versions of Perl, but they are indisputably harder to read. One way of achieving AND in a regex is using lookarounds. /\p{Greek}(?<=\p{Upper})/ # Greek AND Upper Another way of getting an AND is to negate an OR. De Morgan's laws tell us NOT( Greek AND Upper ) ⇔ NOT(Greek) OR NOT(Upper) so Greek AND Upper ⇔ NOT( NOT(Greek) OR NOT(Upper) ) This gives us /[^\P{Greek}\P{Upper}]/ # Greek AND Upper This is more efficient than using a lookbehind.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43239078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/632407/']}
jdg_71007
stackexchange
llm_judgeable_groundtruth_similarity
183108
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a C++ template class that gets instantiated with 3 different type parameters. There's a method that the class needs to have for only one of those types and that isn't ever called with the two other types. Will object code for that method be generated thrice (for all types for which the template is instantiated), or is object code generated only once (for the type with which it is actually used)? Now provide the response and nothing else.
Virtual member functions are instantiated when a class template is instantiated, but non-virtual member functions are instantiated only if they are called. This is covered in [temp.inst] in the C++ standard (In C++11, this is §14.7.1/10. In C++14, it is §14.7.1/11, and in C++17 it is §17.7.1/9. Excerpt from C++17 below) An implementation shall not implicitly instantiate a function template, a variable template, a member template, a non-virtual member function, a member class, a static data member of a class template, or a substatement of a constexpr if statement (9.4.1), unless such instantiation is required Also note that it is possible to instantiate a class template even if some of the member functions are not instantiable for the given template parameters. For example: template <class T>class Xyzzy{public: void CallFoo() { t.foo(); } // Invoke T::foo() void CallBar() { t.bar(); } // Invoke T::bar()private: T t;};class FooBar{public: void foo() { ... } void bar() { ... }};class BarOnly{public: void bar() { ... }};int main(int argc, const char** argv){ Xyzzy<FooBar> foobar; // Xyzzy<FooBar> is instantiated Xyzzy<BarOnly> baronly; // Xyzzy<BarOnly> is instantiated foobar.CallFoo(); // Calls FooBar::foo() foobar.CallBar(); // Calls FooBar::bar() baronly.CallBar(); // Calls BarOnly::bar() return 0;} This is valid, even though Xyzzy::CallFoo() is not instantiable because there is no such thing as BarOnly::foo(). This feature is used often as a template metaprogramming tool. Note, however, that "instantiation" of a template does not directly correlate to how much object code gets generated. That will depend upon your compiler/linker implementation.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/183108', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18721/']}
jdg_71008
stackexchange
llm_judgeable_groundtruth_similarity
56723
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to find a case/study where scientists documented a mutation in an animal or human that was to the benefit of the host. The closest thing I have been able to find is sickle cell anemia (SCA) helping to fight malaria. However, the life expectancy for people with SCA is 40 – 60 years, and in 1973 it was only 14 years ( source ). I am looking for another case, preferably one that is not life-threatening. Are there any other cases where a beneficial mutation — one where the good outweighs the bad — was documented? By beneficial I simply mean that it helps or protects the host in some way, while not causing substantial harm. As in my example of SCA it can benefit the host if the host lives in an area with malaria. However, it is also life threatening and reduces the life expectancy of the host. If — for example — SCA would only cause pain and not be life-threatening, then it would (in my opinion) be a beneficial mutation. While not purely beneficial, it would still increase the life expectancy of people living in an area with a high occurrence of malaria. Now provide the response and nothing else.
The answer depends on what you mean by "observed". Bacteria acquire antibiotic resistance genes all the time, for example. This happens 'in the wild', but you can see in the lab that bacteria can become resistant overnight - I'd call that 'observed'. There are many papers about the emergence of antibiotic resistance (not that I'm suggesting this is the only observable beneficial mutation). In this paper, for example, several spontaneous mutations are induced in H. pylori in the lab in a short space of time: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC90364/ In this paper, estimation of mutation rates is attempted in the emergence of resistance: http://aac.asm.org/content/44/7/1771.full It should be pointed out, however, that the timeframes can vary wildly. If a bacterium acquires a resistance gene by horizontal transfer, for example, this could be considered virtually instantaneous. Spontaneous mutations, by contrast, may take longer to emerge, and might only do so in the presence of steadily increased selection pressure.
{}
{'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/56723', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/-1/']}
jdg_71009
stackexchange
llm_judgeable_groundtruth_similarity
3701233
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing some bulk migration of a large Oracle database. The first step of this involves renaming a whole load of tables as a preparation for dropping them later (but I need to keep the data in them around for now). Any foreign key constraints on them need to be dropped - they shouldn't be connected to the rest of the database at all. If I were dropping them now I could CASCADE CONSTRAINTS, but rename simply alters the constraints. Is there a way I can drop all of the constraints that CASCADE CONSTRAINTS would drop without dropping the table itself? Now provide the response and nothing else.
You can do it with dynamic SQL and the data dictionary: begin for r in ( select table_name, constraint_name from user_constraints where constraint_type = 'R' ) loop execute immediate 'alter table '|| r.table_name ||' drop constraint '|| r.constraint_name; end loop;end; If the tables are owned by more than one user you'll need to drive from DBA_CONSTRAINTS and include OWNER in the projection and the executed statement. If you want to touch less than all the tables I'm afraid you'll need to specify the list in the WHERE clause, unless there's some pattern to their names.
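As a sketch of the multi-owner variant described above (the schema names are placeholders, and you need privileges to read DBA_CONSTRAINTS):

begin
  for r in ( select owner, table_name, constraint_name
               from dba_constraints
              where constraint_type = 'R'
                and owner in ('SCHEMA1', 'SCHEMA2') )
  loop
    execute immediate 'alter table '|| r.owner ||'.'|| r.table_name
                    ||' drop constraint '|| r.constraint_name;
  end loop;
end;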
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3701233', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/340251/']}
jdg_71010
stackexchange
llm_judgeable_groundtruth_similarity
27937285
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following working code, however I'm not convinced that I'm calling glDeleteBuffers in a safe way. In practice it's working (for now at least) but from what I've been reading I don't think it should work. GLuint vao_id;glGenVertexArrays(1, &vao_id);glBindVertexArray(vao_id);GLuint VBO;glGenBuffers(1, &VBO);glBindBuffer(GL_ARRAY_BUFFER, VBO);glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);glEnableVertexAttribArray(0); //Alternate position <<----//Unbind the VAOglBindVertexArray(0);//Current position <<----glDeleteBuffers(1, &VBO); I am currently calling glDeleteBuffers straight after unbinding the VAO. I have tried calling it in the alternative position marked - immediately after I have set the attribute pointer. This however caused a crash - my guess is this was because when I made the draw call there was no data to be drawn because I'd deleted it. The thing that confuses me is that it works as I currently have it. I'm worried that a) I don't quite understand what happens when the buffer is delete and b) that it only works by chance and could unexpectedly break. As far as I understand calling glDeleteBuffers deletes the data so there shouldn't be any data to draw - but there is. So my other thought was that when I re-bind the VAO the data is restored, although that didn't make much sense to me because I can't reason where the data would be restored from. Can someone let me know if I am using glDeleteBuffer correctly? and if not where it should be called (I'm guessing once there is no need for the data to be drawn any more, probably at the end of the program). Now provide the response and nothing else.
What you're seeing is well defined behavior. The following are the key parts of the spec related to this (emphasis added). From section "5.1.2 Automatic Unbinding of Deleted Objects" in the OpenGL 4.5 spec: When a buffer, texture, or renderbuffer object is deleted, it is unbound from any bind points it is bound to in the current context , and detached from any attachments of container objects that are bound to the current context , as described for DeleteBuffers, DeleteTextures, and DeleteRenderbuffers. and "5.1.3 Deleted Object and Object Name Lifetimes": When a buffer, texture, sampler, renderbuffer, query, or sync object is deleted, its name immediately becomes invalid (e.g. is marked unused), but the underlying object will not be deleted until it is no longer in use . A buffer, texture, sampler, or renderbuffer object is in use if any of the following conditions are satisfied: the object is attached to any container object ... The VAO is considered a "container object" for the VBO in this case. So as long as the VBO is referenced in a VAO, and the VAO itself is not deleted, the VBO stays alive. This is why your version of the code with the glDeleteBuffers() at the end works. However, if the VAO is currently bound, and you delete the VBO, it is automatically unbound from the VAO. Therefore, it is not referenced by the VAO anymore, and deleted immediately. This applies to the case where you call glDeleteBuffers() immediately after glVertexAttribPointer() . In any case the id (aka name) becomes invalid immediately. So you would not be able to bind it again, and for example modify the data. There are some caveats if you dig into the specs more deeply. For example, if you delete a buffer, and it stays alive because it is still referenced by a VAO, the name of the buffer could be used for a new buffer. This means that you basically have two buffers with the same name, which can result in some confusing behavior. Partly for that reason, I personally wouldn't call glDelete*() for objects that you want to keep using. But others like to call glDelete*() as soon as possible.
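In code, the conservative pattern from that last paragraph is to keep both ids alive for the object's lifetime and delete everything together at teardown. A sketch (the function name and ownership model are up to you):

void destroyMesh(GLuint vao, GLuint vbo)
{
    glBindVertexArray(0);            // make sure the VAO is not the current one
    glDeleteVertexArrays(1, &vao);   // drops the container's reference to the VBO
    glDeleteBuffers(1, &vbo);        // nothing references the buffer now, so it is freed
}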
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27937285', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3378157/']}
jdg_71011
stackexchange
llm_judgeable_groundtruth_similarity
5349139
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to know if it's possible to send email through the iPhone simulator. I have seen the tutorial for sending an email through the iPhone as below: http://www.edumobile.org/iphone/iphone-programming-tutorials/compose-mail-application-in-iphone/ Now, to test it, is it necessary to have a real device? What is the way if I want to send email through the iPhone simulator? Now provide the response and nothing else.
You have to trust iOS that the MFMailComposeResult handed back in mailComposeController:didFinishWithResult:error: is correct. The simulator fakes that result; no actual mail is sent although it says MFMailComposeResultSent. The tutorial mentioned misses an important point: The first thing you should do before using MFMailComposeViewController is to check [MFMailComposeViewController canSendMail]. That will return NO if the user hasn't configured mail on their device. If you must support an iOS version prior to 3.0, the correct way is to check if the class MFMailComposeViewController exists: Class mailClass = (NSClassFromString(@"MFMailComposeViewController"));if (mailClass != nil){ if ([mailClass canSendMail]) { [self displayComposerSheet]; } else { [self launchMailAppOnDevice]; }}else{ [self launchMailAppOnDevice];} The canSendMail issue can only be tested on a real device though. It will crash if you don't check canSendMail and the user has no mail account configured.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5349139', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/645938/']}
jdg_71012
stackexchange
llm_judgeable_groundtruth_similarity
143801
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Leoxsys Nano adapter that I bought recently does not work in my Debian wheezy, but works flawlessly in my Ubuntu. I tried installing using the install.sh script that came with the drivers, but still no luck. This is the output of install.sh (the timestamp errors are repeated for various different files. I have removed some of them because the posts was too long, you can see the entire output here : rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/hal/rtl8188e/usb/rtl8188eu_xmit.ctar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/hal/rtl8188e/usb/rtl8188eu_xmit.c: time stamp 2012-11-16 13:06:39 is 280134634.419668702 s in the futurertl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/hal/rtl8188e/usb/usb_halinit.ctar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/hal/rtl8188e/usb/usb_halinit.c: time stamp 2012-11-16 13:06:39 is 280134634.417253308 s in the futurertl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/hal/rtl8188e/usb/usb_ops_linux.ctar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/hal/rtl8188e/usb/usb_ops_linux.c: time stamp 2012-11-16 13:06:39 is 280134634.416732716 s in the futurertl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/ifcfg-wlan0tar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/hal/rtl8188e/usb: time stamp 2012-11-16 13:06:39 is 280134634.41663481 s in the futuretar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/hal/rtl8188e: time stamp 2012-11-16 13:06:37 is 280134632.416591435 s in the futuretar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/hal: time stamp 2012-11-16 13:06:34 is 280134629.416553349 s in the futuretar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/ifcfg-wlan0: time stamp 2012-11-16 13:06:37 is 280134632.416436593 s in the futurertl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/autoconf.htar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/autoconf.h: time stamp 2012-11-16 13:06:39 is 280134634.416238385 s in the futurertl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/basic_types.htar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/basic_types.h: time stamp 2012-11-16 13:06:38 is 280134633.416090583 s in the futurertl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/byteorder/rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/byteorder/big_endian.htar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/byteorder/big_endian.h: time stamp 2012-11-16 13:06:38 is 280134633.415909986 s in the futurertl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/byteorder/generic.htar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/byteorder/generic.h: time stamp 2012-11-16 13:06:38 is 280134633.415770465 s in the future[...]rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/rtw_xmit.htar: rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116/include/rtw_xmit.h: time stamp 2012-11-16 13:06:38 is 280134633.360691402 s in the futurePlease select card type(1/2):1) RTL8188eus2) RTL8189es#? 
1You have selected RTL8188eusrtw_version.h has existed!Authentication requested [root] for make clean:rm -fr *.mod.c *.mod *.o .*.cmd *.ko *~rm -fr .tmp_versionsrm -fr Module.symvers ; rm -fr Module.markers ; rm -fr modules.ordercd core/efuse ; rm -fr *.mod.c *.mod *.o .*.cmd *.kocd core ; rm -fr *.mod.c *.mod *.o .*.cmd *.kocd hal/rtl8188e/usb ; rm -fr *.mod.c *.mod *.o .*.cmd *.kocd hal/rtl8188e ; rm -fr *.mod.c *.mod *.o .*.cmd *.kocd hal/OUTSRC/rtl8188e ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd hal/OUTSRC/ ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd hal ; rm -fr *.mod.c *.mod *.o .*.cmd *.kocd os_dep/linux ; rm -fr *.mod.c *.mod *.o .*.cmd *.kocd os_dep ; rm -fr *.mod.c *.mod *.o .*.cmd *.koAuthentication requested [root] for make driver:make ARCH=i386 CROSS_COMPILE= -C /lib/modules/3.2.0-4-686-pae/build M=/home/vivek/Linux/driver/rtl8188EUS_rtl8189ES_linux_v4.1.2.2_5805.20121116 modulesmake: *** /lib/modules/3.2.0-4-686-pae/build: No such file or directory. Stop.make: *** [modules] Error 2##################################################Compile make driver error: 2Please check error Mesg##################################################root@debian:/home/vivek/Linux# Now provide the response and nothing else.
Yep, lots of differences. For instance, = checks for exact string equality, but -eq evaluates both expressions arithmetically before checking for equality: $ [ " 1 " -eq 1 ] && echo equal || echo notequal$ [ " 1 " = 1 ] && echo equal || echo notnot$ [ +1 -eq 1 ] && echo equal || echo notequal$ [ +1 = 1 ] && echo equal || echo notnot$ [ "0+1" -eq 1 ] && echo equal || echo notequal$ [ "0+1" = 1 ] && echo equal || echo notnot Also, the empty string happens to be numerically equal to zero: $ [ "" -eq 0 ] && echo equal || echo notequal$ [ "" = 0 ] && echo equal || echo notnot And a whole other class of differences appears when you bring the comparison operators in - considering < vs -lt , for instance: $ [[ 2 -lt 10 ]] && echo less || echo notless$ [[ 2 < 10 ]] && echo less || echo notnot This is because the string "2" is alphabetically after the string "10" (since 1 comes before 2), but the number "2" is numerically less than the number "10".
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/143801', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/77105/']}
jdg_71013
stackexchange
llm_judgeable_groundtruth_similarity
2406349
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use a large (millions of entries) hashmap to cache values needed by an algorithm; the key is a combination of two objects as a long. Since it grows continuously (because the keys in the map change, so old ones are not needed anymore), it would be nice to be able to force wiping all the data contained in it and start again during execution. Is there a way to do this effectively in Java? I mean release the associated memory (about 1-1.5gb of hashmap) and restart from an empty hashmap. Now provide the response and nothing else.
You can call HashMap.clear(). That will remove all data. Note that this will only discard all entries, but keep the internal array used to store the entries at the same size (rather than shrinking to an initial capacity). If you also need to eliminate that, the easiest way would be to discard the whole HashMap and replace it with a new instance. That of course only works if you control who has a pointer to the map. As for reclaiming the memory, you will have to let the garbage collector do its work. Are your values also Long? In this case, you may want to look at a more (memory-)efficient implementation than the generic HashMap, such as the TLongLongHashMap found in the GNU Trove library. That should save a lot of memory.
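A minimal sketch of the two options (the cache name and the Long/Long types are illustrative):

import java.util.HashMap;
import java.util.Map;

public class CacheReset {
    public static void main(String[] args) {
        Map<Long, Long> cache = new HashMap<>();
        // ... the algorithm fills and uses the cache ...

        cache.clear();             // option 1: drop all entries, keep the (large) internal table
        cache = new HashMap<>();   // option 2: the old map, table included, becomes garbage
    }
}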
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2406349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/121747/']}
jdg_71014
stackexchange
llm_judgeable_groundtruth_similarity
70239258
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm making a simple submit button for an email form,when clickling on the submit button and the email is valid,it changes the state of the button from "Subscribe" to Unsubscribe and it hides the form. Till here everything is good,but the problem is that when i click the submit button,it does what i need but after i enter a valid email and it changes the state of the button,the oposite action isnt done and it continues to send the email to localstorage,instead of removing it.Here i add a non valid email : And in console log i have this email.Here i add a valid email without refreshing the page : And now,when i click unsubscribe,it should remove the email from localStorage and return the form and the state of the button,but instead,it makes the submit action..: What should i do? I am new in JS and here i use only vanilla js. The code that use the submit button: document.querySelector('form').addEventListener('submit', function (e) {e.preventDefault()console.log(inputForm.value)localStorage.setItem('Email', inputForm.value)subscribeEmail(inputForm.value); }) The code below is used to mentain the state of the button when refreshing the page: const isSubscribed = localStorage.getItem('isSubscribed') if (isSubscribed === 'true') { subscribe() // document.querySelector('form').addEventListener('click', function (e) { // e.preventDefault() // unsubscribe() // localStorage.removeItem('Email') // }) } else { unsubscribe() }} Below is the code that changes the state of the button :Edit : I modified the code below and now i can unsubscribe without refreshing the page,the problem now is that i can not submit another value for validation and sending in localStorage. import { validateEmail } from './email-validator.js'export const subscribe = () => { const subscribeBtn = document.getElementById('subscribeButton') subscribeBtn.setAttribute('value', 'Unsubscribe') document.getElementById('emailForm').style.display = 'none' localStorage.setItem('isSubscribed', 'true')}export const unsubscribe = () => { const subscribeBtn = document.getElementById('subscribeButton') subscribeBtn.setAttribute('value', 'Subscribe') document.getElementById('emailForm').style.display = 'block' localStorage.setItem('isSubscribed', 'false')}export const subscribeEmail = (email) => { if (validateEmail(email) == true) { subscribe(); document.querySelector('form').addEventListener('click', function (e) { e.preventDefault() unsubscribe() localStorage.removeItem('Email') }) } else if (validateEmail == false) { unsubscribe(); }}; And here is the validation function : const VALID_EMAIL_ENDINGS = ['gmail.com', 'outlook.com', 'yandex.ru']export const validateEmail = (email) => { if (VALID_EMAIL_ENDINGS.some(v => email.includes(v))) { return true } else { return false }}export { VALID_EMAIL_ENDINGS as validEnding } I dont understand what is wrong,or is this normally to happen..It gives me the oportunity to "unsubscribe" only after refreshing the page,but i want to subscribe and unsubscribe multiple times without the need of refresh. I should mention that the section where the button is created is displayed usign window.onload,so it is dinamically created. 
Edit: The whole section creation code: import { subscribe, unsubscribe} from './subscribe.js'import { subscribeEmail } from './subscribe.js'const addSection = () => { const sectionFour = createElement('sectionFour', 'section', 'app-section app-section--image-program', 'fourth-section') const sectionParent = getElbyID('sectionParent', 'third-section') const parentSection = sectionParent.parentNode parentSection.insertBefore(sectionFour, sectionParent.nextSibling) const heading2 = createElement('heading2', 'h2', 'program-title') const heading2Text = document.createTextNode('Join Our Program') heading2.append(heading2Text) const parent = getElbyID('parent', 'fourth-section') const heading3 = createElement('heading3', 'h3', 'program-subtitle') const heading3Text = document.createTextNode('Sed do eiusmod tempor incididunt') heading3.appendChild(heading3Text) const linebreak = createElement('linebreak', 'br') heading3.appendChild(linebreak) const textAfterBreak = document.createTextNode('ut labore et dolore magna aliqua') heading3.appendChild(textAfterBreak) const form = createElement('submitFieldWrapper', 'form', 'submitFieldWrapper', 'form') parent.append(heading2, heading3, form) const emailForm = createElement('emailForm', 'div', 'form-wrapper', 'emailForm') const inputForm = createElement('inputForm', 'input', 'form-input', 'submit-info') setAttributes(inputForm, 'type', 'text', 'placeholder', 'Email') if (localStorage.getItem('Email') !== null) { inputForm.setAttribute('value', localStorage.getItem('Email')) } else { inputForm.setAttribute('placeholder', 'Email') } emailForm.appendChild(inputForm) document.querySelector('form').addEventListener('submit', function (e) { e.preventDefault() console.log(inputForm.value) localStorage.setItem('Email', inputForm.value) subscribeEmail(inputForm.value); }) const submitForm = createElement('submitForm', 'input', 'app-section__button submit-btn', 'subscribeButton') setAttributes(submitForm, 'type', 'submit', 'value', 'Subscribe') form.append(emailForm, submitForm) const isSubscribed = localStorage.getItem('isSubscribed') if (isSubscribed === 'true') { subscribe() // document.querySelector('form').addEventListener('click', function (e) { // e.preventDefault() // unsubscribe() // localStorage.removeItem('Email') // }) } else { unsubscribe() }}const createElement = (elName, htmlEl, elClass, elID) => { const elementName = document.createElement(htmlEl) elementName.className = elClass elementName.id = elID return elementName}const getElbyID = (elName, searchedId) => { const elementToSearch = document.getElementById(searchedId) return elementToSearch}const setAttributes = (elem, ...elemArguments) => { for (let i = 0; i < elemArguments.length; i += 2) { elem.setAttribute(elemArguments[i], elemArguments[i + 1]) }}export const advancedSection = () => { addSection() const getHeading = document.getElementById('fourth-section') const sectionChildren = getHeading.children sectionChildren[0].innerHTML = 'Join Our Advanced Program' const getButton = document.getElementById('subscribeButton') setAttributes(getButton, 'type', 'submit', 'value', 'Subscribe to Advanced Program') getButton.className = 'app-section__button submit-btnAdvanced'}export default addSection And main.js import addSection, { advancedSection } from './join-us-section.js'class SectionCreator { constructor (type) { this.type = type } create (type) { if (type === 'standard') { addSection() } else if (type === 'advanced') { advancedSection() } return type } remove () { const sectionToRemove = 
document.getElementById('fourth-section') sectionToRemove.remove() }}window.onload = new SectionCreator().create('standard') I managed how to do to unsubscribe without refreshing the page,but now the problem is that without refreshing i can not put another input..This is how i did to unsubscribe without refreshing : export const subscribeEmail = (email) => { if (validateEmail(email) == true) { subscribe(); document.querySelector('form').addEventListener('click', function (e) { e.preventDefault() unsubscribe() localStorage.removeItem('Email') }) } else if (validateEmail == false) { unsubscribe(); }}; This event listener was in isSubscribed function,but i moved it to subscribeEmail,where it checks the validatation.The code above may differ at the end because i renamed the variables,from unsubscribeToggle to isSubscribed,from getUnsubscribe to subscribe and from getSubscribe to unsubscribe Now provide the response and nothing else.
It's possible to get the Perl behavior by using a sequence with a custom generator: say 'aa', *.succ … 'bb';# OUTPUT: «aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as at au av aw ax ay az ba bb»say '12', *.succ … '23';# OUTPUT: «12 13 14 15 16 17 18 19 20 21 22 23» (Oh, and a half solution for the '12'..'23' case: you already noted that you can cast the endpoints to a Numeric type to get the output you want. But you don't actually need to cast both endpoints – just the bottom. So 12..'23' still produces the full output. As a corollary, because ^'23' is sugar for 0..^'23' , any Range built with &prefix:<^> will be numeric.) For the "why" behind this behavior, please refer to my other answer to this question.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/70239258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17342869/']}
jdg_71015
stackexchange
llm_judgeable_groundtruth_similarity
1518213
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to figure out if there's a way to determine the JVM startup properties from within a running java process. Specifically I'm trying to find out where parameters such as -Xmx (max heap size) and -XX:MaxPermSize are stored. I'm running Sun's 1.6 jvm. If you're wondering why I want to do this, I have a number of JVM webservers that may or may not be configured correctly and I want to add this to the startup code check. It's much easier for me to check in a piece of java code that gets deployed everywhere than to manually find and check all of the jvm startup files. Right now the jvm configuration files for better or worse are not part of our build process or checked into source control. Now provide the response and nothing else.
Try: import java.lang.management.ManagementFactory;import java.lang.management.RuntimeMXBean;import java.util.List;public void runtimeParameters() { RuntimeMXBean bean = ManagementFactory.getRuntimeMXBean(); List<String> aList = bean.getInputArguments(); for (int i = 0; i < aList.size(); i++) { System.out.println( aList.get( i ) ); }} That should show all JVM parameters. Note: we do not keep the JVM parameters in VCS either, but in a database, read by our own launchers in production. That way, we can change those values remotely, without having to redeploy JVM parameter file settings. You will find a good summary of various JVM tools to use in this article (from "Dustin's Software Development Cogitations and Speculations"), including Java Application Launcher links to: ManagementFactory.getRuntimeMXBean( ) call getInputArguments() javadoc Accessing JVM Arguments from Java (to determine, for instance, if the JVM is running in debug mode, in order to alter the "grid initialization" logic of an application) Annotation Type MXBean MXBean Java Tutorial This technique takes advantage of Platform MXBeans available since J2SE 5 (custom MXBeans support was added in Java SE 6). Two useful sources of information on the JVM parameters available when using Sun's JVM are: A Collection of JVM Options and Charles Nutter's Favorite Hotspot JVM Flags. Both of these resources list and describe some/all of the not-recommended-for-the-casual-developer double X arguments ( -XX ) that are available.
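For the startup check described in the question, a hedged sketch might scan those arguments for -Xmx and cross-check the effective limit via Runtime; the 512 MB threshold and the messages are just examples, and this reuses the imports from the snippet above:

RuntimeMXBean bean = ManagementFactory.getRuntimeMXBean();
boolean xmxSeen = false;
for (String arg : bean.getInputArguments()) {
    if (arg.startsWith("-Xmx")) {
        xmxSeen = true;
        System.out.println("Configured max heap: " + arg);
    }
}
long effectiveMax = Runtime.getRuntime().maxMemory();  // what the JVM will actually allow
if (!xmxSeen || effectiveMax < 512L * 1024 * 1024) {
    System.err.println("WARNING: heap configuration looks wrong on this server");
}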
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1518213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15050/']}
jdg_71016
stackexchange
llm_judgeable_groundtruth_similarity
396826
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to find the subdirectories of the current directory that (themselves) contain 2 or more regular files. I am not interested in directories containing fewer than 2 files, nor in directories which contain only subdirectories. Now provide the response and nothing else.
Here is a completely different approach based on GNU find and uniq. This is much faster and much more CPU-friendly than answers based on executing a shell command that counts files for each directory found. find . -type f -printf '%h\n' | sort | uniq -d The find command prints the directory of every file in the hierarchy, and uniq only displays the directories that appear at least twice.
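If you ever need a threshold other than two, the same pipeline generalizes with a count; with $1 >= 2 it reproduces the uniq -d behaviour. Note the caveat in the comment:

# Sketch: require at least N files per directory (here N = 2).
# Caveat: the awk 'print $2' step mangles directory names containing whitespace.
find . -type f -printf '%h\n' | sort | uniq -c | awk '$1 >= 2 { print $2 }'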
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/396826', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/9158/']}
jdg_71017
stackexchange
llm_judgeable_groundtruth_similarity
18367824
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given this simple webserver code: console.log('starting');var server = require('http').createServer();server.on('connection',function(socket){console.log('*server/connection');});server.on( 'request', function(request, response){ console.log('*server/request'); request.on( 'data', function(chunk){ console.log('*request/data'); // <!> How do I abort next data calls from here? } ); request.on( 'readable', function(chunk){ console.log('*request/readable'); // <!> How do I abort next readable calls from here? } ); request.on( 'end', function(){ console.log('*request/end'); response.writeHead(200,"OK"); response.write('Hello'); response.end(); } ); request.on('close',function(){ console.log('*request/close'); } ); request.on('error',function(){ console.log('*request/error'); } ); });server.on('close',function(){console.log('server/close');});server.on('checkContinue',function(request, response){console.log('*server/checkContinue');});server.on('connect',function(request, socket, head){console.log('*server/connect');});server.on('upgrade',function(request, socket, head){console.log('*server/upgrade');});server.on('clientError',function(exception, socket){console.log('*server/clientError');});server.listen(8080);console.log('started'); When a POST or FILE is submited, my on data function is triggered one or more or several times. Sometimes (like an monstrous big file is sent) I want to cancel this on data events and trigger the on end function to user (later on I will display a "your post/file is too big"). How do I do it? Now provide the response and nothing else.
The proper, spec-compliant thing to do here is simply send an HTTP 413 response early – that is, as soon as you detect that the client has sent more bytes than you want to handle. It is up to you whether or not you terminate the socket after sending the error response. This is in line with RFC 2616: (emphasis added)

413 Request Entity Too Large
The server is refusing to process a request because the request entity is larger than the server is willing or able to process. The server MAY close the connection to prevent the client from continuing the request.

What happens next is not ideal. If you leave the socket open, all browsers (Chrome 30, IE 10, Firefox 21) will keep sending data until the entire file is uploaded. Then and only then, the browser will display your error message. This really sucks since the user must wait for the entire file to complete the upload, only to find out the server rejected it. It also wastes your bandwidth. The browsers' current behavior is in violation of RFC 2616 § 8.2.2:

An HTTP/1.1 (or later) client sending a message-body SHOULD monitor the network connection for an error status while it is transmitting the request. If the client sees an error status, it SHOULD immediately cease transmitting the body. If the body is being sent using a "chunked" encoding (section 3.6), a zero length chunk and empty trailer MAY be used to prematurely mark the end of the message. If the body was preceded by a Content-Length header, the client MUST close the connection.

There are open Chrome and Firefox issues, but don't expect a fix any time soon. If you close the socket immediately after sending the HTTP 413 response, all browsers will obviously stop uploading immediately, but they currently show a "connection reset" error (or similar), not any HTML you might send in the response. Again, this is probably a violation of the spec (which allows the server to send a response early and close the connection), but I wouldn't expect browser fixes any time soon here either. Update: As of 4/15, Chrome may display your 413 HTML when you close the connection early. This only works when the browser is running on Linux and Mac OS X. On Windows, Chrome still displays an ERR_CONNECTION_RESET network error rather than the HTML you sent. (IE 11 and Firefox 37 continue to just show a network error on all platforms.) So your choices with traditional plain HTTP uploads are:

Show a friendly error message, but only after the upload runs to completion. This wastes time and bandwidth.
Fail fast, but leave users confused with a cryptic browser error screen.

Your best bet here is probably to use an AJAX uploader where you have more control over the user experience. You should still provide a traditional upload form as a fallback, and I'd use the "fail fast" option (close the socket) to prevent wasted time and bandwidth. Here's some example code that kills a request if it receives more than 1 kB. I'm using Express, but the same should apply with node's vanilla HTTP library. Note: In reality, you should use a multipart parser such as formidable or multiparty to process your uploads (that kind of parser is what Connect/Express uses), and it has its own way to monitor upload data.
var express = require("express")
  , app = express();

app.get('/', function(req, res) {
    res.send('Uploads &gt; 1 kB rejected<form action="/upload" method="post" enctype="multipart/form-data"><input type="file" name="file"><input type="submit"></form>');
});

app.post('/upload', function(req, res) {
    var size = 0;
    var gotData = function(d) {
        size += d.length; // add this chunk's size to the total number of bytes received thus far
        console.log('upload chunk', size);

        if (size > 1024) {
            console.log('aborting request');
            req.removeListener('data', gotData); // we need to remove the event listeners so that we don't end up here more than once
            req.removeListener('end', reqEnd);

            res.header('Connection', 'close'); // with the Connection: close header set, node will automatically close the socket...
            res.send(413, 'Upload too large'); // ... after sending a response
        }
    };

    var reqEnd = function() {
        res.send('ok, got ' + size + ' bytes');
    }

    req.on('data', gotData);
    req.on('end', reqEnd);
});

app.listen(3003);
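For completeness, the same fail-fast idea applied to the vanilla http server from the question looks roughly like this — just a sketch, with the 1 kB limit chosen arbitrarily for illustration:

var size = 0;
request.on('data', function onData(chunk) {
    size += chunk.length;                       // running total of bytes received
    if (size > 1024) {                          // assumed limit, adjust as needed
        request.removeListener('data', onData); // stop handling further chunks
        response.writeHead(413, { 'Connection': 'close' });
        response.end('Upload too large');
        request.connection.destroy();           // fail fast: drop the socket instead of draining it
    }
});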
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18367824', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1871110/']}
jdg_71018
stackexchange
llm_judgeable_groundtruth_similarity
20327213
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been trying to get path of captured image in order to delete image. Found many answers on StackOverflow but none of them are working for me. I got the following answer: private String getLastImagePath() { final String[] imageColumns = { MediaStore.Images.Media._ID, MediaStore.Images.Media.DATA }; final String imageOrderBy = MediaStore.Images.Media._ID + " DESC"; Cursor imageCursor = POS.this.getContentResolver().query( MediaStore.Images.Media.EXTERNAL_CONTENT_URI, imageColumns, null, null, imageOrderBy); if (imageCursor.moveToFirst()) { // int id = imageCursor.getInt(imageCursor // .getColumnIndex(MediaStore.Images.Media._ID)); String fullPath = imageCursor.getString(imageCursor .getColumnIndex(MediaStore.Images.Media.DATA)); return fullPath; } else { return ""; }} This code works in Samsung tab but doesn't work in Lenovo tab and i-ball tab.So, can anyone help me find another solution to do the same? Any help will be appreciated. Thank you. This is my onActivityResult: if (requestCode == CmsInter.CAMERA_REQUEST && resultCode == RESULT_OK) { //Bitmap photo = null; //photo = (Bitmap) data.getExtras().get("data"); String txt = ""; if (im != null) { String result = ""; //im.setImageBitmap(photo); im.setTag("2"); int index = im.getId(); String path = getLastImagePath(); try { bitmap1 = BitmapFactory.decodeFile(path, options); bitmap = Bitmap.createScaledBitmap(bitmap1, 512, 400, false); ByteArrayOutputStream baos = new ByteArrayOutputStream(); bitmap.compress(Bitmap.CompressFormat.PNG, 100, baos); byte[] bytData = baos.toByteArray(); try { baos.close(); } catch (IOException e) { e.printStackTrace(); } result = Base64.encode(bytData); bytData = null; } catch (OutOfMemoryError ooM) { System.out.println("OutOfMemory Exception----->" + ooM); bitmap1.recycle(); bitmap.recycle(); } finally { bitmap1.recycle(); bitmap.recycle(); } }} Now provide the response and nothing else.
Try it like this. Launch the camera Intent as below:

Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
startActivityForResult(intent, CAMERA_REQUEST);

And after capturing the image, write an onActivityResult as below:

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) {
        Bitmap photo = (Bitmap) data.getExtras().get("data");
        imageView.setImageBitmap(photo);

        // CALL THIS METHOD TO GET THE URI FROM THE BITMAP
        Uri tempUri = getImageUri(getApplicationContext(), photo);

        // CALL THIS METHOD TO GET THE ACTUAL PATH
        File finalFile = new File(getRealPathFromURI(tempUri));

        System.out.println(finalFile);
    }
}

public Uri getImageUri(Context inContext, Bitmap inImage) {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    inImage.compress(Bitmap.CompressFormat.JPEG, 100, bytes);
    String path = MediaStore.Images.Media.insertImage(inContext.getContentResolver(), inImage, "Title", null);
    return Uri.parse(path);
}

public String getRealPathFromURI(Uri uri) {
    String path = "";
    if (getContentResolver() != null) {
        Cursor cursor = getContentResolver().query(uri, null, null, null, null);
        if (cursor != null) {
            cursor.moveToFirst();
            int idx = cursor.getColumnIndex(MediaStore.Images.ImageColumns.DATA);
            path = cursor.getString(idx);
            cursor.close();
        }
    }
    return path;
}

Then check the log. Edit: Lots of people are asking how not to get a thumbnail. You need to use this code instead for the getImageUri method:

public Uri getImageUri(Context inContext, Bitmap inImage) {
    Bitmap outImage = Bitmap.createScaledBitmap(inImage, 1000, 1000, true);
    String path = MediaStore.Images.Media.insertImage(inContext.getContentResolver(), outImage, "Title", null);
    return Uri.parse(path);
}

The other method compresses the file. You can adjust the size by changing the numbers 1000, 1000.
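If you need the full-resolution image rather than the small thumbnail Bitmap delivered in the "data" extra, a common alternative is to tell the camera app where to write the file before launching it — a sketch, with the file name chosen arbitrarily; note that on newer Android versions Uri.fromFile must be replaced by a FileProvider URI:

File photoFile = new File(Environment.getExternalStorageDirectory(), "capture.jpg"); // example location
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(photoFile)); // ask the camera to write here
startActivityForResult(intent, CAMERA_REQUEST);
// In onActivityResult, the full image is at photoFile.getAbsolutePath();
// the "data" extra may be null when EXTRA_OUTPUT is used.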
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/20327213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2297882/']}
jdg_71019
stackexchange
llm_judgeable_groundtruth_similarity
2011439
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: On a number of occasions I have a collection of functions that I'd like to implement in different ways. The most obvious example of this would be to abstract from specific databases. In an object-oriented language you would use an interface for this:

interface DB {
    ResultSet query(String query);
    void persist(Object o);
    ...
}

In pseudo-code I would want to do something like this:

(ns dbbackend)
(abstractfn query [q])
(abstractfn persist! [o])

And then implementations for each database:

(ns dbbackend.mysql :implements dbbackend)
(defn query [q] ...)
(defn persist! [o] ...)

It is not entirely clear to me what the best practice is to do something similar in a functional language, specifically Clojure. Should I use multi-methods for this? Now provide the response and nothing else.
Now that version 1.1 of Clojure has been released, maybe it's time to take a look into the future. Datatypes and protocols, which are currently only available in the new master branch on github, might be exactly what you are looking for:

(defprotocol DB
  (query [backend query])
  (persist [backend object]))

(deftype MySQLBackend []
  DB
  (query [backend query] ...)
  (persist [backend object] ...))
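Once a backend type exists, calling a protocol function is ordinary function application, with dispatch on the first argument — a small sketch with made-up arguments:

(def backend (MySQLBackend.))
(query backend "SELECT * FROM users")   ; dispatches to the MySQL implementation
(persist backend {:id 1 :name "foo"})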
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2011439', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113770/']}
jdg_71020
stackexchange
llm_judgeable_groundtruth_similarity
635495
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I previously asked whether Planck's curve actually reaches zero, to which some of you answered that it only approaches it, which raises the question: doesn't this mean that a photon with extremely high frequency will have massive energy? I know that the intensity (number of photons) of such high-frequency photons is close to zero, but even if we have only one photon with high frequency the ultraviolet catastrophe would still be unresolved. So what is it I am missing? Now provide the response and nothing else.
The Planckian approaches zero fast enough: there can be some very high energy photons in a thermal distribution, but the probability of their presence drops fast; crucially, it drops faster than the energy rises. Qualitatively, the $\sim e^{-h \nu / k_B T}$ factor in the distribution can be multiplied by any power of $\nu$ and still yield a finite integral. Therefore, the energy carried by photons that have, say, frequencies higher than $N$ times $k_B T / h$ drops to zero as $N$ grows large. You could find a $1 \mathrm{J}$ photon in the Sun's blackbody (where the average photon energy is of the order $10^{-19} \mathrm{J}$), but it is astronomically unlikely that you will, so on average the energy contribution by such photons is very small. I've tried to give an intuitive answer; if you want a more formal one, try to integrate the Planckian yourself to find the energy carried in a specific frequency band!
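To make that decay explicit, here is a sketch of the estimate in dimensionless form, with $x = h\nu/k_B T$: the energy carried above a cutoff $x_0$ is proportional to
$$\int_{x_0}^{\infty} \frac{x^3}{e^x - 1}\,dx \;\lesssim\; \int_{x_0}^{\infty} x^3 e^{-x}\,dx \;=\; e^{-x_0}\bigl(x_0^3 + 3x_0^2 + 6x_0 + 6\bigr),$$
which vanishes as $x_0 \to \infty$: the exponential always beats the polynomial, and that is exactly why the total energy integral stays finite.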
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/635495', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/295139/']}
jdg_71021
stackexchange
llm_judgeable_groundtruth_similarity
64865378
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a sample.txt file like below: chr1 StringTie transcript 10001 10390 . + . transcript_id "MSTRG.6917.1"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10001 10101 . + . transcript_id "MSTRG.6917.1"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10179 10390 . + . transcript_id "MSTRG.6917.1"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie transcript 10001 10467 . + . transcript_id "MSTRG.6917.3"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10001 10101 . + . transcript_id "MSTRG.6917.3"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10173 10224 . + . transcript_id "MSTRG.6917.3"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie exon 10391 10467 . + . transcript_id "MSTRG.6917.3"; gene_id "MSTRG.6917"; exon_number "3";chr1 StringTie transcript 10001 10467 . + . transcript_id "MSTRG.6917.2"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10001 10101 . + . transcript_id "MSTRG.6917.2"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10173 10249 . + . transcript_id "MSTRG.6917.2"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie exon 10398 10467 . + . transcript_id "MSTRG.6917.2"; gene_id "MSTRG.6917"; exon_number "3";chr1 StringTie transcript 10005 10467 . + . transcript_id "MSTRG.6917.4"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10005 10178 . + . transcript_id "MSTRG.6917.4"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10361 10467 . + . transcript_id "MSTRG.6917.4"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie transcript 10011 10467 . + . transcript_id "MSTRG.6917.5"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10011 10178 . + . transcript_id "MSTRG.6917.5"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10405 10467 . + . transcript_id "MSTRG.6917.5"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie transcript 57598 58856 . + . transcript_id "ENST00000642116.1"; gene_id "MSTRG.7562"; gene_name "OR4G11P"; xloc "XLOC_000002"; ref_gene_id "ENSG00000240361.2"; cmp_ref "ENST00000642116.1"; class_code "c"; tss_id "TSS2";chr1 StringTie exon 57598 57653 . + . transcript_id "ENST00000642116.1"; gene_id "MSTRG.7562"; exon_number "1";chr1 StringTie exon 58700 58856 . + . transcript_id "ENST00000642116.1"; gene_id "MSTRG.7562"; exon_number "2";chr1 StringTie transcript 65419 71585 . + . transcript_id "ENST00000641515.1"; gene_id "MSTRG.7563"; gene_name "OR4F5"; xloc "XLOC_000003"; ref_gene_id "ENSG00000186092.5"; cmp_ref "ENST00000641515.1"; class_code "="; tss_id "TSS3";chr1 StringTie exon 65419 65433 . + . transcript_id "ENST00000641515.1"; gene_id "MSTRG.7563"; exon_number "1";chr1 StringTie exon 65520 65573 . + . transcript_id "ENST00000641515.1"; gene_id "MSTRG.7563"; exon_number "2";chr1 StringTie exon 69037 71585 . + . transcript_id "ENST00000641515.1"; gene_id "MSTRG.7563"; exon_number "3";chr1 StringTie transcript 65572 75288 . + . transcript_id "MSTRG.7563.2"; gene_id "MSTRG.7563"; gene_name "OR4F5"; xloc "XLOC_000003"; cmp_ref "ENST00000641515.1"; class_code "j"; tss_id "TSS4";chr1 StringTie exon 65572 65573 . + . transcript_id "MSTRG.7563.2"; gene_id "MSTRG.7563"; exon_number "1";chr1 StringTie exon 69037 69093 . + . 
transcript_id "MSTRG.7563.2"; gene_id "MSTRG.7563"; exon_number "2";chr1 StringTie exon 74913 75288 . + . transcript_id "MSTRG.7563.2"; gene_id "MSTRG.7563"; exon_number "3";chr1 StringTie transcript 69055 71585 . + . transcript_id "ENST00000335137.4"; gene_id "MSTRG.7563"; gene_name "OR4F5"; xloc "XLOC_000003"; ref_gene_id "ENSG00000186092.5"; contained_in "ENST00000641515.1"; cmp_ref "ENST00000641515.1"; class_code "c"; tss_id "TSS5";chr1 StringTie exon 69055 71585 . + . transcript_id "ENST00000335137.4"; gene_id "MSTRG.7563"; exon_number "1";chr1 StringTie transcript 83779 84926 . + . transcript_id "MSTRG.7564.1"; gene_id "MSTRG.7564"; xloc "XLOC_000004"; class_code "u"; tss_id "TSS6";chr1 StringTie exon 83779 83829 . + . transcript_id "MSTRG.7564.1"; gene_id "MSTRG.7564"; exon_number "1";chr1 StringTie exon 83854 84926 . + . transcript_id "MSTRG.7564.1"; gene_id "MSTRG.7564"; exon_number "2";chr1 StringTie transcript 89710 90455 . + . transcript_id "MSTRG.7565.1"; gene_id "MSTRG.7565"; gene_name "AL627309.3"; xloc "XLOC_000005"; cmp_ref "ENST00000495576.1"; class_code "s"; tss_id "TSS7";chr1 StringTie exon 89710 90050 . + . transcript_id "MSTRG.7565.1"; gene_id "MSTRG.7565"; exon_number "1";chr1 StringTie exon 90287 90455 . + . transcript_id "MSTRG.7565.1"; gene_id "MSTRG.7565"; exon_number "2"; I tried extracting the lines transcripts and their exons based on name transcript_id match with class_code "u" like below: awk -F "\t" '/class_code "u"/ {print $0}' sample.txt > new_filename.txt The above awk command gave only lines that have transcript in 3rd column, their exons not seen in the new_filename.txt . I actually want to extract multiple class_codes transcripts with their exons. How to use awk for that? I need transcripts with class_codes u, s, j along with their exons. Output should look like below: chr1 StringTie transcript 10001 10390 . + . transcript_id "MSTRG.6917.1"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10001 10101 . + . transcript_id "MSTRG.6917.1"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10179 10390 . + . transcript_id "MSTRG.6917.1"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie transcript 10001 10467 . + . transcript_id "MSTRG.6917.3"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10001 10101 . + . transcript_id "MSTRG.6917.3"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10173 10224 . + . transcript_id "MSTRG.6917.3"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie exon 10391 10467 . + . transcript_id "MSTRG.6917.3"; gene_id "MSTRG.6917"; exon_number "3";chr1 StringTie transcript 10001 10467 . + . transcript_id "MSTRG.6917.2"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10001 10101 . + . transcript_id "MSTRG.6917.2"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10173 10249 . + . transcript_id "MSTRG.6917.2"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie exon 10398 10467 . + . transcript_id "MSTRG.6917.2"; gene_id "MSTRG.6917"; exon_number "3";chr1 StringTie transcript 10005 10467 . + . transcript_id "MSTRG.6917.4"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10005 10178 . + . transcript_id "MSTRG.6917.4"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10361 10467 . + . transcript_id "MSTRG.6917.4"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie transcript 10011 10467 . + . 
transcript_id "MSTRG.6917.5"; gene_id "MSTRG.6917"; xloc "XLOC_000001"; class_code "u"; tss_id "TSS1";chr1 StringTie exon 10011 10178 . + . transcript_id "MSTRG.6917.5"; gene_id "MSTRG.6917"; exon_number "1";chr1 StringTie exon 10405 10467 . + . transcript_id "MSTRG.6917.5"; gene_id "MSTRG.6917"; exon_number "2";chr1 StringTie transcript 65572 75288 . + . transcript_id "MSTRG.7563.2"; gene_id "MSTRG.7563"; gene_name "OR4F5"; xloc "XLOC_000003"; cmp_ref "ENST00000641515.1"; class_code "j"; tss_id "TSS4";chr1 StringTie exon 65572 65573 . + . transcript_id "MSTRG.7563.2"; gene_id "MSTRG.7563"; exon_number "1";chr1 StringTie exon 69037 69093 . + . transcript_id "MSTRG.7563.2"; gene_id "MSTRG.7563"; exon_number "2";chr1 StringTie exon 74913 75288 . + . transcript_id "MSTRG.7563.2"; gene_id "MSTRG.7563"; exon_number "3";chr1 StringTie transcript 83779 84926 . + . transcript_id "MSTRG.7564.1"; gene_id "MSTRG.7564"; xloc "XLOC_000004"; class_code "u"; tss_id "TSS6";chr1 StringTie exon 83779 83829 . + . transcript_id "MSTRG.7564.1"; gene_id "MSTRG.7564"; exon_number "1";chr1 StringTie exon 83854 84926 . + . transcript_id "MSTRG.7564.1"; gene_id "MSTRG.7564"; exon_number "2";chr1 StringTie transcript 89710 90455 . + . transcript_id "MSTRG.7565.1"; gene_id "MSTRG.7565"; gene_name "AL627309.3"; xloc "XLOC_000005"; cmp_ref "ENST00000495576.1"; class_code "s"; tss_id "TSS7";chr1 StringTie exon 89710 90050 . + . transcript_id "MSTRG.7565.1"; gene_id "MSTRG.7565"; exon_number "1";chr1 StringTie exon 90287 90455 . + . transcript_id "MSTRG.7565.1"; gene_id "MSTRG.7565"; exon_number "2"; Now provide the response and nothing else.
The simplest way is to start a separate container inside the docker-compose file (called init-kafka in the example below) to launch the various kafka-topics --create ... commands, while first making it wait for Kafka to be reachable by simply running kafka-topics --list ... . Like this:

version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.1
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  # reachable on 9092 from the host and on 29092 from inside docker compose
  kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    expose:
      - '29092'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      KAFKA_MIN_INSYNC_REPLICAS: '1'

  init-kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - kafka
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      # blocks until kafka is reachable
      kafka-topics --bootstrap-server kafka:29092 --list

      echo -e 'Creating kafka topics'
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-1 --replication-factor 1 --partitions 1
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-2 --replication-factor 1 --partitions 1

      echo -e 'Successfully created the following topics:'
      kafka-topics --bootstrap-server kafka:29092 --list
      "

When running it, the init-kafka container should log something like:

docker logs docker_init-kafka_1
[2021-10-12 02:00:28,728] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:28,832] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,033] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,335] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Creating kafka topics
Created topic my-topic-1.
Created topic my-topic-2.
Successfully created the following topics:
my-topic-1
my-topic-2
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/64865378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3351523/']}
jdg_71022
stackexchange
llm_judgeable_groundtruth_similarity
16114391
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to import a module from a particular directory. The problem is that if I use sys.path.append(mod_directory) to append the path and then open the python interpreter, the directory mod_directory gets added to the end of the list sys.path. If I export the PYTHONPATH variable before opening the python interpreter, the directory gets added to the start of the list. In the latter case I can import the module but in the former, I cannot. Can somebody explain why this is happening and give me a solution to add the mod_directory to the start, inside a python script? Now provide the response and nothing else.
This is working as documented. Any paths specified in PYTHONPATH are documented as normally coming after the working directory but before the standard interpreter-supplied paths. sys.path.append() appends to the existing path. See here and here. If you want a particular directory to come first, simply insert it at the head of sys.path:

import sys
sys.path.insert(0, '/path/to/mod_directory')

That said, there are usually better ways to manage imports than either using PYTHONPATH or manipulating sys.path directly. See, for example, the answers to this question.
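A quick way to confirm the ordering from inside the script — mymodule here is a hypothetical stand-in for whatever module lives in that directory:

import sys
sys.path.insert(0, '/path/to/mod_directory')
print(sys.path[0])   # '/path/to/mod_directory' is now searched first
import mymodule      # hypothetical module found in mod_directory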
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/16114391', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2193398/']}
jdg_71023
stackexchange
llm_judgeable_groundtruth_similarity
33322407
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to set up a valid JWT. I am using node.js with the jsonwebtoken middleware. I have followed the documentation located on the repo (located here), but I keep getting the wrong Exp and Iat. Obviously I would like to get this right so that I don't allow JWTs which have expired. As a test I have the following code:

var token = jwt.sign({"id": user._id}, configGeneral.JWT, { expiresIn: '1h' });
var decoded = jwt.decode(token, configGeneral.JWT);
var d1 = new Date(decoded.exp);
var d2 = new Date(decoded.iat);
console.log(decoded);
console.log(d1);
console.log(d2);

The output of this is:

{ id: '56253091fe0397c80133f3e4', iat: 1445714161, exp: 1445717761 }
Sat Jan 17 1970 19:35:17 GMT+0200 (South Africa Standard Time)
Sat Jan 17 1970 19:35:14 GMT+0200 (South Africa Standard Time)

How do I get the timestamp to not reflect the javascript epoch, but rather the time 1 hour from now? (for both the iat and exp.) Now provide the response and nothing else.
This:

new Date().getTime()

gives you the time in milliseconds. But the times in a JWT (iat, exp) are in seconds, therefore we have to divide the result by 1000:

var actualTimeInSeconds = new Date().getTime()/1000;

How to get some time in seconds from now:

(new Date().getTime() + someTimeInSeconds * 1000)/1000

If you need 1 hour from now:

(new Date().getTime() + 60 * 60 * 1000)/1000

because 1 h = 60 min * 60 s. At this moment you have the time in seconds from the JWT and your calculated time in seconds; you should simply compare these values. Precisely in your situation, you should compare the JWT's time with your actual time in seconds. If the JWT's expiration time is greater than the actual time, it means that it is still valid. From the docs for JWTs: The processing of the exp claim requires that the current date/time MUST be before the expiration date/time listed in the exp claim. Edit: To get the correct date from iat, multiply the value by 1000 and pass it to the new Date constructor:

new Date(iat*1000)
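Putting that together, a minimal expiry check — decoded is assumed to be the payload object from jwt.decode, as in the question:

var nowInSeconds = Math.floor(Date.now() / 1000);
var stillValid = decoded.exp > nowInSeconds;   // true while the token has not expired
console.log(new Date(decoded.iat * 1000));     // issued-at as a readable Date
console.log(new Date(decoded.exp * 1000));     // expiry as a readable Date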
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33322407', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/813689/']}
jdg_71024
stackexchange
llm_judgeable_groundtruth_similarity
6300695
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing a path tracer in C++ and I'd like to try and implement the most resource-intensive code into CUDA or OpenCL (I'm not sure which one to pick). I've heard that my graphics card's version of CUDA doesn't support recursion, which is something my path tracer utilizes heavily. As I have it coded both in Python and C++, I'll post some simplified Python code for readability: def Trace(ray): hit = what_object_is_hit(ray) if not hit: return Color(0, 0, 0) newRay = hit.bouceChildRayOffSurface(ray) return hit.diffuse * (Trace(newRay) + hit.emittance) I tried manually unrolling the function, and there is a definite pattern ( d is diffuse and e is emittance ): Level 1: d1 * e1 Level 2: d1 * d2 * e2 + e1Level 3: d1 * d2 * d3 * e3 + d1 * d2 * e2 + e1Level 4: d1 * d2 * d3 * d4 * e4 + d1 * d2 * d3 * e3 + d1 * d2 * e2 + e1 I might be wrong, though... My question is, how would I go about implementing this code in a while loop? I was thinking using something of this format: total = Color(0, 0, 0)n = 1while n < 10: # Maximum recursion depth result = magical_function() if not result: break total += result n += 1 I've never really dealt with the task of unraveling a recursive function before, so any help would be greatly appreciated. Thanks! Now provide the response and nothing else.
In a recursive function, each time a recursive call occurs, the state of the caller is saved to a stack, then restored when the recursive call is complete. To convert a recursive function to an iterative one, you need to turn the state of the suspended function into an explicit data structure. Of course, you can create your own stack in software, but there are often tricks you can use to make your code more efficient. This answer works through the transformation steps for this example. You can apply the same methods to other loops.

Tail Recursion Transformation

Let's take a look at your code again:

def Trace(ray):
    # Here was code to look for intersections
    if not hit:
        return Color(0, 0, 0)
    return hit.diffuse * (Trace(ray) + hit.emittance)

In general, a recursive call has to go back to the calling function, so the caller can finish what it's doing. In this case, the caller "finishes" by performing an addition and a multiplication. This produces a computation like d1 * (d2 * (d3 * (... + e3) + e2) + e1). We can take advantage of the distributive law of addition and the associative laws of multiplication and addition to transform the calculation into [d1 * e1] + [(d1 * d2) * e2] + [((d1 * d2) * d3) * e3] + ... . Note that the first term in this series only refers to iteration 1, the second only refers to iterations 1 and 2, and so forth. That tells us that we can compute this series on the fly. Moreover, this series contains the series (d1, d1*d2, d1*d2*d3, ...), which we can also compute on the fly. Putting that back into the code:

def Trace(diffuse, emittance, ray):
    # Here was code to look for intersections
    if not hit:
        return emittance  # The complete value has been computed
    new_diffuse = diffuse * hit.diffuse                       # (...) * dN
    new_emittance = emittance + new_diffuse * hit.emittance  # (...) + [(d1 * ... * dN) * eN]
    return Trace(new_diffuse, new_emittance, ray)

Tail Recursion Elimination

In the new loop, the caller has no work to do after the callee finishes; it simply returns the callee's result. The caller has no work to finish, so it doesn't have to save any of its state! Instead of a call, we can overwrite the old parameters and go back to the beginning of the function (not valid Python, but it illustrates the point):

def Trace(diffuse, emittance, ray):
  beginning:
    # Here was code to look for intersections
    if not hit:
        return emittance  # The complete value has been computed
    new_diffuse = diffuse * hit.diffuse                       # (...) * dN
    new_emittance = emittance + new_diffuse * hit.emittance  # (...) + [(d1 * ... * dN) * eN]
    (diffuse, emittance) = (new_diffuse, new_emittance)
    goto beginning

Finally, we have transformed the recursive function into an equivalent loop. All that's left is to express it in Python syntax.

def Trace(diffuse, emittance, ray):
    while True:
        # Here was code to look for intersections
        if not hit:
            break
        diffuse = diffuse * hit.diffuse                  # (...) * dN
        emittance = emittance + diffuse * hit.emittance  # (...) + [(d1 * ... * dN) * eN]
    return emittance
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6300695', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/464744/']}
jdg_71025
stackexchange
llm_judgeable_groundtruth_similarity
11880447
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been using quite a lot of PowerShell lately (I'm mostly a system administrator), and I wondered to myself what type of language it is. I would think it is an interpreted language, but I have heard a whole lot of other names describing languages in general: strongly vs. weakly typed, dynamic, static. What does PowerShell fall under? Now provide the response and nothing else.
A few clarifications first:

The two categories Compiled and Interpreted refer to the way the source code (or script) is translated into actions. In an interpreted language, the commands are interpreted directly to actions by the interpreter. In a compiled language the source code is first translated (or compiled) into a machine language (or an intermediate language like Java's byte-code or .NET's assembly) which will be turned into actions when run. In the case of a compiled program, you can look at the compiled code as the source code for an interpreted language and think of the CPU as the interpreter (or the JVM in Java's case and the .NET Runtime for .NET).

The concepts of statically and dynamically typed languages refer to the variables of the language. A statically typed language (like the C family or Java) will define the type of the variable in its source, and the usage of that variable will be derived (usually by the coder) from the type. In a dynamically typed language (like Scheme or VBScript), the type of a variable will be defined by its usage. In some cases the coder simply can't define a type for a variable (like in Scheme or Bash script), and in others this is just optional (like VBScript).

The third concept-pair is strongly vs. weakly typed languages. These concepts refer to the rules imposed upon the relations between variable types in the language (most commonly related to casts). The question of a typing system's "strength" is not as Boolean as the other questions, so most languages fall somewhere between having a strong and a weak type system. In a loosely (weakly) typed language the compiler and the runtime will allow you to treat a variable of one type as if it were of another type, and the behavior of such a situation is usually language specific (and in some cases even implementation specific). For instance, you can add together a number and a string and this will be considered valid code. In a language with a strong type system the compiler and runtime will demand you perform specific actions in order to perform operations between different types of variables. The most common example of this is casting (like casting an int to a float).

Bottom line

To define PowerShell: it's an interpreted language, but this is a gray area when it comes to .NET. Defining variables in PowerShell does not include defining their type, and so it's obviously a dynamically typed language, and combining variables can be done seamlessly (as @halr9000 noted), which indicates a loose typing system. In a sentence, I'd say it's an interpreted dynamically typed language with a weak type system.
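A quick PowerShell session illustrates both properties — dynamic typing (a variable's type follows its value) and loose typing (mixed-type operations are coerced rather than rejected):

$x = 5          # $x currently holds an Int32
$x = "five"     # now it holds a String — no declaration or cast needed
"1" + 1         # "11" — left operand is a string, so + means concatenation
1 + "1"         # 2    — left operand is a number, so "1" is coerced to 1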
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11880447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_71026
stackexchange
llm_judgeable_groundtruth_similarity
18471706
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have simple producer/consumer model where the consumer wants to pass back some state to the producer. For instance, let the downstream-flowing objects be objects we want to write to a file and the upstream objects be some token representing where the object was written in the file (e.g. an offset). These two processes might look something like this (with pipes-4.0 ), {-# LANGUAGE GeneralizedNewtypeDeriving #-}import Pipesimport Pipes.Coreimport Control.Monad.Trans.State import Control.Monadnewtype Object = Obj Int deriving (Show)newtype ObjectId = ObjId Int deriving (Show, Num)writeObjects :: Proxy ObjectId Object () X IO rwriteObjects = evalStateT (forever go) (ObjId 0) where go = do i <- get obj <- lift $ request i lift $ lift $ putStrLn $ "Wrote "++show obj modify (+1)produceObjects :: [Object] -> Proxy X () ObjectId Object IO ()produceObjects = go where go [] = return () go (obj:rest) = do lift $ putStrLn $ "Producing "++show obj objId <- respond obj lift $ putStrLn $ "Object "++show obj++" has ID "++show objId go restobjects = [ Obj i | i <- [0..10] ] Simple as this might be, I've had a fair bit of difficulty reasoning about how to compose them. Ideally, we'd want a push-based flow of control like the following, writeObjects starts by blocking on request , having sent the initial ObjId 0 upstream. produceObjects sends the first object, Obj 0 , downstream writeObjects writes the object and increments its state, and waits on request , this time sending ObjId 1 upstream respond in produceObjects returns with ObjId 0 produceObjects continues at Step (2) with the second object, Obj 1 My initial attempt was with push-based composition as follows, main = void $ run $ produceObjects objects >>~ const writeObjects Note the use of const to work around the otherwise incompatible types (this is likely where the problem lies). In this case, however, we find that ObjId 0 gets eaten, Producing Obj 0Wrote Obj 0Object Obj 0 has ID ObjId 1Producing Obj 1... A pull-based approach, main = void $ run $ const (produceObjects objects) +>> writeObjects suffers a similar issue, this time dropping Obj 0 . How might one go about composing these pieces in the desired manner? Now provide the response and nothing else.
The choice of which composition to use depends on which component should initiate the entire process. If you want the downstream pipe to initiate the process then you want to use pull-based composition (i.e. (>+>) / (+>>)), but if you want the upstream pipe to initiate the process then you should use push-based composition (i.e. (>>~) / (>~>)). The type errors you got were actually warning you that there is a logical error in your code: you haven't clearly established which component initiates the process first. From your description, it's obvious that you want control flow to begin from produceObjects, so you want to use push-based composition. Once you use push-based composition, the type of the composition operator will tell you everything you need to know about how to fix your code. I'll take its type and specialize it to your composition chain:

-- Here I'm using the `Server` and `Client` type synonyms to simplify the types
(>>~) :: Server ObjectId Object IO ()
      -> (Object -> Client ObjectId Object IO ())
      -> Effect IO ()

As you already noticed, the type error you got when you tried to use (>>~) told you that you were missing an argument of type Object to your writeObjects function. This statically enforces that you cannot run any code in writeObjects before receiving your first Object (through the initial argument). The solution is to rewrite your writeObjects function like this:

writeObjects :: Object -> Proxy ObjectId Object () X IO r
writeObjects obj0 = evalStateT (go obj0) (ObjId 0)
  where
    go obj = do
        i <- get
        lift $ lift $ putStrLn $ "Wrote " ++ show obj
        modify (+1)
        obj' <- lift $ request i
        go obj'

This then gives the correct behavior:

>>> run $ produceObjects objects >>~ writeObjects
Producing Obj 0
Wrote Obj 0
Object Obj 0 has ID ObjId 0
Producing Obj 1
Wrote Obj 1
Object Obj 1 has ID ObjId 1
Producing Obj 2
Wrote Obj 2
Object Obj 2 has ID ObjId 2
Producing Obj 3
Wrote Obj 3
Object Obj 3 has ID ObjId 3
Producing Obj 4
Wrote Obj 4
Object Obj 4 has ID ObjId 4
Producing Obj 5
Wrote Obj 5
Object Obj 5 has ID ObjId 5
Producing Obj 6
Wrote Obj 6
Object Obj 6 has ID ObjId 6
Producing Obj 7
Wrote Obj 7
Object Obj 7 has ID ObjId 7
Producing Obj 8
Wrote Obj 8
Object Obj 8 has ID ObjId 8
Producing Obj 9
Wrote Obj 9
Object Obj 9 has ID ObjId 9
Producing Obj 10
Wrote Obj 10
Object Obj 10 has ID ObjId 10

You might wonder why this requirement that one of the two pipes takes an initial argument makes sense, other than the abstract justification that this is what the category laws require. The plain English explanation is that the alternative is that you would need to buffer the first transmitted Object "in between" the two pipes before writeObjects reached its first request statement. This approach produces a lot of problematic behavior and buggy corner cases, but probably the most significant problem is that pipe composition would no longer be associative and the order of effects would change based on the order in which you composed things. The nice thing about the bidirectional pipe composition operators is that the types work out so that you can always deduce whether or not a component is "active" (i.e. initiates control) or "passive" (i.e. waits for input) purely by studying the type. If composition says that a certain pipe (like writeObjects) must take an argument, then it's passive. If it takes no argument (like produceObjects), then it's active and initiates control.
So composition forces you to have at most one active pipe within your pipeline (the pipe that doesn't take an initial argument) and that's the pipe that begins control.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18471706', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1102851/']}
jdg_71027
stackexchange
llm_judgeable_groundtruth_similarity
25690355
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I understand that $emit sends messages up the DOM tree, and $broadcast sends messages down. What about sending messages between sibling DOM elements—how do I do that? Now provide the response and nothing else.
It does not send it up the DOM tree. It sends it up the scope tree, so there's no concept of sibling DOM elements when dealing with scopes. What you can do with $emit, though, is $emit the event up to a common parent, stop the propagation there, and then $broadcast it back down, which all the siblings (as well as their children) will pick up.
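In code, that relay looks roughly like this — the event names are arbitrary illustrative choices; the downward event is given a different name so the parent's own listener doesn't re-trigger itself:

// in a child controller: send the message upward
$scope.$emit('toSiblings', payload);

// in the parent controller: catch it, stop it, rebroadcast downward
$scope.$on('toSiblings', function (event, payload) {
    event.stopPropagation();                   // stop the upward bubbling here
    $scope.$broadcast('fromParent', payload);  // reaches every child scope
});

// in a sibling controller: receive it
$scope.$on('fromParent', function (event, payload) { /* ... */ });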
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25690355', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11574/']}
jdg_71028
stackexchange
llm_judgeable_groundtruth_similarity
26566675
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a phone_models, phone_problems, and a phone_model_phone_problem pivot table. The pivot table has an extra column 'price'. PhoneModel: class PhoneModel extends \Eloquent{ public function problems() { return $this->belongsToMany('RL\Phones\Entities\PhoneProblem')->withPivot('price'); }} PhoneProblem: class PhoneProblem extends \Eloquent{ public function models() { return $this->belongsToMany('PhoneModel')->withPivot('price'); }} What I'm trying to do is get the price of a specific phone with a specific problem. This is how I have it now but I feel like Laravel has a built in Eloquent feature I can't find to do this in a much simpler way: $model = $this->phoneService->getModelFromSlug($model_slug);$problem = $this->phoneService->getProblemFromSlug($problem_slug); all this does is select the specific model and problem from their slug. then what I do is with those credentials I get the price like so: $row = DB::table('phone_model_phone_problem')->where('phone_model_id', '=', $model->id)->where('phone_problem', '=', $problem->id)->first(); so now I can get the price like so $row->price but I feel like there needs to be a much easier and more 'Laravel' way to do this. Now provide the response and nothing else.
When using Many to Many relationships with Eloquent, the resulting model automatically gets a pivot attribute assigned. Through that attribute you're able to access pivot table columns, although by default only the keys are present in the pivot object. To get your columns in there too, you need to specify them when defining the relationship:

return $this->belongsToMany('Role')->withPivot('foo', 'bar');

Official Docs. If you need more help with the task of configuring the relationships with Eloquent, let me know. Edit: To query the price, do this:

$model->problems()->where('phone_problem', $problem->id)->first()->pivot->price
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/26566675', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3666882/']}
jdg_71029
stackexchange
llm_judgeable_groundtruth_similarity
2697843
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Delphi 2007 you can store a UTF-8 string in a WideString and then pass that onto a Win32 function, e.g. var UnicodeStr: WideString; UTF8Str: WideString;begin UnicodeStr:='some unicode text'; UTF8Str:=UTF8Encode(UnicodeStr); Windows.SomeFunction(PWideChar(UTF8Str), ...)end; Delphi 2007 does not interfere with the contents of UTF8Str, i.e. it is left as a UTF-8 encoded string stored in a WideString. But in Delphi 2010 I'm struggling to find a way to do the same thing, i.e. store a UTF-8 encoded string in a WideString without it being automatically converted from UTF-8. I cannot pass a pointer to a UTF-8 string (or RawByteString), e.g. the following will obviously not work: var UnicodeStr: WideString; UTF8Str: UTF8String;begin UnicodeStr:='some unicode text'; UTF8Str:=UTF8Encode(UnicodeStr); Windows.SomeFunction(PWideChar(UTF8Str), ...)end; Now provide the response and nothing else.
Your original Delphi 2007 code was converting the UTF-8 string to a widestring using the ANSI codepage. To do the same thing in Delphi 2010 you should use SetCodePage with the Convert parameter false.

var
  UnicodeStr: UnicodeString;
  UTF8Str: RawByteString;
begin
  UTF8Str := UTF8Encode('some unicode text');
  SetCodePage(UTF8Str, 0, False);
  UnicodeStr := UTF8Str;
  Windows.SomeFunction(PWideChar(UnicodeStr), ...)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2697843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214307/']}
jdg_71030
stackexchange
llm_judgeable_groundtruth_similarity
7039624
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a javscript function (actually a jQuery plugin) which I will want to call as either myFunction("some input"); or myFunction({ "prop": "value 1", "prop2": "value2" }); How do I, in the function, tell the two apart? In other words, what should go in the if conditions below? if (/* the input is a string */){ // Handle string case (first of above)}else if (/* the input is an object */){ // Handle object case (second of above)}else{ // Handle invalid input format} I have jQuery at my disposal. Update: As noted in an answer, if the input is new String('some string') , typeof(input) will return 'object' . How do I test for new String('') , so I can handle that the same way as '' ? Now provide the response and nothing else.
if( typeof input === 'string' ) {
    // input is a string
}
else if( typeof input === 'object' ) {
    // input is an object
}
else {
    // input is something else
}

Note that typeof also considers arrays and null to be objects:

typeof null === 'object'
typeof [ 1, 2 ] === 'object'

If the distinction is important (you want only "actual" objects):

if( typeof input === 'string' ) {
    // input is a string
}
else if( input && typeof input === 'object' && !( input instanceof Array ) ) {
    // input is an object
}
else {
    // input is something else
}
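Regarding the update in the question: a primitive '' and a boxed new String('') can be treated uniformly with one extra check — a small sketch:

if ( typeof input === 'string' || input instanceof String ) {
    // handles both '' and new String('')
}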
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7039624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/38055/']}
jdg_71031
stackexchange
llm_judgeable_groundtruth_similarity
33703965
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't start an application on port 80. I have tried on my local computer (using my IDE, and on a local server), no luck. I have checked other similar posts and make sure that I run jar on server with root. This is the error: till here all ok...java.net.SocketException: Permission deniedat sun.nio.ch.Net.bind0(Native Method)at sun.nio.ch.Net.bind(Net.java:433)at sun.nio.ch.Net.bind(Net.java:425)at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)at org.apache.tomcat.util.net.NioEndpoint.bind(NioEndpoint.java:338)at org.apache.tomcat.util.net.AbstractEndpoint.start(AbstractEndpoint.java:760)at org.apache.coyote.AbstractProtocol.start(AbstractProtocol.java:472)at org.apache.catalina.connector.Connector.startInternal(Connector.java:986)at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)at org.apache.catalina.core.StandardService.addConnector(StandardService.java:237)at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.addPreviouslyRemovedConnectors(TomcatEmbeddedServletContainer.java:186)at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.start(TomcatEmbeddedServletContainer.java:149)at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.startEmbeddedServletContainer(EmbeddedWebApplicationContext.java:288)at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:141)at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:483)at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:686)at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)at org.springframework.boot.SpringApplication.run(SpringApplication.java:957)at org.springframework.boot.SpringApplication.run(SpringApplication.java:946)at com.andirod.StartApplication.main(StartApplication.java:20).........Exception in thread "main" java.lang.IllegalStateException: Tomcat connector in failed stateat org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.start(TomcatEmbeddedServletContainer.java:157)at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.startEmbeddedServletContainer(EmbeddedWebApplicationContext.java:288)at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:141)at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:483)at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:686)at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)at org.springframework.boot.SpringApplication.run(SpringApplication.java:957)at org.springframework.boot.SpringApplication.run(SpringApplication.java:946)at com.andirod.StartApplication.main(StartApplication.java:20) Now provide the response and nothing else.
On Linux, ports below 1024 can be opened only by root, so port 80 is restricted by default. If you want to publish your app on port 80, you need to redirect requests from port 80 to the port your Spring app will run on (e.g. 8080).

Solution 1: HTTP proxy server

You can use the Apache2 server, which is allowed by default to work on port 80 and can forward requests to Tomcat for you. Example configuration for Debian:

sudo apt-get install apache2
a2enmod proxy
a2enmod proxy_http

cd /etc/apache2/sites-enabled
sudo nano 000-default.conf

Edit the file:

<VirtualHost *:80>
    ProxyPreserveHost On
    # ...
    ProxyPass / http://localhost:8080/
</VirtualHost>

Save the file: Ctrl + O, ENTER, Ctrl + X. Note: To learn more about virtual host configurations, you can check out the detailed Apache manual on the subject by clicking here. Restart Apache2 to apply the changes:

sudo service apache2 restart

or

sudo systemctl restart apache2

Solution 2: Port forwarding

Use iptables for redirects:

iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

If you also need this to work for localhost, add:

iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080
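A third option some deployments use is to grant the Java binary the capability to bind privileged ports, so the app can listen on port 80 directly without root — a sketch; the path to the java binary varies per installation, the capability applies to that exact file (re-apply after JDK upgrades), and it can interact badly with shared-library lookup on some systems, so test carefully:

sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/java-8-openjdk-amd64/bin/java

# then run the app on port 80, e.g. via server.port=80 in application.properties or:
java -jar app.jar --server.port=80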
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33703965', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2925536/']}
jdg_71032
stackexchange
llm_judgeable_groundtruth_similarity
1855956
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a base wchar_t* and I'm looking to append another one onto the end. How do I do it? I cannot use deprecated functions as I am treating warnings as errors. Now provide the response and nothing else.
Why not use a std::wstring in the first place:

wchar_t *ws1 = foo(), *ws2 = bar();
std::wstring s(ws1);
s += std::wstring(ws2);
std::wcout << s << std::endl;

If needed, std::wstring::c_str() gives you access to the result as a const wchar_t*.
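If you really must stay with raw wchar_t* buffers, the bounds-checked _s functions avoid MSVC's deprecation warnings — a sketch assuming ws1 and ws2 are valid NUL-terminated strings:

size_t len = wcslen(ws1) + wcslen(ws2) + 1;  // +1 for the terminating NUL
wchar_t *out = new wchar_t[len];
wcscpy_s(out, len, ws1);                     // copy the first string
wcscat_s(out, len, ws2);                     // append the second
// ... use out, then: delete[] out;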
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1855956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/225870/']}
jdg_71033
stackexchange
llm_judgeable_groundtruth_similarity
3047530
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Quoting from the C-std section 6.7.2.1, struct s { int n; double d[]; }; This is a valid structure declaration. I am looking for some practical use of this kind of syntax. To be precise, how is this construct any more or less powerful than keeping a double* as the 2nd element? Or is this another case of 'you-can-do-it-in-multiple-ways'? Arpan Now provide the response and nothing else.
The C FAQ answers precisely this question. The quick answer is that this structure will include the double array inside the structure rather than a pointer to an array outside the structure. As a quick example, you could use your structure as in this example:

struct s *mystruct = malloc(sizeof(struct s) + 5 * sizeof(double));
mystruct->n = 12;
mystruct->d[0] = 4.0;
mystruct->d[1] = 5.0;
mystruct->d[2] = 6.0;
mystruct->d[3] = 7.0;
mystruct->d[4] = 8.0;

And so on - the size of the array you care about is included in the allocation, and then you can use it just like any array. Normally such a type contains the size as part of the structure, since using the + trick to skip through an array of type s will necessarily be complicated by this situation. To your added question 'how is this construct any more or less powerful than keeping a [pointer] as the 2nd element?', it's no more powerful per se, but you don't need to keep a pointer around, so you would save at least that much space - also when you are copying the structure, you would also copy the array, rather than a pointer to an array - a subtle difference sometimes, but very important other times. 'You-can-do-it-in-multiple-ways' is probably a good explanation, but there are cases where you would specifically want one design or the other.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3047530', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/350810/']}
jdg_71034
stackexchange
llm_judgeable_groundtruth_similarity
10820340
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried to play with the definition of the macro SQR in the following code:

#define SQR(x) (x*x)

int main()
{
    int a, b=3;
    a = SQR(b+5); // Ideally should be replaced with (3+5*5+3), though not sure.
    printf("%d\n",a);
    return 0;
}

It prints 23. If I change the macro definition to SQR(x) ((x)*(x)) then the output is as expected, 64. I know that a call to a macro in C is replaced with the definition of the macro, but I still can't understand how it calculated 23. Now provide the response and nothing else.
Pre-processor macros perform text replacement before the code is compiled, so SQR(b+5) expands to (b+5*b+5). By operator precedence that is b + (5*b) + 5 = 6b + 5 = 6*3 + 5 = 23. A regular function call would evaluate the argument (b+5) before passing it to the function, but since a macro is a textual replacement made before compilation, the algebraic order of operations becomes very important.
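The usual defensive fix is to parenthesize every parameter and the expansion as a whole; a quick sketch, together with the one pitfall parentheses do not cure:

#define SQR(x) ((x) * (x))  /* parentheses protect against precedence surprises */

/* The argument is still evaluated twice, so SQR(i++) increments i twice.
   Where an inline function is acceptable, it avoids that entirely: */
static inline int sqr(int x) { return x * x; }

With either form, SQR(b+5) yields 64 for b == 3; only the function version is also safe when the argument has side effects.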
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10820340', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/414749/']}
jdg_71035
stackexchange
llm_judgeable_groundtruth_similarity
10039006
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried searching and didn't find anything that fixed my problem. I have a DropDownList on a Razor view that will not show the item I have marked as selected in the SelectList. Here is the controller code that populates the list:

var statuses = new SelectList(db.OrderStatuses, "ID", "Name", order.Status.ID.ToString());
ViewBag.Statuses = statuses;
return View(vm);

Here is the view code:

<div class="display-label">
    Order Status
</div>
<div class="editor-field">
    @Html.DropDownListFor(model => model.StatusID, (SelectList)ViewBag.Statuses)
    @Html.ValidationMessageFor(model => model.StatusID)
</div>

I step through it, and even in the view it has the correct SelectedValue; however, the DDL always shows the first item in the list regardless of the selected value. Can anyone point out what I am doing wrong and how to get the DDL to default to the selected value? Now provide the response and nothing else.
The last argument of the SelectList constructor (in which you hope to be able to pass the selected value id) is ignored because the DropDownListFor helper uses the lambda expression you passed as first argument and uses the value of the specific property. So here's the ugly way to do that:

Model:

public class MyModel
{
    public int StatusID { get; set; }
}

Controller:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // TODO: obviously this comes from your DB,
        // but I hate showing code on SO that people are
        // not able to compile and play with because it has
        // gazzilion of external dependencies
        var statuses = new SelectList(
            new[]
            {
                new { ID = 1, Name = "status 1" },
                new { ID = 2, Name = "status 2" },
                new { ID = 3, Name = "status 3" },
                new { ID = 4, Name = "status 4" },
            },
            "ID",
            "Name"
        );
        ViewBag.Statuses = statuses;

        var model = new MyModel();
        model.StatusID = 3; // preselect the element with ID=3 in the list
        return View(model);
    }
}

View:

@model MyModel
...
@Html.DropDownListFor(model => model.StatusID, (SelectList)ViewBag.Statuses)

and here's the correct way, using a real view model:

Model:

public class MyModel
{
    public int StatusID { get; set; }
    public IEnumerable<SelectListItem> Statuses { get; set; }
}

Controller:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // TODO: obviously this comes from your DB,
        // but I hate showing code on SO that people are
        // not able to compile and play with because it has
        // gazzilion of external dependencies
        var statuses = new SelectList(
            new[]
            {
                new { ID = 1, Name = "status 1" },
                new { ID = 2, Name = "status 2" },
                new { ID = 3, Name = "status 3" },
                new { ID = 4, Name = "status 4" },
            },
            "ID",
            "Name"
        );

        var model = new MyModel();
        model.Statuses = statuses;
        model.StatusID = 3; // preselect the element with ID=3 in the list
        return View(model);
    }
}

View:

@model MyModel
...
@Html.DropDownListFor(model => model.StatusID, Model.Statuses)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10039006', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/763206/']}
jdg_71036
stackexchange
llm_judgeable_groundtruth_similarity
2497370
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I find it very frustrating doing web development with Maven & Jetty in Eclipse, compared with what I did using Visual Studio. Every time I make a change, even a minor change in a view file (*.jsp, for example), I have to re-package the whole web app and wait for Jetty to reload everything before I can see the change. Is there a better way to do this, such as a plugin that will automatically pick up the changed files and deploy only those to the web server? Now provide the response and nothing else.
The way you are using Maven, Jetty (and Eclipse) together is unclear, but since the question is tagged Maven, I'll cover the Maven way. With a project of type war, one easy way to get the webapp up and running is to use the Maven Jetty Plugin. To do so, simply add the following snippet to your POM:

<project>
  ...
  <build>
    <plugins>
      <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>maven-jetty-plugin</artifactId>
        <version>6.1.10</version>
      </plugin>
      ...
    </plugins>
    ...
  </build>
  ...
</project>

With this setup, running mvn jetty:run will start a jetty container with your application deployed. Any change on a view will cause the JSP to be recompiled when requested. And to configure the jetty plugin to also watch for Java code changes, you'll have to add the scanIntervalSeconds option:

scanIntervalSeconds (Optional). The pause in seconds between sweeps of the webapp to check for changes and automatically hot redeploy if any are detected. By default this is 0, which disables hot deployment scanning. A number greater than 0 enables it.

So the configuration might look like:

<project>
  ...
  <build>
    <plugins>
      <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>maven-jetty-plugin</artifactId>
        <version>6.1.10</version>
        <configuration>
          <scanIntervalSeconds>1</scanIntervalSeconds>
        </configuration>
      </plugin>
      ...
    </plugins>
    ...
  </build>
  ...
</project>

And if you want to be able to connect a remote debugger, have a look at the plugin's debugging instructions. This is how I've always used Jetty with Maven and Eclipse, and I've always been happy with this setup. I've never used the Jetty adapter for the WTP; the previous setup is just unbeatable.
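Since the remote-debugging instructions are referenced without details, here is the setup commonly used with the Jetty plugin of that era (port 4000 is an arbitrary choice of mine; pick any free port and match it in Eclipse's "Remote Java Application" debug configuration):

export MAVEN_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n"
mvn jetty:run

With suspend=n the container starts immediately and you can attach at any time; use suspend=y if you need to debug code that runs during startup.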
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2497370', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/244000/']}
jdg_71037
stackexchange
llm_judgeable_groundtruth_similarity
197757
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was trying to understand something with pointers, so I wrote this code:

#include <stdio.h>

int main(void)
{
    char s[] = "asd";
    char **p = &s;

    printf("The value of s is: %p\n", s);
    printf("The direction of s is: %p\n", &s);
    printf("The value of p is: %p\n", p);
    printf("The direction of p is: %p\n", &p);
    printf("The direction of s[0] is: %p\n", &s[0]);
    printf("The direction of s[1] is: %p\n", &s[1]);
    printf("The direction of s[2] is: %p\n", &s[2]);

    return 0;
}

When compiling it with gcc I get these warnings:

$ gcc main.c -o main-bin -ansi -pedantic -Wall -lm
main.c: In function ‘main’:
main.c:6: warning: initialization from incompatible pointer type
main.c:9: warning: format ‘%p’ expects type ‘void *’, but argument 2 has type ‘char (*)[4]’
main.c:11: warning: format ‘%p’ expects type ‘void *’, but argument 2 has type ‘char **’
main.c:12: warning: format ‘%p’ expects type ‘void *’, but argument 2 has type ‘char ***’

(The flags for gcc are because I must be C89 compliant.) Why the incompatible pointer types? Isn't the name of an array a pointer to its first element? So if s is a pointer to 'a', &s must be a char **, no? And why do I get the other warnings? Do I have to cast the pointers with (void *) in order to print them? And when running I get something like this:

$ ./main-bin
The value of s is: 0xbfb7c860
The direction of s is: 0xbfb7c860
The value of p is: 0xbfb7c860
The direction of p is: 0xbfb7c85c
The direction of s[0] is: 0xbfb7c860
The direction of s[1] is: 0xbfb7c861
The direction of s[2] is: 0xbfb7c862

How can the value of s and its direction (and of course the value of p) be the same? Now provide the response and nothing else.
"s" is not a "char*"; it's a "char[4]". And so "&s" is not a "char**" but actually a "pointer to an array of 4 characters". Your compiler may treat "&s" as if you had written "&s[0]", which is roughly the same thing, but is a "char*". When you write "char** p = &s" you are trying to say "I want p to be set to the address of the thing which currently points to "asd"". But currently there is nothing which points to "asd". There is just an array which holds "asd";

char s[] = "asd";
char *p = &s[0]; // alternately you could use the shorthand char *p = s;
char **pp = &p;
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/197757', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/27267/']}
jdg_71038
stackexchange
llm_judgeable_groundtruth_similarity
70934
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing my first BASH script. I have some experience with C and C#, so I think the logic of the program is correct... it's just that the syntax is so complicated, because apparently there are billions of ways to write the same thing! Anyway, here is the script: it simply checks if the argument (string) is contained in a certain file. If so, it stores each line of the file in an array and writes an item of the array to a file. I'm sure there must be easier ways to achieve that, but I want to get some practice with bash loops.

#!/bin/bash
NOME=$1
c=0

#IF NAME IS FOUND IN THE PHONEBOOK THEN STORE EACH LINE OF THE FILE INTO ARRAY
#ONCE THE ARRAY IS DONE GET THE INDEX OF MATCHING NAME AND RETURN ARRAY[INDEX+1]

if grep "$NOME" /root/phonebook.txt ; then
    echo "CREATING ARRAY"
    while read line
    do
        myArray[$c]=$line    # store line
        c=$(expr $c + 1)     # increase counter by 1
    done < /root/phonebook.txt
else
    echo "Name not found"
fi

c=0
for i in myArray; do
    if myArray[$i]="$NOME" ; then
        echo ${myArray[i+1]} >> /root/numbertocall.txt
    fi
done

This code returns only the second item of myArray (myArray[2], or the second line of the file)... why? Now provide the response and nothing else.
IFS=$'\n' a=($(cat phonebook.txt))
for i in $(seq ${#a[*]}); do
    [[ ${a[$i-1]} = $name ]] && echo "${a[$i]}"
done

In Bash 4, IFS=$'\n' a=($(cat phonebook.txt)) can be replaced with mapfile -t a < phonebook.txt.

Alternatively, with grep: -A1 prints one line after each match, and -x makes the pattern match only complete lines (it still treats the pattern as a regex, so add -F as well if the name should be taken literally):

grep -x "$name" -A1 phonebook.txt | tail -n1
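For what it's worth, the reason the script in the question misbehaves is its final loop: `for i in myArray` iterates over the literal word myArray, and `if myArray[$i]="$NOME"` executes an assignment-like command instead of a comparison. A corrected sketch of just that part:

for i in "${!myArray[@]}"; do                   # expands to the array's indices
    if [[ "${myArray[$i]}" == "$NOME" ]]; then  # [[ ... ]] is a test, not an assignment
        echo "${myArray[$((i+1))]}" >> /root/numbertocall.txt
    fi
done

"${!myArray[@]}" yields the indices 0, 1, 2, ..., so $((i+1)) picks the line after the matching name.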
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/70934', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/22069/']}
jdg_71039
stackexchange
llm_judgeable_groundtruth_similarity
20931
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have just started studying phonology, and I have run into a problem with syllables. I read that nuclei are not only vowels but can also be liquids (l, r) or nasals (n, m, ng). But what I learned when studying articulatory phonetics is that nasals and liquids are [+consonantal, -syllabic]. Could you please explain to me, with examples, how a liquid or nasal can become the nucleus of a syllable? Now provide the response and nothing else.
First off, phonetics is not about features, though often in introductory classes, if you don't have separate courses on phonetics and phonology, phonetics gets lumped together with phonology. Second, it is simply wrong to say that liquids and nasals are [+cons,-syll]. The misstatement comes from confusing the most common properties of a system with the inherent properties of the system. Liquids and nasals are [+cons,+son], and if you want to distinguish syllabic versus non-syllabic liquids/nasals (the latter are more common), you can employ [syllabic] to do that – syllabic liquids/nasals are [+cons,+son,+syl] and non-syllabic ones are [+cons,+son,-syl]. That assumes the classical SPE feature system, and in fact [syllabic] was one of the first features to be disposed of in the 70's. Textbooks may use the SPE system because it does represent the last standard for representation, but there are many current theories of representation, and none of them employ a feature [syllabic]. There are many different accounts of the difference, virtually all of which rely on some prosodic object such as a mora, V-slot, or nuclearly prespecified X-slot (there is also a non-standard redefinition of [consonantal] where glides are [+consonantal] and corresponding vowels are [-consonantal], but as far as I know that has not been applied to the difference between syllabic and non-syllabic consonants).

It is a somewhat open question how "deeply" a syllabic/non-syllabic distinction is needed. There has always been a strong urge to get rid of the distinction, because most of the time you can predict where things are syllabic. In fact, if you also have syllables (which we don't always have), then you can always predict syllabicity. Problematic cases like the Swahili minimal pair mbuni "ostrich", m̩buni "coffee tree" become non-problematic if you include syllable structure – mbu.ni vs. m̩.bu.ni. As it happens, the distinction in Swahili is more superficial, and there is no need to admit an underlying contrast in syllabicity types for nasals – it can be predicted, once you have the rule system sorted out. For theories that posit moraicity as the bearer of the concept "syllabic", there isn't much hope of predicting "syllabicness", because moraicity is a fairly fundamental phonemic property (since it is also how you express long versus short).

There is no general answer to the question of how liquids and nasals become syllabic: there are many answers, and it just depends on the facts of the language. In English, it mostly has to do with unstressed syllables having əC where C is a sonorant. In many languages it has to do with extraneous consonants that can't be syllabified around a vowel but where there is a sonorant that can fill the bill (Sanskrit, for example). A number of languages of West Africa have CR̩V syllables with syllabic pre-vocalic consonants, and these often derive from disyllables (*pila) which lose a vowel (→ [pl̩a]).
{}
{'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/20931', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/15031/']}
jdg_71040