Q: Detecting the New Year period in PHP Good health to everyone after the New Year holidays. I had the idea to automate some functionality: show certain blocks during the New Year holiday period. I went with: $curent_date=date("d.m.y"); $ng_start_date_bef="20.12.".date("y"); // December 20 of the current year $ng_start_date_aft="20.12.".date("y"); date_modify($ng_start_date_aft, "-1 year"); // this is the date for after New Year has passed, i.e. the previous year. .... // and here I got confused... because before New Year this date has to equal the "current year", and afterwards it has to become "current - 1"! $ng_end_date="20.01.".date("y"); // January 20, the end of the holidays. // And the post-New-Year date should be "current year + 1" before New Year, and after New Year the date = "current"... I came to the conclusion that if this were in the middle of the year, there would be no problem tracking the date range... But here I can see that a simple solution will not do. Or am I wrong? A: I did not write the code, since I consider it primitive code that can be written in a couple of minutes. But after looking at the code by @alexsis20102, I understood why PHP programmers get a bad name. Here is the code; with all its wrapping it is much smaller than the one above. <?php //$cd = strtotime('2015-02-20'); // current date for manual input $cd = strtotime(date("Y-m-d")); // just the current date, as a timestamp so it can be compared below $ng_stop=strtotime(date("Y-01-20")); // the date when the New Year holidays end $ng_start=strtotime(date("Y-12-20")); // the date when the holidays start if ($ng_start >= $cd and $cd > $ng_stop) { // the actual condition echo "not New Year\n"; } else { echo "this is New Year\n"; } ?>
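The year-wraparound that confused the asker can also be sidestepped entirely by comparing only the month and day: the holiday window is simply "on or after 20 December, or on or before 20 January", whatever the year. A minimal sketch of that idea (the boundary dates are taken from the question; adjust as needed):

    <?php
    // date("md") returns month+day as a zero-padded string such as "1224" or "0105",
    // so a plain string comparison is enough for the range check.
    $md = date("md");
    $isHoliday = ($md >= "1220" || $md <= "0120");
    echo $isHoliday ? "this is New Year\n" : "not New Year\n";
    ?>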
{ "pile_set_name": "StackExchange" }
Q: Azure xplat to run a CustomScriptExtension in a Windows VM I am creating Windows VMs from the azure xplat cli, using the following command: azure network vnet create --location "East US" testnet azure vm create --vm-name xplattest3 --location "East US" --virtual-network-name testnet --rdp 3389 xplattest3 ad072bd3082149369c449ba5832401ae__Windows-Server-Remote-Desktop-Session-Host-on-Windows-Server-2012-R2-20150828-0350 username SAFEpassword! After the Windows VM is created I would like to execute a PowerShell script to configure the server. As far as I understand, this is done by executing a CustomScriptExtension. I found several examples for PowerShell but no examples for the xplat cli. I would like, for example, to run the following HelloWorld PowerShell script: New-Item -ItemType directory -Path C:\HelloWorld After reading the documentation I should be able to run a CustomScriptExtension by executing something like this (the following command does not work): azure vm extension set xplattest3 CustomScriptExtension Microsoft.Compute 1.4 -i '{"URI":["https://gist.githubusercontent.com/tk421/8b7dd37145eaa8f82e2f/raw/36c11aafd3f5d6b4af97aab9ef5303d80e8ab29b/azureCustomScriptExtensionTest"] }' I think that the problem is the parameter -i. I have not been able to find an example on the Internet. There are some references and documentation, such as MSDN and GitHub, but no examples. Therefore, my question: How do I execute a PowerShell script after creating a Windows VM in Azure using the xplat cli? Please note that my current approach is a CustomScriptExtension, but anything that allows me to bootstrap a configuration script will be considered! EDIT How do I know it is failing? After I run the command azure vm extension ...: the xplat cli confirms that the command has been executed properly. As per the MSDN documentation, the folder C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\ is created, but there is no script downloaded to C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\{version-number}\Downloads\{iteration} The folder C:\HelloWorld is not created, which means that the contents of the script have not been executed. I cannot find any sort of logs or a trace to know what happened. Does anyone know where I can find this information? A: The parameters (the JSON) that I used after reading the MSDN documentation were not correct. However, you can get clues about the correct parameters by reading the C# code. The final command is: azure vm extension set xplattest3 CustomScriptExtension Microsoft.Compute 1.4 -i '{"fileUris":["https://macstoragetest.blob.core.windows.net/testcontainername/createFolder.ps1"], "commandToExecute": "powershell -ExecutionPolicy Unrestricted -file createFolder.ps1" }' This command successfully creates the C:\HelloWorld directory. NOTE: I decided to upload the script to Azure because I read in a post and in the documentation that it is mandatory. However, I just made a test downloading the original script from GitHub and it works fine, so I guess the documentation is a bit outdated. EDIT: I wrote a detailed article that explains how to provision Windows servers with xplat-cli in Azure.
{ "pile_set_name": "StackExchange" }
Q: Forloop with tikz-cd I am trying to write a macro that (for now) takes as input a number n, and outputs a tikz-cd diagram consisting of n consecutive arrows. However the following code leads to an error: You can't use `\relax' after \the. Here is my code: \documentclass[10pt,a4paper]{article} \usepackage[english]{babel} \usepackage{tikz} \usetikzlibrary{matrix, cd, positioning} \usepackage{forloop} \newcommand{\smallcube}[1]{% \begin{tikzcd}[ampersand replacement=\&, nodes in empty cells]% \newcounter{width}% \forloop{width}{1}{\value{ct} < #1}{\ar[r]\&}\\% \end{tikzcd}} \begin{document} \smallcube{2} \end{document} A: You should accumulate the partial arrows; using \foreach is simpler: \documentclass[10pt,a4paper]{article} \usepackage[english]{babel} \usepackage{tikz-cd} \usepackage{etoolbox} \newcommand{\smallcube}[1]{% \begin{tikzcd}[ampersand replacement=\&] \gdef\partialcube{} \foreach \n in {1,...,#1} { \gappto\partialcube{ {} \arrow[r] \& } } \partialcube {} \end{tikzcd}% } \begin{document} \smallcube{2} \smallcube{4} \smallcube{5} \end{document} Alternative method using expl3: \documentclass[10pt,a4paper]{article} \usepackage[english]{babel} \usepackage{tikz-cd} \usepackage{expl3} \ExplSyntaxOn % get a user level version of \prg_replicate:nn \cs_set_eq:NN \replicate \prg_replicate:nn \ExplSyntaxOff \newcommand{\smallcube}[1]{% \begin{tikzcd}[ampersand replacement=\&] \replicate{#1}{ {} \arrow[r] \& } {} \end{tikzcd}% } \begin{document} \smallcube{2} \smallcube{4} \smallcube{5} \end{document}
{ "pile_set_name": "StackExchange" }
Q: broke some stuff in wordpress so I have this code that was a widget in a row called widget text. a:15:{i:8;a:3:{s:5:"title";s:0:"";s:4:"text";s:157:"<div align="center"><A href="http://mvprop.com/Deals-and-Specials"><IMG border=0 alt=Deals and Specials src="dealsandspecials.gif"></A></div>";s:6:"filter";b:0;}i:14;a:3:{s:5:"title";s:0:"";s:4:"text";s:153:"<div align="center"><A href="http://mvprop.com/24-Hour-Recording><IMG border=0 alt=24 Hour Recording src="24hourrecording.gif"></A></div>";s:6:"filter";b:0;}i:16;a:3:{s:5:"title";s:0:"";s:4:"text";s:147:"<div align="center"><A href="http://mvprop.com/We-Will-Buy-Now"><IMG border=0 alt=We Will Buy Now src="wewillbuynow.gif"></A></div>";s:6:"filter";b:0;}i:20;a:3:{s:5:"title";s:0:"";s:4:"text";s:123:"<div align="center"> <A href="http://mvprop.com/FAQS"><IMG border=0 alt=FAQS src="faqs.gif"></A></div>";s:6:"filter";b:0;}i:21;a:3:{s:5:"title";s:0:"";s:4:"text";s:136:"<div align="center"><A href="http://mvprop.com/about-mvp"><IMG border=0 alt=About MVP src="aboutus.gif"></A></div>";s:6:"filter";b:0;}i:23;a:3:{s:5:"title";s:0:"";s:4:"text";s:163:"<div align="center"><A href="http://mvprop.com/Deals-and-Specials"><IMG border=0 alt=Deals and Specials src="dealsandspecials.gif"></A></div>";s:6:"filter";b:0;}i:24;a:3:{s:5:"title";s:0:"";s:4:"text";s:153:"<div align="center"><A href="http://mvprop.com/We-Will-Buy-Now"><IMG border=0 alt=We Will Buy Now src="wewillbuynow.gif"></A></div>";s:6:"filter";b:0;}i:25;a:3:{s:5:"title";s:0:"";s:4:"text";s:159:"<div align="center"><A href="http://mvprop.com/24-Hour-Recording><IMG border=0 alt=24 Hour Recording src="24hourrecording.gif"></A></div>";s:6:"filter";b:0;}i:26;a:3:{s:5:"title";s:0:"";s:4:"text";s:152:"<div align="center"><A href="http://mvprop.com/Sell-Your-Home"><IMG border=0 alt=Sell Your home src="sellyourhome2.gif"></A></div>";s:6:"filter";b:0;}i:27;a:3:{s:5:"title";s:0:"";s:4:"text";s:143:"<div align="center"> <A href="http://mvprop.com/WE-Will-Buy"><IMG border=0 alt=We Will Buy src="wewillbuy.gif"></A></div>";s:6:"filter";b:0;}i:28;a:3:{s:5:"title";s:0:"";s:4:"text";s:161:"<div align="center"> <A href="http://mvprop.com/We-Make-An-Offer"><IMG border=0 alt=We Will Make An Offer src="wemakeanoffer.gif"></A></div>";s:6:"filter";b:0;}i:33;a:3:{s:5:"title";s:0:"";s:4:"text";s:170:" <br> <font size="2px face="Arial, Helvetica, sans-serif"> <br> Are you an Investor? <br> Are you tired of the stock market and looking for a better rate of return? </font>";s:6:"filter";b:0;}i:34;a:3:{s:5:"title";s:0:"";s:4:"text";s:126:"<a href="../../investments/Questions.pdf"><img src="../../investments/top-ten.jpg" width="236" height="250" border="0"></a> ";s:6:"filter";b:0;}i:35;a:3:{s:5:"title";s:0:"";s:4:"text";s:145:" <div align="center"><A href="http://mvprop.com/investments"><IMG border=0 alt=Investors src="151-test-new2.jpg"></A></div>";s:6:"filter";b:0;}s:12:"_multiwidget";i:1;} apparently I deleted or added a space somewhere in this code. Now It doesn't work. What do I do now? The change is somewhere in the second part. If no way to fix it, is there a way to rebuild the widget? A: Is that in your {wp-prefix}_options table? I don't know if it can be fixed easily. Try using PHP's unserialize() function to read those values But, you can reset it by setting the field value to a:0:{} and start building your widgets again.
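The reason hand-editing breaks a value like this is that every s:NNN:"..." chunk stores the exact byte length of the string that follows, so adding or removing even a single character makes the whole blob unreadable. A rough sketch of how to check such a value with unserialize(), as the answer suggests, and rebuild it afterwards (the file name is hypothetical, just a place to paste the blob):

    <?php
    // $raw is the widget text option value copied out of the wp_options table.
    $raw = file_get_contents('widget_text_option.txt');

    $data = @unserialize($raw);
    if ($data === false && $raw !== serialize(false)) {
        echo "Blob is corrupt: at least one s:<length> prefix no longer matches its string.\n";
    } else {
        // Re-serializing recomputes all of the length prefixes, so after editing the
        // HTML inside $data you can write serialize($data) back into the option.
        var_dump($data);
        echo serialize($data);
    }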
{ "pile_set_name": "StackExchange" }
Q: How to show $Spin^c(V)$ is isomorphic to $Spin(V)\times_{Z_2} S^1$? In the first picture below, Spin(V) is the Spin group and Cl(V) is the Clifford algebra. First, how does one get that 2.4.13 is surjective from the fact that $S^1$ and Spin(V) commute? Second, why is Spin(V) $\cap S^1$ equal to $\{1,-1\}$? All the pictures are from around page 75 of Jost's Riemannian Geometry and Geometric Analysis. [The two referenced excerpts from the book are not reproduced here.] A: (1) Suppose a group $G$ is generated by a subset $S$. That means there is no proper subgroup of $G$ which contains $S$, and it also means every element of $G$ may be expressed as a "word" in the "letters" from $S$, i.e. as $g=s_1^{\epsilon_1}s_2^{\epsilon_2}\cdots s_k^{\epsilon_k}$. It is a basic group theory exercise to prove this. If $G$ is generated by two commuting subgroups $H$ and $K$, i.e. if $G=\langle H\cup K\rangle$ and $[H,K]=1$, then there is a surjective group homomorphism $H\times K\to G$ given by $(h,k)\mapsto hk$. To prove it is a group homomorphism, use the fact that $hk=kh$ for all $h\in H,k\in K$. To see that it is surjective, observe every element of $G$ must be a "word" in the elements of $H$ and $K$, but since elements of $H$ and $K$ commute we may slide all of the $H$ elements to the left and all of the $K$ elements to the right until we have a product of elements of $H$ times a product of elements of $K$, in other words an element of the form $hk$. In the case of a topological group or a Lie group, we might want a closed subgroup generated by a subset $S$. Thus we would do the above construction to get the "abstract" group generated by $S$, and then take its closure to get the smallest closed subgroup containing $S$. In our case, the abstract subgroup we generate is already a closed subgroup. (It is the continuous image of a compact space, hence compact, so closed and bounded.) (2) If $A\otimes B$ is a tensor product of $k$-algebras then there are embeddings $A\to A\otimes B$ and $B\to A\otimes B$ given by $a\mapsto a\otimes 1$ and $b\mapsto 1\otimes b$. The intersection of these two things is $$ (A\otimes 1)\cap (1\otimes B)=k(1\otimes 1), $$ i.e. the scalar multiples of the identity. Indeed, if $\{a_1,\cdots,a_r\}$ and $\{b_1,\cdots,b_s\}$ are $k$-vector space bases for $A$ and $B$ with $a_1=1$ and $b_1=1$ then $\{a_i\otimes b_1\}$ is a vector space basis for $A\otimes 1$, $\{a_1\otimes b_j\}$ is a vector space basis for $1\otimes B$, and $\{a_i\otimes b_j\}$ is a vector space basis for $A\otimes B$. Now we can treat it as a linear algebra problem. If $W$ is a vector space with basis $P$, and $Q,R\subseteq P$ are two subsets, and $U,V$ are subspaces of $W$ spanned by $Q,R$ respectively, then $U\cap V$ is spanned by $Q\cap R$.
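For completeness, here is how the two parts combine into the statement in the title (this only restates the answer above via the first isomorphism theorem): the multiplication map $$ Spin(V)\times S^1 \to Spin^c(V),\qquad (g,z)\mapsto gz $$ is surjective by (1), and by (2) its kernel is $\{(1,1),(-1,-1)\}$, so $$ Spin^c(V)\;\cong\;\bigl(Spin(V)\times S^1\bigr)\big/\{\pm(1,1)\}\;=\;Spin(V)\times_{Z_2} S^1 . $$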
{ "pile_set_name": "StackExchange" }
Q: AWS block device mapping to mount a snapshot while creating separate root I want to create a new instance with root mounting from its AMI (sda1), while at the same creating a secondary volume (sda2) from a snapshot. I am using the following block device mapping to add sda2: [ { "DeviceName": "/dev/sda2", "Ebs": { "DeleteOnTermination": false, "SnapshotId": "snap-0daafbeb9409cb652" } } ] However, while an sda1 volume is created from the AMI, it appears that sda2 is mounted as root NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT xvda 202:0 0 8G 0 disk └─xvda1 202:1 0 8G 0 part xvdb 202:16 0 8G 0 disk └─xvdb1 202:17 0 8G 0 part / What should be different to cause xvda1 (which links to sda1) to mount as root instead? I do not want to modify the AMI to do this, the starting point for this process is a stock Ubuntu image. aws ec2 run-instances --image-id ami-c80b0aa2 ... --block-device-mappings file://mappings.json A: This problem is caused by the volume labels of the partitions being mounted. In this specific case, both volumes have the same label, indicating they are the root partition, which is confusing the boot process. The solution here is to clear the label of the volume that is not being mounted as the root filesystem.
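For reference, on ext-formatted volumes the label the answer refers to can be inspected and changed with e2label, either from the instance itself or with the volume attached to a helper instance. A sketch (device names follow the lsblk output above; the duplicated label value, typically something like cloudimg-rootfs on stock Ubuntu images, may differ):

    # Show the filesystem label of each attached volume (assumes ext4, as in the question).
    sudo e2label /dev/xvda1        # root volume created from the AMI
    sudo e2label /dev/xvdb1        # secondary volume restored from the snapshot

    # If both report the same label, clear or rename it on the secondary volume so the
    # boot process no longer picks that partition as the root filesystem.
    sudo e2label /dev/xvdb1 ""             # clear the label
    # or: sudo e2label /dev/xvdb1 data1    # give it a distinct label instead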
{ "pile_set_name": "StackExchange" }
Q: Textarea selection in IE 11 not working appropriately on last line I'm building an Angular directive that consists of a textarea for writing Markdown, and buttons that insert formatting into the text area. When clicked, if no text is currently selected, a button (bold, for instance) should append the following: **replace text** where replace text is selected. It is working as expected in every scenario save the following: In IE 11, when the selection occurs on the final row, but is not the first row. It works as expected in every other browser, and works fine in IE 11 minus this condition. Here is the code from the directive for performing the selection: var editor = element.find('textarea')[0]; function createWrappingMarkdown(symbol) { // If text is selected, wrap that text in the provided symbol // After doing so, set the cursor at the end of the highlight, // but before the ending symbol(s) /* This section works fine */ if (editor.selectionStart < editor.selectionEnd) { var start = editor.selectionStart, end = editor.selectionEnd; var value = editor.value.substring(0, start) + symbol + editor.value.substring(start, end) + symbol + editor.value.substring(end, editor.value.length); scope.$evalAsync(function () { editor.value = value; editor.focus(); editor.selectionStart = end + symbol.length; editor.selectionEnd = end + symbol.length; }); // If no text is highlighted, insert {symbol}replace text{symbol} // at the current cursor position. // After inserting, select the text "replace text" /* This is where the selection is broken in IE 11 */ } else if (editor.selectionStart || editor.selectionStart === 0) { var start = editor.selectionStart, message = "replace text"; var value = editor.value.substring(0, start) + symbol + message + symbol + editor.value.substring(start, editor.value.length); scope.$evalAsync(function () { editor.value = value; setCursorSelect(start + symbol.length, start + symbol.length + message.length); }); } } function setCursorSelect(start, end) { editor.focus(); if (editor.setSelectionRange) { editor.setSelectionRange(start, end); } else { editor.selectionStart = start; editor.selectionEnd = end; } } Update See answer for the fix to this issue. The Plunk has been revised to demonstrate this fix. A: After debugging further in IE, I found that editor.selectionStart was being set to a value higher than editor.value.length whenever the cursor was at the last available position in the textarea. This was only happening in IE, and not the other browsers. With this in mind, I was able to come up with the following solution whenever a selection is needed following a button press: scope.$evalAsync(function () { if (editor.value.length < editor.selectionStart) { start = editor.value.length; } editor.value = value; setCursorSelect(start + symbol.length, start + symbol.length + message.length); }); The plunk above has been updated to reflect this fix.
{ "pile_set_name": "StackExchange" }
Q: How to prevent h2 content from breaking in :before content? http://i.imgur.com/wsfUCxd.png Yeah, I have a little problem with my :before icon. What I want is for the h2 content to stay on one horizontal line, and not end up after a line break under the :before content. Any suggestions? A: Use position:absolute;left:0; for the icon and add a padding-left the size of the icon to h2. Note: don't forget to set the position of h2 to relative. Example: h2{ position: relative; padding-left: 60px; /* the size of your icon plus a spacer */ } h2:before{ content:''; position:absolute; left:0; height: 50px; /* height of your icon */ width:50px; /* width of your icon */ background-image:url(img/icon.png); /* your icon */ }
{ "pile_set_name": "StackExchange" }
Q: Do javax.persistence.transient and java.beans.transient do the same thing? Do the javax.persistence.transient annotation and the java.beans.transient annotation do the same thing? I know that the latter was only introduced in Java 7. I'm just curious, as we recently upgraded to Java 7 on our servers, and I'm wondering if Hibernate will work the same with either of the annotations. A: As @Wundwin Born said before: javax.persistence.transient makes sure that Hibernate ignores that particular field, so it is neither saved to the database nor loaded from it. java.beans.transient makes sure that the particular field is ignored by the encoders (classes derived from Encoder). See http://docs.oracle.com/javase/7/docs/api/java/beans/Transient.html
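A minimal sketch of the difference (a hypothetical entity, only to show where each annotation goes: the JPA @Transient may sit on a field or property, while java.beans.Transient targets the property's accessor method):

    import javax.persistence.Entity;
    import javax.persistence.Id;

    @Entity
    public class Account {

        @Id
        private Long id;

        // JPA/Hibernate: not mapped to a column, never written to or read from the database.
        @javax.persistence.Transient
        private String cachedDisplayName;

        // java.beans: tells bean encoders (e.g. java.beans.XMLEncoder) to skip this
        // property when serializing the bean; it has no effect on the JPA mapping.
        @java.beans.Transient
        public String getCachedDisplayName() {
            return cachedDisplayName;
        }

        public void setCachedDisplayName(String value) {
            this.cachedDisplayName = value;
        }
    }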
{ "pile_set_name": "StackExchange" }
Q: Vim - visually select multiple non-consecutive sections Is there a way in Vim to visually select multiple sections / non-consecutive lines in a document that are not directly connected? My goal is to copy certain parts of a document to a different one, open in a different buffer in the current Vim instance. However, the same procedure could also be used to delete such a selection. Let's say in the below example I want to select the rows 2, 6 and 11. 1 global _main 2 extern _printf 3 4 section .text 5 _main: 6 push message 7 call _printf 8 add esp, 4 9 ret 10 message: 11 db 'Hello, World', 10, 0 A: Thanks to @Andrea Baldini's highlight of a similar question, I see that one of the answers offers a workable solution for my problem that doesn't need any plugins. For reference, I copy the response from @soulmerge: To start the 'Accumulation Buffer': 1. mark a section to copy in visual mode, 2. press "a to operate on the buffer a with the next command and 3. yank it as usual (y). To add to that buffer: 1. mark the next section and 2. press "A (capitalizing the buffer name means "do not overwrite the buffer, append to it instead") 3. and yank again using y. You can then paste the accumulated buffer a at any time using "ap.
{ "pile_set_name": "StackExchange" }
Q: SQL Server STUFF String Concat Slow Is there any alternative to the SQL Server STUFF function? I am developing a Windows Service that loops over a database and does some data processing, but the step of fetching data is extremely slow. I have these tables Sensors table that define sensors config Items table that records each item information from devices Itemdata table that stores sensor values for each item row, so Itemdata table is linked to Sensors and Items tables I need to select data from items with grouping itemsdata as col like this 1=5|2=6| I use this T-SQL - it's working fine, but it's slow with more than 200,000 rows. Without it, exec is extremely fast With actual execution plan it take 99% in the stuff function: I am using the following TSQL SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED IF (@dtFrom IS NOT NULL AND @dtTo IS NOT NULL) BEGIN -- with both dates SELECT m.itemsId, m.ObjectId, 0 AS [type], STUFF((SELECT (CAST(Sensors.SourceNameId AS nvarchar(10)) + '=' + CAST(t.Value AS nvarchar(20)) + '|') FROM [tavl2].[tavl].[itemsData] t WITH (NOLOCK) LEFT JOIN tavl2.tavl.Sensors WITH (NOLOCK) ON t.SensorsId = Sensors.SensorsId WHERE t.itemsId = m.itemsId FOR xml PATH (''), TYPE).value('.[1]', 'nvarchar(max)'), 1, 0, '') AS params FROM tavl.[items] m WITH (NOLOCK) WHERE m.ObjectId = @objId AND m.GpsTime BETWEEN @dtFrom AND @dtTo AND m.Valid = 1; END Any better solutions? A: i Managed to do it using SQLCLR , Here is an open-source CLR That do it in a very fast time GROUP_CONCAT string aggregate for SQL Server (hosted at codeplex)
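Worth noting for anyone on a newer version: SQL Server 2017 and later have STRING_AGG, which replaces the whole STUFF ... FOR XML PATH pattern and is usually far cheaper. A rough sketch against the tables described above (column names taken from the original query, untested; unlike the original, STRING_AGG puts '|' between pairs rather than after each one):

    SELECT m.itemsId,
           m.ObjectId,
           0 AS [type],
           (SELECT STRING_AGG(CAST(s.SourceNameId AS nvarchar(10)) + '=' + CAST(t.Value AS nvarchar(20)), '|')
              FROM [tavl2].[tavl].[itemsData] t
              LEFT JOIN tavl2.tavl.Sensors s ON t.SensorsId = s.SensorsId
             WHERE t.itemsId = m.itemsId) AS params
    FROM tavl.[items] m
    WHERE m.ObjectId = @objId
      AND m.GpsTime BETWEEN @dtFrom AND @dtTo
      AND m.Valid = 1;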
{ "pile_set_name": "StackExchange" }
Q: How to make a similar cast like 1LL with any other type? Does this make any sense? static_cast<long long>(1) == 1LL static_cast<float>(1) =? 1F Is there a short way of making the casting for other types such as float? Thank you very much! A: Since C++11 you could define your own literals. For example, you could define literal _F like this: float operator"" _F(unsigned long long l) { return static_cast<float>(l); } int main() { auto a = 1_F; static_assert(std::is_same<decltype(a), float>::value, "Not a float"); return 0; } A: This answer describes C++11. User-defined literals, and some of the types, didn't exist in historic versions of the language. Integer literals can end with nothing, L, LL, U, UL or ULL giving a type of int, long, long long, unsigned int, unsigned long or unsigned long long respectively. These can be in lower case if you like; and the actual type may be wider than specified if necessary to represent the value. Floating-point literals can end with nothing, F or L giving a type of double, float or long double respectively. Again, these can be in lower case if you like. Character and string literals can begin with nothing, u, U or L, giving a character type of char, char16_t, char32_t or wchar_t respectively. Strings can also begin with u8 to indicate a char character type with UTF-8 encoding. You can also define your own user-defined literals to make literals of any type, if you find weird things like 123_km more readable than kilometres(123). I don't see the point of that, but someone's posted an example if you're interested.
{ "pile_set_name": "StackExchange" }
Q: Running methods of an object that is within a HashMap I had a static object (hive) that was created within an object called Garden and I was using methods from hive in many other objects. Now i have the need for multipul hives that contain different data (arrays and such.) So ive put the different Hives within a HashMap. I still need to run methods from the hives that are now within the HashMap in other objects and I need it to be running the method from the correct instance of hive. How would I call methods from hive as an object that is within a HashMap in a different Object? class Garden() { Map<String, Hive> HiveMap = new HashMap<String, Hive>(); Hive hiveA = new Hive(); map.put("A", hiveA); Hive hiveB = new Hive(); map.put("B", hiveB) } class Hive() { ArrayList bees<Bee bee> = new ArrayList(); Bee bossBee = new Bee; void importantHiveBusiness() { ... } } class Bee() { //Garden.hive.importantHiveBusiness(); } Not actual code just to try and show what im trying to do clearer. Thanks for any help Bee wants to run the method from the hive that it is in(the bee is in arrayList) A: map.get("A").importantHiveBusiness() Unless I'm interpreting your question wrong, this should do it. Edit: Ah, I see what you're getting at here. You'll need to decide whether your applications will use multiple Gardens, or just one. If you only need one Garden, you should use a "singleton" design pattern. Example below: public class Garden { public static final Garden GARDEN = new Garden(); private HashMap<String, Hive> hiveMap = new HashMap<String, Hive>(); private Garden() { // create garden here } public HashMap<String, Hive> getHiveMap() { return this.hiveMap; } } The private constructor is very important. This ensures that (as long as you don't make any Garden objects within the Garden class' code, only one Garden can ever exist. The "static" keyword makes GARDEN accessible from everywhere in your code. Then you can simply do... public class Bee() { // inside some method... Garden.GARDEN.getHiveMap().get("A").importantHiveBusiness(); } Alternately, if you want multiple gardens, then you only need to instantiate (create) a Garden object, and myGarden.getHiveMap().get("A").importantHiveBusiness(); Edit2: It might be useful for the Bee class to contain a reference to its hive. This would eliminate the need for a HiveMap. public class Bee { private final Hive hive; public Bee(Hive hive) { this.hive = hive; } public getHive() { return this.hive; } }
{ "pile_set_name": "StackExchange" }
Q: Invoke particular writer based on operation type? I am working one batch which reads each row and insert into db as well as write in file. I want to write this below data in database as well as in file. So needs to call particular writer using ClassifierCompositeItemWriter. I have file which has following rows: DATA,I,1,John,Shiazo,Sushi DATA,U,8,Pablo,Carmen DATA,D,9,Diego,Sergio DATA,I,10,rucha,rekha Here, I stands for insert, U stands for update and D stands for delete. How should I call one particular writer for insert, other is for update and one more is for delete. This three writer will work differently based on operations(Insert, Update and Delete) and there is one more writer which will always for work for writing data in file. Below is my sample code: @Classifier public List<String> classify(Object object) { String type = "Success"; List<String> list = new ArrayList<String>(); if(person.getOperationType().contentEquals("I")){ String insert = "I"; list.add(type); list.add(insert); }else if(person.getOperationType().contentEquals("U")){ String update = "U"; list.add(type); list.add(insert); }else{ delete = "D"; list.add(type); list.add(delete); } } Sample xml writer code: <bean id="classifierFileItemWriter" class="org.springframework.batch.item.support.ClassifierCompositeItemWriter" scope="step"> <property name="classifier"> <bean class="org.springframework.classify.BackToBackPatternClassifier"> <property name="routerDelegate"> <bean class="com.iz.batchprocessing.writer.SuccessFailClassifier" scope="step"/> </property> <property name="matcherMap"> <map> <entry key="I" value-ref="jdbcInsertItemWriter" /> //insert writer, here I want access ArrayList <entry key="U" value-ref="jdbcUpdateItemWriter" /> //update writer, here I want access ArrayList <entry key="D" value-ref="jdbcDeleteItemWriter" /> //delete writer, here I want access ArrayList <entry key="Success" value-ref="successMultiFileItemWriter" /> //common file writer </map> </property> </bean> </property> </bean> Please tell me how can I do this? Or is there any other way to achieve this? A: I have not worked on this type of writer but by looking at another SO Question and answer by - Serkan Arıkuşu , I see that you need to return a value / key from your Classifier and that will be used by matcherMap to map appropriate writers. I see that you are not returning any key from your classifier, you are simply making assignment to those String variables but not returning anything. By looking at that answer, I can say that you need to embed that key in your Item being written and return that key for each item from your Classifier. Answer by Mukesh Sabde there is a complete code sample. Instead of adding Strings to List and returning List , you should return concatenated String and those should be the keys in matcher map. I looked at code of BackToBackPatternClassifier class , it has two private fields - private Classifier<C, String> router; & private Classifier<String, T> matcher; So you can't specify a List as key for matcherMap property. It has to be a String and you were wrong there. For your case, C is Spring Batch Item to be written and T is ItemWriter. Hope it helps !!
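Following the answer, the router delegate then returns a single concatenated String instead of a List; a rough sketch (class and getter names taken from the question, imports as in the original snippet):

    public class SuccessFailClassifier {

        @Classifier
        public String classify(Person person) {
            // BackToBackPatternClassifier's router must return one String key, not a List;
            // that key is what the matcherMap entries are looked up by. Concatenating the
            // status and the operation type yields keys such as "SuccessI", "SuccessU" and
            // "SuccessD", which the matcherMap would then have to use as its entry keys.
            return "Success" + person.getOperationType();
        }
    }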
{ "pile_set_name": "StackExchange" }
Q: Entity Framework version out of date and not allowing the Entry method - how do I fix this? I am using Entity Framework in a project and I am trying to attach an instance of an entity to the current context and set it's entity state to changed. All of the examples I am seeing on how to do this are telling me that I need to use the Entry method in order to do this. Here is another question I have had where I have been directed to do this. Entity Framework Error: An object with a null EntityKey value cannot be attached to an object context The problem is that when I try to write this code, the Entry method is not recognized... using (PriorityOneEntities entities = new PriorityOneEntities()) { entities.AttachTo(entities.UserInfoes.EntitySet.Name, userInfo); entities.Entry(userInfo).State = EntityState.Modified; entities.SaveChanges(); } So I am lead to believe that I'm working with the wrong version of Entity Framework. This project is being built inside of Visual Studio 2010 with .NET 4.0. Is there a possibility that my version of Entity Framework that I'm using is out of date? If so, how do I update it? Thanks A: The easiest way to update packages is to use Nuget. Once you have it installed you can use it directly within Visual Studio to add references and it will download and set up your project. It can even notify you when updates are available for the packages you are using.
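As a concrete follow-up to the answer: the DbContext API, which is where Entry() lives, ships out-of-band in the EntityFramework NuGet package (from EF 4.1 onwards), so updating is a one-liner in the Package Manager Console:

    PM> Install-Package EntityFramework
    # or, if an older EntityFramework package is already referenced in the project:
    PM> Update-Package EntityFramework

Note that Entry() is a method of DbContext; a context generated from the older ObjectContext templates will not expose it, although a DbContext can be constructed around an existing ObjectContext if regenerating the model is not an option.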
{ "pile_set_name": "StackExchange" }
Q: Write SQL query to get all records based on value from another table (not join case) I have two tables, Account and Tracking. The Account table has an ID (int data type). The Tracking table has AccountID (FK) and Status (string data type). I want to write a MySQL query. My goal is to get all accounts whose ID number is bigger than the biggest ID number recorded in the Tracking table, OR all accounts that have Status "Failed" in the Tracking table. Kindly help. A: If I understand correctly: SELECT * FROM Account a JOIN Tracking t ON a.ID = t.AccountID WHERE (a.ID > (SELECT MAX(AccountID) FROM Tracking)) OR (t.Status = 'Failed') You need a regular join, but use a subquery to get the "ID larger than" part of your OR statement.
{ "pile_set_name": "StackExchange" }
Q: What's the correct way to leave cover? When I'm in cover and I want to leave it (say when all enemies are dead and I need to move on) I often make a fool of myself - I usually end up rolling around or moving to another piece of cover while my squad-mates look on with withering disdain (or that's how I see it). Often I find I'm 'stuck' in cover and I can only get out by jumping over the top of a barrier or rolling off the side when there's no more cover that way. I'm assuming I'm missing something; what's the correct way to 'leave' cover and move around normally again? A: Vaulting over the cover seems to be the most reliable way of leaving it, though that is not always what you want to do. The best way I have found so far to leave cover without vaulting over it is to just walk backwards; usually this makes Shepard leave the cover. It doesn't always work: the cover system, combined with the "One button to rule them all" choice of mapping nearly every action to one single button, inevitably causes some situations where Shepard just doesn't do what you want him/her to do.
{ "pile_set_name": "StackExchange" }
Q: A couple of textures into one in Blender I have some model in Blender. I'd like to: Connect a few different textures into one and save it as bitmap Make UV mapping for these connected textures I need to solve this problem for textured models in OpenGL. I have data structure which giving me possibility to bind one texture into one model, so I'd like to have one texture per one model. I'm aware of fact that I can use Texture GL_TEXTURE_xD_ARRAY, but I don't want to complicate my project. I know how to do simple UV mapping in Blender. My questions: Can I do 1. and 2. phases exclusively in Blender? Is Blender Bake technique is what I'm searching for? Is there some tutorials shows how to do it? (for this one specific problem) Maybe somebody advise me another Blender technique (or OpenGL solution) A: Connect a few different textures into one and save it as bitmap Make UV mapping for these connected textures You mean generating a texture atlas? Can I do 1. and 2. phases exclusively in Blender? No. But it would be surely a well received add-in. Is Blender Bake technique is what I'm searching for? No. Blender Bake generates texture contents using the rendering process. For example you might have a texture on a static object into which you bake global illumination; then, instead of recalculating GI for each and every frame in a flythrough, the texture is used as source for the illumination terms (it acts like a cache). Other applications is generating textures for the game engine, from Blender's procedural materials. Maybe somebody advise me another Blender technique (or OpenGL solution) I think a texture array would be really the best solution, as it also won't make problems for wrapped/repeated textures.
{ "pile_set_name": "StackExchange" }
Q: How to solve a series that produces i I can find that it converges due to the A.S.T., but how do you solve for this answer that WolframAlpha produces? A: The sum is a real number, as any sum with all real terms is real. Notice that $10^{-13}$ is really, really small. That's showing up due to the numerical approximation methods used to solve problems like this.
{ "pile_set_name": "StackExchange" }
Q: SQL C# Generating SQL statement with LIKE and wildcards; giving Incorrect syntax near 'bla' So I have got an ArrayList called arr, which contains strings such as "decoration", "metal", "paper" etc. What I want to do is loop though that ArrayList, adding each tag to a query string with which to get data from a database. Currently I have something like this: String strSel="select * from Table1 where Tags"; for(int x=0;x<arr.Count;x++){ if (x == arr.Count - 1) { strSel += " like '%'"+arr[x]+"'%'"; } else { strSel += " like '%'" + arr[x] + "'%' or Tags"; } } cmdSel=new SqlCommand(strSel,connectionName); sqlDataReaderName=cmdSel.ExecuteReader(); Anyway I get an error about "Incorrect syntax near bla"...its has probably got something to do with the single quotes or wildcards but I can't figure it out. What am I doing wrong? A: you should remove extra single quote before and after the percent symbol strSel += " like '%" + arr[x] + "%'"; the reason why error has been thrown is because your query is formed like this select * from Table1 where Tags like '%'hello'%' ^ ^ extra single quote that should be removed
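Beyond the quoting fix, it is worth building this kind of query with parameters rather than by concatenating user input, which avoids quoting bugs like this one and SQL injection at the same time. A sketch of the same loop, parameterized (variable names reused from the question, untested):

    string strSel = "select * from Table1 where ";
    SqlCommand cmdSel = new SqlCommand();
    cmdSel.Connection = connectionName;

    for (int x = 0; x < arr.Count; x++)
    {
        if (x > 0) strSel += " or ";
        strSel += "Tags like @tag" + x;
        // The wildcards go into the parameter value, not into the SQL text.
        cmdSel.Parameters.AddWithValue("@tag" + x, "%" + arr[x] + "%");
    }

    cmdSel.CommandText = strSel;
    SqlDataReader sqlDataReaderName = cmdSel.ExecuteReader();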
{ "pile_set_name": "StackExchange" }
Q: Nautilus won't let me see or access files The last time I accessed this folder, it worked OK as it always has until this afternoon. Today, if I open the folder, there are no files or folders listed inside it. Nevertheless, Files (Nautilus) says there are 641 items, and if I expand the folder, Files (Nautilus) says that it is empty. If I right-click the folder and select Properties, there are 3,978 items inside it. I know that the files were there yesterday, and I haven't deleted anything from there, so I'm confident the files and folders are there. Does anyone know any clever ways to see and access them? el_gallo_azul@W2600CR-850Pro:~$ mount | grep ^/ /dev/sda1 on / type ext4 (rw,errors=remount-ro) /dev/sdb1 on /mnt/InternalHDD type ext4 (rw) /dev/sdc1 on /media/el_gallo_azul/G1 type ext4 (rw,nosuid,nodev,uhelper=udisks2) el_gallo_azul@W2600CR-850Pro:~$ sdc is the disk in question. A: The reason these files disappeared is a mystery. I installed nautilus-open-terminal as I mentioned above, but didn't do anything with it. I thought that I'd check some of the other folders at the same level, to see if their contents were OK. They were. While I was there, I also opened this "Movies" folder again. Everything appeared as normal. I have no idea why it suddenly started working correctly again. I did nothing in the meantime, including not rebooting the computer. For the moment, this is no longer a problem. I hope it stays that way.
{ "pile_set_name": "StackExchange" }
Q: Complicated Conditional Probability Three people role a die starting with person 1, then 2, then 3. First to role a 6 is eliminated. Find the probability that B is elimination first. I think this is some summation of probabilities to get a general form for how many times they have rolled a dice. I get $\frac {1}{6}*(\frac{5}{6})^n$ Such that n=1, 4, 7, 10,... (I cant think of how to write a sequence for this). Is there a way to write this because I get lost on how to write out the probability. After the first one is gone, the remaining two play until the next one wins. Find prob A won the game. I said this was Prob A won given B lost 1st and then C lost 2nd plus the Prob A won given B lost 2nd and then C lost 1st. But since it matters when they each lost to get to this part. How do I go about this? A: You have the right idea, but you don’t want to multiply $\frac56$ by those values of $n$: they should be exponents on $\frac56$. It might be easier to think first about A’s probability of being eliminated first. With probability $\frac16$ A is eliminated right away. He’s also eliminated first if all three of them roll non-sixes in the first round, and then he rolls a six on his second roll; that happens with probability $\left(\frac56\right)^3\frac16$. Continuing in this fashion we see that A’s probability of being eliminated first is $$\frac16+\left(\frac56\right)^3\frac16+\left(\frac56\right)^6\frac16+\ldots=\frac16\sum_{n\ge 0}\left(\frac56\right)^{3n}\;,\tag{1}$$ and since that’s just a geometric series, you can finish off the calculation to get a numerical result. Call this probability $p_A$. What changes if we look at B? A has to roll a non-six first, and if he does, B is now in A’s position: he’s effectively rolling first from now on. Thus, his probability of being eliminated first is $p_B=\frac56p_A$. But if we extend this analysis just a little further, we can actually avoid having to evaluate the summation in $(1)$: the same reasoning shows that C is eliminated first if and only if A and B both survive their first rolls, making C effectively the first player, and C is then eliminated first. That is, C’s probability of being eliminated first is $p_C=\frac56\cdot\frac56p_A=\frac{25}{36}p_A$. One of them has to be eliminated first, so $$1=p_A+p_B+p_C=p_A\left(1+\frac56+\frac{25}{36}\right)\;,$$ and you have a simple linear equation to solve for $p_A$. For the second part notice that if B is eliminated first, A is the second player in the resulting two-person game, while if C is eliminated first, A is the first player in the resulting two-person game. Use the ideas above to work out the probabilities of winning for the two players in the two-person game, and them combine them appropriately with the probabilities of B and C being eliminated first in the original game.
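For concreteness, carrying the answer's last equation through to numbers: $$1=p_A\left(1+\frac56+\frac{25}{36}\right)=p_A\cdot\frac{91}{36}\quad\Longrightarrow\quad p_A=\frac{36}{91},\qquad p_B=\frac56\,p_A=\frac{30}{91},\qquad p_C=\frac{25}{36}\,p_A=\frac{25}{91},$$ which agrees with summing the geometric series in $(1)$ directly: $\frac16\sum_{n\ge 0}\left(\frac56\right)^{3n}=\frac{1/6}{1-(5/6)^3}=\frac{216}{546}=\frac{36}{91}$.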
{ "pile_set_name": "StackExchange" }
Q: Error in the jason_decode() function Well, I am building a simple page to consume data from a Wikipedia API through a small form: <form action="" method="get"> <input type="text" name="busca"> <input type="submit" value="Busca"> </form> <?php if($_GET['busca']){ $api_url = "https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&titles=".ucwords($_GET['busca']).".redirects=true"; $api_url = str_replace(' ', '%20', $api_url); if($data = jason_decode(file_get_contents($api_url))){ foreach($data->query->pages as $key=>$val){ $pageId =$key; break; } $conteudo = $data->query->pages->$pageId->extract; header('Content-Type:text/html; charset=utf-8'); echo $content; } else{ echo 'Nenhum resultado encontrado.'; } } ?> PHP reports Fatal error: Uncaught Error: Call to undefined function jason_decode() As we can see below, the json extension is working: I even thought it was because of the IF check, but the error persists. A: The correct name is json_decode(), not jason_decode()
{ "pile_set_name": "StackExchange" }
Q: Clock in Notification Area Does Not Update I have a Samsung Galaxy Tab 2 10.1 with Android 4.2.2 that is behaving strangely. The clock on the notification bar refuses to update while I'm using the tablet or even while it's sleeping. The only times it updates is when you go into date/time settings and change the time or when the tablet goes to sleep and wakes up again. Another symptom is that none of the apps have updated in about two months. I thought this clock (which is located in the notification bar (right next to the battery indicator and the wi-fi signal indicator) is part of the Android O/S. It seems strange that the clock would just stop updating. Any advice you could give (short of resetting the whole device) would be greatly appreciated. A: The solution was simple. I just needed to restart the tablet. It hadn't been restarted in months and I guess that somehow the updating system process got in a weird state. After doing a reboot (restart) via a hard power off, the clock started working and updates are downloading as normal. I hope my solution here helps someone else.
{ "pile_set_name": "StackExchange" }
Q: localStorage for Radio Input? I have the following code working to remember my form input on page refresh. I'm not entirely sure how to progress with remembering the input for a radio button. How do adjust the code to do this? <input type="radio" name="options[Substrate]" id="clear" class="substrate" value="Clear – 125 µm SU320" /> <input type="radio" name="options[Substrate]" id="white-50" class="substrate" value="White – 50 µm Melinex 339"/> $(document).ready(function() { // Save Order on Refresh $(window).unload(saveSettings); loadSettings(); }); function loadSettings() { $('#circuitsNum').val(localStorage.setcircuit); $('#distanceNum').val(localStorage.setdistance); $('#note').val(localStorage.setnote); $("#metreSelect").val(localStorage.setmetre); } function saveSettings() { localStorage.setcircuit = $('#circuitsNum').val(); localStorage.setdistance = $('#distanceNum').val(); localStorage.setnote = $('#note').val(); localStorage.setmetre = $("#metreSelect").val(); } A: You can use the $('radioButton').prop("checked") function to check if a radio button is selected. and $('radioButton').prop("checked", true) to check a radio button. Alternatively you can get the id of the checked option: $('input[name="options[Substrate]"]:checked').attr('id') To set values with localStorage use: localStorage.setItem('item_key', 'item_value'); To get values from localStorage use: localStorage.getItem('item_key'); So to use it with your code like so: function saveSettings() { // Setting the ID of the checked button localStorage.setItem('substrateRadioButton', $('input[name="options[Substrate]"]:checked').attr('id')); // or // Setting the state of each button localStorage.setItem('clear', $("#clear").prop("checked")); localStorage.setItem('white-50', $("#white-50").prop("checked")); } function loadSettings () { // Gettings the ID of the checked button $("#"+localStorage.getItem('substrateRadioButton')).prop("checked", true); // or // Gettings the state of each button $("#clear").prop("checked", localStorage.getItem('clear')); $("#white-50").prop("checked", localStorage.getItem('white-50')); } You can check if the value is set with an if statement: var item = localStorage.getItem('item_key'); if (item) { // key is set } else { // key is not set } It would be wise to check the existence of the values in your loadSettings() function. Read more about localStorage here. and more about prop() here.
{ "pile_set_name": "StackExchange" }
Q: CMake add_subdirectory() Introduction: I am trying to use CMake to obtain cross platform compilation scripts (for VS 9.0 on a Windows32 and Makefiles for Unix). I am experiencing something i can't understand about add_subdirectory(). Let me show you my code : Context: My architecture for a module named "module1" is something like this : CMakeLists.txt include/ file1.h file2.h *.h src/ file1.cpp file2.cpp *.cpp test/ CMakeLists.txt src/ testfile1.cpp testfile2.cpp The architecture of my whole application is composed of these modules which are in themselves projects that could work independantly. My goals: I want to compile my module as a library I want to test the library with the code in the test/ folder Here are the CMakeLists i wrote : This one is the CMakeLists.txt in the root directory of my module. #ENSURE MINIMUM VERSION OF CMAKE cmake_minimum_required(VERSION 2.8) #CONFIGURATION OF THE PROJECT #NAME OF THE PROJECT project(MyProject) #OUTPUT OF THE PROJECT set(LIBRARY_OUTPUT_PATH lib/${CMAKE_BUILD_TYPE}) #ADD THE HEADERS OF THE LIBRARY BEING CREATED include_directories(include) #ADD 3rd PARTY OPENCV LIBRARIES find_package(OpenCV REQUIRED) #ADD 3rd PARTY XERCES LIBRARIES include_directories(${XERCES_INCLUDE_DIR}) link_directories(${XERCES_LIB_DIR}) set(Xerces_LIBS xerces-c_3D.lib) #CONFIGURATION OF THE LIBRARY file(GLOB_RECURSE MYPROJECT_MODULE_CXX src/*) file(GLOB_RECURSE MYPROJECT_MODULE_HDR include/*) #NAME OF THE PRESENT LIBRARY set(MYPROJECT_MODULE_LIB_NAME myModuleLib) add_library(${MYPROJECT_MODULE_LIB_NAME} SHARED ${MYPROJECT_MODULE_CXX} ${MYPROJECT_MODULE_HDR} ) target_link_libraries(${MYPROJECT_MODULE_LIB_NAME} ${OpenCV_LIBS} ${Xerces_LIBS} ) #CONTINUE IN THE SUB FOLDERS add_subdirectory(test) And then, in the test/ folder, here is the CMakeLists.txt #ENSURE MINIMUM VERSION OF CMAKE cmake_minimum_required(VERSION 2.8) #CONFIGURATION OF THE PROJECT #NAME OF THE PROJECT project(MyProjectTest) #OUTPUT OF THE PROJECT set(EXECUTABLE_OUTPUT_PATH bin/${CMAKE_BUILD_TYPE}) #ADD OUR TESTED LIBRARY include_directories(../include) link_directories(../build/lib/${CMAKE_BUILD_TYPE}) #CONFIGURATION OF THE EXE file(GLOB_RECURSE MYPROJECT_MODULE_TEST_CXX src/*) #NAME OF THE PRESENT EXECUTABLE set(MYPROJECT_MODULE_TEST_BIN_NAME myModuleTest) add_executable(${MYPROJECT_MODULE_TEST_BIN_NAME} ${MYPROJECT_MODULE_TEST_CXX} ) target_link_libraries(${MYPROJECT_MODULE_TEST_BIN_NAME} ${MYPROJECT_MODULE_LIB_NAME} ) Question The CMake outputs a correct MyProject.sln Visual Studio 9.0 solution, which compiles successfully in my library linked with OpenCV and Xerces (and other 3rd part libraries). However the test binary did not output any MyProjectTest.sln. I thought, (and read in the CMake documentation) that add_subdirectory(dir) was used to do CMake in the following sub directory (i mean, the name could not be clearer :p !), so shouldn't it continue CMake in the test/ directory and create my MyProjectTest.sln solution ? I use the GUI CMake to run the root CMakeLists.txt in a build directory that i create in the root of my module. When I explore the build directory that's where I can find my MyProjet.sln, a test/ folder, but no MyProjectTest.sln in it ! 
A: This may not solve your original problem but in your test/folder/CMakeLists.txt try changing #ADD OUR TESTED LIBRARY include_directories(../include) link_directories(../build/lib/${CMAKE_BUILD_TYPE}) to #ADD OUR TESTED LIBRARY include_directories(${CMAKE_SOURCE_DIR}/include) link_directories(${CMAKE_BINARY_DIR}/lib/${CMAKE_BUILD_TYPE}) otherwise you are assuming that your build folder is always named build. A: After three days trying everything, I finally found the answer... Sir DLRdave was right actually: the problem was not from the code itself but from something "out of the code". Problem found: I created and edited all my files with Notepad++. Actually when opening the files with windows notepad (because i was curious) a strange rectangle symbol appeared and the file did not look like the one I usually see on Notepad++ I found out that the symbol was a "\n\r" that Notepad++ did not show me (it must be filtered) but going on windows notepad, you could see that the whole file was "impure" and appeared on a single line instead of the layout i saw on Notepad++. As this "encoding" bug appeared only in the subdirectory CMakeLists, it could not be read but said no error when interpreting with CMake and that's probably why I had no returned error from running CMake. Solution: I used the native2ascii.exe tool from Java to correct the encoding error. The why: Actually what it probably means is that the syntaxic parser of CMake has probably not been designed for filtering this type of char appearing with strange encoding, that's why it gave me 3 days of intense debugging.
{ "pile_set_name": "StackExchange" }
Q: Two Element Lists with Table Extension I'm trying to use the table extension for my model. My code includes the table:from-list command to generate the table. However I am getting an error saying **"Extension exception: expected a two-element list". I generate the list from a txt file that is two columns of numerical data. Apparently this is not the correct format to create a two-element list, but I can't find any documentation describing the appropriate format. How can a two-element list be generated? Also, is it possible to build a table with more than two elements? Time is serving as the key in my table, but I would like to build a table such that this key would update three different variables (hence a four column table: ticks variable1 variable2 variable3) Any good resources for learning my way around the tables and arrays extensions is also welcomed (I've already checked out the extensions guide but don't have the hang of it yet). extensions [table] Globals [ list-O2 table-O2 ] to setup make-lsolutes ; loads lists for solute data (based on node 96) make-tsolutes to go if table:has-key? table-O2 ticks [ update-from-FEM] tick end to make-lsolutes ifelse ( file-exists? "96O214day.txt" ) [ set list-O2 [] file-open "96O214day.txt" while [ not file-at-end? ] [set list-O2 lput file-read list-O2] file-close ] [ user-message "There is no 96O214day.txt file in current directory!" ] end to make-tsolutes set table-O2 table:from-list list-O2 end to update-from-FEM ask insulation [ set O2 table:get table-O2 ticks] end The txt files to make the solute lists have two columns. Left column is time, and the right column is O2 concentration. Columns are separated by spaces. A: A two-element list would look like this: [[key1 val1][key2 val2][keyn valn]] I would use the file-open, file-read-line, and while [not file-at-end?] primitives to iterate over your input and use the String primitives to create the pair-lists from your input file. Something like: let the-list [] file-open filename.txt while [not file-at-end?][ let input file-read-line ;; find the start and end position/index of key and value (X,Y,O,P) using the position primitive on your delimiter let key substring input X Y let value substring input O P let the-pair (list key value) set the-list lput the-pair the-list ] You can embed tables inside tables. So if you want several values as key value, just create a table for each entry and put them in your main table.
{ "pile_set_name": "StackExchange" }
Q: CentOS 7 and Hyper-V I'm trying to install CentOS 7 using Hyper-V and it's failing with the following error message: tsc: Fast TSC calibration failed PCI: Fatal: No config space access function found i8042: No controller found [long waiting period...] dracut-initqueue[475]: Warning: Could not boot. dracut-initqueue[475]: Warning: /dev/disk/by-label/CentOS-7-livecd-x86_64 does not exist dracut-initqueue[475]: Warning /dev/mapper/live-rw does not exist Warning: /dev/disk/by-label/CentOS-7-livecd-x86_64 does not exist Warning: /dev/mapper/live-rw does not exist Generating "/run/initramfs/rdsosreport.txt" I have created a Generation 2 virtual machine and disabled Secure Boot so it would at least start booting. A: CentOS 7 currently does not support running on Hyper-V Generation 2 virtual machines, as can be seen here. You have to recreate the VM and specify Generation 1 as the VM type. Linux Virtual Machines on Hyper-V provides a comprehensive list of which distributions are supported and any limitations associated with them. For a list of the differences between Gen1 and Gen2 virtual machine, check this page. You'll notice Legacy BIOS is gone in favor of UEFI. A: You don't need to switch back to a Generation 1 virtual machine. You can use a Generation 2 virtual machine, so long as you disable Secure Boot. To quote from Microsoft: Generation 2 virtual machines have secure boot enabled by default and Generation 2 Linux virtual machines will not boot unless the secure boot option is disabled. You can disable secure boot in the Firmware section of the settings for the virtual machine in Hyper-V Manager or you can disable it using Powershell: Set-VMFirmware –VMName "VMname" -EnableSecureBoot Off Supporting Secure Boot is still a work-in-progress in most Linux distributions.
{ "pile_set_name": "StackExchange" }
Q: What .desktop file options are available? It looks like we've acquired quite a few new options in desktop files lately. Some of these are explained in very narrow posts, such as how to add a static quicklist to a Unity LauncherEntry. Others are difficult to find documentation for. We seem to have non-standard items in some Gnome applications, for instance, beginning with X-GNOME- So the question is: where can I find information about all of these, including these extensions? Are there extensions for all desktop environments, like KDE, Xfce and LXDE as well? In other words, what I am seeking, is a complete desktop file reference. Before you answer, please check for duplicates in http://standards.freedesktop.org/desktop-entry-spec/latest/ar01s05.html. Those are standard and that's a good reference for those. It is all those other types I'm interested in. A: http://standards.freedesktop.org/desktop-entry-spec/latest/index.html#introduction This page has some sort of specs for .desktop files. A: There doesn't seem to be any such centralized documentation for all .desktop file options. That would've been nice. I'll mark this as answered for now, but if someone makes such a documentation site, please edit this answer, or add a new one and I'll accept that instead. Because I think the question is valid, even if there are no answers.
{ "pile_set_name": "StackExchange" }
Q: How to investigate a memory leak with Apache and PHP? We're running a heavy Drupal website that performs financial modeling. We seem to be running into some sort of memory leak given the fact that overtime the memory used by apache grows while the number of apache processes remains stable: We know the memory problem is coming from apache/PHP because whenever we issue a /etc/init.d/httpd reload the memory usage drops (see above screenshot and below CLI outputs): Before httpd reload $ free total used free shared buffers cached Mem: 49447692 45926468 3521224 0 191100 22609728 -/+ buffers/cache: 23125640 26322052 Swap: 2097144 536552 1560592 After httpd reload $ free total used free shared buffers cached Mem: 49447692 28905752 20541940 0 191360 22598428 -/+ buffers/cache: 6115964 43331728 Swap: 2097144 536552 1560592 Each apache thread is assigned a PHP memory_limit of 512MB which explains the high memory usage depiste the low volume of requests, and a max_execution_time of 120 sec which should terminate threads which execution is taking longer, and should therefore prevent the constant growth in memory usage we're seeing. Q: How could we investigate what is causing this memory leak? Ideally I'm looking for troubleshooting steps I can perform on the system without having to bother the dev team. Additional info: OS: RHEL 5.6 PHP: 5.3 Drupal: 6.x MySQL: 5.6 FYI we're aware of the swapping issue which we're investigating separately and has nothing to do with the memory leak which we've observed before the swapping started to occur. A: We know the memory problem is coming from apache/PHP because whenever we issue a /etc/init.d/httpd reload the memory usage drops No - that just means it's related to the web traffic. You've gone on to mention that you're running mysql on the box - presumably managing data for the webserver - it could just as easily be the culprit here. As could other services your webstack uses which you've not mentioned. Each apache thread is assigned a PHP memory_limit of 512MB which explains No it doesn't. You're reporting an average of 7 and a max of 25 busy servers - yet your memory graph shows a delta of around 25Gb. Really you should start again with basic HTTP tuning - you seem to be running a constant 256 httpds, yet your peak usage is 25 - this is just plain dumb. and a max_execution_time of 120 sec which should terminate threads which execution is taking longer No - only if the thread of execution is within the PHP interpreter - not if PHP is blocked. that performs financial modeling (sigh) It would have been helpful if you'd provided details of how you have configured Apache, threaded or prefork, what version, how PHP is invoked (module, cgi, fastcgi), whether you are using persistent connections, whether you use stored procedures. I'd suggest you start by moving mysql onto a seperate machine and stop using persistent connections (if you're currently using them). Set the memory limit much lower and override this on a per-script basis. Make sure you've got the circular reference garbage collector installed and configured.
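To make the "basic HTTP tuning" point concrete: if PHP is loaded as an Apache module (one of the things the answer asks about), the server will normally be running the prefork MPM, and keeping roughly 256 children resident is what pins all that memory. A sketch of prefork settings sized for the observed peak of about 25 busy workers (the numbers are illustrative only, not taken from the question, and need to be validated against real memory-per-child figures):

    <IfModule mpm_prefork_module>
        # The observed peak was ~25 busy servers; 40 leaves some headroom.
        StartServers            5
        MinSpareServers         5
        MaxSpareServers        15
        ServerLimit            40
        MaxClients             40
        # Recycle children periodically so slow per-request growth cannot accumulate forever.
        MaxRequestsPerChild  1000
    </IfModule>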
{ "pile_set_name": "StackExchange" }
Q: Using sharedPreferences like a variable in my text How to use sharedPreferences data in my Text widget? saving sP: sharedPreferences.setString("firstName", jsonResponse['firstName']); sharedPreferences.setString("lastName", jsonResponse['lastName']); reading sP: getUserDetails(String key) async { SharedPreferences sharedPreferences = await SharedPreferences.getInstance(); final firstName = sharedPreferences.getString('firstName') ?? ''; final lastName = sharedPreferences.getString('lastName') ?? ''; print(firstName); print(lastName); } but how to use my firstName and lastName like a text variable? i try in this way: @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text("Code Land", style: TextStyle(color: Colors.white)), actions: <Widget>[ FlatButton( onPressed: () { sharedPreferences.clear(); // sharedPreferences.commit(); Navigator.of(context).pushAndRemoveUntil(MaterialPageRoute(builder: (BuildContext context) => LoginPage()), (Route<dynamic> route) => false); }, child: Text("Log Out", style: TextStyle(color: Colors.white)), ), ], ), body: Center(child: Text(getUserDetails('firstName'))), <=== HERE drawer: Drawer(), ); } A: You actually can't do that directly, because the sharedPreferences instance needs to be awaited for. But there is a Widget called FutureBuilder with that widget you could do something like: class MyWidget extends StatelessWidget { @override Widget build(BuildContext context) { return FutureBuilder<SharedPreferences>( future: SharedPreferences.getInstance(), builder: (context, snapshot) { if (!snapshot.hasData) return Scaffold( body: CircularProgressIndicator(), ); //Just some placeholder loading widget return Scaffold( appBar: AppBar( title: Text("Code Land", style: TextStyle(color: Colors.white)), actions: <Widget>[ FlatButton( onPressed: () { snapshot.data.clear(); //Note instead of sharedPreferences we now use snapshot.data Navigator.of(context).pushAndRemoveUntil( MaterialPageRoute( builder: (BuildContext context) => LoginPage()), (Route<dynamic> route) => false); }, child: Text("Log Out", style: TextStyle(color: Colors.white)), ), ], ), body: Center(child: Text(getUserDetails('firstName', snapshot.data))), //Here you just pass the already loaded sharedPreferences to your method and make the method return your result immediately instead of a Future drawer: Drawer(), ); }, ); } } I hope this answers your question.
{ "pile_set_name": "StackExchange" }
Q: Perl script running a periodic (main) task and providing a REST interface I am working on a Perl script which does some periodic processing based on file-system contents. The overall structure is like this: # ... initialization... while(1) { # ... scan filesystem, perform actions depending on changes detected ... sleep 5; } I would like to add the ability to input some data into this process by means of exposing an interface through HTTP. E.g. I would like to add an endpoint to skip the sleep, but also some means to input data that is processed in the next iteration. Additionally, I would like to be able to query some of the program's status through HTTP (i.e. a simple fork() to run the webserver-part in a separate process is insufficient?) So far I have already used the Dancer2 framework once but it has a start; call that blocks and thus does not allow any other tasks (like my loop) to run. Additionally, I could of course move the code which is currently inside the loop to an endpoint exposed through Dancer2 but then I would need to call that periodically (though an external program?) which seems to be quite an obscure indirection compared to just having the webserver-part running in background. Is it possible to unobtrusively (i.e. without blocking the program) add a REST-server capability to a Perl script? If yes: Which modules would be used for the purpose? If no: Should I really implement an external process to periodically invoke a certain endpoint or pursue a different solution altogether? (I have tried to add a dancer2 tag, but could not do so due to insufficient reputation. Do not be mislead by this: I have so far only tried with Dancer2 not the Dancer (v.1)) A: You could try to launch your processing loop in a background thread, before you run start;. See man perlthrtut You probably want use threads::shared; to declare some variables shared between the REST part and the background thread. Or use dedicated queues/event mechanisms.
{ "pile_set_name": "StackExchange" }
Q: If lower bound of a problem is exponential then is it NP? Assuming that we have a problem $p$ and we showed that the lower bound for solving $p$ is $\mathcal{\Omega}(2^n)$. can lower bound $\mathcal{\Omega}(2^n)$ implies the problem in $NP$? A: No. For example, the halting problem has an $\Omega(2^n)$ lower bound, but it is not in NP (since it is not computable). The nondeterministic time hierarchy theorem shows that any NEXP-complete problem is another example (with $2^n$ potentially replaced by a smaller exponential function $c^{n^\epsilon}$). NP is an upper bound on the complexity of a problem. A: No. First, as Yuval points out, the problem could be much harder than the lower bound that you've proven. Second, even if the problem takes time $\Theta(2^n)$ to solve, we don't know how this relates to $\mathbf{NP}$. It's possible that $\mathbf{P}=\mathbf{NP}$, in which case any problem in $\mathrm{TIME}[\Omega(2^n)]$ is certainly not in $\mathbf{NP}$ by the time hierarchy theorem. But even if $\mathbf{P}\neq\mathbf{NP}$, it's possible that the problem requires exponential space so isn't in $\mathbf{NP}$. The best algorithms we know for $\mathbf{NP}$-complete problems take exponential time but you shouldn't assume that "in $\mathbf{NP}$" means "takes exponential time" or vice-versa.
{ "pile_set_name": "StackExchange" }
Q: Approximations using derivatives I came across the following definitions in my textbook: The differential of $x$, denoted by $dx$, is defined by $dx = \Delta x$. The differential of $y$, denoted by $dy$, is defined by $dy=f'(x)\,dx$ or $dy = (\frac{dy}{dx})\Delta x$. I understood the first part. However, the second part doesn't make intuitive sense to me. What is the intuitive explanation for the second definition? A: Think of the derivative as the slope of the tangent line to the graph of $y=f(x)$ at the point $(x,f(x))$. If you approximate your function with its tangent line, then the slope is $$m=f'(x)=\frac{dy}{dx},$$ where $dy$ is the linear (tangent-line) approximation to $\Delta y$, the actual change in $y$. As you see, $$f'(x)=\frac{dy}{dx}$$ can be written as $$dy=f'(x)\,dx.$$
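A small worked example may make the tangent-line picture concrete (the function and the numbers here are my own illustration, not from the original post). Take $f(x)=\sqrt{x}$ near $x=25$ with $\Delta x = dx = 0.2$:
$$dy = f'(25)\,dx = \frac{1}{2\sqrt{25}}\cdot 0.2 = 0.02,$$
so the tangent-line estimate is $\sqrt{25.2}\approx 5 + dy = 5.02$, while the actual value is $\sqrt{25.2}\approx 5.0200$, i.e. $\Delta y \approx 0.0200$. The differential $dy$ is exactly this easy-to-compute stand-in for $\Delta y$.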
{ "pile_set_name": "StackExchange" }
Q: C++ how to work with pointers to vector pointers I have this design: vector<string*>* tmp = new vector<string*>; How do I put elements into it? I can't figure out how to put a few strings into this vector. I tried std::string srt = "abc"; tmp->push_back(srt); but the compiler complains that the syntax is incorrect, and I don't know what to do. A: Just don't get too pointer-ish (assuming this is a relatively simple program): vector<string> tmp; std::string srt = "abc"; tmp.push_back(srt); A vector of pointers to string is certainly a disaster waiting to happen, and you probably don't need a pointer to a vector either.
{ "pile_set_name": "StackExchange" }
Q: AD: Roll out an MSI application at midnight Using Windows Server's Active Directory and Group Policy, is there a way to roll out an MSI file at a scheduled time, like midnight? A: Strictly speaking, no. GPOs are not scheduled tasks; they run when the system updates (reboots, etc.). We use Quest's GPOAdmin, which allows for time-based deployment of GPOs. This could help somewhat. It sounds like you really need a software deployment system (SCCM, LANDesk, etc.).
{ "pile_set_name": "StackExchange" }
Q: Push Notification with Image - iOS - Swift Hi I just want to show push Notification with Image. Im using the below code and im not sure where im doing mistake it took me more than 3 weeks, I gone through many Links but still it couldn't be fixed. the below is my App delegate code AppDelegate.Swift import UIKit import UserNotifications var deviceTokenString:String = "" var badgeCount = 0 @UIApplicationMain class AppDelegate: UIResponder, UIApplicationDelegate, UNUserNotificationCenterDelegate { var window: UIWindow? func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool { // Push Notification if #available(iOS 10.0, *) { let center = UNUserNotificationCenter.current() center.requestAuthorization(options: [.alert, .badge, .sound]) { (granted, error) in // actions based on whether notifications were authorised or not guard error == nil else { //Display Error.. Handle Error.. etc.. return } if granted { //Do stuff here.. } else { //Handle user denying permissions.. } } application.registerForRemoteNotifications() } else { // Fallback on earlier versions } registerForRemoteNotification() // iOS 10 support if #available(iOS 10, *) { UNUserNotificationCenter.current().requestAuthorization(options:[.alert, .sound]){ (granted, error) in } application.registerForRemoteNotifications() } // iOS 9 support else if #available(iOS 9, *) { UIApplication.shared.registerUserNotificationSettings(UIUserNotificationSettings(types: [.sound, .alert], categories: nil)) UIApplication.shared.registerForRemoteNotifications() } // iOS 8 support else if #available(iOS 8, *) { UIApplication.shared.registerUserNotificationSettings(UIUserNotificationSettings(types: [.sound, .alert], categories: nil)) UIApplication.shared.registerForRemoteNotifications() } // iOS 7 support else { application.registerForRemoteNotifications(matching: [.sound, .alert]) } return true } func registerForRemoteNotification() { if #available(iOS 10.0, *) { let center = UNUserNotificationCenter.current() center.delegate = self center.requestAuthorization(options: [.sound, .alert]) { (granted, error) in if error == nil{ UIApplication.shared.registerForRemoteNotifications() // UIApplication.shared.applicationIconBadgeNumber = 5 } } } else { UIApplication.shared.registerUserNotificationSettings(UIUserNotificationSettings(types: [.sound, .alert], categories: nil)) UIApplication.shared.registerForRemoteNotifications() // UIApplication.shared.applicationIconBadgeNumber = 5 } } func incrementBadgeNumberBy(badgeNumberIncrement: Int) { let currentBadgeNumber = UIApplication.shared.applicationIconBadgeNumber let updatedBadgeNumber = currentBadgeNumber + badgeNumberIncrement if (updatedBadgeNumber > 0) { UIApplication.shared.applicationIconBadgeNumber = updatedBadgeNumber } else { UIApplication.shared.applicationIconBadgeNumber = 0 } } func application(_ application: UIApplication, didFailToRegisterForRemoteNotificationsWithError error: Error) { print("Couldn't register: \(error)") } func application(_ application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) { deviceTokenString = deviceToken.hexString() // deviceTokenString = deviceToken.reduce("", {$0 + String(format: "%02X", $1)}) print("device token: \(deviceTokenString)") } // Push notification received func application(_ application: UIApplication, didReceiveRemoteNotification data: [AnyHashable : Any]) { // Print notification payload data badgeCount = badgeCount + 1 
self.incrementBadgeNumberBy(badgeNumberIncrement: badgeCount) print("Push notification received: \(data)") } // Notification will present call back @available(iOS 10.0, *) func userNotificationCenter(_ center: UNUserNotificationCenter, willPresent notification: UNNotification, withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) { completionHandler([.alert, .sound, .badge]) print("UserInfo: \(notification.request.content.userInfo)") var userinfo = NSDictionary() userinfo = notification.request.content.userInfo as NSDictionary let imgData = userinfo.value(forKey: "data")! as! NSDictionary let url = imgData.value(forKey: "attachment-url") let imgUrl = URL(string: url as! String)! // 1. Create Notification Content let content = UNMutableNotificationContent() // 2. Create Notification Attachment URLSession.shared.downloadTask(with: imgUrl) {(location, response, error) in print("location: \(location!)") if error == nil { if let location = location { // Move temporary file to remove .tmp extension let tmpDirectory = NSTemporaryDirectory() let tmpFile = "file://".appending(tmpDirectory).appending(imgUrl.lastPathComponent) print("tmpFile: \(tmpFile)") let tmpUrl = URL(string: tmpFile)! print("tmpUrl: \(tmpUrl)") try! FileManager.default.moveItem(at: location, to: tmpUrl) // Add the attachment to the notification content if let attachment = try? UNNotificationAttachment(identifier: "attachment", url: tmpUrl) { content.attachments = [attachment] print("attachment: \(content.attachments)") // 3. Create Notification Request let request = UNNotificationRequest.init(identifier: String.UNNotificationRequest.NormalLocalPush.rawValue, content: content, trigger: nil) content.title = "\(userinfo.value(forKeyPath: "aps.alert.title")!)" content.body = "\(userinfo.value(forKeyPath: "aps.alert.body")!)" content.sound = UNNotificationSound.default() content.badge = (UIApplication.shared.applicationIconBadgeNumber + 1) as NSNumber; content.categoryIdentifier = String.UNNotificationCategory.Normal.rawValue // 4. Add to NotificationCenter let center = UNUserNotificationCenter.current() center.add(request) } } } else { print("Error: \(error!)") } }.resume() } @available(iOS 10.0, *) // Notification interaction response call back func userNotificationCenter(_ center: UNUserNotificationCenter, didReceive response: UNNotificationResponse, withCompletionHandler completionHandler: @escaping () -> Void) { print("\(response.notification.request.content.userInfo)") var userinfo = NSDictionary() userinfo = response.notification.request.content.userInfo as NSDictionary let imgData = userinfo.value(forKey: "data")! as! NSDictionary let url = imgData.value(forKey: "attachment-url") let imgUrl = URL(string: url as! String)! // 1. Create Notification Content let content = UNMutableNotificationContent() content.title = "\(userinfo.value(forKeyPath: "aps.alert.title")!)" content.body = "\(userinfo.value(forKeyPath: "aps.alert.body")!)" content.sound = UNNotificationSound.default() content.badge = (UIApplication.shared.applicationIconBadgeNumber + 1) as NSNumber; content.categoryIdentifier = String.UNNotificationCategory.Normal.rawValue // 设置通知类型标示 // 2. 
Create Notification Attachment URLSession.shared.downloadTask(with: imgUrl) { (location, response, error) in if let location = location { // Move temporary file to remove .tmp extension let tmpDirectory = NSTemporaryDirectory() let tmpFile = "file://".appending(tmpDirectory).appending(imgUrl.lastPathComponent) let tmpUrl = URL(string: tmpFile)! try! FileManager.default.moveItem(at: location, to: tmpUrl) // Add the attachment to the notification content if let attachment = try? UNNotificationAttachment(identifier: "", url: tmpUrl) { content.attachments = [attachment] } } // Serve the notification content // self.contentHandler!(content) }.resume() // if let attachement = try? UNNotificationAttachment(identifier: "attachment", url: imgUrl, options: nil) // { // content.attachments = [attachement] // } // 3. Create Notification Request let request = UNNotificationRequest.init(identifier: String.UNNotificationRequest.NormalLocalPush.rawValue, content: content, trigger: nil) // 4. Add to NotificationCenter let center = UNUserNotificationCenter.current() center.add(request) let responseNotificationRequestIdentifier = response.notification.request.identifier if responseNotificationRequestIdentifier == String.UNNotificationRequest.NormalLocalPush.rawValue || responseNotificationRequestIdentifier == String.UNNotificationRequest.LocalPushWithTrigger.rawValue || responseNotificationRequestIdentifier == String.UNNotificationRequest.LocalPushWithCustomUI1.rawValue || responseNotificationRequestIdentifier == String.UNNotificationRequest.LocalPushWithCustomUI2.rawValue { let actionIdentifier = response.actionIdentifier switch actionIdentifier { case String.UNNotificationAction.Accept.rawValue: break case String.UNNotificationAction.Reject.rawValue: break case String.UNNotificationAction.Input.rawValue: break case UNNotificationDismissActionIdentifier: break case UNNotificationDefaultActionIdentifier: break default: break } } completionHandler(); } } extension Data { func hexString() -> String { return self.reduce("") { string, byte in string + String(format: "%02X", byte) } } } And below is my Extension Code which im using for custom Push notification, Extension.swift import Foundation extension String { enum UNNotificationAction : String { case Accept case Reject case Input } enum UNNotificationCategory : String { case Normal case Cheer case CheerText } enum UNNotificationRequest : String { case NormalLocalPush case LocalPushWithTrigger case LocalPushWithCustomUI1 case LocalPushWithCustomUI2 } } extension URL { enum ResourceType : String { case Local case Local1 case Remote case AttachmentRemote } static func resource(type :ResourceType) -> URL { switch type { case .Local: return Bundle.main.url(forResource: "cheer", withExtension: "png")! case .Local1: return Bundle.main.url(forResource: "hahaha", withExtension: "gif")! case .Remote: return URL(string: "http://ww1.sinaimg.cn/large/65312d9agw1f59leskkcij20cs0csmym.jpg")! case .AttachmentRemote: return URL(string: "https://assets-cdn.github.com/images/modules/open_graph/github-mark.png")! } } } extension URLSession { class func downloadImage(atURL url: URL, withCompletionHandler completionHandler: @escaping (Data?, NSError?) -> Void) { let dataTask = URLSession.shared.dataTask(with: url) { (data: Data?, response: URLResponse?, error: Error?) in completionHandler(data, error as NSError?) 
} dataTask.resume() } } and My Api response is, [AnyHashable("aps"): { alert = { body = test; title = "N-Gal"; }; "mutable-content" = 1; sound = default; }, AnyHashable("data"): { "attachment-url" = "https://www.n-gal.com/image/cache/catalog/HomeBanner/Banners/1172X450-N-Gal-Footwear-Banner-100x100.jpg"; }] This code is based on the tutorial https://github.com/maquannene/UserNotifications. Please give me a solution to fix this... Thanks in Advance...! A: From your code snippet I conclude you are talking about remote notifications. That's an important distinction. If you want to 'enrich' a remote notification (e.g. add an image), you need a UNNotificationServiceExtension: For local notifications, the app adds attachments when creating the rest of the notification’s content. To add attachments to a remote notification, use a notification service extension to modify the notification content before it is delivered. For more information about implementing a notification service extension, see UNNotificationServiceExtension Source: Apple documentation. (emphasis mine) That extension lives outside of your app and is called before the user gets to see the remote notification. That way you have the chance to load all of your remote resources before the notification is scheduled for delivery. For more info on the lifecycle of extensions and how they communicate with their host app, take a look at the App Extension Programming Guide. To add the extension in Xcode, go to File > New > Target and select a Notification Service Extension: This will create new extension target and embed it in the host target: In the NotificationService.swift file you will find the entry point where you can start customising the notification content. class NotificationService: UNNotificationServiceExtension { var contentHandler: ((UNNotificationContent) -> Void)? var bestAttemptContent: UNMutableNotificationContent? override func didReceive(_ request: UNNotificationRequest, withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) { self.contentHandler = contentHandler bestAttemptContent = (request.content.mutableCopy() as? UNMutableNotificationContent) if let bestAttemptContent = bestAttemptContent { // Modify the notification content here... bestAttemptContent.title = "\(bestAttemptContent.title) [modified]" contentHandler(bestAttemptContent) } } override func serviceExtensionTimeWillExpire() { // Called just before the extension will be terminated by the system. // Use this as an opportunity to deliver your "best attempt" at modified content, otherwise the original push payload will be used. if let contentHandler = contentHandler, let bestAttemptContent = bestAttemptContent { contentHandler(bestAttemptContent) } } } Be sure to take look at the UNNotificationServiceExtension class overview for more details.
{ "pile_set_name": "StackExchange" }
Q: Is it possible to replay CAN bus messages for a building intercom system? I would like to automate my apartment's intercom unit to unlock the building door remotely. I am able to press the door unlock button at any time and this unlocks the door in the lobby. I scoured the internet, was able to find the unit sold wholesale in China, and the documentation mentions the communication protocol is CAN bus. I took the cover off my video intercom unit and found that there is a single DATA pin, which I presume is transmit, or maybe a single-wire implementation of CAN? Impossible to tell. I measured the voltage by connecting a multimeter to ground and to the DATA pin. It was 14.5 V, which seems rather strange. When I press a button I can see the voltage drop slightly. Are there any tools I could use to see the messages being generated by the intercom unit? Ideally I would like to reverse engineer the CAN messages and then replay them from a WiFi-enabled microcontroller so I can operate it remotely. A: CAN bus is not normally single wire; equally, the voltage makes it seem more like K-line / L-line. If that is the case, you should be able to hook it up to a PC serial port and log it at 10400 baud (CAN bus is usually too fast for reliable single-wire transfer over the kind of distances a home intercom would work at). It is 100% possible to replay messages; it just depends a little on the implementation in this case, e.g. whether it uses some kind of rolling code (unlikely) to prevent replayed messages from being valid. Have a look at where that data pin connects to inside the device: usually there will be a little 8-pin IC that acts as a transceiver for the data bus, and this should give you a very solid telltale of exactly what physical layer it is using (it could alternatively just be a transistor). And finally, if it is K-line, you will need to offset the voltage of any logic analyser you may hook up to it at a later stage: low is below 33% of VCC, and high is above 66%, so for your 14.5 V bus you want to offset near the middle for reliable capture from some of the weirder devices. Edit: Adding image of suggested transceiver schematic.
{ "pile_set_name": "StackExchange" }
Q: jQuery - Open all links in id in new window Can anyone tell me of a way to open all links within an id in a new window? A: Put this in the head: $(function () { $('#selector').attr('target', '_blank'); })
{ "pile_set_name": "StackExchange" }
Q: Trouble plotting coordinates in degrees in matplotlib So, I am currently working on GTFS data that I have imported into python. I have this table bronx_frequencyTable that contains longitudinal coordinates in shape_pt_lon and shape_pt_lat. However, when I run the following code: for index, row in bronx_frequencyTable.iterrows(): map.plot(row['shape_pt_lon'], row['shape_pt_lat'], latlon=True, linestyle='solid') plt.show() Nothing actually shows up besides the coastlines I have already plotted. Would anyone know how I could plot lines from the data I have? The data: shape_id shape_pt_lat shape_pt_lon shape_pt_sequence num_trips 0 BX010026 40.809663 -73.928240 10001 143 1 BX010026 40.809620 -73.928135 10002 143 2 BX010026 40.810075 -73.927781 10003 143 3 BX010026 40.810130 -73.927698 10004 143 4 BX010026 40.810860 -73.927191 10005 143 5 BX010026 40.810970 -73.927140 10006 143 6 BX010026 40.811036 -73.927223 10007 143 7 BX010026 40.811104 -73.927310 10008 143 8 BX010026 40.811725 -73.928126 10009 143 9 BX010026 40.812036 -73.928541 10010 143 10 BX010026 40.812338 -73.928949 10011 143 11 BX010026 40.812621 -73.929328 10012 143 12 BX010026 40.812722 -73.929458 10013 143 13 BX010026 40.812733 -73.929476 10014 143 14 BX010026 40.812816 -73.929515 10015 143 15 BX010026 40.812840 -73.929544 10016 143 16 BX010026 40.812912 -73.929631 10017 143 17 BX010026 40.813090 -73.929844 10018 143 18 BX010026 40.813521 -73.929548 10019 143 19 BX010026 40.813521 -73.929548 20001 143 20 BX010026 40.814235 -73.929059 20002 143 21 BX010026 40.814418 -73.928990 20003 143 22 BX010026 40.814863 -73.928787 20004 143 23 BX010026 40.816515 -73.928149 20005 143 24 BX010026 40.816820 -73.928027 20006 143 25 BX010026 40.816820 -73.928027 30001 143 26 BX010026 40.817198 -73.927874 30002 143 27 BX010026 40.817568 -73.927700 30003 143 28 BX010026 40.817632 -73.927668 30004 143 29 BX010026 40.818540 -73.927240 30005 143 ... ... ... ... ... ... 41788 SBS410037 40.870280 -73.878387 90011 365 41789 SBS410037 40.870338 -73.878271 90012 365 41790 SBS410037 40.870680 -73.877612 90013 365 41791 SBS410037 40.871047 -73.876928 90014 365 41792 SBS410037 40.871118 -73.876827 90015 365 41793 SBS410037 40.871217 -73.876700 90016 365 41794 SBS410037 40.871402 -73.876513 90017 365 41795 SBS410037 40.871402 -73.876513 100001 365 41796 SBS410037 40.872958 -73.874939 100002 365 41797 SBS410037 40.873167 -73.874765 100003 365 41798 SBS410037 40.873383 -73.874570 100004 365 41799 SBS410037 40.875086 -73.873225 100005 365 41800 SBS410037 40.876954 -73.871996 100006 365 41801 SBS410037 40.877075 -73.871938 100007 365 41802 SBS410037 40.878013 -73.871755 100008 365 41803 SBS410037 40.878392 -73.871661 100009 365 41804 SBS410037 40.878392 -73.871661 110001 365 41805 SBS410037 40.878644 -73.871598 110002 365 41806 SBS410037 40.878616 -73.871378 110003 365 41807 SBS410037 40.878396 -73.870496 110004 365 41808 SBS410037 40.878352 -73.870330 110005 365 41809 SBS410037 40.878228 -73.869842 110006 365 41810 SBS410037 40.878206 -73.869762 110007 365 41811 SBS410037 40.877798 -73.868049 110008 365 41812 SBS410037 40.877632 -73.867377 110009 365 41813 SBS410037 40.877505 -73.866849 110010 365 41814 SBS410037 40.877445 -73.866643 110011 365 41815 SBS410037 40.877373 -73.866415 110012 365 41816 SBS410037 40.877326 -73.866271 110013 365 41817 SBS410037 40.877769 -73.866017 110014 365 And the full code: import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap import numpy as np import pandas as pd #Calculate base frequency. 
base_max_routes = 4 base_directions = 2 base_minhour = 7 base_maxhour = 19 base_bph = 6 base_frequency = base_max_routes * base_directions * (base_maxhour - base_minhour) * base_bph # Read the shape files from the bus data. bronx_shapes = pd.read_csv('../Bronx Data/shapes.txt') bronx_trips = pd.read_csv('../Bronx Data/trips.txt') bronx_numTrips = bronx_trips.groupby('shape_id').size() bronx_numTrips.name = 'num_trips' bronx_frequencyTable = bronx_shapes.join(bronx_numTrips, on=['shape_id'], how='inner') print(bronx_frequencyTable) # Create a map of New York City centered on Manhattan. map = Basemap(resolution="h", projection="stere", width=50000, height=50000, lon_0=-73.935242, lat_0=40.730610) map.drawcoastlines() # Map the bus routes. for index, row in bronx_frequencyTable.iterrows(): map.plot(row['shape_pt_lon'], row['shape_pt_lat'], latlon=True, linestyle='solid') plt.show() A: You are plotting point by point. Instead you should plot directly the columns: map.plot(bronx_frequencyTable['shape_pt_lon'], bronx_frequencyTable['shape_pt_lat'], latlon=True, linestyle='solid')
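If each route (shape_id) should come out as its own polyline rather than one continuous line drawn through every shape, a grouped version of the same idea might look like the sketch below. This is only a sketch based on the column names shown in the question, not something tested against the real GTFS files.

for shape_id, shape in bronx_frequencyTable.groupby('shape_id'):
    shape = shape.sort_values('shape_pt_sequence')   # keep the points in drawing order
    map.plot(shape['shape_pt_lon'].values, shape['shape_pt_lat'].values,
             latlon=True, linestyle='solid')
plt.show()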
{ "pile_set_name": "StackExchange" }
Q: mfc100u.dll missing when running application on another computer I have an MFC application which complains about the DLL in the subject line when I run it on another computer that doesn't have VS2010 installed. I have come across solutions which say I have to install the VS2010 redistributable package, but really? Do we have to install that on every customer's computer? That doesn't seem very good. The interesting thing is that I have another MFC application which does the same thing as the new one, but it doesn't need mfc100u.dll, so I am confused. A: Which DLLs your app depends on is something you can figure out with the Dependency Walker tool that used to come with every VS installation (now freeware). Redistributables are packages which should be installed when deploying applications on clients' PCs. Installers usually do this automatically (and often silently) to ensure that your application will have all its dependencies met. So yes, you need to A) figure out all the dependencies of your app and B) figure out which packages you need, and then deploy them alongside your app when installing it. That's why installers are so common in the Windows world.
{ "pile_set_name": "StackExchange" }
Q: How do I pass two Points to __init__ to define a rectangle? How do I create an __init__ method that takes two objects of type Point as input (bottom-left corner and top-right corner) and a string for color, i.e. r1 = Rectangle(Point(0,0), Point(1,1), 'blue') r1.get_bottom_left() --> Point(0,0) So far I have: class Rectangle: def __init__(self, PointbL, PointtR, 'color'): self.bL = PointbL self.tR = PointtR self.bLx= PointbL[0] self.bLy= PointbL[1] self.tRx= PointtR[0] Self.tRy= PointtR[1] self.color='color' I just learned about class methods and so I am unsure how to do this, but this looks wrong to me... I tried to figure it out with answers from the following question but I don't think I did it right: Creating a python Rectangle object class that can print the corner coordinates A: __init__ should take ordinary parameters, not literals like 'color'. To support indexing you would need to implement additional special methods in Point, but I don't think that's necessary in your code. Additionally, I would recommend names like topleft rather than tL, as it will improve readability. class Point: def __init__(self, x, y): self.x = x self.y = y class Rectangle: def __init__(self, bottomleft, topright, color): # ordinary parameters self.bottomleft = bottomleft # no need for separate self.bLx-style fields self.topright = topright # access x by rectangle.topright.x self.color = str(color) # cast to a string If you must, you can add assertions in __init__ to ensure that the arguments passed into Rectangle are Points: assert isinstance(bottomleft, Point) and isinstance(topright, Point), 'Must enter Points'
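A quick usage sketch of the class design from the answer (the width/height lines are my own addition, just to show why storing the two Point objects is enough):

# assumes the Point and Rectangle classes from the answer above
r1 = Rectangle(Point(0, 0), Point(1, 1), 'blue')
print(r1.bottomleft.x, r1.bottomleft.y)   # 0 0
print(r1.topright.x, r1.topright.y)       # 1 1
print(r1.color)                           # blue
# derived quantities fall out of the two stored corners
width = r1.topright.x - r1.bottomleft.x
height = r1.topright.y - r1.bottomleft.y
print(width, height)                      # 1 1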
{ "pile_set_name": "StackExchange" }
Q: Are there any gotchas with open(my $f, '<:encoding(UTF-8)', $n) I am having a problem that I am unable to reproduce in a manner suitable for Stackoverflow although it's reproducable in my production environment. The problem occors in a Perl script that, among others, iterates over a file that looks like so: abc-4-9|free text, possibly containing non-ascii characters| cde-3-8|hällo wörld| # comment xyz-9-1|and so on| qrs-2-8|and so forth| I can verify the correctness of the file with this Perl script: use warnings; use strict; open (my $f, '<:encoding(UTF-8)', 'c:\path\to\file') or die "$!"; while (my $s = <$f>) { chomp($s); next unless $s; next if $s =~ m/^#/; $s =~ m!(\w+)-(\d+)-(\d+)\|([^|]*)\|! or die "\n>$s<\n didn't match on line $."; } print "Ok\n"; close $f; When I run this script, it won't die on line 10 and consequently print Ok. Now, I use essentially the same construct in a huge Perl script (hence irreproducable for Stackoverflow) and it will die on line 2199 of the input file. If I change the first line (which is completely unrelated to line 2199) from something like www-1-1|A line with some words| to www-1-1|x| the script will process line 2199 (but fail later). Interestingly, this behaviour was introduced when I changed open (my $f, '<', 'c:\path\to\file') or die "$!"; to open (my $f, '<:encoding(UTF-8)', 'c:\path\to\file') or die "$!"; Without the :encoding(UTF-8) directive, the script does not fail. Of course, I need the encoding directive since the file contains non-ascii characters. BTW, the same script runs without problems on Linux. On Windows, where it fails, I use Strawberry Perl 5.24 A: I do not have a full and correct explanation of why this is necessary, but you can try opening the file with '<:unix:encoding(UTF-8)' This may be related to my question "Why is CRLF set for the unix layer on Windows?" which I noticed when I was trying to figure out stuff which I ended up never figuring out.
{ "pile_set_name": "StackExchange" }
Q: How to Get QGIS to calculate Area in Layer's Units? I have a Polygon Shapefile in a custom CRS which has feet as units. The Qgis Map is also set to the same CRS. When I open the field Calculator, and enter $area, I see that the proper area is calculated: But when I save it in the Attribute table, some other value gets saved: The ratio between the values seems to suggest that it is doing some conversion from sqFeet to SqMeters. I was under the impression that the Field calculator would calculate areas in the units of Layer, but here it seems to be doing something else. Instead of manually converting, is there a way to get QGIS to calculate areas in Layer's Units? A: After reading this comment: How to read the area column in QGIS? I found that there is a setting in Project>>Properties which you need to set. By default it is set to Sq Meters, and that is why Field Calculator was calculating area in Sq Meters, and not Sq feet as I expected. Once I set this property to Sq Feet, the calculation gave me the area in expected values.
{ "pile_set_name": "StackExchange" }
Q: How to only retrieve the average of values in one day over the duration of multiple days Hopefully the title isn't too confusing. Here is my code SELECT SRV_NAME, TOT_CPU, TOT_MEM, SNAP_DATE FROM capacity2.SRV_CAPACITY_UNIX WHERE TOT_CPU >= 90 OR TOT_MEM >= 90 AND SNAP_DATE BETWEEN to_date('14-jun-2012 00:00:00', 'dd-mon-yyyy hh24:mi:ss') AND to_date('14-jul-2012 00:00:00', 'dd-mon-yyyy hh24:mi:ss') ORDER BY SRV_NAME desc, SNAP_DATE desc; A sample of the data the code returns is as follows: SNAP_DATE TOT_CPU TOT_MEM SRV_NAME 6/22/2012 0:00 99.98 70.86 server555 6/22/2012 0:05 99.98 70.9 server555 6/22/2012 0:10 99.98 70.93 server555 6/22/2012 0:15 99.98 71.06 server555 6/22/2012 0:20 99.98 70.87 server555 ... 6/22/2012 23:35 99.97 71.12 server555 6/22/2012 23:40 99.99 71.12 server555 6/22/2012 23:45 99.97 71.12 server555 6/22/2012 23:50 99.98 71.12 server555 6/22/2012 23:55 99.98 71.12 server555 The objective: Extract data for servers (the servers are within the capacity2.SRV_CAPACITY_UNIX database) that are 90% and above (in TOT_CPU and TOT_MEM) over a given duration (in this case a month/30 days). I need to graph this data and by default the data is recorded every 5 minutes so I'm left with 288 rows in one day over the span of approximately 30 days. This is approximately 8,600 rows that I am left with to plot on a graph which is obviously impractical. Therefore what I need to do is to write a SQL query that will only extract the average TOT_CPU and TOT_MEM of each day so I will only be left with 30 rows of data I need to plot for 30 days (one row for each day). This is my first time using Stack Overflow so I have attempted to be as clear as possible. If you need any information I will surely provide it. A: Based on the TO_DATE function, I'll assume this is Oracle. Therefore, it should look somewhat like this: select SRV_NAME, TRUNC(SNAP_DATE) SNAP_DATE, avg(TOT_CPU) AVG_TOT_CPU, avg(TOT_MEM) AVG_TOT_MEM FROM capacity2.SRV_CAPACITY_UNIX WHERE TOT_CPU >= 90 OR TOT_MEM >= 90 AND SNAP_DATE BETWEEN to_date('14-jun-2012 00:00:00', 'dd-mon-yyyy hh24:mi:ss') AND to_date('14-jul-2012 00:00:00', 'dd-mon-yyyy hh24:mi:ss') group by SRV_NAME, TRUNC(SNAP_DATE) Please notice that this is broken down by server as well, you probably will plot one series per server. Also notice that, if by any chance, there's no logged data for a specific day, that day will not be present in the results and your plotting logic should deal with that.
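If the per-day averaging ends up being done client-side before graphing (for example in Python with pandas, a common companion for this kind of plotting), the same truncate-then-group idea looks like the sketch below. The frame and column names are illustrative only; the SQL answer above is the direct fix for the question as asked.

# df holds SRV_NAME, TOT_CPU, TOT_MEM, SNAP_DATE (as datetimes) pulled from the table
daily = (df.groupby([df['SRV_NAME'], df['SNAP_DATE'].dt.date])[['TOT_CPU', 'TOT_MEM']]
           .mean()
           .reset_index())
# one row per server per day, ready to plot (~30 rows per server for a month)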
{ "pile_set_name": "StackExchange" }
Q: If $\frac{\sin(x)}{a}=\frac{\cos(x)}{b}$ then $a\sin(2x)+b\cos(2x)=?$ $b\sin(x)=a\cos(x)$; $\tan(x)=\frac{a}{b}$. I couldn't simplify beyond that. I'm sure there is an identity I missed, so let me know. A: Though there are lots of answers here, I'd like to give one with almost no computation. Since $b=\frac{a\cos(x)}{\sin(x)}$, \begin{align} a\sin(2x)+b\cos(2x)&=\frac{a}{\sin(x)}(\sin(x)\sin(2x)+\cos(x)\cos(2x))\\ &=\frac{a}{\sin(x)}\cos(2x-x)\\ &=\frac a{\sin(x)}\cos(x)=b \end{align} A: $\tan{x}=\frac{a}{b}$, and from here, using $\sin(2x)=\frac{2\tan x}{1+\tan^2 x}$ and $\cos(2x)=\frac{1-\tan^2 x}{1+\tan^2 x}$, $$a\sin2x+b\cos2x=\frac{\frac{2a^2}{b}}{1+\frac{a^2}{b^2}}+\frac{b-\frac{a^2}{b}}{1+\frac{a^2}{b^2}}=b$$
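As a quick numerical sanity check of the result (the specific values are my own, chosen only for illustration): take $x=\pi/6$ and $a=1$. Then $\tan(x)=\frac{1}{\sqrt3}=\frac{a}{b}$ forces $b=\sqrt3$, and indeed
$$a\sin(2x)+b\cos(2x)=\sin\frac{\pi}{3}+\sqrt3\cos\frac{\pi}{3}=\frac{\sqrt3}{2}+\frac{\sqrt3}{2}=\sqrt3=b.$$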
{ "pile_set_name": "StackExchange" }
Q: why WriteConcern is being ignored in MongoDB Java driver? I'm trying to execute a blocking call to db.collection.insert(List<DBObject>, WriteConcern) method with the MongoDB java driver. No matter what I use in WriteConcern: SAFE, FSYNC_SAFE, FSYNCED, ACKNOWLEDGED, ... I can't grant the write has been performed...at least not the way I'm doing it right now. Check the code: WriteResult result = collection.insert(list, WriteConcern.FSYNC_SAFE); if (result.getN()> 0){ System.out.println("Alleluyah!"); return true; } From what I've read here, FSYNC_SAFE, should be the way to go... The data is being written, but the call to result.getN() is always zero. If there is no way to check if a write has been completed... Why create the getN() method in the first place?? Any idea of what I'm doing wrong? I do need to check this insert has been performed. I can query the collection and check, but it just looks overkill to me... Thanks! A: Check out this post regarding the getN() method. It only gives information on the number of documents affected by an update or remove operation. I'm no expert but consider this quote from the MongoDB wiki on the getLastError method: You should actually use a write concern like WriteConcern.ACKNOWLEDGED instead of calling getLastError() manually. As far as I understand it using a sufficient level of WriteConcern and getting a WriteResult back means that the operation was successful and no error occurred. Check out the article and reference on WriteConcern in the wiki as well. Since you explicitly want to verify that the insert operation executed correctly you could do something like this: WriteResult result = collection.insert(list, WriteConcern.FSYNC_SAFE); CommanResult commandResult = result.getLastError(); if (commandResult.get("err") != null) { System.out.println("Alleluyah"); return true; } Inserting a document with WriteConcern.FSYNC_SAFE calls the getLastError() method - source. If the level of WriteConcern isn't strict enough calling getLastError() may result in an error - examine the WriteResult code on github . If no error has occurred than the err field will be null, otherwise it will contain an error message - source.
{ "pile_set_name": "StackExchange" }
Q: PHP SOAP Client and Server I'm attempting to write my first SOAP server, having done a bit with SOAP clients. When I tried with a sending a single string, this worked fine. But when trying to send multiple parameters to the server I'm coming unstuck. Is there a better method of tackling this? Server: <?php if(!extension_loaded("soap")){ dl("php_soap.dll"); } ini_set("soap.wsdl_cache_enabled","0"); function getCatalogEntry($array){ $conn = mysqli_connect($host,$user,$password,$db) or die(mysqli_error($conn)); $sql = "SELECT '" . $array[0]. "' FROM soap WHERE id = '".$array[1]."'"; $result = $conn->query($sql) or die(mysqli_error($conn)); $row = mysqli_fetch_array($result); return var_dump($array).$array[0];//$sql.$row[$field]; } $server = new SoapServer("test.wsdl"); $server->AddFunction("getCatalogEntry"); $server->handle(); ?> Client: <?php try{ $sClient = new SoapClient('server.php?wsdl'); $params = array( "field" => "field2", "id" => "id"); $response = $sClient->getCatalogEntry($params); var_dump($response); } catch(SoapFault $e){ var_dump($e); } ?> WSDL: <?xml version ='1.0' encoding ='UTF-8' ?> <definitions name='Catalog' targetNamespace='http://example.org/catalog' xmlns:tns=' http://example.org/catalog ' xmlns:soap='http://schemas.xmlsoap.org/wsdl/soap/' xmlns:xsd='http://www.w3.org/2001/XMLSchema' xmlns:soapenc='http://schemas.xmlsoap.org/soap/encoding/' xmlns:wsdl='http://schemas.xmlsoap.org/wsdl/' xmlns='http://schemas.xmlsoap.org/wsdl/'> <xsd:complexType name="KeyValueData"> <xsd:sequence> <xsd:element minOccurs="1" maxOccurs="1" name="id" type="string"/> <xsd:element minOccurs="1" maxOccurs="1" name="field" type="string"/> </xsd:sequence> </xsd:complexType> <xsd:complexType name="ArrayOfKeyValueData"> <xsd:sequence> <xsd:element minOccurs="0" maxOccurs="unbounded" name="keyval" type="tns:KeyValueData"/> </xsd:sequence> </xsd:complexType> <message name='getCatalogRequest'> <part name='catalogId' type='ArrayOfKeyValueData'/> </message> <message name='getCatalogResponse'> <part name='Result' type='xsd:string'/> </message> <portType name='CatalogPortType'> <operation name='getCatalogEntry'> <input message='tns:getCatalogRequest'/> <output message='tns:getCatalogResponse'/> </operation> </portType> <binding name='CatalogBinding' type='tns:CatalogPortType'> <soap:binding style='rpc' transport='http://schemas.xmlsoap.org/soap/http' /> <operation name='getCatalogEntry'> <soap:operation soapAction='urn:localhost-catalog# getCatalogEntry'/> <input> <soap:body use='encoded' namespace= 'urn:localhost-catalog' encodingStyle='http://schemas.xmlsoap.org/soap/encoding/'/> </input> <output> <soap:body use='encoded' namespace= 'urn:localhost-catalog' encodingStyle='http://schemas.xmlsoap.org/soap/encoding/' /> </output> </operation> </binding> <service name='CatalogService'> <port name='CatalogPort' binding= 'CatalogBinding'> <soap:address location='server.php'/> </port> </service> </definitions> A: I see your problem you return a var_dump() but var_dump has no return value. Use var_export() instead. Take a look at the manual. So in your code this will look like: $result = $conn->query($sql) or die(mysqli_error($conn)); $row = mysqli_fetch_array($result); return var_export($array,true);
{ "pile_set_name": "StackExchange" }
Q: shell running hadoop commands i wanted to avoid keep tying all the commands to run simple mapreduce everytime i want to test out mapper and reducer files so i wrote this script and i am new to shell scripting. I want to know if these hadoops commands will run in shell scripting and if the script would need any changes? echo textFile :"$1" echo mapper : "$2" echo reducer: "$3" echo inputDir :"$4" echo outputDir: "$5" hdfs dfs -copyFromLocal /home/hduser/"$2" # copies mapper.py file from argument to hdfs dir hdfs dfs -copyFromLocal /home/hduser/"$3" # copies reducer.py file from argument to hdfs dir hdfs dfs -test -d ~/"$5" #checks to see if hadoop output dir exists if [ $? == 0 ]; then hdfs dfs -rm -r ~/"$5" else echo "Output file doesn't exist and will be created when hadoop runs" fi hdfs dfs -test -d ~/"$4" #checks to see if hadoop input dir exists if [ $? == 0 ]; then hdfs dfs -rm -r ~/"$4" echo "Hadoop input dir alread exists deleting it now and creating a new one..." hdfs dfs -mkdir ~/"$4" # makes an input dir for text file to be put in else echo "Input file doesn't exist will be created now" hdfs dfs -mkdir ~/"$4" # makes an input dir for text file to be put in fi hdfs dfs -copyFromLocal /home/hduser/"$1" ~/"$4" # sends textfile from local to hdfs folder # runs the hadoop mapreduce program with given parameters hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming- 2.6.2.jar -input /home/hduser/"$4"/* -output /home/hduser/"$5" -file /home/hduser/"$2" -mapper /home/hduser/"$2" -file /home/hduser/"$3" -reducer /home/hduser/"$3" A: Yes. it will run if proper arguments are passed.
{ "pile_set_name": "StackExchange" }
Q: Python: data structures inside another In Perl you can do stuff like this: my $data; $data->{texts}->{text1} = "hey"; print $data->{texts}->{text1}; and it will print "hey". It is a data structure (a hash) nested inside another data structure... Obviously this is possible in Python: data = { 'jack': 4098, 'sape': 4139 } print data['jack'] But I want something like data['texts']['text1'], like it was done in Perl. And I need to be able to easily remove and add to this structure... Help? A: You are using a dict object here. It can store any type of element you want, including another dict object. That means you can initialize your data like: data = {'jack': {'age': 20, 'gender': 'M' }, 'sape': {'age': 35, 'gender': 'F' } } And then refer to its internal values: print(data['jack']['age']) # prints 20
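Since the question also stresses being able to easily add and remove entries, here is a short sketch in the same data['texts']['text1'] shape as the question (setdefault is one common way to create the inner dict on first use):

data = {}
data.setdefault('texts', {})['text1'] = "hey"   # creates the inner dict on demand
print(data['texts']['text1'])                   # hey
data['texts']['text2'] = "there"                # add another nested entry
del data['texts']['text1']                      # remove an entry again
print(data)                                     # {'texts': {'text2': 'there'}}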
{ "pile_set_name": "StackExchange" }
Q: python pyplot: specify the density of the dashed grid in a matplotlib style file I would like to ask whether it is possible to define the density of the dashed grid in a .mplstyle file. Currently I'm using a class to prepare my plots, and I would like to switch to creating an mplstyle file and using it like: plt.style.use(['mystyle', 'mystyle-vega']) I found almost all the options, but the one thing I couldn't figure out is how to specify the dashes=(5,9) of the grid; I only found the way to set the grid linestyle to ' -- '. EDIT: Ok, thank you very much! A: To set the grid via rcParams you can use the grid.linestyle parameter. Set this to any value you want. import matplotlib.pyplot as plt plt.rcParams["axes.grid"] = True plt.rcParams["grid.linestyle"] = (5,9) plt.gca() plt.show() It's currently not possible to use such tuples in an external matplotlibrc file.
{ "pile_set_name": "StackExchange" }
Q: Which part of mvc design pattern represents the business logic? I am a little confused about the definition of the business logic in programming because I used to develop without paying an attention to any of these terminologies but now I want to be a good developer. While I was reading in Wiki about the definition of the business logic I have read the following definition: In computer software, business logic or domain logic is the part of the program that encodes the real-world business rules that determine how data can be created, displayed, stored, and changed. and in another website I have read the following definition with example: Business logic is that portion of an enterprise system which determines how data is: Transformed and/or calculated. For example, business logic determines how a tax total is calculated from invoice line items. Routed to people or software systems, aka workflow. So I am wondering about which part of the MVC represents the business logic, is it the controller or the model or another part could be in the MVC? I know that the controller is responsible for sending commands to the model but is it responsible for applying the rules of the business? For example let's take the above example of the tax: Suppose that we got the invoice data from a form in a view, the data will be directed to the controller but where the tax will be calculated, will we calculate it in the controller or will we ask for the help of an external class to compute it or will we compute it in the model before updating the database? Could Example would be appreciated. A: You can put the tax calculation logic in the Controller, but you're better off putting it in the Model as it is more loosely coupled. It is reasonable to want to calculate tax on lots of pages, so don't put it in lots of controllers, put it in a model that can be re-used. If you hear someone talking about "Fat" Controllers vs "Thin" Controllers, that is what they're talking about. Most devs will advocate having very little code in their controllers (making them "thin") and really just acting as a relay/orchestrator off to the Model. I do think the term Model is a bit confusing though because in Service Oriented Architecture (and functional languages for that matter), they stress trying to have "dumb" objects that don't have any functionality and they frequently refer those dumb objects as "Models". But when it comes to MVC, the "Model" really refers to the Business Model, which incorporates dumb objects that hold values AND the services that work on top of them.
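To make the "thin controller, business rule in the model" point concrete, here is a small sketch. The class and function names are my own illustration, not from any particular framework, and it deliberately ignores persistence and views.

class Invoice:                        # the Model: owns the business rule
    def __init__(self, line_items, tax_rate):
        self.line_items = line_items
        self.tax_rate = tax_rate

    def tax_total(self):
        # the tax calculation lives here so any controller (or test) can reuse it
        return sum(self.line_items) * self.tax_rate

def invoice_controller(form_data):    # the "thin" Controller: orchestrates, no business logic
    invoice = Invoice(form_data['line_items'], tax_rate=0.08)
    return {'tax': invoice.tax_total()}   # handed off to whatever View renders it

print(invoice_controller({'line_items': [100.0, 50.0]}))   # {'tax': 12.0}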
{ "pile_set_name": "StackExchange" }
Q: Why is using an Optional preferable to null-checking the variable? Take the two code examples: if(optional.isPresent()) { //do your thing } if(variable != null) { //do your thing } As far as I can tell, the most obvious difference is that the Optional requires creating an additional object. However, many people have started rapidly adopting Optionals. What is the advantage of using Optionals versus a null check? A: Optional harnesses the type system for doing work that you'd otherwise have to do all in your head: remembering whether or not a given reference may be null. This is good. It's always smart to let the compiler handle boring drudge work, and reserve human thought for creative, interesting work. Without Optional, every reference in your code is like an unexploded bomb. Accessing it may do something useful, or else it may terminate your program with an exception. With Optional and without null, every access to a normal reference succeeds, and every reference to an Optional succeeds unless it's unset and you failed to check for that. That is a huge win in maintainability. Unfortunately, most languages that now offer Optional haven't abolished null, so you can only profit from the concept by instituting a strict policy of "absolutely no null, ever". Therefore, Optional in e.g. Java is not as compelling as it should ideally be. A: An Optional brings stronger typing into operations that may fail, as the other answers have covered, but that is far from the most interesting or valuable thing Optionals bring to the table. Much more useful is the ability to delay or avoid checking for failure, and to easily compose many operations that may fail. Consider this: you have your optional variable from your example code, and then you have to perform two additional steps that each might fail. If any step along the way fails, you want to return a default value instead. Using Optionals correctly, you end up with something like this: return optional.flatMap(x -> x.anotherOptionalStep()) .flatMap(x -> x.yetAnotherOptionalStep()) .orElse(defaultValue); With null I would have had to check three times for null before proceeding, which adds a lot of complexity and maintenance headaches to the code. Optionals have that check built into the flatMap and orElse functions. Note that I didn't call isPresent once, which you should think of as a code smell when using Optionals. That doesn't necessarily mean you should never use isPresent, just that you should heavily scrutinize any code that does, to see if there is a better way. Otherwise, you're right: you're only getting a marginal type-safety benefit over using null. Also note that I'm not as worried about encapsulating this all into one function in order to protect other parts of my code from null pointers from intermediate results. If it makes more sense to have my .orElse(defaultValue) in another function, for example, I have far fewer qualms about putting it there, and it's much easier to compose the operations between different functions as needed. A: It highlights the possibility of null as a valid response, which people often assume (rightly or wrongly) is not going to be returned. Highlighting the few times when null is valid means you can avoid littering your code with huge numbers of pointless null checks. Ideally, setting a variable to null would be a compile-time error anywhere but in an Optional, eliminating runtime null pointer exceptions. Obviously backwards compatibility precludes this, sadly.
{ "pile_set_name": "StackExchange" }
Q: R Alternative to nested for loop to create a list of URLs (expand.grid) I am trying to generate a list of URL combining the two following lists: County<-list("ADAMS", "ALLEGHENY", "ARMSTRONG", "BEAVER", "BEDFORD", "BERKS", "BLAIR", "BRADFORD", "BUCKS", "BUTLER", "CAMBRIA", "CAMERON", "CARBON", "CENTRE", "CHESTER", "CLARION", "CLEARFIELD", "CLINTON", "COLUMBIA", "CRAWFORD", "CUMBERLAND", "DAUPHIN", "DELAWARE", "ELK", "ERIE", "FAYETTE", "FOREST", "FRANKLIN", "FULTON", "GREENE", "HUNTINGDON", "INDIANA", "JEFFERSON", "JUNIATA", "LACKAWANNA", "LANCASTER", "LAWRENCE", "LEBANON", "LEHIGH", "LUZERNE", "LYCOMING", "MCKEAN", "MERCER", "MIFFLIN", "MONROE", "MONTGOMERY", "MONTOUR", "NORTHAMPTON", "NORTHUMBERLAND", "PERRY", "PHILADELPHIA", "PIKE", "POTTER", "SCHUYLKILL", "SNYDER", "SOMERSET", "STATE LEVEL SITES", "SULLIVAN", "SUSQUEHANNA", "TIOGA", "UNION", "VENANGO", "WARREN", "WASHINGTON", "WAYNE", "WESTMORELAND", "WYOMING", "YORK") RepPeriod<-list ("15AUGU","15JULU","15JUNU","15MAYU","15APRU", "15MARU", "15FEBU", "15JANU", "2015-1", "2014-2","2014-1","2014-0", "2013-2","2013-1","2013-0", "2012-2","2012-1","2012-0","2011-2","2011-1","2011-0", "2010-3","2010-2","2010-0", "2009-0","2008-0","2007-0", "2006-0","2005-0","2004-0","2003-0","2002-0","2001-0","2000-0") In total it will be a list of 2312 elements (68 COUNTIES* 34 REPORTING PERIODS) I have tried this: URLlist<-as.character(c(1:2312)) for (a in 1:2312){ for (i in 1:length(RepPeriod)){ for (j in 1:length(County)){ URLlist[a]<-paste0("https://www.paoilandgasreporting.state.pa.us/publicreports/Modules/Production/ProductionByCountyExport.aspx?UNCONVENTIONAL_ONLY=false&INC_HOME_USE_WELLS=true&INC_NON_PRODUCING_WELLS=true&PERIOD=",RepPeriod[i],"&COUNTY=",County[j]) } } } And it is just pasting the last reporting period and county 2312 times, instead of generating permutations: URLlist[1:3] [1] "https://www.paoilandgasreporting.state.pa.us/publicreports/Modules/Production/ProductionByCountyExport.aspx?UNCONVENTIONAL_ONLY=false&INC_HOME_USE_WELLS=true&INC_NON_PRODUCING_WELLS=true&PERIOD=2000-0&COUNTY=YORK" [2] "https://www.paoilandgasreporting.state.pa.us/publicreports/Modules/Production/ProductionByCountyExport.aspx?UNCONVENTIONAL_ONLY=false&INC_HOME_USE_WELLS=true&INC_NON_PRODUCING_WELLS=true&PERIOD=2000-0&COUNTY=YORK" [3] "https://www.paoilandgasreporting.state.pa.us/publicreports/Modules/Production/ProductionByCountyExport.aspx?UNCONVENTIONAL_ONLY=false&INC_HOME_USE_WELLS=true&INC_NON_PRODUCING_WELLS=true&PERIOD=2000-0&COUNTY=YORK" Can anybody help me see what I am doing wrong? Links to useful posts would help too. A: You can eliminate loops using expand.grid, which expands all combinations of two vectors: z <- expand.grid(RepPeriod, County) URLlist <- paste0("https://www.paoilandgasreporting.state.pa.us/publicreports/Modules/Production/ProductionByCountyExport.aspx?UNCONVENTIONAL_ONLY=false&INC_HOME_USE_WELLS=true&INC_NON_PRODUCING_WELLS=true&PERIOD=",z$Var1,"&COUNTY=",z$Var2) A: The loop was not working because each element of the iteration of the first loop URLlist[a] is overwritten 68*34 times and at the end of each time only the last combination, i.e York a and and 2000-0 is stored. You have to have an incremental counter in the middle of the loop to avoid this such as this loop: {k = 0 for (i in 1:length(RepPeriod)){ for (j in 1:length(County)){ URLlist[j+k]<-paste0("........",RepPeriod[i],"&COUNTY=",County[j]) } k = k + 68 }}
{ "pile_set_name": "StackExchange" }
Q: SQL LIKE Operator with hash I have problem with my query. Mysql: ------------------------------------ | id | string | ------------------------------------ | 1 | people#-#car#-#music#-#art | | 2 | music#-#sport#-#cars | | 3 | photography#-#sport#-#art | | 4 | car#-#peoples#-#photography | | 5 | music#-#games#-#sport | .... ------------------------------------ I want to find all items with (for. example) specific word people Now when i do: SELECT `id` FROM `components` WHERE lower(string) LIKE '%$word%' will find: ------------------------------------ | id | string | ------------------------------------ | 1 | people#-#car#-#music#-#art | | 4 | car#-#peoples#-#photography | .... ------------------------------------ But I need to find only a specific word. In my case (id4) peoples does not fit. And my solution does not work ... SELECT `id` FROM `components` WHERE lower(string) = '$word' OR lower(string) LIKE '$word#-#%' OR lower(string) LIKE '%#-#$word' OR lower(string) LIKE '%#-#$word#-#%' In this case, I do not have any result. What am I doing wrong? Good thinking? I ask for advice. Is hash in a string a bad solution? A: The direct answer to your question is: SELECT `id` FROM `components` WHERE concat('#', string, '#') like concat('%#', $word, '#%); The correct approach is to not store lists of things as strings. SQL has a great structure for storing lists -- it is called a table. You should have a table like ComponentWords with one row for each component and word.
{ "pile_set_name": "StackExchange" }
Q: Comparing two n×m matrices I have to do a pairwise compare of a couple of text files. They have the following format: id_1|errorcode_1|m|data_1_1|data_1_2|...|data_1_m id_2|errorcode_2|m|data_2_1|data_2_2|...|data_2_m . . . id_n|errorcode_n|m|data_n_1|data_n_2|...|data_n_m The columns are ordered, but the rows can be in an arbitrary order. And there are always a couple of data-fields that have to be ignored. This is my method that I call twice (switching the target and source array). It takes around 20 minutes to execute with two files where n = 16000 and m = 100. private static string[] GetDifferences( IEnumerable<string> source , IEnumerable<string> target , IEnumerable<int> ignoredIndices) { var uniqueInSource = new List<string>(); foreach (var sourceLine in source) { var found = false; var sourceSplit = sourceLine.Split('|').ToList(); // filter out malformed lines if (sourceSplit.Count > 2 && !int.TryParse(sourceSplit[2], out _)) { continue; } foreach (var index in ignoredIndices) { sourceSplit.RemoveAt(index + 3); } foreach (var targetLine in source) { var targetSplit = targetLine.Split('|').ToList(); if (targetSplit.Count > 2 && !int.TryParse(targetSplit[2], out _)) { continue; } foreach (var index in ignoredIndices) { targetSplit.RemoveAt(index + 3); } if (targetSplit.SequenceEqual(sourceSplit)) { found = true; break; } } if (!found) { uniqueInSource.Add(string.Join("|", sourceLine)); } } return uniqueInSource.ToArray(); } I feel that my approach might be too naive since I think it's somewhere around O(m*n2). A: Your method is currently doing three things at a time and repeating the same logic multiple times: parsing logs-1 (parsing a line) parsing logs-2 (parsing a line again with the exact same code) comparing both logs It would be much easier to optimize and use it if you refactored it into specialized structures/methods. You put the first two cases into the static factory method Parse. It'll encapsulate the parsing part so you have implemented it only once. Then you need to encapsulate the equality. You do this by implementing the IEquatable<LogLine> interface. You should do this very carefuly because both the Equals and GetHashCode methods are important for the performance. This new class can now be very easily tested for: equality, parsing and performance. public class LogLine : IEquatable<LogLine> { public string Errorcode { get; set; } public string[] Data { get; set; } // add other properties... public static LogLine Parse(string log, IEnumerable<int> ignoreColumns) { // implement parsing } public bool Equals(LogLine other) { // implement log equality } public override bool Equals(object obj) { return base.Equals(obj as LogLine); } public override int GetHashCode() { // implement log hash-code } } And that's it because you now have a very LINQ-friendly class that you can use like this where you can parse all lines with a simple Select: // dummies var logs1 = Enumerable.Empty<string>(); var logs2 = Enumerable.Empty<string>(); var ignoreColumns = Enumerable.Empty<int>(); var logLines1 = logs1.Select(l => LogLine.Parse(l, ignoreColumns)); var logLines2 = logs2.Select(l => LogLine.Parse(l, ignoreColumns)); And you can compare them however you like with Except or Intersect or use GroupBy or Distinct or whatever you want. If you use more then a single extension should call .ToList() after each parsing so that you don't parse them multiple times.
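For illustration only (not the actual C# refactor): the reviewer's "parse once, compare by hash" point can be sketched in Python, where a set difference replaces the nested O(n^2) loops. The sample lines and the meaning of ignored (indices into the data columns, as in the original) are assumptions for the sketch.

# Parse each line once into a hashable tuple, then let set membership do the work.
def parse(line, ignored):
    parts = line.strip().split("|")
    # keep id/errorcode/count, drop any data column whose data-index is ignored
    return tuple(p for i, p in enumerate(parts) if i - 3 not in ignored)

def unique_in_source(source_lines, target_lines, ignored):
    ignored = set(ignored)
    target_set = {parse(l, ignored) for l in target_lines}
    return [l for l in source_lines if parse(l, ignored) not in target_set]

src = ["1|E01|2|a|b", "2|E02|2|c|d"]
tgt = ["2|E02|2|c|d"]
print(unique_in_source(src, tgt, ignored=[]))   # ['1|E01|2|a|b']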
{ "pile_set_name": "StackExchange" }
Q: IE10 is out, and they talk about 'Web Workers', which offload long-running JS out of the webpage into a separate worker. Does Google Chrome have this? Does Google Chrome do something like what they're describing in IE10? Or is Chrome lacking here? Or is it different?

A: Firefox has had Web Workers since version 3.5.
Chrome has had Web Workers since version 3.
Opera has had Web Workers since version 10.60.
Internet Explorer was the last of the major browsers to implement Web Workers.
{ "pile_set_name": "StackExchange" }
Q: WordPress: Delete all post with English language in title I have a blog site that auto blog, and now I want to delete all posts that have English language in title. For example:- I have title like (UGG Ascot 5775 Noir Homme Super Qualité) its English title, I need to delete all this title, Because i need to show Arabic title only There are too many posts (400+) to do it manually. Are there any plugin that may help to auto delete a blog post? Thanks! A: Run this query on mysql command prompt or phpmyadmin (or each tool that you use): DELETE FROM wp_posts WHERE post_title REGEXP '([A-Za-z])';
{ "pile_set_name": "StackExchange" }
Q: Is this conjecture on Fibonacci sums correct? I have the following conjecture, need to know a proof just in case mine is wrong, or the conjecture itself is wrong. The sum of $k$ distinct Fibonacci numbers can be written in at most $k$ ways as the sum of another $k$ distinct numbers from the sequence Proof: Take, say 4, Fibonacci numbers with distinct values: $$144+34+3+2$$ We can see that we must maintain the distinctness and number of the Fibonacci numbers. So when we split the numbers; $$[89+55]+[13+21]+3+2$$ It is immediately apparent that if some numbers are split into smaller Fibonacci numbers, others must be merged to keep only 4 numbers. Thus, 3+2 becomes 5, and either 144 or 34 kept as-is. Thus, $$144+34+3+2=89+55+34+5=144+21+13+5$$ I know it's a rudimentary proof, and $k$ ways of writing the same sum is probably a wrong upper bound, but at least it works for now. Please do let me know what's wrong. Thanks. A: We show that the conjectured upper bound of $k$ is a fair distance from the truth. Let $W$ be a set of $3n$ distinct Fibonacci numbers, of which $n$ are lonely (no other Fibonacci number in $W$ is near them) and the remaining $2n$ are lonely couples. A lonely couple is a set of two consecutive Fibonacci numbers, with no other element of $W$ near them. Let $N$ be the sum of the Fibonacci numbers in $W$. Then we can get another representation of $N$ as a sum of $3n$ Fibonacci numbers by selecting any $k$ lonely numbers, and representing each of them as a sum of two consecutive Fibonacci numbers, and selecting any $k$ lonely couples, and expressing each of their sums as a single Fibonacci number. The $k$ lonely numbers can be chosen from the $n$ lonely numbers in $W$ in $\binom{n}{k}$ ways. For each choice, the $k$ lonely couples can be chosen in $\binom{n}{k}$ ways. It follows that $N$ has at least $$\sum_{k=0}^n \binom{n}{k}\binom{n}{k}$$ representations as a sum of distinct Fibonacci numbers. It is well-known that the above sum is equal to $\binom{2n}{n}$. For large $n$, this is much larger than the $3n$ that the conjecture would suggest. Indeed by using the Stirling formula, one can show that $\binom{2n}{n}$ is asymptotically equal to $\sqrt{\frac{2}{\pi}}\frac{2^{2n}}{\sqrt{2n+1}}$.
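For readers who have not met the "well-known" identity quoted at the end of the answer: it is the symmetric case of Vandermonde's identity, and a one-line derivation is

$$\sum_{k=0}^n \binom{n}{k}^2=\sum_{k=0}^n \binom{n}{k}\binom{n}{n-k}=\binom{2n}{n},$$

where the middle step uses $\binom{n}{k}=\binom{n}{n-k}$ and the last step counts the ways to choose $n$ objects from $2n$ by splitting them into two groups of $n$ and choosing $k$ from the first and $n-k$ from the second.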
{ "pile_set_name": "StackExchange" }
Q: Cognito save changes in user pool is not working We are using cognito userpool for authentication and I had enabled email verification under MFA and verification , so after some time I am trying remove that verification by unchecking the email check box , I always get an error Your roles are still being created. Please wait and try again . I waited for week , still the problem presists. I just need to uncheck email verification. Thank you in advance. A: I had an issue where the SMS role was accidentally deleted. It may have never been created either. At the bottom of the MFA section you'll see an input box with the ability to name the role and then a button "Create Role" to click on. If you have a grayed out role name already. Look for it in IAM. If it doesn't exist, you will need to re-create it. Unfortunately there is no way to do this in IAM and have it work for Cognito because it requires a path prefix for the service role (of service-role). I tried re-creating via CLI and while it made the matching role, it still didn't work. So the best thing to do is make a new (dummy) pool and create the SMS role there that matches the name of the one used by the other pool where you're seeing that error message. Then you will need to update the role to ensure the ExternalId matches (it's a UUID). The only way you can find this UUID is via CLI, so you'll need to find it using the command: aws cognito-idp get-user-pool-mfa-config --user-pool-id=xxxx It should return the current role name and it's ExternalId so you can then go back to IAM and find the newly AWS created SMS role and update it's policy JSON to include the proper UUID. Finally, get rid of the dummy pool you had created because it too will now be afflicted with the "Your roles are still being created." bug. Essentially, it's just stuck and needs it's config pointed to the proper role (using it's ExternalId) and unfortunately there's not enough dashboard controls to fix the issue. You have to kinda hack around it a little bit until they can fix it.
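If you prefer boto3 to the raw CLI, the same lookup can be sketched as below. The pool id is a placeholder, and the exact response keys should be double-checked against the GetUserPoolMfaConfig documentation; this is only a sketch of the step described in the answer.

import boto3

client = boto3.client("cognito-idp")
resp = client.get_user_pool_mfa_config(UserPoolId="eu-west-1_XXXXXXXXX")  # placeholder pool id
# The SMS role ARN and the ExternalId (a UUID) mentioned in the answer live under
# SmsMfaConfiguration -> SmsConfiguration in the response.
sms = resp.get("SmsMfaConfiguration", {}).get("SmsConfiguration", {})
print(sms.get("SnsCallerArn"), sms.get("ExternalId"))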
{ "pile_set_name": "StackExchange" }
Q: How to remove .local suffix from Ubuntu hostname? When I try to connect to my Ubuntu device with the ssh terminal command, I have to add .local suffix to Ubuntu hostname. For example, if I want to ssh to my device with its hostname, my ssh command must have the .local suffix and command has the following format: ssh username@device_hostname.local I'm aware that the .local suffix was appended by avahi (zeroconf) Linux service, but is there any quick (easy) way to bypass it? I want to be able to "ssh" my device only by its hostname without .local suffix at the end of the command, like this: ssh username@device_hostname What exactly I want to achieve is to completely remove the .local suffix from host name. I've read about running my private DNS server, but I would like to know if there is a less complex solution. A: Your clients will need at least one way to resolve the hostname to an IP address. The mechanism you already discovered works through automatic configuration and happens within the .local domain name. You could define the mapping from hostname to IP addresses on every client, but this is not recommended practice. You will have to go through some kind of automatic name resolution mechanism, all of which require use of some kind of domain name in the background. However, you can easily get rid of having to type the domain name each time by doing one of: ssh hostname canonicalisation If you put the following into /etc/ssh_config or ~/.ssh/config: Host * CanonicalDomains local CanonicalizeHostname yes ssh will automatically append local to any hostname. So when you type ssh host it will actually do ssh host.local. search domains While the above will work only for ssh, you can also configure a similar thing for all network connections by configuring the dns resolver on your client accordingly. It is the "classical" way of having hostname shortcuts. Depending on which setup you have, you would add local to the list of search domains. This is what I do. There should be plenty description for this available, like this. hardcoding to /etc/hosts This is not recommended at all, because although it seems simple in the beginning, it will become hard to maintain soon. But it is the only way known to me that allows to completely get rid of domain names. Edit the file /etc/hosts and add lines like this: 192.168.1.3 host3 192.168.1.4 host4 While the first word in each line is the IP address of the corresponding host. You have to do that on every client. After that you can use ssh host3, and not even in the background a domain name will be used.
{ "pile_set_name": "StackExchange" }
Q: I'm confused about what the point of community wiki is So I've read the FAQ, http://blog.stackoverflow.com/2011/08/the-future-of-community-wiki/ and What are "Community Wiki" posts?. Honestly, I'm more confused about what it should be now (even after). Is this merely a voting and points issue? It seems like an attempt to disconnect a question or answer from the user because it is not asked or answered well - if that's the case, what does wiki have to do with it? I'm not asking saying it should be changed because clearly a lot is written about it but could it summarized in like 1 sentence? Not asking to be argumentative but would just like a crystallized idea. A: I principally use community wiki when I'm doing clean-up. Example: The user wrote a question and THEN solved his question and put elements of solution in the question body instead of in an answer. Well then I edit the question, fix the issues and remove the solution from the question and put it in an answer. It's not MY answer so I put it in a CW post with the attribution to the author. That way the question is clean, and the answer can now be found easily in an answer. In fact you can consider making a CW post when you are posting a (good) solution that doesn't come from you AND (very important) you want other people to help you improve it, or think other could come and improve it. For example someone put the solution in comments, but is no longer active and cannot post it in an answer, well then you post it in an answer as CW and mention who it comes from. Please, though, provide an answer that is well formatted and that fits the FAQ about how to answer! If you take an incomplete answer from comments, and improve it yourself and make it a complete answer, you should take the credit for it and post it with your name. You should NOT use CW to write answers that you know would get downvoted (and you want to avoid the -rep for it), or as a way to post really incomplete answers. A: The Community Wiki status is a way to encourage collaboration on an answer, by lowering the required reputation threshold for edits to 100. That's about it, in a sentence. After suggested edits were introduced, Community Wiki became more a relic of the past than a useful feature. Frankly, it's probably the most abused feature on Stack Exchange, but we don't care because all the possible ways we can abuse it are extremely harmless.
{ "pile_set_name": "StackExchange" }
Q: PHP: csv remodelling I'm a newbie trying to code, and I have an issue that I can't get my head around and would appreciate some help. I have CSV files that look something like this (always id,url, but the number of urls per id differs):

12345,wwwurl1
12345,wwwurl2
12345,wwwurl3
12346,wwwurl1
12347,wwwurl1
12347,wwwurl2
...,...

and so on. How do I make it look like this? Easier to show than explain, I think :)

12345,"wwwurl1,wwwurl2,wwwurl3"
12346,"wwwurl1"
12347,"wwwurl1,wwwurl2"
...,"...,..."

and so on. Basically only one row per id, with all urls joined by a comma separator. All help is appreciated!

A: <?php
$file = fopen(__DIR__ . '/' . 'test.csv', 'r+');          // your input file
$fileResult = fopen(__DIR__ . '/' . 'result.csv', 'w+');  // result file
$rowInfo = [];
// group every url under its id
while (($data = fgetcsv($file, 0, ",")) !== FALSE) {
    if (!isset($rowInfo[$data[0]])) {
        $rowInfo[$data[0]] = '';
    }
    $rowInfo[$data[0]] .= $data[1] . ',';
}
// write one row per id, trimming the trailing comma
foreach ($rowInfo as $key => $value) {
    fputcsv($fileResult, [$key, rtrim($value, ",")]);
}
fclose($file);
fclose($fileResult);
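For illustration only (not part of the original PHP answer): the same grouping can be sketched in Python. The file names are placeholders.

import csv

groups = {}                                   # dict keeps first-seen id order on Python 3.7+
with open("test.csv", newline="") as f:       # placeholder input file
    for row in csv.reader(f):
        if len(row) < 2:                      # skip blank/short lines
            continue
        groups.setdefault(row[0], []).append(row[1])

with open("result.csv", "w", newline="") as f:   # placeholder output file
    writer = csv.writer(f)
    for row_id, urls in groups.items():
        writer.writerow([row_id, ",".join(urls)])   # csv module adds the quotes around the joined urls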
{ "pile_set_name": "StackExchange" }
Q: Convert time stored in DB based on DST I have times stored in DB. ID TYPE STARTTIME ENDTIME 1 WEEKDAY 08:00:00 18:00:00 2 SATURDAY 10:00:00 16:00:00 3 SUNDAY 10:00:00 16:00:00 4 HOLIDAY 10:00:00 16:00:00 Java code : ZonedDateTime.of(date, entity.getStartTime(), ZoneId.systemDefault()); This is retrieving start time as 09:00:00 and end time as 19:00:00 when the day light saving ends. Whats the best way to retrieve these times based on the day light saving? Example For a weekday I need to get the start time as 08:00:00 and end time as 18:00:00 irrespective of DST. A: I know this might be weird to do, but you can convert it into UTC and then use withZoneSameLocal for converting same instant into another zone This method changes the time-zone and retains the local date-time. The local date-time is only changed if it is invalid for the new zone, determined using the same approach as ofLocal(LocalDateTime, ZoneId, ZoneOffset). ZonedDateTime dateTime = ZonedDateTime.of(LocalDate.now(),LocalTime.now() ,ZoneId.of("UTC")); System.out.println(dateTime); //2019-11-19T10:48:12.324356Z[UTC] ZonedDateTime result = dateTime.withZoneSameLocal(ZoneId.of("America/Chicago")); // provide ZoneId.systemDefault() System.out.println(result); //2019-11-19T10:48:12.324356-06:00[America/Chicago]
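The same wall-clock-preserving idea, sketched in Python purely for comparison (it is not the Java fix itself); the zone and times are placeholders, and zoneinfo needs Python 3.9+.

from datetime import date, datetime, time
from zoneinfo import ZoneInfo                      # Python 3.9+

zone = ZoneInfo("America/Chicago")                 # placeholder zone
start = time(8, 0)                                 # the stored wall-clock start time

# Attaching the zone to the local wall-clock value keeps 08:00 as 08:00 on both
# sides of a DST change; converting an instant from UTC instead would shift it.
for d in (date(2019, 11, 2), date(2019, 11, 4)):   # the US switch was on 2019-11-03
    print(datetime.combine(d, start, tzinfo=zone))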
{ "pile_set_name": "StackExchange" }
Q: NoClassDefFoundError with AppEngine sample projects I have installed AppEngine Eclipse plugin for Juno according to the instructions here: https://developers.google.com/appengine/docs/java/tools/eclipse However when running a number of the provided sample projects (eg. ShardedCounter), a NoClassDefFoundError would be thrown, stating that class com/google/appengine/tools/development/DevAppServerFactory$CustomSecurityManager$StackTraceAnalyzer cannot be found: java.lang.NoClassDefFoundError: com/google/appengine/tools/development/DevAppServerFactory$CustomSecurityManager$StackTraceAnalyzer at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.appHasPermission(DevAppServerFactory.java:334) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:379) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkAccess(DevAppServerFactory.java:408) at java.lang.ThreadGroup.checkAccess(ThreadGroup.java:299) at java.lang.Thread.init(Thread.java:336) at java.lang.Thread.<init>(Thread.java:608) at java.util.concurrent.Executors$DefaultThreadFactory.newThread(Executors.java:541) at com.google.appengine.tools.development.ApiProxyLocalImpl$DaemonThreadFactory.newThread(ApiProxyLocalImpl.java:644) at java.util.concurrent.ThreadPoolExecutor.addThread(ThreadPoolExecutor.java:672) at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:721) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92) at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:270) at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:255) at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:203) at com.google.apphosting.api.ApiProxy.makeAsyncCall(ApiProxy.java:190) at com.google.appengine.api.datastore.DatastoreApiHelper.makeAsyncCall(DatastoreApiHelper.java:56) at com.google.appengine.api.datastore.PreparedQueryImpl.runQuery(PreparedQueryImpl.java:127) at com.google.appengine.api.datastore.PreparedQueryImpl.asIterator(PreparedQueryImpl.java:60) at com.google.appengine.api.datastore.BasePreparedQuery$1.iterator(BasePreparedQuery.java:25) at com.google.appengine.demos.shardedcounter.java.v1.ShardedCounter.getCount(ShardedCounter.java:59) at com.google.appengine.demos.shardedcounter.java.v1.CounterPage.doGet(CounterPage.java:36) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.socket.dev.DevSocketFilter.doFilter(DevSocketFilter.java:74) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.ResponseRewriterFilter.doFilter(ResponseRewriterFilter.java:123) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.HeaderVerificationFilter.doFilter(HeaderVerificationFilter.java:34) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at 
com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:63) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:125) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.DevAppServerModulesFilter.doDirectRequest(DevAppServerModulesFilter.java:368) at com.google.appengine.tools.development.DevAppServerModulesFilter.doDirectModuleRequest(DevAppServerModulesFilter.java:351) at com.google.appengine.tools.development.DevAppServerModulesFilter.doFilter(DevAppServerModulesFilter.java:116) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.appengine.tools.development.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:97) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:485) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:547) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) The only modification I did to the sample projects was to remove the usage of Google Web Toolkit, as the test server could not be started if GWT is used. I have just started trying out AppEngine, any help is greatly appreciated. A: Set Eclipse to use JDK 7 seem to fix the issue, which explains why it runs on the actual App Engine server but fails locally. Still the error it gives is quite strange... Probably will do further testing and update the info.
{ "pile_set_name": "StackExchange" }
Q: Repeating to the end of an array within a while? I have the following code:

var_dump(explode(',',$_POST['colum_names']));
echo '<table border="1">';
$result = $con->query("" . $_POST['sql_command'] . "");
while($row = $result->fetch_array()) {
    echo '<tr>';
    ...
    echo '</tr>';
}
echo '</table>';

In between the echo '<tr>' and echo '</tr>' I want to be able to use the exploded array -- hard to explain... So for example it might look like this:

$_POST['colum_names'] = a,b,c
=> array(3) { [0]=> string(1) "a" [1]=> string(1) "b" [2]=> string(1) "c" }
=> echo '<tr>'; echo $row[array[0]]; echo '</tr>';
=> echo '<tr>'; echo $row[array[1]]; echo '</tr>';
=> echo '<tr>'; echo $row[array[2]]; echo '</tr>';

If you get what I mean, I hope you can help!

A: $columns = explode(',',$_POST['colum_names']);
echo '<table border="1">';
$result = $con->query("" . $_POST['sql_command'] . "");
while($row = $result->fetch_array()) {
    echo '<tr>';
    foreach( $columns as $column ) {
        // output the row's value for each posted column name
        echo '<td>' . $row[$column] . '</td>';
    }
    echo '</tr>';
}
echo '</table>';

Untested but should work
{ "pile_set_name": "StackExchange" }
Q: No way to export video: h264 (ai56 / 0x36356961) FCP only? I have to give up on this and turn to you. I've got several videos for editing that use this codec: Video: h264 (ai56 / 0x36356961). I can read that using ffmpeg; it's one of the streams of the .mov file. None of the usual suspects can read it: iMovie, QuickTime, VLC, ffmpeg, Perian, .... Nothing. I have the impression this codec can only be used inside Final Cut Pro. Am I right? Here is the smallest one: http://dl.dropbox.com/u/4234369/00101W.mov Thanks!!

A: If you can get the codec, you can decode and play the file. The ai56 FourCC is the Calibrated{Q} AVC-Intra codec. It seems straightforward to use this codec in QuickTime, but it will probably require significant effort (as in contacting the vendor and coding something special) to get it to work with any other software, including your own. It seems that the only application that has built-in support for ai56 is Final Cut 6.
{ "pile_set_name": "StackExchange" }
Q: Is there a way to use PySpark with Hadoop 2.8+? I would like to run a PySpark job locally, using a specific version of Hadoop (let's say hadoop-aws 2.8.5) because of some features. PySpark versions seem to be aligned with Spark versions. Here I use PySpark 2.4.5 which seems to wrap a Spark 2.4.5. When submitting my PySpark Job, using spark-submit --local[4] ..., with the option --conf spark.jars.packages=org.apache.hadoop:hadoop-aws:2.8.5, I encounter the following error: py4j.protocol.Py4JJavaError: An error occurred while calling o32.sql With the following java exceptions: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StorageStatistics Or: java.lang.IllegalAccessError: tried to access method org.apache.hadoop.metrics2.lib.MutableCounterLong.<init (Lorg/apache/hadoop/metrics2/MetricsInfo;J)V from class org.apache.hadoop.fs.s3a.S3AInstrumentation I suppose that the Pyspark Job Hadoop version is unaligned with the one I pass to the spark-submit option spark.jars.packages. But I have not any idea of how I could make it work? :) A: Ok, I found a solution: 1 - Install Hadoop in the expected version (2.8.5 for me) 2 - Install a Hadoop Free version of Spark (2.4.4 for me) 3 - Set SPARK_DIST_CLASSPATH environment variable, to make Spark uses the custom version of Hadoop. (cf. https://spark.apache.org/docs/2.4.4/hadoop-provided.html) 4 - Add the PySpark directories to PYTHONPATH environment variable, like the following: export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.10.7-src.zip:$SPARK_HOME/python:$SPARK_HOME/python/build:$PYTHONPATH (Note that the py4j version my differs) That's it.
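Putting steps 3 and 4 of the solution together, a minimal sketch of what session creation might look like once the custom Hadoop and the Hadoop-free Spark are installed. The paths are placeholders, and you may prefer to export these variables in your shell profile instead of setting them from Python.

import os, subprocess
from pyspark.sql import SparkSession

# Step 3: point Spark at the separately installed Hadoop 2.8.5 jars.
# This must happen before the first SparkSession (i.e. before the JVM) is created.
hadoop_home = "/opt/hadoop-2.8.5"                       # placeholder install path
os.environ["HADOOP_HOME"] = hadoop_home
os.environ["SPARK_DIST_CLASSPATH"] = subprocess.check_output(
    [f"{hadoop_home}/bin/hadoop", "classpath"]).decode().strip()

spark = (SparkSession.builder
         .appName("hadoop-2.8.5-check")
         .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
         .getOrCreate())
# Quick sanity check that the session really picked up the 2.8.5 jars.
print(spark.sparkContext._jvm.org.apache.hadoop.util.VersionInfo.getVersion())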
{ "pile_set_name": "StackExchange" }
Q: How to transfer a webapp to https from the cloudflare? It was not in my plans to install a server such as nginx, so my web application runs directly on the node.js server. Constructions like these are used to refer to certain pages:

on server:

if(process.env.NODE_ENV === 'production') {
    app.use('/', express.static(path.join(__dirname, '../', 'client', 'dist')))
}
app.use('/api/bonds', bonds);
const port = 80;

on client:

const url = '1.2.3.4:80/api/bonds';
class BondsService {
    static getBonds() {
        return new Promise(async (resolve, reject) => {
            try {
                const res = await axios.get(url);
                const data = res.data;
                resolve(data.map(bond => ({ ...bond })));
            } catch (e) {
                reject(e);
            }
        })
    }

I transferred my domain to cloudflare and set the free SSL certificate to flexible mode. When I access the application through http everything works, but over https it gives this error:

xhr.js:178 Mixed Content: The page at 'https://example.com/' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://1.2.3.4/api/bonds'. This request has been blocked; the content must be served over HTTPS.

How can I fix it?

A: We had an extensive (hour-long) discussion in chat and I can't reproduce the whole chat here, so I am just posting the points that should send you in the right direction:

If you are part of an organisation, first check with the Cloudflare team (or whoever the infra contact is):

Is the connection between the browser and Cloudflare https or not? If it is already https, is the CA certificate already part of the browser/system, or does it need to be loaded explicitly?

What about the connection between Cloudflare and the node.js server - is it encrypted or not? If it is encrypted, you also have to load the node.js server's certificates or the Cloudflare server's certificates into each other's trust stores, depending on whether it is mutual TLS or not. If the traffic between Cloudflare and the node.js server stays plain http, no certificates are required there.

Please read up on the https/SSL handshake process to get more clarity.
{ "pile_set_name": "StackExchange" }
Q: Use OOPS to design database tables schema? Usually, we firstly check the project requirements and set up tables, and then do 1/2/3-NF normalisation. I don't like this way, because it is not Object-oriented way. So any body could share exprience how we use OOP to design complicate table schema/relationship ? Even a link/book ISBN is welcome. That is very important for me. Thanks A: Relational databases can't be object oriented. Trying force them into an object-oriented model has been the cause of many poor designs over the years. The core of object oriented programming is putting the code and the data into the same "object". Putting code into a relational database is a bad design. Make your relational database good at storing normalized data (the 1/2/3-NF you talked about). You can do your application design first (please do), and that will influence what tables are created, and how much you normalize them, but the database design itself should not be object oriented.
{ "pile_set_name": "StackExchange" }
Q: Archive pom.xml Maven Hibernate I am changing a project of type: Java Web Project to a Type Project: Maven Web Project and I have problems with the pom.xml file and dependencies. pom.xml <dependencies> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.40</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>4.3.1.Final</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>4.3.1.Final</version> </dependency> <!-- https://mvnrepository.com/artifact/org.hibernate/hibernate-commons-annotations --> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-commons-annotations</artifactId> <version>4.0.2.Final</version> </dependency> <!-- https://mvnrepository.com/artifact/org.hibernate/hibernate-c3p0 --> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-c3p0</artifactId> <version>4.3.1.Final</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-ehcache</artifactId> <version>4.3.1.Final</version> </dependency> <dependency> <groupId>javax</groupId> <artifactId>javaee-web-api</artifactId> <version>7.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.apache.poi</groupId> <artifactId>poi</artifactId> <version>3.15</version> </dependency> <dependency> <groupId>org.apache.poi</groupId> <artifactId>poi-ooxml</artifactId> <version>3.15</version> </dependency> </dependencies> And Netbeans generates this error: **Could not find artifact org.hibernate:hibernate-commons- annotations:jar:4.0.2.Final in unknown-jars-temp-repo** Any solution? Thanks! A: Looks like net beans add repository unknown-jars-temp-repo for dependencies it could not identify. Check this. However, your problem looks to be incorrect group id of hibernate-commons-annotations. It should be org.hibernate.common and not org.hibernate if you would like it to be downloaded from Maven central. Go and search maven central if you face similar problem.
{ "pile_set_name": "StackExchange" }
Q: Authenticating to VisualStudioOnline REST API with Personal Access Token using Python 3.6 I am trying to use the VisualStudioOnline REST API using python 3.6. (Plenty of examples using python 2.x.) The python script response is the generic html login page. I have tested the url generated by this script using REST Console Chrome plug-in and it worked fine using my personal access token. import json import base64 import urllib.request personal_access_token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" headers = {} headers['Content-type'] = "application/json" headers['Authorization'] = b'Basic ' + base64.b64encode(personal_access_token.encode('utf-8')) instance = "mycompany.visualstudio.com" project = "MyProject" repository ="MyRepository" pullrequest = "3468" api_version = "3.0" repositories_url = ("https://%s/DefaultCollection/%s/_apis/git/repositories? api-version=%s" % (instance, project, api_version)) print(repositories_url) request = urllib.request.Request(repositories_url, headers=headers) opener = urllib.request.build_opener() response = opener.open(request) print(response.read()) Powershell Example How do I authenticate to Visual Studio Team Services with a Personal Access Token? C# and curl example https://www.visualstudio.com/en-us/docs/integrate/get-started/authentication/pats A: In my experience with doing this via other similar mechanisms, you have to include a leading colon on the PAT, before base64 encoding.
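Applied to the question's own code, the accepted fix means the Authorization header is built from ":" + token rather than the bare token; a minimal sketch:

import base64

personal_access_token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Basic auth against VSTS uses an empty user name, so the PAT is prefixed with ':'
# before being base64 encoded.
headers = {
    "Content-type": "application/json",
    "Authorization": b"Basic " + base64.b64encode((":" + personal_access_token).encode("utf-8")),
}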
{ "pile_set_name": "StackExchange" }
Q: MySqlConversionException when accessing DateTime field from DataReader I have a C# application over MySql, using MySQL Connector; I'm trying to make a DataReader request, the query executes fine, however, when trying to access a DateTime field, i'm getting MySqlConversionException {"Unable to convert MySQL date/time value to System.DateTime"} this is the prototype if (dr != null && !dr.Read()) return; sesion.Id = Convert.ToInt32(dr["id"]); sesion.Usuario = Convert.ToInt32(dr["usuario"]); sesion.Estado = Convert.ToByte(dr["estado"]); // doesn't work sesion.FchCreacion = Convert.ToDateTime(dr["fch_creacion"]); Any suggestions? Thanks in advance A: This error sometimes occurs if you have zero datetime values in your MySQL database (00/00/0000 00:00). Try adding this to the end of your connection string: Allow Zero Datetime=true
{ "pile_set_name": "StackExchange" }
Q: How to get a user's groups/places through API (can see in browser, not through API) This works: https://developers.facebook.com/tools/explorer/145634995501895/?method=GET&path=me/groups This doesn't: https://developers.facebook.com/tools/explorer/145634995501895/?method=GET&path=1234567890/groups (where 1234567890 is someone who is not a friend, but is in or applying to a group I admin). However, this works: https://www.facebook.com/1234567890 and I can see their groups just fine. QUESTION: Is there a way I can act-as-me or other permission to see someone's groups through an app? This is likely a permissions/token thing - but I need to 'act-as-me' and see what I can see/have-rights-to. END GOAL: I need to assist my rejecting of automated spammed join-requests. Easy to recognize who is fake, so I want to automate that. But, how can I have an app see what I see in FB (without screen scraping). Thanks, A: Even though you can see their groups publicly, you can't get their groups using the Graph API unless you have a valid Access Token from that user with user_groups permission. The documentation for the User Groups clearly says that: A user access token with user_groups permission is required to view the groups that person is a member of. A user access token with friend_groups permission is required to view the groups that person's friends are members of. You can find more details about Access tokens here.
{ "pile_set_name": "StackExchange" }
Q: print markers with v2 API with different order than the table info i have a sql query and i pass the result with mysql_fetch_array into a while loop, and i am printing a table. The think is that with the same info included in each row i am building the markers on the map. As i am building the map, the problem is that the map always centered at the final row of the table (C marker). Instead of this i want the map centered to the marker A (first row of the table). I was searching about to reverse the array but doesn't work. <?php while($info4 = mysql_fetch_array($result4)) { ?> // A function to create the marker and set up the event window function createMarker(point, name, html, flag) { //set the icon of the marker var letteredIcon = new GIcon(baseIcon); letteredIcon.image = "markers/"+flag+".png"; // Set up our GMarkerOptions object markerOptions = { icon:letteredIcon }; var marker = new GMarker(point, markerOptions); GEvent.addListener(marker, "click", function() { marker.openInfoWindowHtml(html); }); // save the info we need to use later for the side_bar gmarkers.push(marker); // add a line to the side_bar html side_bar_html += '<td><a href="javascript:myclick(' + (gmarkers.length-1) + ')">' + name + '<\/a><\/td>'; return marker; } // add the points map.setCenter(new GLatLng(<?php print $info4['placeY'];?>,<?php print $info4['placeX'];?>), 14); var point = new GLatLng(<?php print $info4['placeY'];?>,<?php print $info4['placeX'];?>); var marker = createMarker(point,"<?php print utf8_decode($info4['placeName']);?>","<?php print utf8_decode($info4['placeName'])."<br>".utf8_decode($info4['placeAddress'])."<br>".utf8_decode($info4['placeDistrict']);?>","<?=$flag;?>") map.addOverlay(marker); <?php $flag=$flag+1; } ?> An example: Table: A Sight-seeing Odeon B Sight-seeing Phenomenon of Tides C Sight-seeing Red House In this example the map is centered on the C marker instead of A marker which i want to see. A: So, instead of using while loop, the correct code is below. //make an array $rows = array(); while ($row=mysql_fetch_array($result4)){ //push the rows to the empty array array_push($rows, $row); } // reverse the order of the rows $reversedarray=array_reverse($rows); //use the info of each row as the variable $info4 foreach ($reversedarray as $info4) { //the code
{ "pile_set_name": "StackExchange" }
Q: Grabbing ID Field from Element jQuery Each I'm trying to cycle through all of the children in a <ul> list, assign an onclick event listener and use the ID of the element for that event. I am using jQuery and jQueryMobile in my application. For some reason, the ID property of the element always shows up blank? When I alert the actual element, I get [ObjectHTMLLIElement]. When I use the DevTools, the ID field is present in all of the Elements? This is the ListView This is how I am trying to accomplish this... function set_onclicks() { var count = $('#main-listview').children().length; if (count > 0) { $('#main-listview').children().each(function() { this.onclick = function() { project_id = this.id; $.mobile.changePage('#main-project-page', 'slide', true, true); } }); } } Can anyone point me in the right direction and help explain why I cannot grab the ID? Thanks in advance for any help! Nathan Recent Edits I have tried implementing this, now that I know I do not need to bind through the .each() loop. $('#main-listview').children().click(function() { alert(this); alert(this.id); project_id = this.id; $.mobile.changePage('#main-project-page', 'slide', true, true); }); The first alert gives me [ object HTTMLLIElement ] the next alert gives me a blank...? EDIT 2 - HTML Example I think I know why now. jQueryMobile is adding some <div> elements before it adds an <a> link. Now, I am going to have to figure out how to get around this.. <ul data-role="listview" data-inset="true" data-filter="true" data-filter-theme="a" id="main-listview" class="ui-listview ui-listview-inset ui-corner-all ui-shadow"> <li data-corners="false" data-shadow="false" data-iconshadow="true" data-wrapperels="div" data-icon="arrow-r" data-iconpos="right" data-theme="a" class="ui-btn ui-btn-icon-right ui-li-has-arrow ui-li ui-first-child ui-btn-up-a"> <div class="ui-btn-inner ui-li"> <div class="ui-btn-text"> <a id="51e0278f2a1cb33a08000002" class="ui-link-inherit">123456 - The Project</a> </div> <span class="ui-icon ui-icon-arrow-r ui-icon-shadow">&nbsp;</span> </div> </li> </ul> A: i have no idea why you are using function to set the click event.. (and no need to use loop $.each to set the click event for each elements). just this should work $('#main-listview').children().click(function() { project_id = this.id; $.mobile.changePage('#main-project-page', 'slide', true, true); } });
{ "pile_set_name": "StackExchange" }
Q: How many nodes will a typical CAN/LIN/MOST network contain? I would like to know the number of nodes a typical network (CAN/LIN/MOST) contains, and on what basis this is decided.

A: There's no fixed number; it depends on multiple factors:

Baud rate: The lower the baud rate, the more nodes you can have. A signal takes more time to propagate along a longer bus, and a higher baud rate won't tolerate that delay.

Wiring: Every node adds capacitance to the bus, so your wiring scheme will also impact the node count. Signal strength weakens as bus length/node count increases, hence repeaters may be required.
{ "pile_set_name": "StackExchange" }
Q: About symmetry, and about electron density in crystals in particular The book Introduction to Solid State Physics by Kittel says: "We have seen that a crystal is invariant under any translation of the form T [...]. Any local physical property of the crystal, such as the charge concentration, electron number density, or magnetic moment density is invariant under T. [...] n(r+T) = n(r)" Why is this the case? Even though the underlying atoms are invariant under translations in T, why should the electron density also be invariant under the same transformation? Couldn't for example if you have this 1D crystal: o o o o o This is invariant under transformations $T = {...,-1,0,1,2,3,...}$. Why couldn't the electron densities be like this: o h o l o h o l o h o Where h is a high density and l is a low density. So they would only be invariant under $T = {...,-2,0,2,4,...}$. This is just an example: why do the local physical properties have to be invariant under T? I suppose this question could be phrased even more generally: if the causes of some phenomenon have a symmetry, does the phenomenon also always have the same symmetry? If yes, why, and if no, what are the conditions under which the phenomenon does have the same symmetry? A: Of course you can have configurations which do not respect the underlying symmetry of they dynamics, the issue is that those will not be eigenstates of the Hamiltonian. Typically when talking solid state people are interested in energy bands which are eigenstates of the Hamiltonian, and these must also be eigenstates of the lattice translations. This follows from elementary operator algebra. The non-symmetric configurations would correspond to superpositions of different energy bands and would not be stationary in time.
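The "elementary operator algebra" in the answer can be spelled out in one line. With $\hat{T}$ the lattice translation operator and $[\hat{T},\hat{H}]=0$, a non-degenerate energy eigenstate is also an eigenstate of the unitary $\hat{T}$, whose eigenvalue is a pure phase:

$$\hat{T}\,\psi(\mathbf{r}) = \psi(\mathbf{r}+\mathbf{T}) = e^{i\theta}\,\psi(\mathbf{r}) \quad\Rightarrow\quad |\psi(\mathbf{r}+\mathbf{T})|^{2} = |\psi(\mathbf{r})|^{2},$$

so any density built from such a state has the full lattice period. The "o h o l o h o" pattern of the question, which is invariant only under the doubled translation, would correspond to a superposition of different energy bands and, as the answer says, would not be stationary in time.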
{ "pile_set_name": "StackExchange" }
Q: How can I make this jquery function more terse? I have this code in a common javascript file in my asp.net project. jQuery-Lint returns "You've used the same selector more than once" whenever I mouse over one of the buttons that was affected by this function. //turns all the buttons into jqueryUI buttons //#mainBody is on the master page, #childBody is on the modal page. $("#mainBody button, #mainBody input:submit, #mainBody input:button, #childBody button, #childBody input:submit, #childBody input:button").livequery(function () { $(this).button().each(function (index) { $(this).ajaxStart(function () { $.data(this, "old_button_val", $(this).val()); $.data(this, "old_button_disabled", $(this).button("option", "disabled")); $(this).button("option", "disabled", true).val("Wait..."); }).ajaxStop(function () { $(this).val($.data(this, "old_button_val")).button("option", "disabled", $.data(this, "old_button_disabled")); }).ajaxError(function () { $(this).val($.data(this, "old_button_val")).button("option", "disabled", $.data(this, "old_button_disabled")); }); }); }); A similar question was asked here. A: // Might be a good idea now to add a class to these element // instead of using a long selector like this // Additionally, :button already includes <button> elements var selector = "#mainBody input:submit, #mainBody input:button, #childBody input:submit, #childBody input:button"; $(selector).livequery(function() { // Store a copy of $(this), which we'll reuse... and reuse... and reuse var t = $(this); // Create the callback function shared berween // ajaxStop and ajaxError function ajaxCallback () { t.button('option', { label: t.data("old_button_val"), disabled: t.data('old_button_disabled') }); } t.button() .ajaxStart(function() { // Use $.fn.data instead of $.data t.data({ // Using 'label' instead of 'val' // because <button> elements do not have 'value's "old_button_val", t.button('option', 'label'), "old_button_disabled": t.button("option", "disabled") }).button('option', { disabled: true, label: 'Wait...' }); }).ajaxStop(ajaxCallback).ajaxError(ajaxCallback); }); }); Disclaimer: Not tested, therefore not guaranteed to work.
{ "pile_set_name": "StackExchange" }
Q: Designing Puzzles Requiring Halt Undead and Animate Dead I am currently designing an adventure filled with several puzzles. The point of each puzzle is to get a character to use a power they don't know that they have in order to proceed. For example, for the character whose power is Speak With Dead, they are put in a chamber with an endlessly spawning supply of grappling zombies, and told the only way to make it stop is to get the necromancer who created the trap to speak a word of power, but they find only the necromancer's corpse (in this case the power is a SLA with a casting time of a standard action, instead of the usual 10 minutes). For those of you familiar with Eberron, this is a Test of Siberys. I have most of the necessary puzzles designed, but I'm stuck on two; one requiring Animate Dead and one requiring Halt Undead. I've got a few ideas for both but I'm not sold on any of them yet. I was wondering if anyone here had any good ideas. They don't need to be overly complex. Most of my players are still fairly new and they don't really metagame if only because they lack the required knowledge. A: The use of animate dead could, of course, be almost anything, but I particularly like simply having a switch in an area too hazardous (gas, radiation, whatever) to send a living thing, but some necromancer’s interrupted animation ritual available somewhere nearby. Black onyx already in place (though SLAs don’t actually need it), body prepared and ready (though animate dead doesn’t seem to actually require that either, and an SLA would reduce it to a Standard Action anyway), so they can make it happen. Halt undead is trickier. My thinking is something like having a skeleton marching in a predefined pattern endlessly, but they need to get it to stop somewhere on the route to trigger something (hold down a switch, its negative energy react with a crystal, whatever) to get the prize. This isn’t really as interesting, but then neither is halt undead itself. Since halt undead targets 3 undead at a time, you could maybe make that a part of it. Or even something more complex, where you have separate groups of undead that need to be frozen in place at different points to produce the correct pattern. That relies on the SLA being usable more than 1/day, though. A: Years of 16 bit video gaming seems to come into full swing! To tell the truth, I don't see why these two spells can't be the same puzzle but here's a few ideas: Pressure Plate Puzzle: The door to the next room is a Prismatic Wall. There are seven creatures in the room and the player must navigate them through a small maze to the pressure plates in the floor. Whenever an undead steps on the right plate in the order you choose, it dispels the next tier of the wall. The catch is that the players have to protect the undead from low challenge rating creatures such as small Earth Elementals because losing one means they have to go back and find another creature to sacrifice. Such a fun ethical quandary depending on alignments. Moving Platform Puzzle: Fun when you actually have the dungeon planned out. Some large brutish zombie has a platform on its head and either wanders aimlessly unless guided by the players somehow, or on a pattern (such as hitting a wall and turning around). These zombies are in pits and once the halt spell is cast it provides a temporary bridge for the party. The Old Scales and Lifts Gag: Move the undead into pulley oriented elevators and spring loaded platforms. 
The best of both worlds, this means that the party is in a centralized room of the dungeon. Creating platforms in the right places by propping weak catwalks or by lowering some by certain undead strategically placed using both animate and halt as desired. The paths created allows the players to access different rooms, sometimes "rescuing" more undead to lead back to the center to complete more of the path. A: How about a puzzle that involves reaching a switch that's located somewhere that can only be reached by flying? A platform floating in mid-air, perhaps, or a ledge on a wall that's covered by a continuously flowing, magically re-supplied stream of slippery goo that makes Climb checks unfeasible. Depending on what other powers your characters possess, they would need to recruit something that's able to fly up there for them - and there's some corpses of winged creatures in the room, or perhaps in the previous room, requiring them to go back and fetch the bodies of monsters they just had to defeat in combat, 'dungeon recycling' if you will. Winged zombies can fly, but if the party's Animate Dead spell gives them skeletons, they would need a creature that had a magical flying ability in life - winged skeletons can't fly. Again, depending on their caster level, a Beholder Skeleton might be out of their HD range, but a Tiny little Beholderkin Skeleton would certainly be a possibility. Conversely, you could have a puzzle involving flying undead that have already been animated, and are winging around in pre-determined flight patterns inside a large cavern. Beneath them, there are pressure plates, but they're placed on top of large pillars jutting out of a deep chasm. The pressure plates are too far away for the PCs to simply throw stuff at them, so they'll need to use Halt Undead to make one of the undead (giant zombie owls, perhaps, to stick with a theme of 'only those possessed of Wisdom may pass through here') land on a pressure plate; this triggers a mechanism that deploys a bridge across the chasm, which the PCs can use to go cross. Even trickier, the PC will have to time their spell just right (perhaps requiring a Dexterity check, a Spellcrafting skill check, or an Initiative check) to get the undead to land where they want. Perhaps one of the previous puzzles rewarded the PCs with a potion that boosts their speed, or Spellcrafting skill, which would help them in this situation. Or you could allow one of the other PCs with a high Dexterity to act as a 'spotter' to the spellcaster's 'sniper' - e.g. the party Rogue could an aid another action to act as a look-out, giving the spellcaster a bonus to their roll. You might allow the caster to make a Concentration check, if that gives them better odds, since this situation could be seen as "the spellcaster is just about to cast the spell, but is holding back the arcane energies until the Rogue says 'Go!'". Or you might be a nice GM and just let them hold a readied action to cast the spell until the undead is in the right position. The best solution is probably to let the PCs come up with their own strategy for timing the spell, and the roll with it (and for it). Alternative: Chamber full of tiny zombie owls, and one of them is carrying a key that the PCs need to unlock the next door. 
The door might be completely bereft of keyholes of doorhandles, but there is a sign with the embossed letters: THE WISDOM THAT STILL LINGERS HERE UNLOCKS YOUR MIND FROM ALL ITS FEARS SMALL IN SIZE YET GREAT INDEED WILL BRING THE ANSWERS THAT YOU NEED The PCs can make Spot checks to find the zombie owl that's different from the others - a glint of metal from the item it's carrying, perhaps. The DC shouldn't be too high, even if there's lots of owls; flying zombies are clumsy, according to the rules, nowhere near as agile as living birds. They then need to cast Halt Undead to make it stop flying, at which point it will, of course, plummet to the ground, allowing them to collect the key. Elaborating on KRyan's great idea to use the Animated Undead's invulnerability to several threats that would be lethal to living creatures, you could have a puzzle with Death Rays that need to be blocked to allow the PCs to pass. For example, the PCs just passed through a chamber that had lots of cute little woodland creatures in it, perhaps with a puzzle for a Druid or a Ranger. Some of the animals follow the PCs into the next chamber, frolicking happily around the most nature-friendly characters - and then stumble directly into the path of a sickly green laser beam and drops to the ground, stone cold dead. This way, the PCs are immediately alerted to the danger that the beams present. (Of course, if you want to be a not-quite-killer GM, you could have the beams do some kind of non-lethal stunning or Sleep effect.) If the players are unaware of the fact that undead creatures are immune to death effects, let them make a Knowledge skill check with a low DC to find out; after all, the chamber is probably decked out with a skull-and-bones motif, the floor littered with the decayed remains of previous visitors. Alternative: If your players are familiar with 'Harry Potter and the Half-Blood Prince', they probably remember the puzzle with a large font filled with dangerous liquid that had to be emptied, before Dumbledore and Harry could get the locket on the bottom; however, the only way to empty the vessel of its enchanted contents was by drinking it. This time, if the players are smart, they could just Animate some Dead and let THEM do the drinking.
{ "pile_set_name": "StackExchange" }
Q: Download of HTML of native Google Drive documents does not work I try to download Google Drive documents (with a Drive app) with this python function. it jumps every time to the # The file doesn't have any content stored on Drive part. def download_file(service, drive_file): """Download a file's content. Args: service: Drive API service instance. drive_file: Drive File instance. Returns: File's content if successful, None otherwise. """ download_url = drive_file.get('downloadUrl') if download_url: resp, content = service._http.request(download_url) if resp.status == 200: print 'Status: %s' % resp return content else: print 'An error occurred: %s' % resp return None else: # The file doesn't have any content stored on Drive. return None However I never get the HTML-content of a GDrive document. Is something wrong with my mime settings (HTML Mime type) or should I use another API function? A: You need to export the Google Doc, not download it. Check the Google Drive documentation for exporting files, you will need to replace your URL with something like: download_url = file['exportLinks']['text/html']
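Plugged into the question's helper, the accepted fix looks roughly like the following sketch (it assumes the Doc actually offers a 'text/html' entry in its exportLinks map):

def download_html(service, drive_file):
    # Native Google Docs have no downloadUrl; they expose per-format exportLinks instead.
    export_url = drive_file.get('exportLinks', {}).get('text/html')
    if not export_url:
        return None
    resp, content = service._http.request(export_url)
    if resp.status == 200:
        return content
    print 'An error occurred: %s' % resp
    return None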
{ "pile_set_name": "StackExchange" }
Q: Can't Boot into either Windows or Ubuntu History I first installed Windows 8.1 and left around 60GBs Unallocated. Then I tried installing Ubuntu with a USB, but the partition table I had made did not show up. I then tweaked some settings in my BIOS/UEFI and then again tried still with no luck. Finally I did sudo gdisk /dev/sda and got rid of my GPT Partition but left the MBR(or i don't know windows partition) untouched. Then I tried and it successfully showed the partition table and I was able to make changes. I made: [sda5] linux-swap of 1.91 GiB [sda6] '/' or root of 19.07 GiB [sda7] '/home' of 34.44 GiB with 1.02 MiB Unallocated also, my sda1-3 are for windows (1-System Reserved, 2-OS, 3-Storage Drive) and sda5,6,7 are sub parts of sda4 I completed the installation and then I was successfully able to run ubuntu, but the problem was that there was no option to boot into windows, so I searched and found out there were some problems with grub. I made some tweaks and restarted and even ubuntu stopped working. So I again searched and with the help of live USB made some changes. Changes made were reinstalling grub 2 in sda6. Problem But the problem came error: file '/grub/i386-pc/normal.mod' not found and then I did that (reinstall) which led me to gnu grub minimal bash like line editing..... Now what I have is all my Operating Systems perfectly lying in my hard disk but I cannot boot any of them. All I can do is boot from live USB. I also tried to install grub 2 again but with no luck. It says Installing for i386-pc platform and then Installation finished. No errors reported. But still does not work. I have even tried resetting my BIOS settings. PS: Laptop - Lenovo B490 A: You cannot get rid of a GPT partition. GPT is a table type as is an MBR partition table. Since it appears you have a Windows 8 installation disc you can always disable UEFI, and install both operating systems in CSM (legacy bios mode) with an MBR partition table. If you wish to install under UEFI then use GPT for both operating systems. If Ubuntu isn't recognizing your partitions, open Gparted in the Ubuntu live-cd and verify that it DOES see the table. I would suggest creating the partition table in Gparted, installing Windows, and then installing Ubuntu.
{ "pile_set_name": "StackExchange" }
Q: SignatureDoesNotMatch when attempting to list a product on Amazon API I am wondering if someone could help. I am trying to list a product on Amazon through the API. When using GetOrders it works perfectly but with similar code apart from the parameters I get the following error message when using SubmitFeed _POST_PRODUCT_DATA_ "Sender SignatureDoesNotMatch The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details." All my details are correct, the secret key, aws access key etc. and I have compared the string to sign in my code to the one that is generated in the Amazon API test tool and they are exactly the same so I am not sure what the problem is. Here is the code I am using - $timestamp = date('c', strtotime($todays_date_time)); $timestamp = gmdate('Y-m-d\TH:i:s\Z', strtotime($timestamp)); $params = array( 'AWSAccessKeyId' => "MY_AWS_KEY", 'Action' => "SubmitFeed", 'Merchant' => "MY_SELLER_ID", 'FeedType' => "_POST_PRODUCT_DATA_", 'SignatureMethod' => "HmacSHA256", 'SignatureVersion' => "2", 'Timestamp'=> $timestamp, 'Version'=> "2009-01-01", 'MarketplaceIdList.Id.1' => "MY_MARKETPLACE_ID", 'PurgeAndReplace'=>'false' ); $secret = 'MY_SECRET_KEY'; $url_parts = array(); foreach(array_keys($params) as $key) { $url_parts[] = $key . "=" . str_replace('%7E', '~', rawurlencode($params[$key])); } Then here I create the XML and store it in the variable $amazon_feed and then - sort($url_parts); $url_string = implode("&", $url_parts); $string_to_sign = "POST\nmws.amazonservices.co.uk\n/\n" . $url_string; $signature = hash_hmac("sha256", $string_to_sign, $secret, TRUE); $http_header = array(); $http_header[] = 'Transfer-Encoding: chunked'; $http_header[] = 'Content-Type: application/xml'; $http_header[] = 'Content-MD5: ' . base64_encode(md5($amazon_feed, true)); $http_header[] = 'Expect:'; $http_header[] = 'Accept:'; $signature = urlencode(base64_encode($signature)); $link = "https://mws.amazonservices.co.uk/Feeds/2009-01-01?".$url_string."&Signature=".$signature; $ch = curl_init($link); curl_setopt($ch, CURLOPT_HTTPHEADER, $http_header); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $amazon_feed); $response = curl_exec($ch); print_r($response); $info = curl_getinfo($ch); curl_close($ch); Would anyone be able to help? A: You will have to ksort() your params before you pass them here: foreach(array_keys($params) as $key) { $url_parts[] = $key . "=" . str_replace('%7E', '~', rawurlencode($params[$key])); } e.g. $params = array( 'AWSAccessKeyId' => "MY_AWS_KEY", 'Action' => "SubmitFeed", 'SellerId' => "MY_SELLER_ID", 'FeedType' => "_POST_PRODUCT_DATA_", 'SignatureMethod' => "HmacSHA256", 'SignatureVersion' => "2", 'Timestamp'=> $timestamp, 'Version'=> "2009-01-01", 'MarketplaceIdList.Id.1' => "MY_MARKETPLACE_ID", 'PurgeAndReplace'=>'false' ); $secret = 'MY_SECRET_KEY'; $url_parts = array(); ksort($params); foreach(array_keys($params) as $key) { $url_parts[] = $key . "=" . str_replace('%7E', '~', rawurlencode($params[$key])); } I'm not entirely sure about the string you are signing, you should try it with this (adding /Feeds/2009-01-01 to it): "POST\nmws-eu.amazonservices.com\n/Feeds/2009-01-01\n" . $url_string Also, Amazon expects a SellerId for the _POST_PRODUCT_DATA_ operation, not Merchant. 
I suggest you use mws-eu.amazonservices.com instead of the co.uk one; you can use this for all the European marketplaces and don't need to change it for each. As a side note: Amazon doesn't really report the correct error. The error you are getting can also occur if SellerId is sent as Merchant like above, or from anything else that has nothing to do with what you are trying to do.
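Putting the pieces together, a rough sketch of the whole signing sequence might look like this (buildSignedUrl is a made-up helper name, and the host and path must match the endpoint you actually post to):

function buildSignedUrl(array $params, $secret, $host, $path) {
    ksort($params); // sort parameter names into canonical byte order first
    $pairs = array();
    foreach ($params as $key => $value) {
        $pairs[] = $key . '=' . str_replace('%7E', '~', rawurlencode($value));
    }
    $queryString = implode('&', $pairs);
    // The string to sign is the verb, host, path and sorted query string, separated by newlines
    $stringToSign = "POST\n" . $host . "\n" . $path . "\n" . $queryString;
    $signature = base64_encode(hash_hmac('sha256', $stringToSign, $secret, true));
    return 'https://' . $host . $path . '?' . $queryString . '&Signature=' . urlencode($signature);
}

// Example use: $link = buildSignedUrl($params, $secret, 'mws-eu.amazonservices.com', '/Feeds/2009-01-01');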
{ "pile_set_name": "StackExchange" }
Q: How to create folder inside the folder in Drive through Google script? I have used a existing folder ID to upload files, But I like to create folder by user's name whenever user pressed the submit button. Please have a look at my form. Help me to solve this issue. JS: var dropBoxId = "123454567567"; // Drive ID of 'dropbox' folder var logSheetId = "xoxoxoxooxoxox"; // Drive ID of log spreadsheet function doGet(e) { return HtmlService.createHtmlOutputFromFile('InputForm.html'); } function uploadFiles(formObject) { try { // Create a file in Drive from the one provided in the form var folder = DriveApp.getFolderById(dropBoxId); var person = createFolder(ajith); var blob1 = formObject.myFile1; var blob2 = formObject.myFile2; var blob3 = formObject.myFile3; var file1 = person.createFile(blob1); var file2 = person.createFile(blob2); var file3 = person.createFile(blob3); file.setDescription("Uploaded by " + formObject.myName); // Open the log and record the new file name, URL and name from form var ss = SpreadsheetApp.openById(logSheetId); var sheet = ss.getSheets()[0]; sheet.appendRow([formObject.myName, formObject.myEmail, formObject.myCert, formObject.myExp, formObject.mySkills, formObject.myFees, formObject.myLoc, formObject.myTravel, file1.getName(),file2.getName(),file3.getName(), file1.getUrl(),file2.getUrl(),file3.getUrl(), file.getDateCreated() ]); // Return the new file Drive URL so it can be put in the web app output return file.getUrl(); } catch (error) { return error.toString(); } } HTML: <form> <h2 class="text-center title">Application Form</h2> <div class="col-sm-6" id="myForm"> <div><label>Name:</label><input type="text" name="myName" placeholder="Your full name..." required/></div> <div><label>Mobile Number:</label><input type="number" name="myNum" placeholder="Mobile number..." required/></div> <div><label>Email:</label><input type="email" name="myEmail" placeholder="Email ID..." required/></div> <div><label>Certificate:</label><input type="text" name="myCert" placeholder="Certificate Name..." required/></div> <div><label>Experience:</label><input type="number" name="myExp" placeholder="Experience in years..." required/></div> <div><label>Skills:</label><input type="text" name="mySkills" placeholder="Skills(seperate by comma).." required/></div> </div> <div class="col-sm-6" id="myForm"> <div><label>Fees:</label><input type="text" name="myFees" placeholder="Fees..." required/></div> <div><label>Locality:</label><input type="text" name="myLoc" placeholder="Locality..." required/></div> <div><label>Willingness to travel:</label><select><option>Yes</option><option>No</option></select></div> <div><label>Attach your latest Photo 1</label><input name="myFile1" type="file" required/></div> <div><label>Attach your latest Photo 2</label><input name="myFile2" type="file" required/></div> <div><label>Attach your latest Photo 3</label><input name="myFile3" type="file" required/></div> </div> <input type="button" class="btn btn-success" value="Submit" onclick="google.script.run .withSuccessHandler(updateUrl) .withFailureHandler(onFailure) .uploadFiles(this.parentNode)" /> </form> <div id="output"></div> A: Try something like this var name ="Myfoldernanme"; var newfolder = folder.createFolder(name);
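Applied to the uploadFiles function in the question, a rough sketch might be (the getFoldersByName reuse check is an assumption - drop it if you want a brand-new folder on every submission):

var parent = DriveApp.getFolderById(dropBoxId);
var existing = parent.getFoldersByName(formObject.myName);
// Reuse the person's folder if it already exists, otherwise create it inside the dropbox folder
var personFolder = existing.hasNext() ? existing.next() : parent.createFolder(formObject.myName);
var file1 = personFolder.createFile(formObject.myFile1);
var file2 = personFolder.createFile(formObject.myFile2);
var file3 = personFolder.createFile(formObject.myFile3);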
{ "pile_set_name": "StackExchange" }
Q: Increment a variable inside a loop in jquery I need to select two values from two dropdown lists, say, 1001 from the first dropdown and 1003 from the second dropdown. After clicking the 'add' button, I need to send these values, together with the values in between, to the $.ajax() method like this: 1001, 1002 and 1003. Below is the code I used for this. var s_no = "" + $("#start_num option:selected").text(); var e_no = "" + $("#end_num option:selected").text(); var diff = e_no - s_no; var regno = 0; for (i = 0; i <= diff; i++) { regno = s_no; $.ajax({ type: 'POST', dataType: "json", url: '<?php echo base_url('SchoolAdmin/inserthall'); ?>', data: { regno: regno }, success: function (result) { console.log(result); } }); s_no = regno + 1; } The same value in s_no is getting inserted into the database than the other values. 1001 is repeatedly inserting into database. I need 1002 and 1003 together with 1001. Thanks in advance. A: You need to convert the strings returned from val() to integers so that you can do the subtraction on them. Also note that the logic which calculates the number can be tidied. Try this: var s_no = parseInt($("#start_num option:selected").text(), 10); var e_no = parseInt($("#end_num option:selected").text(), 10); var diff = e_no - s_no; for(i = 0; i <= diff; i++) { var regno = s_no + i; $.ajax({ type: 'POST', dataType:"json", url: '<?php echo base_url('SchoolAdmin/inserthall'); ?>', data: { regno: regno }, success: function (result) { console.log(result); } }); } Also note that it would be much better practice to make one AJAX request sending all the reg_no values in one go.
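Building on the parsed values from the answer above, a sketch of that single-request variant (the regnos parameter name is arbitrary and the server-side action would need to accept an array):

var regnos = [];
for (i = 0; i <= diff; i++) {
    regnos.push(s_no + i); // collect 1001, 1002, 1003, ...
}
$.ajax({
    type: 'POST',
    dataType: "json",
    url: '<?php echo base_url('SchoolAdmin/inserthall'); ?>',
    data: { regnos: regnos },
    success: function (result) {
        console.log(result);
    }
});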
{ "pile_set_name": "StackExchange" }
Q: How to send one value after successful ajax post request to a page redirect I am learning ASP.net MVC - 5 and I am stuck at one problem. So I need to open a URL after successful Ajax Post Request. But I also want to pass one value to the new URL's Controller Action. Below is what I have till now. AJAX CALL $.ajax({ url: URL, type: 'POST', data: data, success: function (result) { if (result == true) { int TEMPVAR = 2; DisplayError('Save Successful', 'Success', function () { window.location.href = '/Settings/Customize/'; }); }, error: function (error) { } }); Controller Action [AuthorizeSettings] public ActionResult Customize() { //I want to be able to access TEMPVAR value here // code removed for brevity return View(configData); } Question: How to pass the TEMPVAR data to the Customize Action Points: I know there are some ways to pass data. TempData,Viewbag,SessionVariable, Embedding the TEMP value in URL Request, Anonymous Objects, ViewData, Static variable for the class, Global Variable, JSON. But I am totally confused how to pass data. I am newbiew please guide me here. EDIT: AJAX CALL $.ajax({ url: URL, type: 'POST', data: data, success: function (result) { if (result == true) { int TEMPVAR = 2; DisplayError('Save Successful', 'Success', function () { window.location.href = '/Settings/Customize/'; }); TEMPDATA["value"] = TEMPVAR; }, error: function (error) { } }); A: Based on comments, you want to send data from SaveStyles to Customize. If so, you can use TempData - public class PersistController : Controller { [HttpPost] public ActionResult SaveStyles() { TempData["Status"] = true; TempData["Val"] = 4; return Json(true); } } public class SettingsController : Controller { public ActionResult Customize() { bool status = Convert.ToBoolean(TempData["Status"]); int val = Convert.ToInt32(TempData["Val"]); return View(); } }
{ "pile_set_name": "StackExchange" }
Q: Why does an event handler never get called if it's added within a loop on an ienumerable? Why does an event handler never get called if it's added within a loop on an ienumerable? For instance: IEnumerable<MyType> list = someCollection.Select(i => new MyType(i)); foreach (var item in list) item.PropertyChanged += item_PropertyChanged; <-- this never gets called But if list is assigned like list = someCollection.Select(i => new MyType(i)).ToArray(); the event handler does get called. Why? (I imagine it has something to do with the fact that a LINQ query is lazy, but isn't looping through the result enough to evaluate it?) A: Your Select call is creating new instances of MyType, which means that... When list is typed as IEnumerable<MyType> then you're dealing with a new sequence of new objects each time you enumerate list. The objects to which you're adding event handlers are not the same objects that you're subsequently testing. When list is typed as MyType[] (by using the ToArray call) then you're dealing with the same collection of objects each time you enumerate list. The objects to which you're adding event handlers are the same objects that you're subsequently testing.
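A small illustration of the difference, assuming MyType is a class that does not overload == (so == compares references) and System.Linq is in scope as in the question:

var source = new[] { 1, 2, 3 };
IEnumerable<MyType> lazy = source.Select(i => new MyType(i));
bool sameLazy = lazy.First() == lazy.First();      // false: every enumeration runs the Select again and builds new MyType instances

MyType[] eager = source.Select(i => new MyType(i)).ToArray();
bool sameEager = eager.First() == eager.First();   // true: the array holds one fixed set of instances

Subscribing in one foreach and raising the event in a later one therefore only works in the second case, because only there do both loops see the same objects.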
{ "pile_set_name": "StackExchange" }
Q: this context in base constructor is module itself - Typescript NOTE: changed title post-answer for better searchability as this had nothing to do with backbone. module App.BackBone.Collections { export class MixedChartCollection extends Backbone.Collection { public model: App.BackBone.Models.BaseChartModel; constructor(models?: any, options?: any) { super(models, options); } } } It seems the base Backbone.Collection constructor is called with this context being my module rather than my class o_O Here is the constructor from backbone.js: var Collection = Backbone.Collection = function (models, options) { options || (options = {}); if (options.url) this.url = options.url; if (options.model) this.model = options.model; if (options.comparator !== void 0) this.comparator = options.comparator; this._reset(); //ERROR: this._reset is not a function this.initialize.apply(this, arguments); if (models) this.reset(models, _.extend({ silent: true }, options)); }; In the image you can see how App.BackBone.Collections has the same 3 members (in red, one of which is the class in question) as the this context, and this._reset ends up being undefined because what we really want is being "wrapped" by some object out of nowhere. Why is it doing this? Here is the compiled code for this class: var App; (function (App) { (function (BackBone) { (function (Collections) { var MixedChartCollection = (function (_super) { __extends(MixedChartCollection, _super); function MixedChartCollection(models, options) { _super.call(this, models, options); //HERE "this" is not MixedChartCollection instance } return MixedChartCollection; })(Backbone.Collection); Collections.MixedChartCollection = MixedChartCollection; })(BackBone.Collections || (BackBone.Collections = {})); var Collections = BackBone.Collections; })(App.BackBone || (App.BackBone = {})); var BackBone = App.BackBone; })(App || (App = {})); A: OK, figured this out. It has to do with the way I instantiated the class: var mixedCollection: App.BackBone.Collections.MixedChartCollection = App.BackBone.Collections.MixedChartCollection(); I just noticed I left out the "new" keyword, which caused my issue. Why did the compiler not catch this? I don't know. Not necessary, but if somebody wants to answer with the reasoning behind that, I will mark it as the answer.
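To spell out why this was the module: App.BackBone.Collections.MixedChartCollection() without new is just an ordinary method call on the Collections object, so inside the constructor this is the Collections module itself, which is exactly what the debugger screenshot showed. The corrected call is simply:

var mixedCollection: App.BackBone.Collections.MixedChartCollection = new App.BackBone.Collections.MixedChartCollection();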
{ "pile_set_name": "StackExchange" }
Q: How to share worker among two different applications on heroku? I have two separate applications running on Heroku and pointing to the same database, the first one responsible for the user interface and the second one for the admin interface. I am using Sidekiq with Redis for background job processing. I have added one worker and I am able to share the Redis server by setting an environment variable pointing to the same Redis-providing add-on. Now I wish to share the worker too, because adding an extra worker will double the cost. Please suggest whether this is even possible or not. A: If both apps are using the same Redis URL and same namespace, you can spin up one worker with that same Redis config and it will be shared by both. Note that your Sidekiq process will boot one app or the other. The code for your Workers must be in that app. The other app won't be able to reference the code but can push jobs using: Sidekiq::Client.push('class' => 'SomeWorker', 'args' => [1,2,3])
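A minimal sketch of that shared configuration, placed identically in both Heroku apps (the REDIS_URL variable name and the namespace value are only examples, and the namespace option relies on the redis-namespace gem):

# config/initializers/sidekiq.rb -- identical in both apps
Sidekiq.configure_client do |config|
  config.redis = { url: ENV['REDIS_URL'], namespace: 'shared_jobs' }
end

Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDIS_URL'], namespace: 'shared_jobs' }
end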
{ "pile_set_name": "StackExchange" }
Q: React-Router Switch component not rendering components with same path I recently implemented the react-router Switch component into my routes in order to render a NoMatch component (which is just a 404 error component). However, after implementing this into my routes I noticed that on my home page only 1 component will render, the Heading component. Both Heading and SearchBar should render to the same path. My code below: const routes = [ { path: "/", exact: true, component: () => <Heading /> }, { path: "/", exact: true, component: () => <SearchBar /> }, { component: NoMatch } ]; class App extends Component { render() { return ( <div> <BrowserRouter> <div> <MenuBar /> <Switch> {routes.map((route, index) => <Route key={index} path={route.path} exact={route.exact} component={route.component} /> )} </Switch> </div> </BrowserRouter> </div> ); } } I noticed that if I remove the Switch component then everything will render just fine, but then the NoMatch component will also render to the route. Question: Why can't I render multiple components on the same path inside of Switch? How can I fix this problem when I need to render both the Heading and SearchBar components on the "/" path? A: I think you are missing how Switch works. Switch will start looking for a matching Route; as soon as it finds a match, it stops looking for further matches and renders that particular component. Always define unique routes; if you want to render multiple components for the same path, then wrap them all in a div. Write it like this: const routes = [ { path: "/", exact: true, component: () => <div> <Heading /> <SearchBar /> </div> }, { component: NoMatch } ];
{ "pile_set_name": "StackExchange" }
Q: TextBlock does not grey out I need to grey out a text block in my datagrid when "CanEdit" is false and I can't figure out why my code doesn't work... What I tried: <DataGridTextColumn Views:FilterDataGridColumn.CanFilter="True" MinWidth="80" IsReadOnly="True" Header="Alarms" Binding="{Binding Path=AlarmName}"> <DataGridTextColumn.ElementStyle> <Style TargetType="TextBlock" > <Setter Property="IsEnabled" Value="{Binding Path=CanEdit}"/> </Style> </DataGridTextColumn.ElementStyle> A: TextBlock is not an interactive element, it can't be edited like a TextBox, so disabling it doesn't change its appearance by default. You can just set its font color to gray if that's what you are trying to achieve.
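One way to sketch that is a DataTrigger that greys the text when CanEdit is false (Gray is just an example brush; the binding path matches the CanEdit property above):

<DataGridTextColumn.ElementStyle>
    <Style TargetType="TextBlock">
        <Style.Triggers>
            <DataTrigger Binding="{Binding Path=CanEdit}" Value="False">
                <Setter Property="Foreground" Value="Gray"/>
            </DataTrigger>
        </Style.Triggers>
    </Style>
</DataGridTextColumn.ElementStyle>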
{ "pile_set_name": "StackExchange" }
Q: disable other checkboxes when one with similar class is clicked I need to disable rest of the checkboxes with the same class as soon as one of them gets checked. $('.Cat .RoleChk').change(function(){ if($(this).is(':checked')){ $('.Cat .RoleChk').attr('disabled',true); $(this).attr('disabled',''); } else{ $('.Cat .RoleChk').attr('disabled',''); } }); <div class="Cat" style="width: 155px; float: left"> <p>Employee Role</p> <input class="RoleChk" name="Role" type="checkbox" value="OfficeEmployee"/>&nbsp;Office Employee<br/> <input class="RoleChk" name="Role" type="checkbox" value="Marketer"/>&nbsp;Marketer<br/> <input class="RoleChk" name="Role" type="checkbox" value="ProjectManager"/>&nbsp;Project Manager<br/> </div> <div class="Cat" style="width: 155px; float: left"> <p>Groups</p> <input class="GrpChk" name="Group" type="checkbox" value="Aviation"/>&nbsp;Aviation<br/> <input class="GrpChk" name="Group" type="checkbox" value="Electrical"/>&nbsp;Electrical<br/> <input class="GrpChk" name="Group" type="checkbox" value="Mining"/>&nbsp;Mining </div> A: How about: $(".RoleChk, .GrpChk").change(function() { this.checked ? $("." + this.className).not(this).prop("disabled", true) : $("." + this.className).not(this).prop("disabled", false); }); Demo: http://jsfiddle.net/tymeJV/96Wvq/1/ I kept the original checkbox that was checked enabled, this allows the user to uncheck and re-enable. If you want to remove this functionality, take the .not(this) out of the ternary.
{ "pile_set_name": "StackExchange" }
Q: Why does Omega Centauri have a distinct chemical signature from the rest of the Milky Way? In answering a question about the orbital path of Omega Centauri, I learned that it has a distinct chemical signature from the rest of the Milky Way. Basically, it is very rich in s-process elements, which I think are primarily produced in Asymptotic Giant Branch stars. It is not totally clear to me why that would be the case. Are AGB stars dominating metallicity in Omega Centauri, and if so, why? If not, what is the cause? A: After looking through a few papers, in particular Chemical Abundances and Kinematics in Globular Clusters and Local Group Dwarf Galaxies and Their Implications for Formation Theories of the Galactic Halo and references therein, I think I have a reasonable answer. Omega Centauri's chemical abundance seems to be most easily explained by it being an accreted dwarf spheroidal galaxy. The metallicities of its stars match up quite well with those of almost all of the Milky Way's dwarf spheroidal satellites. The reason for the dwarf spheroidal galaxies having different metallicities is thought to be due to two factors: they form stars at lower efficiency than the Milky Way, and the material for star formation (that is, gas) is more easily driven out of dwarf spheroidals by galactic winds. Chemical Abundances for 855 Giants in the Globular Cluster Omega Centauri (NGC 5139) says that in this respect, Omega Centauri differs even from dwarf spheroidals, in that Type Ia supernovae played a minimal role in enriching the stars in the Omega Centauri system. Instead, its metallicity is dominated by the injection of elements produced by Type II supernovae early in the system's history, and "pollution" caused by intermediate mass stars in the asymptotic giant branch (AGB) phase, where certain elements found preferentially in Omega Centauri can form, and then are ejected because AGB stars are unstable.
{ "pile_set_name": "StackExchange" }
Q: What is the difference between website and web application? I have read a lot of IT books but I still mix up Web application and Website. Are these two terms the same? If not, can anyone explain? A: Website is an umbrella term; you can make simple websites with nothing but HTML in Notepad that do nothing more than show pages, like an online poster for a sale. When you start talking applications, that's when you're talking about adding extra functions with additional languages like C# and JavaScript. So in a way, you can say all web apps are web sites, but not all sites are apps.
{ "pile_set_name": "StackExchange" }
Q: According to Advaita philosophy, is the Absolute Brahman formless? In Kena Upanishad following has been said about the Absolute: ‘What sight fails to see, but what sees sight— know thou That alone as Brahman, and not this that people worship here.’ As per my understanding the meaning of the above is that Brahman is formless. Because if Brahman had any form(material or spiritual) then sight would have seen it. So my question is: Is the Absolute Brahman formless? A: Yes, Brahman is formless. Astavakra Samhita: I.5 - You do not belong to the Brahmana or any other caste or to any ashrama. You are not perceived by the senses. Unattached, formless and witness of all are you. Be happy. I.17 - You are unconditioned, immutable, formless, of cool disposition, of unfathomable intelligence and unperturbed. Desire Consciousness alone. I.18 - Know that which has form to be unreal and the formless to be permanent. Through this spiritual instruction you will escape the possibility of rebirth VII.3 - In me, the boundless ocean, is the imagination of the universe. I am quite tranquil and formless. In this alone do I abide. XVIII.57 - The sense of duty, indeed, is the world of relativity. It is transcended by the wise who realizes himself as all-pervasive, formless, immutable, and untainted.
{ "pile_set_name": "StackExchange" }
Q: Android Studio - Can not open device monitor I just downloaded Android Studio and it seems to import my existing Eclipse project well. However, if I try to open "Android Device Monitor" I get the message "An error has occurred" with a reference to a log file. My log file is included underneath. I am not sure why there is a reference to "Eclipse" int it? Anyhow, all in all, I have no idea where to go from here !SESSION 2015-01-05 04:00:15.329 ----------------------------------------------- eclipse.buildId=unknown java.version=1.8.0_25 java.vendor=Oracle Corporation BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en_US Command-line arguments: -os win32 -ws win32 -arch x86_64 -data @noDefault !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:15.930 !MESSAGE Bundle reference:file:org.apache.ant_1.8.3.v201301120609/@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:15.934 !MESSAGE Bundle reference:file:org.apache.jasper.glassfish_2.2.2.v201205150955.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:15.935 !MESSAGE Bundle reference:file:org.apache.lucene.core_2.9.1.v201101211721.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:15.975 !MESSAGE Bundle reference:file:org.eclipse.help.base_3.6.101.v201302041200.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:15.975 !MESSAGE Bundle reference:file:org.eclipse.help.ui_3.5.201.v20130108-092756.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:15.975 !MESSAGE Bundle reference:file:org.eclipse.help.webapp_3.6.101.v20130116-182509.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:15.977 !MESSAGE Bundle reference:file:org.eclipse.jetty.server_8.1.3.v20120522.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:15.981 !MESSAGE Bundle reference:file:org.eclipse.platform.doc.user_4.2.2.v20130121-200410.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:16.009 !MESSAGE Bundle reference:file:org.eclipse.team.core_3.6.100.v20120524-0627.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:16.009 !MESSAGE Bundle reference:file:org.eclipse.team.ui_3.6.201.v20130125-135424.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:16.010 !MESSAGE Bundle reference:file:org.eclipse.ui.cheatsheets_3.4.200.v20120521-2344.jar@4 not found. !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:16.013 !MESSAGE Bundle reference:file:org.eclipse.ui.intro_3.4.200.v20120521-2344.jar@4 not found. !ENTRY org.eclipse.osgi 2 0 2015-01-05 04:00:17.340 !MESSAGE One or more bundles are not resolved because the following root constraints are not resolved: !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.340 !MESSAGE Bundle initial@reference:file:plugins/org.eclipse.equinox.jsp.jasper.registry_1.0.300.v20120912-130548.jar was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.340 !MESSAGE Bundle initial@reference:file:plugins/org.eclipse.equinox.http.jetty_3.0.1.v20121109-203239.jar was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.0.0,9.0.0). 
!SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing imported package org.eclipse.jetty.server.bio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.0.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.340 !MESSAGE Bundle initial@reference:file:plugins/org.apache.lucene.analysis_2.9.1.v201101211721.jar was not resolved. !SUBENTRY 2 org.apache.lucene.analysis 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing required bundle org.apache.lucene.core_[2.9.1,3.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.340 !MESSAGE Bundle initial@reference:file:plugins/org.eclipse.ltk.ui.refactoring_3.7.0.v20120523-1543.jar was not resolved. !SUBENTRY 2 org.eclipse.ltk.ui.refactoring 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing required bundle org.eclipse.team.ui_[3.4.100,4.0.0). !SUBENTRY 2 org.eclipse.ltk.ui.refactoring 2 0 2015-01-05 04:00:17.340 !MESSAGE Missing required bundle org.eclipse.team.core_[3.4.100,4.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.340 !MESSAGE Bundle initial@reference:file:plugins/org.eclipse.platform_4.2.2.v201302041200/ was not resolved. !SUBENTRY 2 org.eclipse.platform 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=JavaSE)(version=1.4))(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 2 org.eclipse.platform 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing required bundle org.eclipse.ui.intro_[3.2.0,4.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.342 !MESSAGE Bundle initial@reference:file:plugins/org.apache.lucene_2.9.1.v201101211721.jar was not resolved. !SUBENTRY 2 org.apache.lucene 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing required bundle org.apache.lucene.core_[2.9.1,3.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.342 !MESSAGE Bundle initial@reference:file:plugins/org.eclipse.ui.intro.universal_3.2.600.v20120912-155524/ was not resolved. !SUBENTRY 2 org.eclipse.ui.intro.universal 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing required bundle org.eclipse.ui.intro_[3.4.0,4.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.342 !MESSAGE Bundle initial@reference:file:plugins/org.eclipse.equinox.jsp.jasper_1.0.400.v20120912-130548.jar was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing imported package org.apache.jasper.servlet_[0.0.0,6.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.342 !MESSAGE Bundle initial@reference:file:plugins/org.eclipse.jetty.security_8.1.3.v20120522.jar was not resolved. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.security 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.1.0,9.0.0). 
!SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.342 !MESSAGE Bundle initial@reference:file:plugins/org.eclipse.jetty.servlet_8.1.3.v20120522.jar was not resolved. !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.342 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !ENTRY org.eclipse.osgi 2 0 2015-01-05 04:00:17.398 !MESSAGE The following is a complete list of bundles which are not resolved, see the prior log entry for the root cause if it exists: !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.398 !MESSAGE Bundle org.apache.lucene_2.9.1.v201101211721 [25] was not resolved. !SUBENTRY 2 org.apache.lucene 2 0 2015-01-05 04:00:17.398 !MESSAGE Missing required bundle org.apache.lucene.core_[2.9.1,3.0.0). !SUBENTRY 2 org.apache.lucene 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing optionally required bundle org.apache.lucene.analysis_[2.9.1,3.0.0). !SUBENTRY 2 org.apache.lucene 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing optionally required bundle org.apache.lucene.highlighter_[2.9.1,3.0.0). !SUBENTRY 2 org.apache.lucene 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing optionally required bundle org.apache.lucene.memory_[2.9.1,3.0.0). !SUBENTRY 2 org.apache.lucene 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing optionally required bundle org.apache.lucene.queries_[2.9.1,3.0.0). !SUBENTRY 2 org.apache.lucene 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing optionally required bundle org.apache.lucene.snowball_[2.9.1,3.0.0). !SUBENTRY 2 org.apache.lucene 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing optionally required bundle org.apache.lucene.spellchecker_[2.9.1,3.0.0). !SUBENTRY 2 org.apache.lucene 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing optionally required bundle org.apache.lucene.misc_[2.9.1,3.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.399 !MESSAGE Bundle org.apache.lucene.analysis_2.9.1.v201101211721 [26] was not resolved. !SUBENTRY 2 org.apache.lucene.analysis 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing required bundle org.apache.lucene.core_[2.9.1,3.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.399 !MESSAGE Bundle org.eclipse.equinox.http.jetty_3.0.1.v20121109-203239 [90] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing imported package org.eclipse.jetty.server.bio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.0.0,9.0.0). 
!SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2015-01-05 04:00:17.399 !MESSAGE Missing imported package org.eclipse.jetty.servlet_[8.0.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.400 !MESSAGE Bundle org.eclipse.equinox.jsp.jasper_1.0.400.v20120912-130548 [93] was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.apache.jasper.servlet_[0.0.0,6.0.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.400 !MESSAGE Bundle org.eclipse.equinox.jsp.jasper.registry_1.0.300.v20120912-130548 [94] was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.eclipse.equinox.jsp.jasper_0.0.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.400 !MESSAGE Bundle org.eclipse.jetty.security_8.1.3.v20120522 [137] was not resolved. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.security 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.400 !MESSAGE Bundle org.eclipse.jetty.servlet_8.1.3.v20120522 [138] was not resolved. !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing optionally imported package org.eclipse.jetty.jmx_8.0.0. !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.eclipse.jetty.security_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.400 !MESSAGE Bundle org.eclipse.ltk.ui.refactoring_3.7.0.v20120523-1543 [146] was not resolved. !SUBENTRY 2 org.eclipse.ltk.ui.refactoring 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing required bundle org.eclipse.team.core_[3.4.100,4.0.0). !SUBENTRY 2 org.eclipse.ltk.ui.refactoring 2 0 2015-01-05 04:00:17.400 !MESSAGE Missing required bundle org.eclipse.team.ui_[3.4.100,4.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.401 !MESSAGE Bundle org.eclipse.platform_4.2.2.v201302041200 [149] was not resolved. 
!SUBENTRY 2 org.eclipse.platform 2 0 2015-01-05 04:00:17.401 !MESSAGE Missing required bundle org.eclipse.ui.intro_[3.2.0,4.0.0). !SUBENTRY 2 org.eclipse.platform 2 0 2015-01-05 04:00:17.401 !MESSAGE Missing optionally required bundle org.eclipse.ui.cheatsheets_[3.2.0,4.0.0). !SUBENTRY 2 org.eclipse.platform 2 0 2015-01-05 04:00:17.401 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=JavaSE)(version=1.4))(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.401 !MESSAGE Bundle org.eclipse.search_3.8.0.v20120523-1540 [151] was not resolved. !SUBENTRY 2 org.eclipse.search 2 0 2015-01-05 04:00:17.401 !MESSAGE Missing required bundle org.eclipse.ltk.ui.refactoring_[3.5.0,4.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.401 !MESSAGE Bundle org.eclipse.ui.intro.universal_3.2.600.v20120912-155524 [163] was not resolved. !SUBENTRY 2 org.eclipse.ui.intro.universal 2 0 2015-01-05 04:00:17.401 !MESSAGE Missing required bundle org.eclipse.ui.intro_[3.4.0,4.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2015-01-05 04:00:17.401 !MESSAGE Bundle org.eclipse.ui.navigator.resources_3.4.400.v20120705-114010 [165] was not resolved. !SUBENTRY 2 org.eclipse.ui.navigator.resources 2 0 2015-01-05 04:00:17.401 !MESSAGE Missing required bundle org.eclipse.ltk.ui.refactoring_[3.5.0,4.0.0). !ENTRY org.eclipse.osgi 4 0 2015-01-05 04:00:17.403 !MESSAGE Application error !STACK 1 java.io.IOException: The folder "C:\Users\My%20Example%Name.android\monitor-workspace.metadata" is read-only. at org.eclipse.core.runtime.internal.adaptor.BasicLocation.lock(BasicLocation.java:206) at org.eclipse.core.runtime.internal.adaptor.BasicLocation.set(BasicLocation.java:164) at org.eclipse.core.runtime.internal.adaptor.BasicLocation.set(BasicLocation.java:137) at com.android.ide.eclipse.monitor.MonitorApplication.start(MonitorApplication.java:53) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584) at org.eclipse.equinox.launcher.Main.run(Main.java:1438) A: The solution posted in the following link worked for me. I added the answer below, but you can take a look at the link for more information. Also, I am using Windows 8 and it worked as well. This is what I had to do to resolve it in Windows 7 64 bit: Run the task manager End the process for monitor Also end the java processes. Just doing monitor won't do the trick. Relaunch the monitor. Now it runs.
{ "pile_set_name": "StackExchange" }
Q: How to convert a string to an integer in a socket program? Here is a snippet of the code: def CriarServer(self): Host = self.Txt1.get() Port = self.Txt2.get() sockobj = socket(AF_INET, SOCK_STREAM) # Error sockobj.bind((Host, Port)) sockobj.listen(5) while True: data = conexão.recv(1024) if not data: break conexão.close() Since I am working in a Tkinter program, I asked Python to take the text from two entry fields, one for the port and another for the host, but when you get the text from a form it comes as a string, and socket bind only accepts integers for that assignment. How do I convert the string to an int? Error: sockobj.bind(int(Host, Port)) TypeError: 'str' object cannot be interpreted as an integer. A: I presume you need Port to be an integer, so do: Port = int(self.Txt2.get()) With int() you have to be sure the input will be numeric, otherwise it will raise ValueError: invalid literal for int() with base 10:... You can make sure it "will be an integer" and adapt it to your code like this: Port = None while not isinstance(Port, int): try: Port = int(input('number')) except ValueError as err: print('invalid number') The error you added in a recent edit, after this answer which was about a different issue, happens because you are trying to convert two variables to an integer at once, int(Host, Port). That cannot work, because the bind method takes as its argument a tuple (host, port) in which the host is a string and the port is an integer, so you only have to do: sockobj.bind((Host, Port)) where Port will be an integer, converted the way I showed above, Port = int(self.Txt2.get())
{ "pile_set_name": "StackExchange" }
Q: Different C++ Redistributable DLL's for VS 2013 and VS 2015 I built my application using VS 2013, and delivered two DLLs: msvcp120.dll msvcr120.dll Building the same application using VS 2015, instead we need: msvcp140.dll vcruntime140.dll Is vcruntime140.dll replacing the former msvcp120.dll? A: Yes. Visual Studio usually breaks binary compatibility with older releases when it gets a major version update. The one notable exception is the transition from VS2015(14.x) to VS2017(15.x), which did not break binary compatibility. For all other releases, when you change the version of Visual Studio, you'll need to change which Runtime Redistributables get installed onto the target computer. EDIT: Per Christopher's observation: Don't manually install the .DLL files onto the target computer. Download the Redistributable Installer from Microsoft, and ship that with your program, with instructions (or an installer) that installs that first. This link goes to the 2015 version, but you should grab whichever version corresponds to the specific version of visual studio that you are using.
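For instance, an installer or setup script can chain the matching redistributable before the application itself; recent packages support quiet installation (the file name and flags below are for the VS 2015 x64 package as documented by Microsoft, so double-check against the exact installer you ship):

vc_redist.x64.exe /install /quiet /norestart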
{ "pile_set_name": "StackExchange" }
Q: Folder tree looking forest that needs to align at the top I wish to top align a horizontal folder structure type of tree. However, top aligning does not produce a good result unfortunately. I have taken some code from a previous post (Making a (simple) directory tree) to generate a folder structure type of tree. It looks perfectly aligned at the bottom as shown here: When I wish to topalign using either begin draw/.code={\begin{tikzpicture}[baseline=(current bounding box.north)]} or top align using \begin{adjustbox}{valign=t} I get: Full code: %% Compile and read me! \documentclass[a4paper,12pt]{article} \usepackage{forest} \newcommand{\fff}[1]{%\begin{adjustbox}{valign=t} \begin{forest} for tree={ font=\ttfamily, s sep=.5em, inner sep=1, grow'=0, child anchor=west, parent anchor=south, anchor=west, % node distance=1.2cm, calign=first, % align=top, edge path={ \noexpand\path [draw, \forestoption{edge}] (!u.south west) +(7.5pt,0) |- node[fill,inner sep=1.25pt] {} (.child anchor)\forestoption{edge label}; }, before typesetting nodes={ if n=1 {insert before={[,phantom]}} {} }, fit=band, before computing xy={l=15pt}, begin draw/.code={\begin{tikzpicture}[baseline=(current bounding box.north)]} } #1\end{forest} } \begin{document} \begin{figure} \centering % \begin{tabular}{ccc} % \fff{ [Forward ]}& % \fff{ [repeat [Forward ] ]}& % \fff{ [if \textit{PathAhead} [Forward] [TurnLeft] ]} % \end{tabular} \fff{ [Forward ]} \fff{ [repeat [Forward ] ]} \fff{ [if \textit{PathAhead} [Forward] [TurnLeft] ] } \caption{Partial correct programs} \label{tree:partial-correct} \end{figure} \end{document} A: I would do something much simpler, taking advantage of current Forest's edges library and avoiding abuse of font. Also, it is less to type. I wouldn't, personally, bother creating \fff unless you have tens to do, at least, because it is much clearer not to use it and hardly more typing. However, you can, of course, if you wish. Note that your original definition introduces spaces. I'm not sure if that's wanted or not, but I've omitted them here. \documentclass[]{standalone} \usepackage[edges]{forest} \forestset{ fff/.style={ for tree={folder, grow'=0, delay={+content=\strut}, edge label={node [midway, inner sep=1.25pt, fill] {}}, font=\ttfamily}, baseline=t } } \newcommand\fff[1]{\Forest{fff#1}}% personally, I wouldn't bother with this \begin{document} \Forest{fff[Forward]} \Forest{fff[repeat[Forward]]} \Forest{fff[if \textit{PathAhead} [Forward][TurnLeft]]} % if you must \fff{ [Forward ]} \fff{ [repeat [Forward ] ]} \fff{ [if \textit{PathAhead} [Forward] [TurnLeft] ]} \end{document} Double trouble: A: Adding a \strut to the font does remedy the offset if all your contents are of less or equal height and depth than a \strut: %% Compile and read me! 
\documentclass[a4paper,12pt]{article} \usepackage{forest} \newcommand{\fff}[1]{%\begin{adjustbox}{valign=t} \begin{forest} for tree={ font=\strut\ttfamily, s sep=.5em, inner sep=1, grow'=0, child anchor=west, parent anchor=south, anchor=west, % node distance=1.2cm, calign=first, % align=top, edge path={ \noexpand\path [draw, \forestoption{edge}] (!u.south west) +(7.5pt,0) |- node[fill,inner sep=1.25pt] {} (.child anchor)\forestoption{edge label}; }, before typesetting nodes={ if n=1 {insert before={[,phantom]}} {} }, fit=band, before computing xy={l=15pt}, begin draw/.code={\begin{tikzpicture}[baseline=(current bounding box.north)]} } #1\end{forest} } \begin{document} \begin{figure} \centering % \begin{tabular}{ccc} % \fff{ [Forward ]}& % \fff{ [repeat [Forward ] ]}& % \fff{ [if \textit{PathAhead} [Forward] [TurnLeft] ]} % \end{tabular} \fff{ [Forward ]} \fff{ [repeat [Forward ] ]} \fff{ [if \textit{PathAhead} [Forward] [TurnLeft] ] } \caption{Partial correct programs} \label{tree:partial-correct} \end{figure} \end{document}
{ "pile_set_name": "StackExchange" }
Q: How to put two dimensional array into mysql_query in php The question is simple. What I have and what's the problem? I have a two-dimensional array $someArray[][]. In the first bracket I could put "subject" or "date". The second one goes from 1 to 4 (just an example - $someArray['date'][0]) Now when I try to get some data from the database with mysql_query() I have some problems. I am trying to use this two-dimensional array in the WHERE part of the query. Examples of what works and what doesn't: $result = mysql_query("SELECT some from table where date='$someArray[date][0]' AND subject='$someArray[subject][0]') or die(mysql_error()); When I use this, it doesn't return anything. But when I first assign those values to new variables: $variable1 = $someArray['date'][0]; $variable2 = $someArray['subject'][0]; and then use them in the query $result = mysql_query("SELECT some from table where date='$variable1' AND subject='$variable2') or die(mysql_error()); It works like a charm. Question: What's wrong with my first query; am I writing those arrays wrong? I get no errors. I tried to put single apostrophes inside the [] brackets in the MySQL query, but then I do get errors. Also, it works without them if I use an array like $someotherArray[somedata] in the query. A: Array interpolation only works for a single level of subscripting. For multidimensional arrays, you need to use {...} wrappers: $result = mysql_query("SELECT some from table where date='{$someArray['date'][0]}' AND subject='{$someArray['subject'][0]}') or die(mysql_error());
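If the interpolation syntax feels fragile, plain string concatenation avoids the issue entirely (same query as above, just built explicitly; note the query string is properly closed here):

$result = mysql_query("SELECT some from table where date='" . $someArray['date'][0] . "' AND subject='" . $someArray['subject'][0] . "'") or die(mysql_error());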
{ "pile_set_name": "StackExchange" }