Q: What should be included in student portfolios for CS? Should CS students be keeping a portfolio, and if so, what should it look like? What would you, as a potential employer or college recruiter, like to see in/on a potential employee's or student's portfolio? Are there ways other than portfolios to showcase a student's work?

Many high schools, mine included, are encouraging, if not requiring, students to build an online portfolio showcasing their achievements and growth through their high school career, and not just in the arts. Many of the students at my school have built websites for themselves using free services such as Weebly and are adding pages and tabs for every year and every class. Each page or tab displays work completed for the class (sometimes but not always their best): an essay, a math test they did well on, etc., with their reflections about what they learned, what they could have done better, how they worked in groups, etc.

What is the best platform for this showcase? Is Github good enough, or would you like to see something a little more formal, like a Weebly page (with embedded code or links to code)?

A: The answer is yes, but be careful. As far as employment goes, no recruiter is going to want to go through a student's completed homework. Professional portfolios do NOT include practice. However, these students are not professionals. Keeping a website to showcase their practice and skills is good; it's good practice. When these students graduate, the skills they learned in developing a practice portfolio will help them build a professional portfolio.

To clarify: professional portfolios can still contain neat little projects, but they must be built with a use, or even as a proof-of-concept. Certain college projects may qualify here, but pretty much all coding homework will not.

Here are a few random tips:

Practice portfolios might help with college recruitment.
If a student really feels the need to include a practice portfolio as part of a professional resume or job application, make sure it is completely separate from real-world projects and labeled as practice.

Github is normally sufficient. Designers and those who are interested in visual careers would benefit greatly from a website portfolio with links to a Github (all programmers should demonstrate they can use source control, though). Having a website portfolio is also a good place to show off soft skills in writing and communication.

Make sure to get good information to help the students. This is serious stuff. A bad portfolio is worse than no portfolio.

Include the project and technical details for portfolio items. Don't include tests, reflections, or "what I could have done better". These become useless as soon as the student receives a final grade. Your goal is to prepare them for the real world, not make them type a lot of words no one will read.

Portfolios are for projects, not classes. Relevant classes belong in a resume, in tiny font. No one is ever going to read a paragraph about what the student "thought of the class", or even "what I learned". Portfolios are demonstrations, not promises.

The other answer by Buffy suggests it will be good for the school to showcase student work. Okay, cool, but forget about that. Portfolios are meant to help the students convey their skills to employers. They are not trophies, so don't give yourself a conflict of interest. Similarly, the permanence of portfolios does not concern the school. You're not a portfolio factory. Keep example portfolios if you must, but ask the student and make a copy of it to demonstrate, privately, to future students.

I'm going to add to my answer, as you seem to have a lot of misconceptions about what a portfolio is.
A portfolio is a concise collection of tangible projects created by, or in part by, the author of the portfolio. Classes, tests, homework, personal reflections, etc. do not belong in a portfolio. Not even a practice portfolio. Do NOT do this, please. This is the opposite of the real world. No one wants to read about someone's entire educational experience. Ever. Not to say it isn't useful information, but that's what interviews are for. Interviews go over the candidate's experiences. Please remember: resumes are promises of your skills. Portfolios are demonstrations of skill. Interviews are insights into experiences and live demonstrations of skill.
{ "pile_set_name": "StackExchange" }
Q: Non-portable path to file yoga-prefix.pch, specified path differs in case. Corrupted file path with duplicated characters

I've created a project with react-native-cli and installed some pods via cocoapods, but every time I try building the project in Xcode it gives me this strange error:

Non-portable path to file /UUsersCcrysilDDesktopaapp-demoiiosPPodsTTarget Support Filesyyogayyoga-prefix.pch; specified path differs in case from the file name on disk

For some reason the path duplicates the first character of each file/folder name and removes all the slashes. I can find this file without problems in the Finder, and I haven't touched any of the path variables automatically set by cocoapods, so I'm not sure what could be causing this. I've also tried deleting all the pods and reinstalling them, and deleting the whole /ios folder and rebuilding it, but nothing seems to work. I'm using Xcode 9.2, react-native 0.53.3 and cocoapods 1.4.0, and here's my Podfile in case it might help: https://nofile.io/f/6oCNuZ6HEYb/Podfile

A: For me, I had to remove and reinstall both node_modules and Pods. Always reinstall node_modules first, as the Pods rely on them.

rm -Rf node_modules
npm install
cd ios
rm -Rf Pods
pod install
Q: How can I get a list of all packages in a repository section from the command line?

In Synaptic, one can list packages by section. For example, in the image below all packages of the "Amateur Radio (universe)" section are listed. How can I get such a list (edit: with package descriptions) at the command line? I need a raw list; a terminal application like aptitude will not do.

A: Make sure the dctrl-tools package is installed. It provides useful commands for searching the apt and dpkg package lists. To get a full description of all packages from a particular section that are installable with apt, run

grep-aptavail -F Section hamradio

This will show the full package metadata for every package in the hamradio section. If all you want to see are the package names, run

grep-aptavail -n -F Section -s Package hamradio

If your system is set up for multiarch, the same package may show up more than once in this listing if it is built for more than one architecture. So to refine this further, use either

grep-aptavail -n -F Section -s Package hamradio | sort | uniq

or

grep-aptavail -n -F Section -s Package hamradio | sort -u

to sort the package list and remove duplicate packages with the same name. Note that you will have to use the actual name of the section, which is different from the "human-readable" name that Synaptic shows in its GUI. For example, the searches above use the section name hamradio instead of the string "Amateur Radio" shown in Synaptic. See the man page for grep-aptavail for a full description of all options and some examples.

A: Well, though you say you don't want to use aptitude because of the output, you need to know that you can modify it to get what you like:

aptitude -F'|%p|%d|' search '?section(hamradio)'

The trick is in the -F switch, which modifies the output format: %p means the package name (this also prints a package once per architecture when it is available for several, i.e. amd64 vs i386), and %d outputs the description.
You can personalize the search pattern even more, for example to list only packages that are not installed:

aptitude -F'|%p|%d|' search '?section(hamradio) !~i'

where ~i means installed and the ! is a not, so it reads as "not (!) installed (~i)". Or, if you only want the ones that are available for your architecture:

aptitude -F'|%p|%d|' search '?section(hamradio) ~r native'

~r being ?architecture(), which matches the architecture of the package, and native, which lists only the ones that have the same architecture as the system (the equivalent of dpkg --print-architecture). The previous line can therefore be written even more concisely as:

aptitude -F'|%p|%d|' search '~s hamradio ~r native'

A: More fields on a single line, with an arbitrary separator. The following one-liner will print all unique package names of a repository section, together with their descriptions, each on a single line. All fields are separated by a pipe character, i.e. ready for conversion into a Markdown pipe table. The resulting table can be found on my web site.

grep-aptavail -n -s Package,Description -F Section hamradio |paste -sd '||\n' |sed 's:^:|:' |sort -u
Q: login modal for django? We can decorate a view with login_required so that an unauthenticated user is redirected to a login page. I'd like to create a decorator that shows a modal in the current page when an unauthenticated user accesses a view. I guess I can handle the case when the request is made with ajax: I could use $.ajaxSetup to handle 401 errors and show the modal. But how do I show a modal when a regular request is made?

Edit: A similar question was asked, "Django authentication and Ajax - URLs that require login", but it only covers ajax requests.

A: Hmm, I would offer a different approach. Modals and such pop up when JavaScript does something, and in order for JavaScript to "do something" there has to be some kind of marker added to the HTML. I suggest you use a context processor for it.

1) Create a context processor whose only purpose is to add a marker at the end of the file. The relevant template part would be something like

<script>var show_modal = {{ SHOW_MODAL }}</script>

where the value of SHOW_MODAL comes from the context processor.

2) Add your JS code to show the modal if the value of the show_modal variable indicates you should.
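A minimal sketch of step 1 (the function name and the string flag values are my assumptions, not Django API; any plain callable taking the request works as a context processor):

```python
def show_modal_flag(request):
    # Hypothetical context processor: exposes the marker the template
    # embeds as `var show_modal = {{ SHOW_MODAL }}`.
    # "true"/"false" are strings so they drop straight into the JS source.
    is_anonymous = not getattr(request.user, "is_authenticated", False)
    return {"SHOW_MODAL": "true" if is_anonymous else "false"}
```

It would then be registered under TEMPLATES['OPTIONS']['context_processors'] in settings.py.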
Q: EKReminder with RecurrenceRules

I was working on fetching reminders. I have no problem fetching the title, last modified date, notes, etc., but I do have a problem with recurrenceRules. Here is my code:

print(get_reminder_detail.recurrenceRules)

And when I ran the app, it printed:

[EKRecurrenceRule <0x28051c1e0> RRULE FREQ=WEEKLY;INTERVAL=1;UNTIL=20200815T061923Z]

As I see it, there are two things I am not sure how to pull information from. First, how can I get FREQ and INTERVAL as strings? Second, how can I pull the UNTIL into DateComponents?

A: Do not look at the string that is used to print out the recurrence rule in the console. Look at the properties of the recurrence rule itself. https://developer.apple.com/documentation/eventkit/ekrecurrencerule Everything you need is right there.
Q: React-Native - Invariant Violation: Maximum update depth exceeded

I have this error, and I didn't have it before (here is the image of the error):

Invariant Violation: Maximum update depth exceeded. This can happen when a component repeatedly calls setState inside componentWillUpdate or componentDidUpdate. React limits the number of nested updates to prevent infinite loops. This error is located at: in Connect (at LoginForm.js:75)

render() { const { inputStyle, containerStylePass, containerStyleIdent, barStyle, textInputStyle } = styles; return ( <View> <View>{/* all the password form*/} <View style={containerStylePass}> icon <Text style={inputStyle}>Mot de passe</Text> </View> <TextInput secureTextEntry autoCorrect={false} style={textInputStyle} /> <View style={barStyle} /> </View> <View> <Connect /> </View> </View>

I don't know why there is an error. Can anyone help? Here is my code:

import React, { Component } from 'react'; import { Text, TouchableOpacity } from 'react-native'; import LinearGradient from 'react-native-linear-gradient'; class Connect extends Component { render() { return ( <TouchableOpacity onPress={this.setState({ butPressed: true })}> <LinearGradient colors={['#56EDFF', '#42F4A0']} start={{ x: 0.0, y: 1.0 }} end={{ x: 1.0, y: 1.0 }} > <Text style={textStyle}> Se connecter </Text>; </LinearGradient> </TouchableOpacity> ); } }

A: Try:

<TouchableOpacity onPress={() => this.setState({ butPressed: true })}>

instead of

<TouchableOpacity onPress={this.setState({ butPressed: true })}>

Assigning this.setState(...) to onPress without an arrow function causes the component to render over and over again: setState makes the component render again, and that render runs the onPress = {...} assignment once more. Using an arrow function instead assigns a function, so the setState doesn't actually happen until the function is invoked (only when onPress fires).
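The difference can be shown outside React with plain JavaScript. This is just a sketch: `fakeRender` stands in for React storing the onPress prop, and `setState` is a plain counter, not the real React API.

```javascript
let stateUpdates = 0;
const setState = () => { stateUpdates += 1; };

// Stand-in for React receiving the onPress prop: it only *stores*
// the handler; nothing is invoked during render.
function fakeRender(onPress) {
  return { press: onPress };
}

// Buggy version: this *calls* setState while the props are being built.
// In a real component that schedules another render, which calls it
// again, and so on: "Maximum update depth exceeded".
const buggy = fakeRender(setState());        // stateUpdates is already 1 here

// Fixed version: the arrow function is stored, not called.
const fixed = fakeRender(() => setState());  // nothing extra has run yet
```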
Q: Full text search in Google's Cloud Spanner Does Cloud Spanner support the CONTAINS method or is there a better way to do a full text search on a string? In the docs I've found REGEXP_CONTAINS; is this an alternative? A: Rather than CONTAINS, the operator you are looking for is LIKE. REGEXP_CONTAINS is also definitely a method to achieve text search in Cloud Spanner. It allows you to specify regular expressions (supported by the re2 library). You may also want to consider STARTS_WITH and ENDS_WITH if you only want to do prefix or suffix text matching, or STRPOS for simple text matching anywhere.
Q: Python / Pandas dict to find closest match, then end loop

I am trying to apply this logic to the following DF. I have a df as follows:

import pandas as pd
import numpy as np

df = pd.read_csv('subjects.csv')

Subjects
Media
Information Media
Digital Media

I then try to map my subjects to a dict to output a validated corrected subject:

d = {'Media' : 'Film & Media', 'Information' : 'ICT', 'Digital' : 'ICT'}
df['subject_corrected'] = df['Subjects'].apply(lambda x: ', '.join([d[i] for i in d if i in x]))

Subjects            subject_corrected
Media               Film & Media
Information Media   Film & Media, ICT
Digital Media       Film & Media, ICT

Now, this loops through my DF giving me all matches, where I want it to find the closest match and exit the loop, so Digital Media would be ICT and not Media. I have tried the following, but it hasn't really boded well for me!

for k,v in d.items():
    if k in df['Subjects']:
        df['subject_corrected'] = d.values()

The output I am after is:

Subjects            subject_corrected
Media               Film & Media
Information Media   ICT
Digital Media       ICT

I've had a look at quite a few similar posts but couldn't work this one out. Am I going about this the wrong way? Shall I pass this into two lists/arrays and use an if statement to loop through any matches? Also, how is a dict different from a 2D array? Any help is appreciated.

A: You can use:

df['Subjects'].apply(lambda x: ', '.join([d[i] for i in d if i in x])).str.split(', ').str[-1]

Output:

   Subjects           subject_corrected
0  Media              Film & Media
1  Information Media  ICT
2  Digital Media      ICT

You can directly achieve the output via the line below as well, which simply takes the last element from the list:

df['Subjects'].apply(lambda x: [d[i] for i in d if i in x][-1])
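The accepted approach boils down to "collect every dictionary value whose key occurs in the string, then keep the last one". A plain-Python sketch of that rule (the function name is mine; it relies on dict insertion order, so Python 3.7+):

```python
d = {'Media': 'Film & Media', 'Information': 'ICT', 'Digital': 'ICT'}

def closest_subject(subject, mapping):
    # Collect every mapped value whose key appears in the subject string,
    # then keep the last match, mirroring .str.split(', ').str[-1].
    matches = [value for key, value in mapping.items() if key in subject]
    return matches[-1] if matches else None
```

Because 'Media' comes before 'Digital' in the dict, taking the last match prefers the more specific key.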
Q: How to get around MAX_JOIN_SIZE with CakePHP v3.x

I am getting the following MAX_JOIN_SIZE error when I deploy my CakePHP v3 app to shared hosting:

Error: SQLSTATE[42000]: Syntax error or access violation: 1104 The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET MAX_JOIN_SIZE=# if the SELECT is okay

This seems like a common problem, but can anyone tell me how to get around this when using CakePHP v3.x? I have tried putting the following code in app/src/Model/AppModel.php:

function beforeFind() { $this->query('SET SQL_BIG_SELECTS=1'); }

but this doesn't seem to have any effect.

A: CakePHP 3.0 doesn't use an AppModel anymore; you really should read the migration guide. Implement it via an event listener on Model.beforeFind or as a behavior. Furthermore, setting SQL_BIG_SELECTS to 1 seems like bad practice to me. You should not work around the symptoms but fix the cause: figure out what's wrong with your query. See "MySQL - SQL_BIG_SELECTS", which has a good answer.
Q: jQuery won't recognize non-table cells as descendants in table rows?

Working on a tooltip for a table row, I put a span in a table row with a class of "tip" and then tried to select it through find('.tip'), but this wouldn't work:

$(".tip_trigger").hover(function () {
    var tip = $(this).find('.tip');
    tip.show();
});

When I put the .tip class on a td, it worked fine, showing the tooltip.

A: If you use invalid elements/structure, you'll get unpredictable results :) A <span> cannot be a child of a <tr>; you should lay it out a different way, for example by putting that <span> inside a <td> in the row.
Q: Segmentation fault on Ubuntu while running fine on Windows, programmed in C

I am writing a small oscillatory motion program that should run on Ubuntu and Windows. After completing part of the program (the major part) I tested it on Windows, where it works fine (working with Pelles C). Then I copied my code to the Ubuntu computer, running in a virtual machine (VMware Workstation). It compiled fine using GCC but crashes with a "Segmentation fault (core dumped)" error.

Output before crashing:

Simulation Starting...
Creating Containers.....
Done!
Initializing Containers.....
Initializing Container
Done!...
Initializing Container
Done!...
Initializing Container
Segmentation fault (core dumped)
--------------------------------

Part of zDA.c, the function required to initialize vectors before using them in the simulation:

int InitializeArray(DATA *item)
{
    printf("\nInitializing Container");
    item->num_allocated = 0;
    item->num_elements = 0;
    item->the_array = NULL;
    printf("\nDone!...");
    if (!item->the_array) {
        return -1;
    }
    return 0;
}

While calling the function, a snippet from zSim.c:

int Simulate(SIMOPT so, PRINTOPT po)
{
    printf("\nSimulation Starting...\nCreating Containers...");
    //Create Data Objects
    //vectors
    DATA Theta, Omega, T;
    DATA *pTheta = &Theta;
    DATA *pOmega = &Omega;
    DATA *pT = &T;
    //Initial Values
    int method = so.method;
    float g = so.g;
    float l = so.l;
    float itheta = so.theta;
    float iomega = so.omega;
    float dt = so.dt;
    float df = so.df;
    float dw = so.dw;
    float q = so.q;
    float maxtime = so.maxtime;
    //backend variables
    float i = 0;                    //Simulation Counter
    int k = 0;                      //Counter to Count array size
    int kmax = 0;
    float th, thi, om, omi, t, ti;  //Simulation variables
    int gt, go, pl, mat;
    printf("..");
    printf("\nDone!\nInitializing Containers...");
    printf("..");
    //Initialize Containers
    InitializeArray(pTheta);
    InitializeArray(pOmega);
    InitializeArray(pT);
    //**FOR SOME REASON, it stops working here -_-
    printf("DONE! NIT");
    //It worked fine on Windows; there are no dependencies.

    th = pTheta->the_array[0];
    om = pOmega->the_array[0];
    t = pT->the_array[0];

I don't get why it worked on Windows and not on Ubuntu. Either the Pelles compiler fixed something for me, or my virtual machine is going crazy. I mean... it already initialized 2 out of 3 arrays, what's wrong with the third :)?

A: The initialization looks like it would not crash, but the code immediately following it looks suspect. The array has not been initialized (well... it is initialized, but it is initialized to NULL), yet the following code accesses it:

th = pTheta->the_array[0];
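The usual fix is to never read `the_array[0]` until at least one element has been appended. A minimal append sketch (the function name `AppendToArray` is my invention; the struct layout follows the fields used in the question):

```c
#include <stdlib.h>
#include <assert.h>

typedef struct {
    float *the_array;
    int num_elements;
    int num_allocated;
} DATA;

/* Grow the buffer when needed and store one value at the end.
   Returns 0 on success, -1 on allocation failure. */
int AppendToArray(DATA *item, float value)
{
    if (item->num_elements >= item->num_allocated) {
        int grown = item->num_allocated ? item->num_allocated * 2 : 8;
        float *p = realloc(item->the_array, grown * sizeof *p);
        if (!p)
            return -1;
        item->the_array = p;
        item->num_allocated = grown;
    }
    item->the_array[item->num_elements++] = value;
    return 0;
}
```

Only after a successful append is an access such as `pTheta->the_array[0]` defined behavior; reading through the NULL pointer just happened not to crash under Pelles C.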
Q: SSIS Execute SQL Task parameter mapping

I'm trying to execute a SQL script using the Execute SQL Task in SSIS. My script just inserts a bunch of name-value pairs into a table. For example:

insert into mytable (name, value) values (?, 'value1'), (?, 'value2')

Now, I want to map a variable defined in SSIS to the parameters in the statement above. I tried defining a scalar variable, but I guess the SQL task doesn't like that. Oh, and all the name parameters in the insert statement resolve to a single variable. For example, I want:

insert into mytable (name, value) values ('name1', 'value1'), ('name1', 'value2')

When I open the Parameter Mapping tab for the task, it wants me to map each parameter individually, like:

Variable Name - User::Name, Direction - Input, Data Type - LONG, Parameter Name - 0, Parameter Size - -1
Variable Name - User::Name, Direction - Input, Data Type - LONG, Parameter Name - 1, Parameter Size - -1

This quickly gets out of hand and cumbersome if I have 5-10 values for a name, and it forces me to add multiple assignments for the same name. Is there an easy(-ier) way to do this?

A: The easiest (and most extensible) way is to use a Data Flow Task instead of an Execute SQL Task.

Add a Data Flow Task; I assume that you have all the variables filled with the right parameters, and that you know how to pass the values to them.

Create a dummy row with the columns you will need to insert, so use whatever pleases you the most as a source (in this example, I've used an OLE DB connection). One good tip is to define the datatype(s) of each column in the source as you will need them in your destination table. This will align the metadata of the dataflow with that of the insert table (Screenshot #1).

Then add a Multicast component to the dataflow. For the first parameter/value, add a Derived Column component, name it cleanly, and proceed to substitute the content of your parameters with your variables.

For each further parameter/value that needs to be added, copy the previously created Derived Column component, add one extra branch from the Multicast component, and proceed to substitute the column parameter/value as necessary.

Add a Union All and join all the flows.

Insert into the table.

Voilà! (Screenshot #2)

The good thing about this method is that you can make it as extensible as you wish... validate each value with different criteria, modify the data, add business rules, discard non-compliant values (by checking the full number of complying values)... !

Have a nice day! Francisco.

PS: I had prepared a couple more screenshots... but Stack Overflow has decided that I am too new to the site to post things with images or more than two links (!) Oh well..
Q: jMock - allowing() a call multiple times with different results I want to call allowing() several times and provide different results. But I'm finding that the first allowing() specification absorbs all the calls and I can't change the return value. @Test public void example() { timeNow(100); // do something timeNow(105); // do something else } private void timeNow(final long timeNow) { context.checking(new Expectations() {{ allowing(clock).timeNow(); will(returnValue(timeNow)); }}); } If I change allowing(clock) to oneOf(clock) it works fine. But ideally I want to use allowing() and not over-specify that the clock is called only once. Is there any way to do it? A: I would recommend taking a look at states - they allow you to change which expectation to use based on what "state" the test is in. @Auto private States clockState; @Test public void example() { clockState.startsAs("first"); timeNow(100); // do something clockState.become("second"); timeNow(105); // do something else } private void timeNow(final long timeNow) { context.checking(new Expectations() {{ allowing(clock).timeNow(); will(returnValue(timeNow)); when(clockState.is("first")); allowing(clock).timeNow(); will(returnValue(timeNow + 100)); when(clockState.is("second")); }}); }
Q: Closing the Bootstrap modal after saving a record

Colleagues, I have a system in which, when the user clicks the "Cadastrar Matéria" (register subject) link, the modal below opens. However, I would like the modal to close automatically when "Salvar" (Save) is clicked. I tried to use the code below; the modal closes, but the backdrop remains:

$(document).ready(function(){ $('#submit').click(function(ev){ ev.preventDefault(); $("#myModal").hide(); }); });

The backdrop remains visible.

A: Replace:

$("#myModal").hide();

with:

$("#myModal").modal('hide');

If I'm not mistaken, that's it.
Q: VBA Regex and String Handling

Q1. In VBA, I am working on web scraping, and I am able to fetch a string and store it in a variable. The string looks something like this:

x = "123434[STM]CompilationError_Lib.c23434[STM]LinkingError432122[STM]Null Pointer Exception"

What I want to do is define an array and store each substring into an index of the array:

arr[0] = 123434[STM]CompilationError_Lib.c
arr[1] = 23434[STM]LinkingError
arr[2] = 432122[STM]Null Pointer Exception

Caveat: there can be any number of substrings; it is not always three. The regex pattern I have written for this is:

myRegExp.Pattern = "\d+[:].[A-Za-z]*.[A-Za-z._]*[^0-9]"

But it is capturing only the first substring, not all three matches. How can I capture all of them?

A: So I will answer your first question. I suggest you split the other parts into separate questions, as already mentioned in the comments.

Option Explicit
' Or add in Tools > References > VBScript Reg Exp for early binding
Public Sub testing()
    Dim x As String, arr() As String, i As Long, matches As Object
    x = "123434[STM]CompilationError_Lib.c23434[STM]LinkingError432122[STM]Null Pointer Exception"
    Static re As Object
    If re Is Nothing Then
        Set re = CreateObject("VBScript.RegExp")
        re.Global = True     'Don't know how you will deploy; shown here for demo
        re.MultiLine = True  'Don't know how you will deploy; shown here for demo
    End If
    re.IgnoreCase = False
    re.Pattern = "\d+\[[a-zA-Z]{3}][^0-9]+"
    Set matches = re.Execute(x)
    ReDim arr(0 To matches.Count - 1)
    For i = LBound(arr) To UBound(arr)
        arr(i) = matches(i)
    Next i
End Sub

Output:
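As a quick cross-check of the pattern itself (not VBA: this just runs the same expression through Python's re engine, which treats it the same way as VBScript.RegExp here):

```python
import re

x = ("123434[STM]CompilationError_Lib.c"
     "23434[STM]LinkingError"
     "432122[STM]Null Pointer Exception")

# Same idea as the answer's pattern: a digit run, a three-letter tag in
# square brackets, then everything up to (but excluding) the next digit run.
arr = re.findall(r"\d+\[[a-zA-Z]{3}\][^0-9]+", x)
```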
Q: Python: Pinging a URL multiple times at once for testing

I have a link that I want to test for robustness, for lack of a better word. What I have is code that pings the URL multiple times, sequentially:

# Testing for robustness
for i in range(100000):
    city = 'New York'
    city = '%20'.join(city.split(' '))
    res = requests.get(f'http://example.com/twofishes?query={city}')
    data = res.json()
    geo = data['interpretations'][0]['feature']['geometry']['center']
    print('pinging xtime: %s ' % str(i))
    print(geo['lat'], geo['lng'])

I want to take this code but ping the link, say, 10 or 12 times at once. I don't mind the sequential pinging, but it's not as efficient as pinging multiple times at once. I feel like this is a quick modification, where the for loop comes out and a pool function goes in?

A: Here is an example program which should work for this task. Given that I do not want to be blacklisted, I have not actually tested the code to see if it works. Regardless, it should at least be in the ballpark of what you're looking for. If you want to actually have all of the threads execute at the same time, I would look into adding events. Hope this helps.

Code:

import threading
import requests
import requests.exceptions as exceptions

def stress_test(s):
    for i in range(100000):
        try:
            city = 'New York'
            city = '%20'.join(city.split(' '))
            res = s.get(f'http://example.com/twofishes?query={city}')
            data = res.json()
            geo = data['interpretations'][0]['feature']['geometry']['center']
            print('pinging xtime: %s ' % str(i))
            print(geo['lat'], geo['lng'])
        except (exceptions.ConnectionError, exceptions.HTTPError, exceptions.Timeout):
            pass

if __name__ == '__main__':
    for i in range(1, 12):
        s = requests.session()
        t = threading.Thread(target=stress_test, args=(s,))
        t.start()

    for th in threading.enumerate():
        if th != threading.current_thread():
            th.join()
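An alternative sketch using concurrent.futures from the standard library. The network call is abstracted behind a `fetch` callable (my addition) so the structure can be exercised without hammering the real endpoint; for real use you would pass `lambda url: requests.get(url).json()`.

```python
from concurrent.futures import ThreadPoolExecutor

URL = 'http://example.com/twofishes?query=New%20York'

def ping_once(fetch):
    # One request; `fetch` stands in for requests.get(...).json().
    data = fetch(URL)
    geo = data['interpretations'][0]['feature']['geometry']['center']
    return geo['lat'], geo['lng']

def ping_many(fetch, n_calls=12, n_workers=12):
    # Fan the same request out across a pool of worker threads.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(lambda _: ping_once(fetch), range(n_calls)))
```

The pool caps concurrency at n_workers, so pinging "10 or 12 times at once" is just a matter of the two arguments.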
Q: Subtracting 2 pointers with a signed result, appropriate for input to a red-black-tree "comparator"

In C, does the following work?

struct fdBase *left, *right;
int result = (int)(left - right);

result can be negative. If that doesn't work, how do I write it? My goal is to have something to provide to my red-black-tree sort function, a so-called "comparator" for pointers. I am not doing array work; I need the actual difference between the pointers, in bytes.

A: The result of subtraction of two pointers in C has a signed result, by definition. The result has ptrdiff_t type, which is a signed integral type.

The important detail here is that you are not allowed to subtract just two arbitrary pointers. In order for the result to be defined, the pointers have to point to elements of the same array (or to the imaginary "one past the end" element). The result of the subtraction is expressed in elements, not in bytes, i.e. it works consistently with the rest of pointer arithmetic in C. The result of A - B can and will be negative if A points to an element with a smaller index than B.

If you need the difference in bytes between the raw addresses the pointers are pointing to, the more-or-less formally valid way to do it would be the following:

intptr_t difference = (intptr_t) left - (intptr_t) right;

That way you are not subtracting pointers (since that is not defined for arbitrary pointers), but rather subtracting their integer representations. The result of an (intptr_t) some_pointer conversion is implementation-defined, but typically it is the physical memory address stored in the pointer. This method, unfortunately, has some problems of its own: it might produce incorrect results for pointers that have 1 in the high-order bits. Such pointers will normally produce negative values when converted to intptr_t.
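For the red-black-tree use case specifically, a comparator usually should not return the raw difference at all: narrowing an intptr_t difference to int can overflow and flip the sign. A common sketch (the naming is mine, not from any particular tree library; it assumes, as above, that the intptr_t conversion preserves address order) compares the converted addresses and returns only -1, 0, or 1:

```c
#include <stdint.h>
#include <assert.h>

/* Order two nodes by raw address. Comparing the intptr_t values,
   instead of returning (int)(left - right), avoids both the undefined
   subtraction of unrelated pointers and int overflow in the result. */
int ptr_comparator(const void *left, const void *right)
{
    intptr_t a = (intptr_t)left;
    intptr_t b = (intptr_t)right;
    return (a > b) - (a < b);   /* -1, 0, or 1 */
}
```

The `(a > b) - (a < b)` idiom yields exactly the three values a comparator needs, with no arithmetic on the operands themselves.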
Q: How do I only extract numbers between 2 specific words using regex

I am trying to pull numbers between 2 specific words using regex. The problem is that they are multiline. I am trying to extract these from a PDF, so it has to be between these 2 words only:

WORD1: (23) (56) (78) END

I tried \((.*?)\) and it pulls the numbers between (), but I need it to only search between the words WORD1 and END instead of the whole PDF. Is there a way to do it?

Expected output:

23
56
78

A: Use the \G construct

(?s)(?:(WORD1:)(?=(?:(?!WORD1:|END).)*?\d(?:(?!WORD1:|END).)*END)|(?!^)\G)(?:(?!\d|WORD1:|END).)*?\K\d+

https://regex101.com/r/il00WG/1

Explained

 (?s)                           # Dot-all inline modifier
 (?:
      ( WORD1: )                # (1), Flag start of new set
      (?=                       # Lookahead, must be a digit before the END
           (?: (?! WORD1: | END ) . )*?
           \d
           (?: (?! WORD1: | END ) . )*
           END
      )
   |                            # OR,
      (?! ^ )
      \G                        # Start where last match left off
 )
 (?:
      (?! \d | WORD1: | END )   # Go past non-digits
      .
 )*?
 \K                             # Ignore previous match up to here
 \d+                            # Digits, the only match
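If the regex flavor at hand lacks \G and \K (Python's re, for instance), a simpler two-step version of the same idea works: first cut out each WORD1:...END block, then collect the parenthesised numbers inside it. A sketch (the function name is made up):

```python
import re

def numbers_between(text, start='WORD1:', end='END'):
    # Step 1: grab each start..end block (re.S lets . cross newlines,
    # since the numbers may be spread over several lines).
    # Step 2: pull every parenthesised digit run out of each block.
    found = []
    for block in re.findall(re.escape(start) + r'(.*?)' + re.escape(end), text, re.S):
        found.extend(re.findall(r'\((\d+)\)', block))
    return found
```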
Q: Using variables inside a string in plpgsql

I'm trying to build a regex match with variables as part of the matched string:

var_a := 'somestring';
var_b := 'someotherstring';

IF EXISTS (SELECT 1 FROM some_table WHERE some_field ~* 'var_a.+?(?=\-)\-var_b)')

How do I insert the variables into the string? Like in js, where you can simply do:

`${var_a} restofstring`

A: You could use the format() function:

WHERE some_field ~* format('%s.+?(?=\-)\-%s)', var_a, var_b)
Q: colnames command produces error: attribute [13] must be the same length as the vector [12]

I am trying to rename columns in a data.frame. However, I keep getting the following error when I try to run the names or colnames command in R:

Error in names(HourlyTotal)["ZoneElectric"] <- "Meas.Elec" : 'names' attribute [13] must be the same length as the vector [12]

Here is the code I am trying to run:

names(HourlyTotal)["ZoneElectric"] <- "Meas.Elec"

However, the code works fine if I use a column number instead of the column name:

names(HourlyTotal)[3] <- "Meas.Elec"

Any ideas why this might be happening? I appreciate any help, as this has had me stumped for a while now.

A: When you write names(HourlyTotal)["ZoneElectric"], you are asking R to give you the element named "ZoneElectric" from the vector names(HourlyTotal). However, names(HourlyTotal) is an unnamed vector. What you want (I am guessing/assuming) is the element of names(HourlyTotal) whose value is "ZoneElectric". If you happen to know in which position in the vector it occurs, then you can use numeric indexing, as you have discovered (i.e., names(HourlyTotal)[3]). However, a more robust solution is to filter for that specific value. Therefore, use:

names(HourlyTotal)[names(HourlyTotal) == "ZoneElectric"] <- ...
# Instead of
# names(HourlyTotal)["ZoneElectric"] <- ...

Or you can use setnames from the data.table package:

library(data.table)
setnames(HourlyTotal, old="ZoneElectric", new="NewName")
Q: Where can I find high-level Corki VOD's? I'd like to watch some standard high-level Corki gameplay, as he's my preferred character. I've never seen him used, though, in the dozen or so professional games that I've watched. Could someone link to a specific VOD of a tournament game where Corki is chosen? Preferably one where the Corki plays an important role in a close game, so that his skill and item build is highlighted, but isn't so dominant that the game sets a completely unrealistic expectation when I try to emulate his play. A: Look into some of imaqtpie's videos. He played Corki quite a few times at IEM Hanover. http://leaguecraft.com/news/intel-extreme-masters-hannover-match-replays-132.xhtml Check the Dignitas matches.
Q: Stack descent to sheaf descent via Grothendieck construction? Let S be a Grothendieck site, the (either left or right adjoint to the) Grothendieck construction assigns to each groupoid fibration over S a presheaf valued in groupoids. The following feels it might be morally correct: A prestack is a stack $\iff$ its corresponding presheaf of groupoids is a sheaf Is this true? Does the Grothendieck construction map descent data for stacks into descent data for sheaves? (Note I haven’t put too much thought into this yet so the question might be silly or trivial) A: Here's a variation which is true, when interpreted in a suitably non-strict / higher categorical sense (for example, "functor" means "pseudofunctor" below). I'm not sure on which side of the Grothendieck construction you prefer to define (pre)stacks and (pre)sheaves, so let's do both versions: Define a prestack on $B$ to be a functor $B^{op} \to \mathsf{Cat}$, a presheaf valued in groupoids to be a functor $B^{op} \to \mathsf{Gpd}$, and stacks / sheaves to be those satisfying descent. Then a prestack $F: B^{op} \to \mathsf{Cat}$ is a stack if and only if, for every category $C$, the presheaf underlying $F^C$ is a sheaf. Moreover, it suffices to check the case where $C$ is the arrow category. Define a prestack on $B$ to be a fibration $E \to B$, a presheaf on $B$ to be a fibration $E \to B$ whose fibers are groupoids, and stacks / sheaves to be those satisfying descent. Then a prestack $E \to B$ is a stack if and only if for every category $C$, the presheaf underlying the mapping prestack $\underline{Fun}_B(C\times B, E) \to B$ is a sheaf. Moreover, it suffices to check the case where $C$ is the arrow category. The point is that descent is a limit condition. Since limits are defined representably, it can be checked by mapping in from objects $C \in \mathsf{Cat}$. 
And moreover, it suffices to check on a strong generator of $\mathsf{Cat}$, such as the arrow category (by which I mean the category $0 \to 1$ with two objects and one non-identity morphism, which goes between them).
Q: Struggling with ASP.NET MVC auto-scaffolder template I'm trying to write an auto-scaffolder template for Index views. I'd like to be able to pass in a collection of models or view-models (e.g., IQueryable<MyViewModel>) and get back an HTML table that uses the DisplayName attribute for the headings (th elements) and Html.Display(propertyName) for the cells (td elements). Each row should correspond to one item in the collection. Here's what I have so far: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %> <% var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; // How do I make this generic? var properties = items.First().GetMetadata().Properties .Where(pm => pm.ShowForDisplay && !ViewData.TemplateInfo.Visited(pm)); %> <table> <tr> <% foreach(var property in properties) { %> <th> <%= property.DisplayName %> </th> <% } %> </tr> <% foreach(var item in items) { HtmlHelper itemHtml = ????; // What should I put in place of "????"? %> <tr> <% foreach(var property in properties) { %> <td> <%= itemHtml.Display(property.DisplayName) %> </td> <% } %> </tr> <% } %> </table> Two problems with this: I'd like it to be generic. So, I'd like to replace var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; with var items = (IQueryable<T>)Model; or something to that effect. A property Html is automatically created for me when the view is created, but this HtmlHelper applies to the whole collection. I need to somehow create an itemHtml object that applies just to the current item in the foreach loop. I'm not sure how to do this, however, because the constructors for HtmlHelper don't take a Model object. How do I solve these two problems? A: Phil Haack to the rescue! http://haacked.com/archive/2010/05/05/asp-net-mvc-tabular-display-template.aspx
Q: Is there any significance in the order of tables in a SQL join statement Is there any significance in the order of tables in a SQL join statement? For example SELECT dept_name, emp_name FROM Employee INNER JOIN Department ON Employee.dept_id = Department.dept_id and SELECT dept_name, emp_name FROM Department INNER JOIN Employee ON Employee.dept_id = Department.dept_id Is there any performance advantage in the order of tables? A: No there isn't. Most (if not all) DBMSs use a cost-based optimizer. The order in which you specify the tables does not affect the speed of execution. Oracle SQL cost based optimization Oracle's cost-based SQL optimizer (CBO) is an extremely sophisticated component of Oracle that governs the execution for every Oracle query. The CBO has evolved into one of the world's most sophisticated software components, and it has the challenging job of evaluating any SQL statement and generating the "best" execution plan for the statement. Both your statements will generate the same execution plan and hence have the same performance characteristics. Note that the cost will be based on available statistics. Up-to-date statistics are very important for the optimizer to be able to generate the most efficient execution plan. A: In general, no it won't matter. The optimizer should be able to figure out the most efficient order in which to join the tables regardless of the order they appear in the query. It is possible, however, that the order of the tables will affect the query plan. This generally wouldn't be the case if you have a simple two-table join but as the number of tables in a query increases, the number of possible joins grows at an O(n!) rate. Pretty quickly, it becomes impossible for the optimizer to consider all possible join orders so it has to use various heuristics to prune the tree. 
That, in turn, leads to situations where the optimizer picks a different driving table if that table is listed first in the SQL statement as opposed to when that table is the tenth table in the query. Jonathan Lewis has a nice blog post showing how the order tables appear in a query can affect the query plan. If you want to be extra-careful, listing the driving table first is a reasonable thing to do-- it won't help very frequently but it may occasionally do some good.
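The claim that table order does not change the result set is easy to verify on any engine; a minimal sketch with Python's built-in sqlite3 and hypothetical Employee/Department data:

```python
import sqlite3

# Tiny in-memory schema mirroring the two queries from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Department (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
    CREATE TABLE Employee (emp_id INTEGER PRIMARY KEY, emp_name TEXT, dept_id INTEGER);
    INSERT INTO Department VALUES (1, 'Sales'), (2, 'IT');
    INSERT INTO Employee VALUES (10, 'Ann', 1), (11, 'Bob', 2);
""")

q1 = """SELECT dept_name, emp_name FROM Employee
        INNER JOIN Department ON Employee.dept_id = Department.dept_id"""
q2 = """SELECT dept_name, emp_name FROM Department
        INNER JOIN Employee ON Employee.dept_id = Department.dept_id"""

# Sort the rows so that row-return order (which is never guaranteed) is ignored.
rows1 = sorted(conn.execute(q1).fetchall())
rows2 = sorted(conn.execute(q2).fetchall())
print(rows1 == rows2)  # True
```

Whether the *plan* is identical is engine-specific, as the answers above explain; on most engines `EXPLAIN`/`EXPLAIN QUERY PLAN` is the way to check.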
Q: How To Work With SQL Database Added As Item In Visual Studio 2008? If I'm letting Visual Studio take care of adding an SQL Server database to an existing project, it adds the connection string to app.config. How can I use THAT connection string to make my own connections and datasets? A: Use ConfigurationManager.AppSettings to read the connection string when required. For example, if you are opening a SQL connection, assign the "Connection String" property to the value retrieved from ConfigurationManager.AppSettings ("MyConnectionString") If it is placed in the appropriate section in the app config file, then you can use ConfigurationManager.ConnectionStrings to retrieve it as well. Read more here http://msdn.microsoft.com/en-us/library/ms254494.aspx A: Well, both of those helped get me on the right track. I found this quite simple, yet highly annoying. The solution I used was: using System.Configuration; add a reference to System.Configuration create a new connection with SqlConnection(ConfigurationManager.ConnectionStrings["MyDatabaseConnectionFromApp.Config"].ConnectionString)
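The pattern itself — keep the connection string in a config file and look it up by key at runtime instead of hard-coding it — is language-agnostic. A hypothetical Python sketch of the same idea with the standard-library configparser (the section and key names are made up for illustration):

```python
import configparser

# A rough stand-in for App.config's <connectionStrings> section.
ini_text = """
[connectionStrings]
MyDatabaseConnection = Server=localhost;Database=mydb;Integrated Security=True
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

# Rough equivalent of ConfigurationManager.ConnectionStrings["..."].ConnectionString;
# configparser splits on the first '=', so the '='s inside the value survive intact.
conn_str = config["connectionStrings"]["MyDatabaseConnection"]
print(conn_str)  # Server=localhost;Database=mydb;Integrated Security=True
```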
Q: Get the follower count in Django I am currently trying to get the total number of followers a user has and also the total number of users following a particular user. I want to return an integer value but can't seem to get it right. Is there anything I need to add in my models.py or in my views.py? I tried UserLink.objects.filter(from_user=User).values('to_user').count but it keeps returning zero (0) Models.py class UserLink(models.Model): from_user = models.ForeignKey(User, related_name='following_set') to_user = models.ForeignKey(User, related_name='follower_set') date_added = models.DateTimeField(default = datetime.now) I am new to Django and any help is appreciated A: count is a function, which must be called. So put a pair of parentheses after it. Another potential problem here is that you're passing User into the filter function in views.py. You want to pass in an instance of the class User, not the class itself. So you'd want something like this: UserLink.objects.filter(from_user=whichuser).values('to_user').count() where the variable whichuser is an instance of the User class. As for the task you're trying to accomplish, I'm not sure how "the total number of followers a user has" is different from "the total number of users following a particular user". To get either one you'd want the following: UserLink.objects.filter(to_user=whichuser).count()
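The crux — filter on a specific User *instance*, not the User class — can be illustrated without a database. A plain-Python sketch of the same counting logic, with hypothetical data:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    username: str

@dataclass(frozen=True)
class UserLink:
    from_user: User  # the follower
    to_user: User    # the user being followed

alice, bob, carol = User("alice"), User("bob"), User("carol")
links = [UserLink(bob, alice), UserLink(carol, alice), UserLink(alice, bob)]

# Mirrors UserLink.objects.filter(to_user=whichuser).count():
# the argument is one concrete user, and we count links pointing at them.
def follower_count(whichuser: User) -> int:
    return sum(1 for link in links if link.to_user == whichuser)

print(follower_count(alice))  # 2
```

Passing the class `User` instead of an instance is like asking "how many links point at the concept of a user" — the ORM silently matches nothing, which is why the original query returned zero.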
Q: If an arcane trickster rogue uses his mage hand and makes it invisible, does that mean anything the hand picks up is also invisible? From the PHB (emphasis mine): Starting at 3rd level, when you cast Mage Hand, you can make the spectral hand invisible, and you can perform the following additional tasks with it: • You can stow one object the hand is holding in a container worn or carried by another creature. • You can retrieve an object in a container worn or carried by another creature. • You can use thieves' tools to pick locks and disarm traps at range. You can perform one of these tasks without being noticed by a creature if you succeed on a Dexterity (Sleight of Hand) check contested by the creature's Wisdom (Perception) check. In addition, you can use the bonus action granted by your Cunning Action to control the hand. And the invisibility spell (emphasis mine): A creature you touch becomes invisible until the spell ends. Anything the target is wearing or carrying is invisible as long as it is on the target’s person. The spell ends for a target that attacks or casts a spell. I know the spell is slightly different but it’s the most comparable thing I could find. For example: I lift some jail keys off of a guard and then turn the hand invisible. I grab 3 platinum coins off the mayor’s desk when I think no one is watching and close the hand around them so they’re totally inside the fist. I give the hand a lockpick and pick a lock from across a crowded room. In which of these situations (if any) would the objects be visible? Context of the question: I'm the Rogue player, not the DM. I view the mage hand for the arcane trickster as a type of highly specialized familiar because I can summon or dismiss it at will, and I can "command" it to do some simple tasks. But I'm fairly new and inexperienced so I might be looking at it wrong. 
My reason for posting the question was to see if I am missing something that should have been obvious, or if there was room to play around with this class feature. A: Any carried object will be visible Starting at 3rd level, when you cast Mage Hand, you can make the spectral hand invisible "You can make the spectral hand invisible" does not mean "you can cast the Invisibility spell on it". It does what it says — you make the hand invisible, and only the hand, not the item it is carrying. However, D&D 5th edition empowers the DM in ways that 3rd, 3.5, and 4th did not. While rule zero has always applied, 5th edition chooses not to explicitly codify many things. If your DM says the item will be invisible too, it will. Jeremy Crawford, the lead game designer, suggests prioritizing story over the rules: The rules are intentionally silent on these corner cases, leaving adjudication to DMs. As always, I say go with what's best for your story
Q: Use jQuery script with Angular 6 CLI project I'm developing an application using Angular 6. My application uses the Minton Theme. I included all theme scripts in the index.html file of my Angular project. But when I log in or navigate between routes, some jQuery methods stop working properly. I have to refresh the page manually to make them work. Is there a fix for this? I couldn't find any working solution yet. Project components structure app -components --footer --left-side-bar --header --pages ---dashboard ----home ----accounts ---login index.html file <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Logical Position</title> <base href="/"> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="shortcut icon" href="assets/minton_theme/images/favicon.ico"> <link href="assets/minton_theme/plugins/switchery/switchery.min.css" rel="stylesheet" /> <link href="assets/minton_theme/plugins/jquery-circliful/css/jquery.circliful.css" rel="stylesheet" type="text/css" /> <link href="assets/minton_theme/css/bootstrap.min.css" rel="stylesheet" type="text/css"> <link href="assets/minton_theme/css/icons.css" rel="stylesheet" type="text/css"> <link href="assets/minton_theme/css/style.css" rel="stylesheet" type="text/css"> <script src="assets/minton_theme/js/modernizr.min.js"></script> </head> <body class="fixed-left widescreen"> <app-root></app-root> <script> var resizefunc = []; </script> <!-- Plugins --> <script src="assets/minton_theme/js/jquery.min.js"></script> <script src="assets/minton_theme/js/popper.min.js"></script> <!-- Popper for Bootstrap --> <script src="assets/minton_theme/js/bootstrap.min.js"></script> <script src="assets/minton_theme/js/detect.js"></script> <script src="assets/minton_theme/js/fastclick.js"></script> <script src="assets/minton_theme/js/jquery.slimscroll.js"></script> <script src="assets/minton_theme/js/jquery.blockUI.js"></script> <script src="assets/minton_theme/js/waves.js"></script> <script
src="assets/minton_theme/js/wow.min.js"></script> <script src="assets/minton_theme/js/jquery.nicescroll.js"></script> <script src="assets/minton_theme/js/jquery.scrollTo.min.js"></script> <script src="assets/minton_theme/plugins/switchery/switchery.min.js"> </script> <!-- Counter Up --> <script src="assets/minton_theme/plugins/waypoints/lib/jquery.waypoints.min.js"></script> <script src="assets/minton_theme/plugins/counterup/jquery.counterup.min.js"></script> <!-- circliful Chart --> <script src="assets/minton_theme/plugins/jquery-circliful/js/jquery.circliful.min.js"></script> <script src="assets/minton_theme/plugins/jquery-sparkline/jquery.sparkline.min.js"></script> <!-- skycons --> <script src="assets/minton_theme/plugins/skyicons/skycons.min.js" type="text/javascript"></script> <!-- Page js --> <script src="assets/minton_theme/pages/jquery.dashboard.js" defer> </script> <!-- Custom main Js --> <script src="assets/minton_theme/js/jquery.core.js"></script> <script src="assets/minton_theme/js/jquery.app.js"></script> </body> </html> app.routing.module.ts file import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { DashboardComponent } from './components/pages/dashboard/dashboard.component'; import { LoginComponent } from './components/pages/login/login.component'; import { UnAuthGuardService } from './guards/un-auth.guard'; import { AuthGuardService } from './guards/auth.guard'; import { AccountsComponent } from './components/pages/dashboard/accounts/accounts.component'; import { ViewAccountsComponent } from './components/pages/dashboard/accounts/view-accounts/view- accounts.component'; import { MyAccountsComponent } from './components/pages/dashboard/accounts/my-accounts/my- accounts.component'; import { CreateAccountComponent } from './components/pages/dashboard/accounts/create-account/create- account.component'; import { HomeComponent } from './components/pages/dashboard/home/home.component'; const routes: 
Routes = [ { path: 'login', component: LoginComponent, canActivate: [UnAuthGuardService] }, { path: '', component: DashboardComponent, canActivate: [AuthGuardService], children: [ { path: '', redirectTo: 'dashboard', pathMatch: 'full' }, { path: 'dashboard', component: HomeComponent }, { path: 'accounts', component: AccountsComponent, children: [ { path: '', component: ViewAccountsComponent }, { path: 'create', component: CreateAccountComponent }, { path: ':username', component: MyAccountsComponent } ] } ] }, { path: '**', redirectTo: '/dashboard', pathMatch: 'full' } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { } A: The Jquery code works only in the starting page and not between routes because it is not under the angular's change detection. You need to hook it up into the angular life cycle hooks. Try follow this references: https://medium.com/@swarnakishore/how-to-include-and-use-jquery-in-angular-cli-project-592e0fe63176 https://www.youtube.com/watch?v=mAwqk-eIPL8
Q: Prevent soft keyboard from being dismissed There are many questions related to how to programmatically show/hide the soft keyboard. However, as we all know, the Android back button will cause the keyboard to be dismissed. Is there a way to prevent the user from dismissing the keyboard with a back button press? I tried to capture the back button, but onKeyDown in my activity is not invoked when the back key is pressed while the soft keyboard is visible. Any suggestions would be greatly appreciated. A: I've found a solution: public class KeyBoardHolder extends EditText { public KeyBoardHolder(Context context) { super(context); } public KeyBoardHolder(Context context, AttributeSet attrs) { super(context, attrs); } public KeyBoardHolder(Context context, AttributeSet attrs, int defStyleAttr) { super(context, attrs, defStyleAttr); } @TargetApi(Build.VERSION_CODES.LOLLIPOP) public KeyBoardHolder(Context context, AttributeSet attrs, int defStyleAttr, int defStyleRes) { super(context, attrs, defStyleAttr, defStyleRes); } @Override public boolean onKeyPreIme(int keyCode, KeyEvent event) { if (keyCode == KeyEvent.KEYCODE_BACK) { return true; } return false; } } This prevents the keyboard from being closed by the back button.
Q: Using jQuery UI sortable with css display: grid I am trying to create a sortable css grid using jQuery UI. $( function() { $( "#sortable" ).sortable({ handle: "handle" }); }); gridHolder{ display:grid; background:tan; grid-template-columns:1fr 1fr 1fr 2fr; } gridHeader > *{ padding:10px; background:yellow; border:thin solid black; } gridContent , gridHeader{ display:contents; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"></script> <gridHolder id="sortable"> <gridHeader> <handle>Handle</handle> <name>Name</name> <address>Address</address> <phone>Phone</phone> </gridHeader> <gridContent> <handle>Handle</handle> <name>Adam</name> <address>111 Main St</address> <phone>123-4567</phone> </gridContent> <gridContent> <handle>Handle</handle> <name>Bob</name> <address>222 Brown Ave</address> <phone>987-6543</phone> </gridContent> <gridContent> <handle>Handle</handle> <name>Carl</name> <address>333 East Ave</address> <phone>555-1343</phone> </gridContent> </gridHolder> https://jsfiddle.net/Lj9wnh7x/6/ The issue I am running into is the sortable doesn't seem to work properly with the css grid and display:contents property. I understand that display:contents is not so much a display mode as an organizer for child elements, but I can't see how it would affect the jQuery UI sortable. A: I think, on a fundamental CSS level, the grid content cannot be sorted, dragged, or moved. Note: float, display: inline-block, display: table-cell, vertical-align and column-* properties have no effect on a grid item. I suspect that the lack of these properties causes jQuery UI to be unable to manage DnD properly. 
This is what I tested with, following proper HTML/CSS Syntax, and it still does not work: $(function() { $("#sortable").sortable({ handle: ".handle", items: "> div.gridContent" }); $("#sortable").disableSelection(); }); .gridHolder { display: grid; background: tan; grid-template-columns: 1fr 1fr 1fr 2fr; } .gridHeader>* { padding: 10px; background: yellow; border: thin solid black; } .gridContent, .gridHeader { display: contents; } <link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css"> <script src="https://code.jquery.com/jquery-1.12.4.js"></script> <script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script> <div class="gridHolder" id="sortable"> <div class="gridHeader"> <handle>Handle</handle> <name>Name</name> <address>Address</address> <phone>Phone</phone> </div> <div class="gridContent"> <div class="handle">Handle</div> <div class="name">Adam</div> <div class="address">111 Main St</div> <div class="phone">123-4567</div> </div> <div class="gridContent"> <div class="handle">Handle</div> <div class="name">Jenny</div> <div class="address">222 Brown St</div> <div class="phone">867-5309</div> </div> <div class="gridContent"> <div class="handle">Handle</div> <div class="name">Carl</div> <div class="address">222 Brown St</div> <div class="phone">555-1212</div> </div> </div> Still doing some research to see if this is the case. Update Another issue to be aware of in CSS Grid Layout and to a lesser extent in CSS Flexbox, is the temptation to flatten markup. As we have discovered, for an item to become a grid item it needs to be a direct child of the grid container. Therefore, where you have a <ul> element inside a grid container, that ul becomes a grid item – the child <li> elements do not. See More. So it's the structure relationship, child and parent, that is causing an issue here. Here is another example that is working. 
$(function() { $("#sortable").sortable({ handle: "handle" }); $("#sortable").disableSelection(); }); #sortable { list-style-type: none; margin: 0; padding: 0; } .gridHolder { background: tan; } .gridHolder li { display: grid; grid-template-columns: 1fr 1fr 1fr 2fr; } .gridHeader * { float: left; padding: 10px; background: yellow; border: thin solid black; } <link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css"> <script src="https://code.jquery.com/jquery-1.12.4.js"></script> <script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script> <ul class="gridHolder" id="sortable"> <li class="gridHeader row"> <handle>Handle</handle> <name>Name</name> <address>Address</address> <phone>Phone</phone> </li> <li class="gridContent"> <handle>Handle</handle> <name>Adam</name> <address>111 Main St</address> <phone>123-4567</phone> </li> <li class="gridContent"> <handle>Handle</handle> <name>Bob</name> <address>222 Brown Ave</address> <phone>987-6543</phone> </li> <li class="gridContent"> <handle>Handle</handle> <name>Carl</name> <address>333 East Ave</address> <phone>555-1343</phone> </li> </ul>
Q: EC2 User Data not working via python boto command I am trying to launch an instance, have a script run the first time it launches as part of userdata. The following code was used (python boto3 library): import boto3 ec2 = boto3.resource('ec2') instance = ec2.create_instances(DryRun=False, ImageId='ami-abcd1234', MinCount=1, MaxCount=1, KeyName='tde', Placement={'AvailabilityZone': 'us-west-2a'}, SecurityGroupIds=['sg-abcd1234'], UserData=user_data, InstanceType='c3.xlarge', SubnetId='subnet-abcd1234') I have been playing around with the user_data and have had no success. I have been trying to echo some string to a new file in an existing directory. Below is the latest version I attempted. user_data = ''' #!/bin/bash echo 'test' > /home/ec2/test.txt ''' The ami is a CentOS based private AMI. I have tested the commands locally on the server and gotten them to work. But when I put the same command on the userdata (tweaked slightly to match the userdata format), it does not work. Instance launches successfully but the file I specified is not present. I looked at other examples (https://github.com/coresoftwaregroup/boto-examples/blob/master/32-create-instance-enhanced-with-user-data.py) and even copied their commands. Your help is appreciated! Thanks :) A: For User Data to be recognized as a script, the very first characters MUST be #! (at least for Linux instances). However, your user_data variable is being defined as: "\n #!/bin/bash\n echo 'test' > /home/ec2/test.txt\n " You should define it like this: user_data = '''#!/bin/bash echo 'test' > /tmp/hello''' Which produces: "#!/bin/bash\necho 'test' > /tmp/hello" That works correctly. 
So, here's the final product: import boto3 ec2 = boto3.resource('ec2') user_data = '''#!/bin/bash echo 'test' > /tmp/hello''' instance = ec2.create_instances(ImageId='ami-abcd1234', MinCount=1, MaxCount=1, KeyName='my-key', SecurityGroupIds=['sg-abcd1234'], UserData=user_data, InstanceType='t2.nano', SubnetId='subnet-abcd1234') After logging in: [ec2-user@ip-172-31-2-151 ~]$ ls /tmp hello hsperfdata_root
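Since the whole pitfall is that the User Data string must begin with `#!`, it is cheap to guard against before calling create_instances. A defensive sketch — the helper name is my own, not part of boto3 — that dedents an indented triple-quoted script and rejects anything without a shebang:

```python
import textwrap

def normalize_user_data(script: str) -> str:
    """Dedent and strip a user-data script so it starts with '#!'."""
    cleaned = textwrap.dedent(script).strip()
    if not cleaned.startswith("#!"):
        raise ValueError("user data must begin with a shebang (#!) to run as a script")
    return cleaned

# The indented string from the question would be rejected as-is,
# but passes once dedented and stripped:
raw = '''
    #!/bin/bash
    echo 'test' > /tmp/hello
'''
user_data = normalize_user_data(raw)
print(user_data.splitlines()[0])  # #!/bin/bash
```

Passing `user_data=normalize_user_data(raw)` into `ec2.create_instances(...)` then fails fast in Python instead of silently launching an instance whose script never runs.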
Q: How can I get the most recent SSL certificate for a domain using PowerShell? I'm trying to find the most recent certificate in the Web Hosting certificate store for a given domain (e.g. www.example.com). It's easy enough to find any number of matching certificates, but how can I find only the most recent one, ordered by expiration date (furthest into the future)? My existing code is: (Get-ChildItem -Path cert:\LocalMachine\WebHosting | Where-Object {$_.Subject -match "example.com"}).Thumbprint; However this sometimes returns two certificates, as usually the previous certificate (prior to a renewal) must be left in the certificate store for a short while. A: You can try to sort them by the NotAfter property. To have a look at all properties: (Get-ChildItem -Path cert:\LocalMachine\WebHosting | Where-Object {$_.Subject -match "example.com"}) | fl * To sort by the NotAfter property and keep only the newest certificate: (Get-ChildItem -Path cert:\LocalMachine\WebHosting | Where-Object {$_.Subject -match "example.com"}) | Sort-Object -Property NotAfter -Descending | Select-Object -First 1
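The selection step itself — keep only the certificate whose expiry lies furthest in the future — is just a max-by-date. A hypothetical Python sketch of that logic, with made-up thumbprints and dates:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Cert:
    thumbprint: str
    not_after: datetime  # stand-in for the certificate's NotAfter property

certs = [
    Cert("AAA111", datetime(2024, 5, 1)),  # pre-renewal cert still in the store
    Cert("BBB222", datetime(2025, 5, 1)),  # renewed cert, furthest expiry
]

# Same effect as sorting descending by NotAfter and taking the first entry.
latest = max(certs, key=lambda c: c.not_after)
print(latest.thumbprint)  # BBB222
```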
Q: You must include the platform port before the LWUIT in the classpath runtime exception I recently started using LWUIT. Great job and great program. I'm using Netbeans 6.9.1, the S60 SDK and the webstart version of LWUIT. The first problem I faced was that I couldn't preverify the Transitions3D.java file; however, that was not an issue. I just removed that part of the code and recompiled the library from scratch. So I created a simple form with a "Hello World" Label and tried the "Create Netbeans Project" option of the resource editor. I did a Clean Build on the test_MIDP project (where test is the name of my project) and tried to run it on the emulator. However, I'm receiving this error message: TRACE: <at java.lang.RuntimeException: You must include the platform port before the LWUIT in the classpath>, startApp threw an Exception java.lang.RuntimeException: **You must include the platform port before the LWUIT in the classpath** at com.sun.lwuit.impl.ImplementationFactory.createImplementation(ImplementationFactory.java:67) at com.sun.lwuit.Display.init(Display.java:400) at userclasses.MainMIDlet.startApp(MainMIDlet.java:15) at javax.microedition.midlet.MIDletTunnelImpl.callStartApp(), bci=1 at com.sun.midp.midlet.MIDletPeer.startApp(), bci=7 at com.sun.midp.midlet.MIDletStateHandler.startSuite(), bci=269 at com.sun.midp.main.AbstractMIDletSuiteLoader.startSuite(), bci=52 at com.sun.midp.main.CldcMIDletSuiteLoader.startSuite(), bci=8 at com.sun.midp.main.AbstractMIDletSuiteLoader.runMIDletSuite(), bci=161 at com.sun.midp.main.AppIsolateMIDletSuiteLoader.main(), bci=26 java.lang.RuntimeException: You must include the platform port before the LWUIT in the classpath at com.sun.lwuit.impl.ImplementationFactory.createImplementation(ImplementationFactory.java:67) at com.sun.lwuit.Display.init(Display.java:400) at userclasses.MainMIDlet.startApp(MainMIDlet.java:15) at javax.microedition.midlet.MIDletTunnelImpl.callStartApp(), bci=1 at com.sun.midp.midlet.MIDletPeer.startApp(), bci=7
at com.sun.midp.midlet.MIDletStateHandler.startSuite(), bci=269 at com.sun.midp.main.AbstractMIDletSuiteLoader.startSuite(), bci=52 at com.sun.midp.main.CldcMIDletSuiteLoader.startSuite(), bci=8 at com.sun.midp.main.AbstractMIDletSuiteLoader.runMIDletSuite(), bci=161 at com.sun.midp.main.AppIsolateMIDletSuiteLoader.main(), bci=26 "You must include the platform port before the LWUIT in the classpath" Any ideas on how to fix this error? I tried to run the MIDlet with both the S60 and JavaME SDK 3.0 emulators and I received the same error. StackOverflow warned me that there are similar questions, however I couldn't find anything related to my issue. If I missed one, please let me know. A: I shall answer my own post: The problem was the UI.jar I was including. LWUIT comes with 2 "sets" of UI.jar: the generic one, which is in the LWUIT\UI folder, and the platform-specific ones, which are in the LWUIT\Ports folder. The generic one is used as a "parent" project containing all the common code; however, you MUST import the .jar file for your platform. As the readme file says: While these projects will compile easily they will be useless for any purpose since they don't include the binding glue for the platform, to use the platform one needs to use the appropriate projects underneath the specific ports directory to a given platform. While I was recompiling the library in order to remove the Transitions3D.java file, I recompiled (and then imported) the generic UI.jar. The correct thing to do is to compile the parent project (the generic UI.jar), THEN compile the port-specific library (in my case LWUIT\ports\MIDP\UI.jar), and then import it into your project and you are done.
Q: Longest to Shortest tractates of Bavli by daf Guessing it's been asked & answered already but not seeing it clearly. I simply would like a listing of longest to shortest (or vice versa!) tractates in Bavli by daf, ideally listing the number of daf/tractate. Thanks so much. A: Here's the list of the longest to shortest Masechtot in the Talmud Bavli (H/T to this helpful answer) based off the amount of daf per Masechta: Bava Basra - 176 Shabbos - 157 Chullin - 142 Yevamos - 122 Pesachim - 121 Zevachim - 120 Bava Kamma - 119 Bava Metzia - 119 Sanhedrin - 113 Ketubot - 112 Menachos - 110 Eruvin - 105 Nedarim - 91 Gittin - 90 Yoma - 88 Kiddushin - 82 Avodah Zarah - 76 Niddah - 73 Nazir - 66 Berachos - 64 Bechoros - 61 Sukkah - 56 Sotah - 49 Shevuos - 49 Beitzah - 40 Rosh Hashanah - 35 Temurah - 34 Arachin - 34 Megillah - 32 Ta'anis - 31 Mo'ed Kattan - 29 Kerisus - 28 Chagigah - 27 Makkos - 24 Me'ilah - 22 Horayos - 14 Tamid - 10
Q: Syntax error in PHP after upgrading to 5.4 I am getting the following error ever since I upgraded from PHP 5.2x or 5.3x (not sure which) to 5.4x: syntax error, unexpected T_PAAMAYIM_NEKUDOTAYIM, expecting T_VARIABLE The following is the code that generates the error. Essentially I have a class that creates an SVG image, with a static draw() method defined in a derived class and a static helper function drawPng() on the base class that converts the SVG to PNG using Imagick. The error is at the marked line. static function drawPng($filename, $data, &$options=array()) { ob_start(); static::draw($data, $options); // <-- Error occurs $svg = ob_get_clean(); $im = new Imagick(); if(!$im) die('Imagick not installed'); $bg = (empty($options['background']) ? 'transparent' : $options['background']); $im->setBackgroundColor(new ImagickPixel($bg)); $im->readImageBlob($svg); $im->setImageFormat('png'); if($filename) $im->writeImage($filename); else echo $im->getImageBlob(); } The code as shown above has worked until the upgrade. Thanks for the assistance. A: T_PAAMAYIM_NEKUDOTAYIM is the Hebrew name for the double colon, aka :: (Zend was started by Israeli folks, as ceejayoz pointed out). Change static to self: static::draw($data, $options); self::draw($data, $options);
Q: Good libraries/technologies for video manipulation I'm looking into starting a project that will be heavily dependent on video manipulation, and I'd like to get some leads on good technologies that I can use. My language of choice is typically Python, but it looks like the available libraries are either abandoned or insufficiently featureful. Given that, I'm relatively agnostic on the specific language, though I'd prefer an option other than C or C++. The requirements for the project include:

Ability to handle a variety of common formats
Video playback (variable speed playback a plus)
Clipping sections out of larger videos
Merging clips together into a single video
Extracting single frames
Multi-platform (preferably deployable on Windows/Mac/Linux)
Free, or licensed at a reasonable cost for indie (but commercial) development

I haven't done much work with video on the desktop before, so I'm not sure if such a thing exists. Are there any good candidates, or am I searching for a mythical beast here? A: After some looking, I think Xuggler will probably do the trick (which, under the hood, is a wrapper around FFmpeg).
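Xuggler (and the FFmpeg it wraps) covers the clip/merge/extract requirements above. As a rough illustration of what those operations look like at the FFmpeg command-line level, here is a Python sketch that only builds the argument lists; the file names are placeholders, and actually running them (e.g. with subprocess.run) assumes ffmpeg is installed and on the PATH.

```python
# Sketch: the clipping/merging/frame-extraction requirements expressed as
# ffmpeg invocations. This only builds argument lists; nothing is executed.

def clip_cmd(src, start, duration, dst):
    """Clip a section out of a larger video (stream copy, no re-encode)."""
    return ["ffmpeg", "-ss", start, "-i", src, "-t", duration, "-c", "copy", dst]

def merge_cmd(listfile, dst):
    """Merge clips listed in a concat file into a single video."""
    return ["ffmpeg", "-f", "concat", "-safe", "0", "-i", listfile, "-c", "copy", dst]

def extract_frame_cmd(src, timestamp, dst):
    """Extract a single frame as an image."""
    return ["ffmpeg", "-ss", timestamp, "-i", src, "-frames:v", "1", dst]
```

For example, subprocess.run(clip_cmd("in.mp4", "00:01:00", "30", "clip.mp4"), check=True) would cut 30 seconds starting at the one-minute mark.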
Q: Google Analytics on an Intranet - What Constitutes a Fully Qualified Domain Name? We want to use Google Analytics on our intranet. It currently lives at http://intranet, which obviously won't work because Google Analytics requires an FQDN. It's also available at http://intranet.companyname.dom/ but I'm unable to add a property using that address, either. So what exactly are the requirements? Does it need to be a "proper" web address, e.g. intranet.companyname.com (albeit only visible internally)? (Just to cover it off, we're OK with the privacy/security/etc implications of using a 3rd party analytics system on our intranet) A: Google says the intranet must be accessible by a fully qualified domain name, so your .dom should be .com or any other TLD:

In order for Google Analytics to generate reports for your corporate intranet usage, your corporate network must be able to reach the Google Analytics JavaScript file (analytics.js). Try loading the file in your browser using one of the following links:

http://www.google-analytics.com/analytics.js
https://www.google-analytics.com/analytics.js

If you can reach one of these URLs from your internal network, you can use Google Analytics to collect data from your intranet. Your intranet must also be accessible through a fully qualified domain name such as http://intranet.example.com. The Google Analytics JavaScript won't work if your intranet can only be accessed using a domain name that isn't fully qualified, such as http://intranet.

https://support.google.com/analytics/answer/1009688?hl=en
Q: Microsoft Azure Cosmos DocumentDB Optimal Read Query Performance We have implemented an Azure CosmosDB (MongoDB with SQL API) database in the cloud. Through Java, we'd like to generate reports based on the data hiding in the MongoDB. I'm not yet too happy with the performance of my read queries, and I was wondering what can be improved in my current setup. As said, I use Java to query the database. I use the Microsoft Azure DocumentDB library to query the database:

<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-documentdb</artifactId>
    <version>1.16.2</version>
</dependency>

Currently, the best performance I have been able to get was to query around 38,000 documents in memory in around 20 seconds, with 50,000 RU/s configured (local cosmos emulator). I would really like this improved, because we might query millions of documents soon. I have the feeling that the way we store the data might not be optimal. Each document now looks as follows:

{
    "deviceid": "xxx",
    "devicedata": {
        "datetime": "2018-08-28T00:00:02.104Z",
        "sensors": [
            {
                "p_A2": "93095",
                "p_A3": "303883",
                "p_batterycurrent": "4294967.10000",
                "p_batterygauge": "38.27700",
                "p_batteryvoltage": "13.59400",
                ** ... around 200 more key - value pairs ... **
            }
        ]
    },
    "id": "aa5d3cf5-10fa-48dd-a0d2-a536284eddac",
    "_rid": "PtEIANkbMQABAAAAAAAAAA==",
    "_self": "dbs/PtEIAA==/colls/PtEIANkbMQA=/docs/PtEIANkbMQABAAAAAAAAAA==/",
    "_etag": "\"00000000-0000-0000-4040-006a7f2501d4\"",
    "_attachments": "attachments/",
    "_ts": 1535619672
}

A query that we would use a lot would look as follows:

SELECT c.deviceid,
       c.devicedata.datetime,
       c.devicedata.sensors[0].p_A2,
       c.devicedata.sensors[0].p_A3,
       c.devicedata.sensors[0].p_batterycurrent,
       c.devicedata.sensors[0].s_humidity
FROM c
WHERE c.deviceid = 'xxx'
  AND c.devicedata.datetime >= '2018-08-28T00:00:00.000Z'
  AND c.devicedata.datetime < '2018-08-30T00:00:00.000Z'
order by c.devicedata.datetime desc

I cut these queries per deviceId. So per device, I run a thread with this query. This seems to go a lot faster than a single thread with a single query. Such a query as above would take us around 20 seconds. I have noticed, however, that if I only query on the deviceid and devicedata.datetime, the query is done within 2 seconds. It seems that getting the sensor data out of the sensor list is a really tough cookie. If I do a select * (so no filtering on the sensor data), it is also faster than when I let the SQL API filter out the sensors: around 15 seconds. My question is, what can I do to improve upon this? Is my document list too long? Is there any way I can set this up differently? The sensor key-value pairs are not fixed and can differ per device. Some more technical details: I have an unlimited collection, partitioned on /deviceid. I have used the standard index policy of Azure (which is index everything), as well as excluding the sensors from it. I have tried all the tips as described here: https://docs.microsoft.com/en-us/azure/cosmos-db/performance-tips-java This is my current Java setup, although I have tried lots of different things:

//This piece of code is currently in a separate thread. There is one thread per deviceId to query
documentClient = new DocumentClient(HOST, MASTER_KEY,
        ConnectionPolicy.GetDefault(), ConsistencyLevel.Session);
FeedOptions options = new FeedOptions();
options.setEnableCrossPartitionQuery(true);
documentList = documentClient
        .queryDocuments(getAlldataCollection().getSelfLink(), query, options)
        .getQueryIterable().toList();

I'm fairly sure MongoDB can query hundreds of thousands of documents within seconds, so I'm pretty sure I'm doing something wrong with my current setup. Any suggestions? A: I cannot provide a definite solution to your problem, but hopefully I can give you ideas to get to a solution with the desired performance level.

NoSQL a good fit?

First, to get this off the table, are you sure your scenario is a good fit for noSQL?
CosmosDB shines when the primary scenario is working with pinpoint data (create, select-by-id, update-by-id, delete-by-id). Yes, it definitely can do limited mass operations and aggregations, but querying millions is pushing it. SQL, on the other hand, is designed to work with large sets of data and is really good at doing aggregations. Let's assume this design decision was carefully weighed and noSQL is the best fit for unmentioned reasons.

Debug for hard data

Don't do performance tests against the local CosmosDB emulator. Don't. That's obviously not the real thing (consider network, storage bandwidth/seek times, system impact), but only emulates it. You could get very misleading results. Spin up a real test instance.

The first step in debugging your query performance problems would be to enable query-execution-metrics and see where those 20 seconds are actually spent. Also, loading 38000 documents most likely will never arrive in a single batch; check how many continuation queries are actually made to the CosmosDB server. Also, run a profiler and make sure the bottleneck is really in CosmosDB. If you are making many continuation calls AND concurrently querying over many devices, then there may be a lot happening in the client as well, and queries flying on the network. Make sure you are not throttled in the client (GC, Http stack, internal locking, connection/thread pools, etc).

Data/Query design

Reduce queried data

If you already know deviceid, then don't query for it 38000+ times - that's just ballast.

Reduce model object size

/* around 200 more key - value pairs */

That's a huge object. I would test if splitting it up into smaller objects would help CosmosDB spend less time internally loading and processing documents. Ex:

{
    "p_A2": "93095",
    "p_A3": "303883",
    "battery" : {
        "current": "4294967.10000",
        "gauge": "38.27700",
        "voltage": "13.59400"
    }
    ...
}

Not sure how docDB is internally storing the documents (full graph vs subdocuments), but you could test if it makes an impact. The difference of 2s vs 20s is so huge that it hints that it may be relevant.

Sensors array?

The query only queries for the first measurement set. Is the array necessary? You could test if omitting this level has any performance impact.

Data types in model

battery_current etc. are storing sensor measurement numerical values as longish strings. If they are always numbers, then you could store them as numbers instead and reduce document size in server & client. Client performance would probably be impacted more (string = heap allocation). Ex: "4294967.10000" is 13 chars = 26B in client (UTF-16).

App design

Do you really need all those 38000 or millions of documents every time? Consider if you could get by with a subset.

If this is for data movement, then consider other options (Data Factory, change feed processing) to incrementally transfer measurements. If this is an on-request app need, then consider loading smaller timeframes (= fewer documents) and use caching for past timeframes. If you can, pre-aggregate results before caching. Sensor data of the past is most likely not going to change.

As always, consider your business case for ROI. Optimization is always possible, but sometimes it's more beneficial to adjust a business requirement instead of the technical solution.
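The "reduce model object size" and "data types in model" suggestions can be sketched as a one-off transformation over each sensor entry. This Python sketch is illustrative only: the key names come from the question's document, and the grouping rule (prefix `p_battery`) is my own assumption; the real document has ~200 keys and would need a fuller mapping.

```python
def restructure_sensor(sensor):
    """Group flat p_battery* keys under a nested 'battery' object and convert
    numeric strings to numbers, shrinking the stored document."""
    out, battery = {}, {}
    for key, value in sensor.items():
        # "4294967.10000" (13 chars, ~26 B as UTF-16) becomes an 8-byte double
        try:
            value = float(value)
        except (TypeError, ValueError):
            pass  # leave non-numeric values as-is
        if key.startswith("p_battery"):
            battery[key[len("p_battery"):]] = value  # "current", "gauge", ...
        else:
            out[key] = value
    if battery:
        out["battery"] = battery
    return out
```

Whether the smaller, typed documents actually speed up the queries is exactly the kind of thing the query-execution-metrics test above would confirm.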
Q: PROC REPORT: Is it possible to not print a specific row? I am writing a report to Excel with PROC REPORT. The first column is grouped, and I add a break line before some values of it. This break line contains the value of the column if it matches some conditions. E.g. my table contains these rows:

nom_var | val1 | val2 | val3 |
_____________________________________________________
Identification | . | . | . |
Name | Ou. Dj. | . | . |
date B. | 00/01/31 | . | . |
NAS | 1122334 | . | . |
Revenues | . | . | . |
 | R1 1250 $ | R2 1000 $ | . |
_____________________________________________________

In the report I have:

_____________________________________________________
Identification
_____________________________________________________
Identification | . | . | . |
Name | Ou. Dj. | . | . |
date B. | 00/01/31 | . | . |
NAS | 1122334 | . | . |
____________________________________________________
Revenues
_____________________________________________________
Revenues | . | . | . |
 | R1 1250 $ | R2 1000 $ | . |
_____________________________________________________

Please, how can I remove the lines containing "Identification" and "Revenues" in the first column "nom_var"? I mean:

Identification | . | . | . |

and

Revenues | . | . | . |

Here is my code:

ods listing close;
*general options;
options topmargin=1in bottommargin=1in leftmargin=0.25in rightmargin=0.25in ;
%let fi=%sysfunc(cat(%sysfunc(compress(&nom)),_portrait_new.xls));
ods tagsets.ExcelXP path="&cheminEx."
    file="&fi"
    style=seaside
    options(autofit_height="yes" pagebreaks="yes" orientation="portrait"
            papersize="letter" sheet_interval="none" sheet_name="Infos Contribuable"
            WIDTH_POINTS = "12" WIDTH_FUDGE = ".0625"
            /* absolute_column_width is in pixels */
            absolute_column_width="120,180,160,150" );
ods escapechar="^";

*report 1;
/* taxpayer */
proc report data=&lib..portrait nowindows missing spanrows noheader
    style(report)=[frame=box rules=all foreground=black Font_face='Times New Roman'
                   font_size=10pt background=none]
    style(column)=[Font_face='Times New Roman' font_size=10pt just=left] ;

    /* the table header is the first variable of the table ==> on the left of the report */
    define nom_var / group order=data
        style(column)=[verticalalign=middle background=#e0e0e0 /* gray */
                       foreground=blue fontweight=bold ];
    /* content */
    define valeur_var1 / style(column)=[verticalalign=top];
    define valeur_var2 / style(column)=[verticalalign=top];
    define valeur_var3 / style(column)=[verticalalign=top];

    compute before nom_var / style=[verticalalign=middle background=#e0e0e0
                                    foreground=blue fontweight=bold font_size=12pt];
        length rg $ 50;
        if nom_var in ("Identification","Actifs", "Revenus") then do;
            rg= nom_var;
            len=50;
        end;
        else do;
            rg="";
            len=0;
        end;
        line rg $varying50. len;
    endcomp ;

    title j=center height=12pt 'Portrait du contribuable';
run;

ods tagsets.ExcelXP close;
ods listing;

A: You have an artificial data construct that is not in a categorical form appropriate to the task of outputting your informative line. This sample shows how a DATA step can tweak the data so you have a mySection variable that organizes the rows introduced by the nom_var rows of interest (Identification and Revenues). The new arrangement of data is better suited for the task you are undertaking.

data have;
  length nom_var val1 val2 val3 $50;
  infile cards dlm='|';
  input nom_var val1 val2 val3 ;
datalines;
Identification | . | . | .
Name | Ou. Dj. | . | .
date B. | 00/01/31 | . | .
NAS | 1122334 | . | .
Revenues | . | . | .
 | R1 1250 $ | R2 1000 $ | .
;
run;

Tweak the original data so there is a categorical mySection:

data need;
  set have;
  retain mySection;
  select (nom_var);
    when ('Identification') mySection = nom_var;
    when ('Revenues') mySection = nom_var;
    otherwise OUTPUT;
      * NOTE: Explicit OUTPUT means there is no implicit OUTPUT,
        which means the rows that do mySection= are not output;
  end;
run;

Use the new variable (mySection) for grouping (compute before), but keep its column hidden (noprint):

proc report data=need;
  column mySection nom_var val1 val2 val3;
  define mySection / group noprint;
  compute before mySection;
    line mySection $50.;
  endcomp;
run;
Q: Refresh DataServiceCollection I wonder if there is code that refreshes my DataServiceCollection, which is loaded in a WPF client using a BeginExecute async await Task, as shown below:

public static async Task<IEnumerable<TResult>> ExecuteAsync<TResult>(this DataServiceQuery<TResult> query)
{
    //Thread.Sleep(10000);
    var queryTask = Task.Factory.FromAsync<IEnumerable<TResult>>(query.BeginExecute(null, null), (asResult) =>
    {
        var result = query.EndExecute(asResult).ToList();
        return result;
    });
    return await queryTask;
}

I call the extension method as follows:

public async void LoadData()
{
    _ctx = new TheContext(the URI);
    _dataSource = new DataServiceCollection<TheEntity>(_ctx);
    var query = _ctx.TheEntity;
    var data = await Task.Run(async () => await query.ExecuteAsync());
    _dataSource.Clear(true);
    _dataSource.Load(data);
}

LoadData is called in a ViewModel
Change a field value by hand using SQL Management Studio
Call LoadData again <<<< Does not refresh!!!

Meanwhile, if I use the Load method, the data is refreshed without any problems:

var query = _ctx.TheEntity;
_dataSource.Load(query);

Another issue is that I do not know how to cancel client changes. The last question is whether MergeOption has any effect on BeginExecute..EndExecute, or whether it only works with the Load method? A: I suspect that the data context does not like being accessed from multiple threads. I recommend that you first remove all processing from your extension method:

public static Task<IEnumerable<TResult>> ExecuteAsync<TResult>(this DataServiceQuery<TResult> query)
{
    return Task.Factory.FromAsync(query.BeginExecute, query.EndExecute, null);
}

Which you should use without jumping onto a background thread:

public async Task LoadDataAsync()
{
    _ctx = new TheContext(the URI);
    _dataSource = new DataServiceCollection<TheEntity>(_ctx);
    var query = _ctx.TheEntity;
    var data = await query.ExecuteAsync();
    _dataSource.Clear(true);
    _dataSource.Load(data);
}
Q: Given the characteristic and minimal polynomial of a linear operator, find the possibilities of its Jordan form Given $T: V \to V$ a linear operator, and given $P_T(x) = x^2(x-1)^5(x+4)^4$ and $M_T(x) = x(x-1)^2(x+4)^3$. Find the possibilities of the Jordan form of $T$. So I said this: Eigenvalues $0,1,-4$. The sum of the sizes of the Jordan blocks for eigenvalue $0$ is $2$, for eigenvalue $1$ is $5$, and for eigenvalue $-4$ is $4$. And then I said, the max block size for eigenvalue $0$ is $1$, for eigenvalue $1$ is $2$, and for eigenvalue $-4$ is $3$. Since every block for eigenvalue $0$ must be $1 \times 1$, and the only partition of $4$ with largest part $3$ is $3+1$, the only freedom left is the partition of $5$ for eigenvalue $1$, which must have largest part exactly $2$. So from this I figured out the following possibilities: (I) For eigenvalue $0$: 2 blocks of size $1 \times 1$ each. For eigenvalue $-4$: one block of size $3 \times 3$ and one block of size $1 \times 1$. For eigenvalue $1$: two blocks $2 \times 2$ and one block $1 \times 1$. (II) For eigenvalue $0$: 2 blocks of size $1 \times 1$ each. For eigenvalue $-4$: one block of size $3 \times 3$ and one block of size $1 \times 1$. For eigenvalue $1$: one block $2 \times 2$ and 3 blocks $1 \times 1$. Is that correct or am I missing something? A: Apparently my answer was one-hundred-per-cent correct, I asked my professor about it.
Q: Eggs won't hatch while travelling on a boat I'm on a boat and the average speed is about 15 km/h, but the egg progress does not change. It worked fine the first day but now it doesn't. It usually works when I'm near land and travel slower, so I guess it only works near land now? Or is the speed required different while at sea? I could not have been banned, since I haven't cheated or even played the game while going too fast. In addition, I really doubt it has anything to do with connection issues, since I'm writing this while travelling and I haven't had any problems thus far. Lastly, this has been happening over the course of a week at the moment. The same problem keeps occurring: distance only seems to count when travelling near land, as even ~5 mph speeds don't count towards hatching eggs, so I'm guessing egg grinding is not supposed to happen at sea. A: Your eggs not hatching is independent of your mode of transit. This means that whether you are on a boat or not does not affect your eggs hatching. What does affect distance being recorded is connection to the internet, travel speed, and being soft banned. Connection to the internet - when you visit a lake, often you are going to remote places with no wifi and poor cell coverage. Thus, if the app cannot accurately track your location and connect to Niantic servers, your distance will not be tracked for your eggs. Check your available networks and make sure that you still have ample data coverage on your boat. Travel speed - it is now confirmed by a large number of players that traveling too fast can affect egg hatching. It is a general consensus among players that the maximum speed for hatching eggs is ~10-12mph (15km/h is about 9.3mph), so perhaps consider slowing down, as you are traveling at just about the maximum speed for tracking egg hatching distance. You can reference this question for more about speed: How fast can I travel?
Soft ban - if you have been recorded for cheating via GPS spoofing or any other form of cheating, you will likely be soft-banned (or banned for a short amount of time). When you are soft banned, you cannot catch Pokemon (they all run away), some distances don't track, and you can't get items from PokeStops. Refer to this question for more about soft bans: Why can't I catch Pokemon in Pokemon Go anymore? In addition, if it worked yesterday and doesn't work today, it might just be a server issue with Niantic, and will resolve itself in time. It's a bit hard to conclude anything with only 24 hours of evidence, but wait a little longer and try again. If your distance is still not tracking, consider one of the above prognoses.
Q: Why Haskell range needs spaces when using [LT .. GT]? Why is it that when I do a range in Haskell, this works: [LT .. GT] but this doesn't: [LT..GT] and what does this cryptic error mean:

<interactive>:1:2: Failed to load interface for `LT': Use -v to see a list of the files searched for.

<interactive>:1:2: A section must be enclosed in parentheses thus: (`LT..` GT)

However, when I use Ints, the second form (without spaces) works: [1..3] A: It's because LT.. is interpreted as the . operator in the LT module.

<interactive>:1:2: Failed to load interface for `LT': Use -v to see a list of the files searched for.

It means GHC cannot find a module named LT. The same message appears if you use a qualified name with a non-existing library:

Prelude> SDJKASD.sdfhj

<interactive>:1:1: Failed to load interface for `SDJKASD': Use -v to see a list of the files searched for.

<interactive>:1:2: A section must be enclosed in parentheses thus: (`LT..` GT)

In Haskell, a section is an infix operator with a partial application, e.g. (* 3), which is equivalent to \x -> x * 3. In your case, LT.. is interpreted as an infix . operator, and the GT is part of the section formed with this operator. A section must be enclosed in parentheses, and since the misinterpreted expression is not, the parser will complain like this. Another example of the error:

Prelude> [* 3]

<interactive>:1:2: A section must be enclosed in parentheses thus: (* 3)

A: Because of the maximal munch rule, LT.. gets interpreted as the qualified name of the (.) operator in the LT module. Since you can define your own operators in Haskell, the language allows you to fully qualify the names of operators in the same way as you can with functions. This leads to an ambiguity with the .. used in ranges when the name of the operator starts with ., which is resolved by using the maximal munch rule, which says that the longest match wins. For example, Prelude.. is the qualified name of the function composition operator.

> :info Prelude..
(.) :: (b -> c) -> (a -> b) -> a -> c -- Defined in GHC.Base
infixr 9 .
> (+3) Prelude.. (*2) $ 42
87

The reason why [1..3] or [x..y] works is that a module name must begin with an upper-case letter, so 1.. and x.. cannot be qualified names. A: Failed to load interface for `LT': Kenny and Hammar have explained what this means: LT.. is assumed to be the . function in the LT module. Since there is no LT module, your interpreter naturally cannot load it. A section must be enclosed in parentheses thus: (LT.. GT) In the same vein, assuming that LT.. is a reference to the . function in the LT module, your interpreter is apparently assuming that you made the mistake of using square brackets instead of parens in order to form a "section" (a section is, for example, (+1)). This is simply an obnoxious little wart in the Haskell language; just remember to use spaces.
Q: Is there a more efficient way to convert a directory of XML Files to a single Pandas Dataframe? I have a directory filled with XML Files. Now, I have some code to read and write the data from these XML Files into a Pandas Dataframe. I am converting the XML Files to a dictionary, which I then json_normalize. Is there a more efficient way to do this? Do you have any suggestions? Here is my code:

#libraries
from pandas.io.json import json_normalize
import pandas as pd
import xmltodict
import os
import glob
import errno
import tkinter
from tkinter import Frame, Button

#directory specifications and variables
path = r'C:\Users\Nutzer\Desktop\XML_Files\*.xml'
files = glob.glob(path)
frame_list = []

def convert_xml(files):
    for name in files:
        #the try clause ensures that non-xml files are passed (skipped)
        try:
            with open(name) as f:
                #reading, parsing and normalization of XML Data
                frame_list.append(json_normalize(xmltodict.parse(f.read()), sep = '_'))
            pass
        #exception is raised in case file is not found or dic is full
        except IOError as exc:
            if exc.errno != errno.EISDIR:
                raise
    return frame_list

#concat list of frames to one large frame; sort=True ensures the insertion of NaN for missing values
df = pd.concat(convert_xml(files), ignore_index=True, sort=True)
df

Here is an example of what my XML Files look like:

<?xml version="1.0" encoding="UTF-8"?>
<Data>
    <Contract_Information>
        <Company>Enterprisa</Company>
        <Time_Stamp>2019-07-18T10:24:51</Time_Stamp>
        <Datei-ID>3785690</Datei-ID>
    </Contract_Information>
    <Calculations Document_ID="2668815">
        <Calculationsoftware>Sonstige</Calculationsoftware>
        <Contractdate>2019-05-31</Contractdate>
        <Documentnumber>23864836</Documentnumber>
        <case>
            <casenumber>XX123456778</casenumber>
        </case>
    </Calculations>
    <Closing_case>false</Closing_case>
    <Additionaldata>
        <customer_ID>354634287</customer_ID>
        <services>3</services>
    </Additionaldata>
    <Messages>
        <Message Code="1" Stufe="Notification">Message</Message>
    </Messages>
</Data>

Note that I have multiple of these files, where they have a similar structure but may not always contain the same number of fields and attributes. That's why I used sort=True and ignore_index=True in my pd.concat line. A: Data is not fully extracted, but you can use this as a base and work on it. CODE:

import xml.etree.ElementTree as et
import os

# a raw string literal cannot end with a backslash
path_to_xmls = r'C:\Users\Nutzer\Desktop\XML_Files'
xml_files = [pos_xml for pos_xml in os.listdir(path_to_xmls) if pos_xml.endswith('.xml')]

for xml_file in xml_files:
    # os.listdir returns bare file names, so join the directory back on
    xtree = et.parse(os.path.join(path_to_xmls, xml_file))
    xroot = xtree.getroot()
    for node in xroot:
        for n in node:
            # (n.text or '') guards against empty elements whose .text is None
            print(n.tag + ':' + (n.text or ''))

OUTPUT:

Company:Enterprisa
Time_Stamp:2019-07-18T10:24:51
Datei-ID:3785690
Calculationsoftware:Sonstige
Contractdate:2019-05-31
Documentnumber:23864836
case:
customer_ID:354634287
services:3
Message:Message
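If the goal is just a flat table, one alternative worth measuring (the helper name below is my own; pandas is only needed at the very end) is to flatten each document into a plain dict with the standard library and build the DataFrame once, instead of concatenating one json_normalize frame per file:

```python
import xml.etree.ElementTree as ET

def flatten_xml(text):
    """Flatten one XML document into a single dict with underscore-joined
    keys, similar to json_normalize(..., sep='_'). Attributes and repeated
    tags are ignored to keep the sketch short."""
    record = {}
    def walk(node, prefix):
        children = list(node)
        if not children:
            # leaf element: record its text under the joined key path
            record[prefix] = (node.text or "").strip()
        for child in children:
            walk(child, prefix + "_" + child.tag if prefix else child.tag)
    walk(ET.fromstring(text), "")
    return record
```

A single `pd.DataFrame([flatten_xml(open(p).read()) for p in files])` then replaces the concat loop, and pandas still fills NaN for keys missing in some files automatically.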
Q: PhpStorm git integration doesn't show "compare" menu item Very weird. Since today morning the menu items have changed in the Git integration panel. Let's say I'm on branch "develop" and want to compare another branch. Normally the menu shows a "Compare" item, which now has been replaced by "Compare with...". I'm not sure what I changed on the repository to cause this. Any thoughts? A: I'll move the comment into a reply: That's just a wording change on IDE side.
Q: How do I access POST parameters in actions.class.php in symfony 1.4? I'm still fairly new to symfony, so my apologies if this is a stupid question. I'm using symfony 1.4 with Doctrine. My colleague wrote a JavaScript to make a report from our client side widget to our server:

$j.post(serverpath, {widget_id:widget_id, user_id:user_id, object_id:object_id, action_type:action_type, text_value:stuff_to_report });

I created a route in routing.yml to receive this request:

widget_report:
  url: /widget/report/
  options: {model: ReportClass, type: object }
  param: {module: widget, action: reports}
  requirements:
    object_id: \d+
    user_id: \d+
    action_type: \d+
    sf_method: [post]

I created an action in actions.class.php to handle the request:

public function executeReports(sfWebRequest $request)
{
    foreach($request->getParameterHolder()->getAll() as $param => $val)
    {
        // $param is the query string name, $val is its value
        $this->logMessage("executeReports: $param is $val");
    }
    try {
        [...]
        $actionHistory->setUserId($request->getParameter('user_id', 1));
        $this->logMessage("executeReports success: ");
    } catch {
        [...]
    }
}

My log file reports:

Jul 20 18:51:35 symfony [info] {widgetActions} Call "widgetActions->executeReports()"
Jul 20 18:51:35 symfony [info] {widgetActions} executeReports: module is widget
Jul 20 18:51:35 symfony [info] {widgetActions} executeReports: action is reports
Jul 20 18:51:35 symfony [info] {widgetActions} executeReports success:

I must be missing a step here. We had this working when passing the variables in the URL (and specifying the variables in the route, of course), but for various reasons we want to use POST instead. Why are my POST parameters not accessible in actions.class.php?
A: Try this code:

if ($request->isMethod('post')) {
    foreach($request->getPostParameters() as $param => $val) {
        $this->logMessage("executeReports: $param is $val");
    }
} else {
    $this->logMessage("executeReports: request method is not POST");
}

If this doesn't help, try:

$this->logMessage("executeReports: " . var_export($_POST, true));

Or enable the symfony debug toolbar and see if POST vars are coming from the browser. If the $_POST array is empty, then the problem could be in wrong request headers; to check that, try this:

$fp = fopen('php://input','r');
$this->logMessage("executeReports: " . stream_get_contents($fp));

Good luck! EDIT: Maybe you can find your answer here: $.post not POSTing anything. E.g. you should check that all JS vars are non-empty. In either case, I'd recommend you use the Firebug console to see what data is sent to the server.
Q: Logo sizing while in mobile I have attached an image to help explain what I am trying to accomplish. I would like my logo to display similarly to the image when viewed on mobile. I'm not sure if it is some padding that is causing it to sit above, rather than horizontal to, the menu tab. The website address is https://chris-schilling-jksc.squarespace.com/ Password is fsj A: It is displaying horizontally; what I suspect is that you have set a fixed height. It might help if you remove that style.

#header #logoWrapper, #header #logoImage {
    width: 350px;
    height: auto;
}

It is also better to combine it with @media queries for mobile if it works for the desktop:

@media screen and (max-width:640px) {
    #header #logoWrapper, #header #logoImage {
        width: 350px;
        height: auto;
    }
}

@media screen and (max-device-width:640px) {
    #header #logoWrapper, #header #logoImage {
        width: 350px;
        height: auto;
    }
}
Q: notepad++ search and replace columns I have used the Sysinternals tool ListDLLs to list all loaded DLLs for a certain process. Now I have a list that looks like this:

0x0000000077260000 0x1a9000 C:\Windows\SYSTEM32\ntdll.dll
Verified: Microsoft Windows
Publisher: Microsoft Corporation
Description: DLL für NT-Layer
Product: Betriebssystem Microsoft® Windows®
Version: 6.1.7601.18247
File version: 6.1.7601.18247
Create time: Thu Aug 29 04:17:08 2013
0x00000000744c0000 0x3f000 C:\Windows\SYSTEM32\wow64.dll
Verified: Microsoft Windows
Publisher: Microsoft Corporation
Description: Win32 Emulation on NT64
Product: Microsoft® Windows® Operating System
Version: 6.1.7601.18247
File version: 6.1.7601.18247
Create time: Thu Aug 29 04:19:10 2013

This list is pretty long. Now I would like to use Notepad++ to change every "0x0000*" line into only "C:\*". Is this possible? Thanks a lot in advance, Wolfgang A: Does the following help? Find: (0x00.*).(0x.*).(C:.*) Replace: \3
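The substitution can be sanity-checked outside Notepad++ with the same pattern, e.g. in Python, whose regex semantics match Notepad++'s for this case (three greedy groups, keep only the third):

```python
import re

pattern = r"(0x00.*).(0x.*).(C:.*)"  # the Find expression from the answer

line = r"0x0000000077260000 0x1a9000 C:\Windows\SYSTEM32\ntdll.dll"
print(re.sub(pattern, r"\3", line))  # C:\Windows\SYSTEM32\ntdll.dll

# Lines without the two leading hex columns don't match and stay untouched:
print(re.sub(pattern, r"\3", "Verified: Microsoft Windows"))
```

Greedy matching makes the first group swallow the base address, the second the size, and the third group keeps everything from "C:" onward, so only the DLL path survives the replacement.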
Q: What is kernel mode software? I'm looking into signing a driver I made. A lot of the Microsoft documentation references "kernel mode software." What is that? It's mentioned in a lot of places, but it doesn't seem to be defined anywhere. How do I know if my driver is kernel mode software? My driver is a customized version of the Silicon Labs VCP driver. Thanks. A: Here is a good general link on this: Windows Programming/User Mode vs Kernel Mode
Q: Voltage at output pin of a controller I would like to ask a simple question. I have a controller whose pins were configured as output. The controller runs at 3.3 V and it's from the PIC18F family of controllers. Why is it that the output pin shows a voltage of only 2.23 V when configured as an output? Is that the maximum, or should it show >3.0 V? It's been on my mind for some time. Is it a natural thing or some configuration mishap? I would like your take on this phenomenon. edit:

void main()
{
    TRISD=0x00;
    while(1)
    {
        PORTEbits.RE2=1;
    }
}

Regarding the schematic, all the Vdd and Vss pins were connected to 3.3 V and GND respectively. A: Well, you're not disclosing the part number of the chip nor the exact schematic, so it's pretty hard to guess, but I'll try. Take a typical PIC18F part, the PIC18F1220. When operating from a 3V supply, the \$V_{OH}\$ of a port pin looks like this: You are observing a drop of about 1.07V from Vdd. From the graph, that would typically represent a current draw of about 8mA. So I might guess that you've got about a 200-300 ohm load (to Vss) on the port pin, which is quite a heavy load. Edit: Given the actual data sheet for the PIC18F in question, a curve similar to the above is not provided; however, there is a relevant line in the specifications: The PIC is guaranteed to provide at least 2.4V provided you draw less than 6mA over the -40~85°C temperature range. Which leads us to the conclusion that the load exceeds 6mA by some margin (since it's probably not 85°C and since 2.23V < 2.4V), as previously concluded.
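The 200-300 ohm estimate follows directly from Ohm's law; here is a quick check (the 8 mA figure is read off the V_OH graph, so this is approximate):

```python
# Ohm's-law check of the answer's load estimate.
v_dd = 3.3                # supply voltage, volts
v_oh = v_dd - 1.07        # observed output-high voltage: 2.23 V at the pin
i_load = 0.008            # current read off the V_OH curve, amps (~8 mA)
r_load = v_oh / i_load    # implied load resistance to Vss
print(round(v_oh, 2), "V /", i_load, "A =", round(r_load), "ohm")
```

which lands at about 279 ohms, inside the 200-300 ohm range quoted in the answer.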
Q: SQL for finding the distance using latitude and longitude returns wrong distance I need to get the distance between locations from their latitude and longitude. The MySQL query runs without error, but it returns the wrong distance value. When I enter the same latitude and longitude as in the database it gives the distance as "2590.4238273460855" instead of zero, and I don't know what's wrong. The MySQL query is given below; here latitude and longitude are my table column names:

$sql = "SELECT id,(3956 * 2 * ASIN(SQRT( POWER(SIN(( $latitude - latitude) * pi()/180 / 2), 2) +COS( $latitude * pi()/180) * COS(latitude * pi()/180) * POWER(SIN(( $longitude - longitude ) * pi()/180 / 2), 2) ))) as distance from table_name ORDER BY distance limit 100";

Can anyone help me, please? A: There is an error in your Haversine formula. The Haversine formula is:

Haversine_distance = r * 2 * ASIN(SQRT(a))
where a = POW(SIN(latDiff / 2), 2) + COS(lat1) * COS(lat2) * POW(SIN(LonDiff / 2), 2)

Therefore, to correct your code, change your query to:

"SELECT id,(3956 * 2 * ASIN(SQRT( POWER(SIN(( $latitude - latitude) / 2), 2) +COS( $latitude) * COS(latitude) * POWER(SIN(( $longitude - longitude ) / 2), 2) ))) as distance from table_name ORDER BY distance limit 100";

(Note that this form assumes the angles are already in radians; if the columns store degrees, the pi()/180 conversion must be kept.)
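For reference, here is a self-contained haversine implementation in Python using the same 3956-mile Earth radius as the query. Note that coordinates given in degrees must be converted to radians before the trigonometric terms are applied:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in degrees, in miles."""
    r = 3956  # Earth radius in miles, matching the constant in the query
    # The degree-to-radian conversion is essential (pi()/180 in SQL).
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Identical coordinates must give zero, the sanity check from the question.
print(haversine_miles(10.0, 20.0, 10.0, 20.0))  # 0.0
```

One degree of longitude at the equator then comes out at roughly 69 miles, and identical coordinates give exactly zero, which is the check the question performs.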
Q: How to block "via" senders in Gmail I have an annoying problem in Gmail. Spammers are bypassing the Gmail spam filters and are so clever that I am not able to create a manual filter of my own to block them. They are sending me junk mail from e-mail IDs of the form: [email protected] via watchstore.cleanmail.in [email protected] via watchstore.cleanmail.in In general, [email protected] via watchstore.cleanmail.in. Now, I can't set a filter for each and every mail, as they come from different IDs. The common part is the via address, which is watchstore.cleanmail.in. But when I try creating a filter with watchstore.cleanmail.in in the From field, Gmail doesn't list these emails. In short, the filter is not able to match via addresses. The Report Spam/Unsubscribe options aren't working. How can I get rid of these annoying spammers? A: The reason Gmail's built-in filters won't work in this situation is because they can't be applied to the "X-Forwarded-For" header. That header is where the "via" domain info is stored. My solution was to use a Google Apps Script to check my inbox every few minutes and automatically filter out messages sent via a specific domain. It actually works really well. Since implementing the script, I haven't had to deal with this type of spam at all. You can read my full walkthrough here: http://www.geektron.com/2014/01/how-to-filter-gmail-using-email-headers-and-stop-via-spam/
Q: How to make a call on button click in a ListView? I have the following code inside the getView() method of my CustomAdapter class. If I simply show a Toast, the Toast appears, but with the following code to make a call, the application crashes. What am I doing wrong here?

call.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        Intent callIntent = new Intent(Intent.ACTION_CALL);
        callIntent.setData(Uri.parse("tel:" + phone.getText().toString()));
        context.startActivity(callIntent);
    }
});

and the stacktrace shows this error:

AndroidRuntime: FATAL EXCEPTION: main Process: com.example.hammad.contactme, PID: 26713 android.util.AndroidRuntimeException: Calling startActivity() from outside of an Activity context requires the FLAG_ACTIVITY_NEW_TASK flag.

A: Try adding the correct permission for making a call in AndroidManifest

<uses-permission android:name="android.permission.CALL_PHONE" />

and if you are running on Android 6.0 or above, you should also request this permission at runtime: getting runtime permission. Note that the exception message itself points to a second issue: when startActivity() is called from a non-Activity context (such as the context held by the adapter), the Intent needs the FLAG_ACTIVITY_NEW_TASK flag (callIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)), or you should pass the hosting Activity's context instead.
Q: Why does the mirror modifier ruin the boolean shape? The top shape is the one I am trying to replicate. I can use the boolean modifier to create those shapes, but when I put the mirror modifier below the boolean modifier, the faces are lost for some reason. Does anyone know what causes this problem? My Blend File A: If you take the mesh with only the mirror, you can notice that the mirror does not merge the parts: Here, in the top part of the image, the mesh before the mirror is applied and, on the bottom, after its application (the two parts are not merged). As they are not merged, Blender can only see open meshes, so the boolean differences operate on open meshes too, and that's why the result is open as well. To fix that, just move the empty a bit along X. Secondly, and for the same reason, the mirror needs to be set on top of the stack (at least before the booleans): if it is not, when a boolean operates, the mesh is open and you get the result shown in your question. So the best is to set the mirror on top, then the subdivision, and last the booleans: Here is the result (after the empty is moved a bit along +X and the modifiers reordered):
Q: Is it OK to answer based on someone's older answer on SE? Possible Duplicate: Is it okay to copy-paste answers from other questions? What to do when plagiarism is discovered? This question will probably be closed in a few minutes. It is a duplicate, but I couldn't find the original question with a meta search. Anyway, here is the older answer, and here is the new answer. In this case, isn't voting to close the new question as a possible duplicate the best choice? It looks a little unethical to me. A: Nothing wrong with that, so long as you give proper attribution. Of course, it is better to add details, making it a better answer. However, if the older answer fully answers the new question, chances are good that the new question is a duplicate of the older one and should be closed as such.
Q: Invoke wso2 admin services SOAPUI I'm working with WSO2 admin services. I get the URL http://localhost:9763/services/AuthenticationAdmin?wsdl for AuthenticationAdmin. Now, when I hit the login operation with admin, admin, 127.0.0.1, I get true as the return value. The ESB console shows logged in. But when I hit the logout operation, I don't get any response. I also notice that the header of the response does not contain any session ID. My ESB is 4.6.0. login request:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:aut="http://authentication.services.core.carbon.wso2.org">
   <soapenv:Header/>
   <soapenv:Body>
      <aut:login>
         <!--Optional:-->
         <aut:username>admin</aut:username>
         <!--Optional:-->
         <aut:password>admin</aut:password>
         <!--Optional:-->
         <aut:remoteAddress>127.0.0.1</aut:remoteAddress>
      </aut:login>
   </soapenv:Body>
</soapenv:Envelope>

login response:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <ns:loginResponse xmlns:ns="http://authentication.services.core.carbon.wso2.org">
         <ns:return>true</ns:return>
      </ns:loginResponse>
   </soapenv:Body>
</soapenv:Envelope>

In the response to login, I only get these 6 header fields:

> Date Tue, 25 Jun 2013 14:31:42 GMT
> Transfer-Encoding chunked
> #status# HTTP/1.1 200 OK
> Content-Type text/xml; charset=UTF-8
> Connection Keep-Alive
> Server WSO2-PassThrough-HTTP

I don't get a session ID. Can you please point out where I am going wrong? My scenario is that I want to log in to WSO2 and then hit some other admin service operation. A: After some time debugging with the Java client (see my other answer), I noticed that the SOAPUI Endpoint was not using the 9443 port that I was using in the Java client. See the image below. The 8243 port was picked up from the WSDL by SOAPUI.
When I changed the SOAP UI Endpoint port from 8243 to 9443, the JSESSIONID gets returned in the response, as seen below:

HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=573D42750DE6C0A287E1582239EB5847; Path=/; Secure; HttpOnly
Content-Type: text/xml;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 26 Jun 2013 22:14:20 GMT
Server: WSO2 Carbon Server

<?xml version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body><ns:loginResponse xmlns:ns="http://authentication.services.core.carbon.wso2.org"><ns:return>true</ns:return></ns:loginResponse></soapenv:Body></soapenv:Envelope>

I have no idea what the difference is between the ports 8243 and 9443, or why one returns a JSESSIONID and the other doesn't.
Q: Where does the Doctor take the TARDIS to get it repaired? Where does the Doctor take the TARDIS to get it repaired? Like in the episode "The Eleventh Hour". He can't go to Gallifrey as it is time-locked. He can't go to the past, as technology isn't advanced enough. If he goes to the future, where does he go? A: I believe he does most of the repairs himself, beyond what the TARDIS can do with its self-repair systems (as happens in "The Eleventh Hour"). We've seen that the TARDIS has the ability to reconfigure its internal layout and 'grow' new components (seen in "Journey to the Centre of the TARDIS"), and, at the end of "The Eleventh Hour", even produce a new Sonic for him. It appears that the TARDIS has quite a few 'standard' repairs it can do without the Doctor having to be involved, regrowing damaged/lost rooms being an example. This also ties back to an element that was cut from a few episodes: that TARDISes are grown. (E.g., cut from "Journey's End": "the Doctor hands his clone a coral-like piece of the TARDIS, telling him to grow his own.") As you mention, there aren't many places he can get it repaired; he has been on the run in one sense or another for a long time. That being said, we have seen at least one: Logopolis. Back when it was still around and functioning, the Logopolitans were going to fix his chameleon circuit with Block Transfer Computation. There may be other examples. We've also seen him working on repairing parts of the TARDIS in countless episodes; even companions have helped him from time to time. Although, when he stole the TARDIS it was in the repair shop; not all Time Lords necessarily have the same technical skills, and others may have needed to 'bring it to the shop' for repairs. Clearly the Doctor has the ability to perform at least some, if not most, repairs that it has needed (an exception being what the Logopolitans were going to fix for him), based on how often we've seen it damaged and him repairing it.
Q: Trigger a spark job whenever an event occurs I have a Spark application which should run whenever it receives a Kafka message on a topic. I won't be receiving more than 5-6 messages a day, so I don't want to take the Spark Streaming approach. Instead I tried to submit the application using SparkLauncher, but I don't like that approach, as I have to set the Spark and Java classpath programmatically within my code, along with all the necessary Spark properties like executor cores, executor memory, etc. How do I trigger the Spark application to run from spark-submit but make it wait until it receives a message? Any pointers are very helpful. A: You can use the shell script approach with the nohup command to submit a job like this: "nohup spark-submit shell script <parameters> 2>&1 < /dev/null &" Whenever you get a message, you can poll that event and call this shell script. Below is the code snippet to do this. Furthermore, have a look at https://en.wikipedia.org/wiki/Nohup - Using RunTime /** * This method is to spark submit * <pre> You can call spark-submit or mapreduce job on the fly like this.. by calling shell script...
</pre>
 * @param commandToExecute String
 */
public static Boolean executeCommand(final String commandToExecute) {
    try {
        final Runtime rt = Runtime.getRuntime();
        // LOG.info("process command -- " + commandToExecute);
        final String[] arr = { "/bin/sh", "-c", commandToExecute };
        final Process proc = rt.exec(arr);
        // LOG.info("process started ");
        final int exitVal = proc.waitFor();
        LOG.trace(" commandToExecute exited with code: " + exitVal);
        proc.destroy();
    } catch (final Exception e) {
        LOG.error("Exception occurred while Launching process : " + e.getMessage());
        return Boolean.FALSE;
    }
    return Boolean.TRUE;
}

- Using ProcessBuilder - another way

private static void executeProcess(Operation command, String database) throws IOException, InterruptedException {
    final File executorDirectory = new File("src/main/resources/");
    final String shellScript = "./sparksubmit.sh";
    ProcessBuilder processBuilder = new ProcessBuilder(shellScript, command.getOperation(), "argument-one");
    processBuilder.directory(executorDirectory);
    Process process = processBuilder.start();
    try {
        int shellExitStatus = process.waitFor();
        if (shellExitStatus == 0) {
            logger.info("Successfully executed the shell script");
        }
    } catch (InterruptedException ex) {
        logger.error("Shell Script process was interrupted");
    }
}

- Third way: jsch. Run a command over SSH with JSch

- Fourth way: YarnClient class

One of my favourite books, Data Algorithms, uses this approach:

// import required classes and interfaces
import org.apache.spark.deploy.yarn.Client;
import org.apache.spark.deploy.yarn.ClientArguments;
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;

public class SubmitSparkJobToYARNFromJavaCode {
    public static void main(String[] arguments) throws Exception {
        // prepare arguments to be passed to
        // org.apache.spark.deploy.yarn.Client object
        String[] args = new String[] {
            // the name of your application
            "--name", "myname",
            // memory for driver (optional)
            "--driver-memory", "1000M",
            // path to your application's JAR file
            // required in yarn-cluster mode
            "--jar", "/Users/mparsian/zmp/github/data-algorithms-book/dist/data_algorithms_book.jar",
            // name of your application's main class (required)
            "--class", "org.dataalgorithms.bonus.friendrecommendation.spark.SparkFriendRecommendation",
            // comma separated list of local jars that want
            // SparkContext.addJar to work with
            "--addJars", "/Users/mparsian/zmp/github/data-algorithms-book/lib/spark-assembly-1.5.2-hadoop2.6.0.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/log4j-1.2.17.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/junit-4.12-beta-2.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/jsch-0.1.42.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/JeraAntTasks.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/jedis-2.5.1.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/jblas-1.2.3.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/hamcrest-all-1.3.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/guava-18.0.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-math3-3.0.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-math-2.2.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-logging-1.1.1.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-lang3-3.4.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-lang-2.6.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-io-2.1.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-httpclient-3.0.1.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-daemon-1.0.5.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-configuration-1.6.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-collections-3.2.1.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/commons-cli-1.2.jar,/Users/mparsian/zmp/github/data-algorithms-book/lib/cloud9-1.3.2.jar",
            // argument 1 to your Spark program (SparkFriendRecommendation)
            "--arg", "3",
            // argument 2 to your Spark program (SparkFriendRecommendation)
            "--arg", "/friends/input",
            // argument 3 to your Spark program (SparkFriendRecommendation)
            "--arg", "/friends/output",
            // argument 4 to your Spark program (SparkFriendRecommendation)
            // this is a helper argument to create a proper JavaSparkContext object
            // make sure that you create the following in SparkFriendRecommendation program
            // ctx = new JavaSparkContext("yarn-cluster", "SparkFriendRecommendation");
            "--arg", "yarn-cluster"
        };

        // create a Hadoop Configuration object
        Configuration config = new Configuration();
        // identify that you will be using Spark as YARN mode
        System.setProperty("SPARK_YARN_MODE", "true");
        // create an instance of SparkConf object
        SparkConf sparkConf = new SparkConf();
        // create ClientArguments, which will be passed to Client
        ClientArguments cArgs = new ClientArguments(args, sparkConf);
        // create an instance of yarn Client client
        Client client = new Client(cArgs, config, sparkConf);
        // submit Spark job to YARN
        client.run();
    }
}
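The Runtime.exec() pattern above translates almost line for line to other languages; here is a minimal Python sketch of the same "shell out to a submit script" idea (the echo command is only a stand-in for the real spark-submit wrapper):

```python
import subprocess

def execute_command(command_to_execute: str) -> bool:
    """Run a shell command (e.g. a nohup'd spark-submit wrapper script)
    and report whether it exited cleanly."""
    try:
        proc = subprocess.run(["/bin/sh", "-c", command_to_execute],
                              capture_output=True, text=True)
        return proc.returncode == 0
    except OSError:
        return False

# `echo` stands in for: nohup spark-submit <script> <parameters> ... &
print(execute_command("echo submitted"))  # True
```

The polling loop around the Kafka event would simply call execute_command() whenever a message arrives.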
Q: How to check whether an element's immediately following node is text? How can I check whether the element 'simplepara' has text immediately after it? [The template match should be on 'simplepara', as given in the XSLT below.] XML:

<article>
<simplepara>Fig 1</simplepara>The text1<simplepara>Fig 2</simplepara><simplepara>Fig 3</simplepara>The text2<simplepara>Fig 4</simplepara><simplepara>Fig 5</simplepara> the text3<simplepara>Fig 5</simplepara>
</article>

XSLT:

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <xsl:template match="simplepara">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
    <xsl:if test="following-sibling::node()[1][text()][normalize-space()!='']">
      <xsl:comment select="'Text-Node'"/>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>

Required result:

<?xml version="1.0" encoding="UTF-8"?><article>
<simplepara>Fig 1</simplepara><!--Text-Node-->The text1<simplepara>Fig 2</simplepara><simplepara>Fig 3</simplepara><!--Text-Node-->The text2<simplepara>Fig 4</simplepara><simplepara>Fig 5</simplepara><!--Text-Node--> the text3<simplepara>Fig 5</simplepara>
</article>

A: Change the XPath check to use the self axis instead of the child axis ([text()] tests for a text child, while [self::text()] tests the node itself):

following-sibling::node()[1][self::text()][normalize-space()!='']
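The same "is the immediately following node a non-empty text node" test can be illustrated outside XSLT. In Python's xml.etree.ElementTree, for instance, text that directly follows an element is stored in that element's .tail attribute, so a non-whitespace tail corresponds to the accepted following-sibling::node()[1][self::text()] check:

```python
import xml.etree.ElementTree as ET

# A trimmed version of the question's XML.
xml = ('<article><simplepara>Fig 1</simplepara>The text1'
       '<simplepara>Fig 2</simplepara><simplepara>Fig 3</simplepara>'
       'The text2<simplepara>Fig 4</simplepara></article>')

root = ET.fromstring(xml)
for para in root.iter('simplepara'):
    # .tail holds the text node immediately following the element, if any.
    has_text_after = bool(para.tail and para.tail.strip())
    print(para.text, has_text_after)
# Fig 1 True, Fig 2 False, Fig 3 True, Fig 4 False
```

Only "Fig 1" and "Fig 3" are followed by non-whitespace text, matching the comments inserted in the required result.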
Q: Bash function won't complete I am trying to sort the output of apt-cache search. Here is what I have currently:

sch () {
    apt-cache search $1
    sort -nr
}

When I run this it works, but it will not return me to the command prompt so I can type another command. Can anyone complete this or show me a better way? TIA. A:

sch(){
    apt-cache search "$1" | sort -nr
}

You missed the quotes and the pipe |. Without the pipe, sort -nr runs as a second command reading from the terminal's stdin, so it waits for end-of-file and you never get the prompt back.
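The fixed function is an ordinary shell pipeline, and its behavior can be sketched from Python's subprocess module as well (printf stands in for apt-cache search here, since the package cache isn't available everywhere):

```python
import subprocess

# Equivalent of the corrected pipeline: <producer> | sort -nr
# (printf stands in for `apt-cache search "$1"`).
result = subprocess.run(
    "printf '1\\n10\\n2\\n' | sort -nr",
    shell=True, capture_output=True, text=True,
)
print(result.stdout)  # 10, 2, 1: numeric sort, descending
```

The producer's stdout feeds sort's stdin, so sort terminates as soon as the producer finishes, which is exactly what the missing pipe prevented.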
Q: Interpreting regression results for different units of measure Could you kindly help me with interpreting the results from the probit model for different units of measure of the covariates? Consider the probit model $$ Y=1\{X\beta+\epsilon \geq 0\} $$ with $\epsilon\perp X$ and $\epsilon \sim N(0,1)$. Let $\hat{\beta}$ be the MLE of $\beta$. Suppose $X$ is continuous. Then, the estimated marginal effect of $X$ on $\mathbb{P}(Y=1|X=x)$ at $x=\bar{x}_n$, where $\bar{x}_n$ is the sample average, is $$ \frac{\partial \hat{\mathbb{P}}(Y=1|X=x)}{\partial x} \Big|_{x=\bar{x}_n}=\hat{\beta}_{\text{MLE}} \times \phi(\bar{x}_n\hat{\beta}_{\text{MLE}}) $$ where $\phi$ is the standard normal pdf. Suppose you get $\frac{\partial \hat{\mathbb{P}}(Y=1|X=x)}{\partial x}=0.2$ and $Y$ is employment status (working/not working). I believe that, for any unit of measure of $x$, $0.2$ means that a marginal increase of $x$ increases the conditional probability of being employed, for an individual featuring the values of $X$ at the sample mean, by $0.2$ (or, equivalently, by $20$ percentage points). Now, I want to ask whether we can (and if so, how) refine the interpretation of $0.2$ if $X_k$ is income measured in $1,000\$$? $X_k$ is the logarithm of income measured in $1,000\$$? (here I guess there is some percentage variation to use?) Please forgive any possible nonsense in the variable meanings and/or the number $0.2$. A: When you are trying to find the appropriate interpretation of parameters in a regression model, it is best to work with the original model form, and not complicate this by bringing in consideration of the estimators. Now, under your model you have $P(x) \equiv \mathbb{P}(Y=1|X=x) = \Phi ( x \beta )$, which gives: $$\begin{equation} \begin{aligned} P'(x) &= \frac{\beta}{\sqrt{2 \pi}} \cdot \exp \Big( -\frac{\beta^2}{2} x^2 \Big), \\[10pt] P''(x) &= - \frac{x \beta^3}{\sqrt{2 \pi}} \cdot \exp \Big( -\frac{\beta^2}{2} x^2 \Big). \end{aligned} \end{equation}$$ So you have: $$\frac{P''(x)}{P'(x)} = - \frac{x \beta^3}{\sqrt{2 \pi}} \Big/\frac{\beta}{\sqrt{2 \pi}} = - x \beta^2.$$ Re-arranging yields the parameter of interest: $$\beta = \sqrt{- \frac{1}{x} \cdot \frac{P''(x)}{P'(x)}}.$$ From this equation we see that the interpretation of $\beta$ is quite complicated --- it can be interpreted as a quantity pertaining to the rates-of-change of the conditional probability of a positive response. Contrary to the assertion in your question, the model (and hence the interpretation of the parameter) is not invariant to changes in the scale of $x$.
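The identity $P''(x)/P'(x) = -x\beta^2$ derived in the answer is easy to verify numerically. A small Python sketch, with an arbitrary illustrative $\beta$ (0.7 here, purely an assumption for the check) and central finite differences:

```python
import math

BETA = 0.7  # arbitrary illustrative coefficient (an assumption for this check)

def P(x):
    """P(x) = Phi(x * BETA): standard normal CDF evaluated at x * beta."""
    return 0.5 * (1 + math.erf(x * BETA / math.sqrt(2)))

def marginal_effect(x):
    """dP/dx = beta * phi(x * beta), the marginal effect from the question."""
    phi = math.exp(-(x * BETA) ** 2 / 2) / math.sqrt(2 * math.pi)
    return BETA * phi

# Finite-difference check of P''(x)/P'(x) == -x * BETA**2 at x = 1.3.
x, h = 1.3, 1e-5
p1 = (P(x + h) - P(x - h)) / (2 * h)            # ~ P'(x)
p2 = (P(x + h) - 2 * P(x) + P(x - h)) / h ** 2  # ~ P''(x)
print(p2 / p1, -x * BETA ** 2)  # both ~ -0.637
```

Rescaling $x$ (say, from dollars to thousands of dollars) rescales $\beta$ and moves the point at which $\phi$ is evaluated, which is the non-invariance the answer points out.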
Q: Tricky reshaping of a data frame I have a data frame with inventory data for the past 12 months. I created a mock data frame below for three months which is similar to my data set. inventory <- data.frame(ID=c(1,1,1,1,2,2,3,3,3,3,4,4,4), SKU=c("375F","375F","375F","375F","QX51","QX51","AEC","AEC","AEC","AEC","115332H","115332H","115332H"), inventory=c(3,4,14,5,18,5,4,13,4,10,3,2,2), sold=c(3,2,0,1,4,0,0,3,1,5,0,2,1), returned=c(1,0,2,0,0,0,1,0,1,1,0,2,0), month=c(0,1,2,3,0,2,3,0,1,2,3,2,3)) I'm trying to manipulate the data frame to generate a report that displays each variable with their ID and SKU and a column for each month, like the image below. Reshaped data frame I've attempted using the dplyr and data.table libraries but haven't had any success. How can I transform the data to have a column for each month like the image I posted? I'm still pretty new to R, so go easy on me. Thanks. A: There is a duplication for ID = 4 and SKU = 115332H so I had to change the value to remove the duplication. 
# Creating the data frame
inventory <- data.frame(ID=c(1,1,1,1,2,2,3,3,3,3,4,4,4),
                        SKU=c("375F","375F","375F","375F","QX51","QX51","AEC","AEC","AEC","AEC","115332H","115332H","115332H"),
                        inventory=c(3,4,14,5,18,5,4,13,4,10,3,2,2),
                        sold=c(3,2,0,1,4,0,0,3,1,5,0,2,1),
                        returned=c(1,0,2,0,0,0,1,0,1,1,0,2,0),
                        month=c(0,1,2,3,0,2,3,0,1,2,1,2,3))

# Reshaping the data
library(reshape2)  # provides melt()

# Melting the data frame
inv2 <- melt(inventory, id=c("ID","SKU","month"))

# Reshaping to wide format
inv2_wide <- reshape(inv2, v.names = "value", idvar = c("ID","SKU","variable"),
                     timevar = "month", direction = "wide")

# Ordering by ID variables
inv2_wide <- inv2_wide[order(inv2_wide$ID, inv2_wide$SKU),]

# Renaming the variables
names(inv2_wide) <- gsub("value\\.", "Month", names(inv2_wide))

   ID     SKU  variable Month0 Month1 Month2 Month3
1   1    375F inventory      3      4     14      5
14  1    375F      sold      3      2      0      1
27  1    375F  returned      1      0      2      0
5   2    QX51 inventory     18     NA      5     NA
18  2    QX51      sold      4     NA      0     NA
31  2    QX51  returned      0     NA      0     NA
7   3     AEC inventory     13      4     10      4
20  3     AEC      sold      3      1      5      0
33  3     AEC  returned      0      1      1      1
11  4 115332H inventory     NA      3      2      2
24  4 115332H      sold     NA      0      2      1
37  4 115332H  returned     NA      0      2      0
Q: zgrep for multiple pipe symbols in a bunch of gz files I have a lot of .gz files in a folder:

/a/b/c1.gz
/a/b/c2.gz
/a/b/c3.gz

and so on. Some of the files have a single pipe delimiter, some have two, three, four and so on, in such a way:

xyz|abc
xyz|abc|wty
xyz|abc|wty|asd

How do I find all the files that have two pipe delimiters overall, three delimiters overall, etc.? A: Let's create three test files:

echo 'xyz|abc' > c1
echo 'xyz|abc|wty' > c2
echo 'xyz|abc|wty|asd' > c3
gzip c*

Files containing one pipe in a line:

$ zgrep '^[^|]*|[^|]*$' *.gz
c1.gz:xyz|abc

For any other numbers (including one pipe in a line), you can use the following pattern. Two pipes in a line:

$ zgrep -E '^([^|]*\|){2}[^|]*$' *.gz
c2.gz:xyz|abc|wty

Three pipes in a line:

$ zgrep -E '^([^|]*\|){3}[^|]*$' *.gz
c3.gz:xyz|abc|wty|asd

Two or three pipes in a line:

$ zgrep -E '^([^|]*\|){2,3}[^|]*$' *.gz
c2.gz:xyz|abc|wty
c3.gz:xyz|abc|wty|asd

At most three pipes in a line:

$ zgrep -E '^([^|]*\|){,3}[^|]*$' *.gz
c1.gz:xyz|abc
c2.gz:xyz|abc|wty
c3.gz:xyz|abc|wty|asd

If you only need the filename, add the option -l, i.e. zgrep -lE ... My zgrep version doesn't support the recursive -r option. You could use find for a recursive search and run zgrep on the result:

$ find . -type f -name '*.gz' -exec zgrep -lE '^([^|]*\|){3}[^|]*$' {} \;
./c3.gz
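The counting patterns above are plain extended regular expressions, so they behave the same way in any ERE-compatible engine; here is a quick check of two of them in Python:

```python
import re

# Same pattern as `zgrep -E '^([^|]*\|){2}[^|]*$'`: exactly two pipes.
two_pipes = re.compile(r'^([^|]*\|){2}[^|]*$')
# Range form, as in `zgrep -E '^([^|]*\|){2,3}[^|]*$'`: two or three pipes.
two_or_three = re.compile(r'^([^|]*\|){2,3}[^|]*$')

lines = ['xyz|abc', 'xyz|abc|wty', 'xyz|abc|wty|asd']
print([bool(two_pipes.match(s)) for s in lines])     # [False, True, False]
print([bool(two_or_three.match(s)) for s in lines])  # [False, True, True]
```

The `{n}` quantifier on the `[^|]*\|` group is what counts the delimiters, exactly as in the zgrep commands.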
Q: Ionic native plugin - platform check - to prevent code break in browser when run via 'ionic serve' I am new to Ionic development and am using the code below (on a button click event) to ensure that the native plugin call doesn't break/error out when executing in a web browser via 'ionic serve':

if (!this.platform.is('cordova')) {
    console.warn('Push notifications not initialized. Cordova is not available - Run in physical device');
    return;
}
// otherwise run the native code...

My question is: when this runs on a real device, what exactly is the output of the if check? For Android and iOS, is the platform cordova? Should I also be writing checks for this.platform.is('Android') and this.platform.is('iOS')? A: Depending on the platform the user is on, is(platformName) will return true or false. Note that the same app can return true for more than one platform name. For example, an app running from an iPad would return true for the platform names: mobile, ios, ipad, and tablet. Additionally, if the app was running from Cordova then cordova would be true, and if it was running from a web browser on the iPad then mobileweb would be true. Now to answer your questions: when this runs on a real device, what exactly is the output of the if check? If your application (Android, iOS, Windows) is built using the Cordova framework and installed on the respective device, then this.platform.is('cordova') would return true. On the other hand, if you host the application on a web server and access it through your mobile browser, then this.platform.is('cordova') would return false. For Android and iOS, is the platform cordova? Yes, but only if you have created the native application (.apk, .ipa) with the Cordova framework. If you are accessing the web application in a browser on your mobile device, then the platform is not cordova. Should I also be writing checks for this.platform.is('Android') and this.platform.is('iOS')?
Depends on the situation. In most cases, if you have generated the native application (.apk, .ipa) from the Cordova framework and you have generic code for both platforms, then this.platform.is('cordova') is enough. You can check the following table for reference (note the platform names are lowercase):

| Platform Name | Description                      |
|---------------|----------------------------------|
| android       | on a device running Android.     |
| cordova       | on a device running Cordova.     |
| core          | on a desktop device.             |
| ios           | on a device running iOS.         |
| ipad          | on an iPad device.               |
| iphone        | on an iPhone device.             |
| mobile        | on a mobile device.              |
| mobileweb     | in a browser on a mobile device. |
| phablet       | on a phablet device.             |
| tablet        | on a tablet device.              |
| windows       | on a device running Windows.     |

Reference: https://ionicframework.com/docs/v3/api/platform/Platform/
Q: How do I use a (generic) vector in Go? I am using a Vector type to store arrays of bytes (of variable sizes):

store := vector.New(200);
...
rbuf := make([]byte, size);
...
store.Push(rbuf);

That all works well, but when I try to retrieve the values, the compiler tells me I need to use type assertions. So I add those in, and try:

for i := 0; i < store.Len(); i++ {
    el := store.At(i).([]byte);
    ...

But when I run this it bails out with:

interface is nil, not []uint8
throw: interface conversion

Any idea how I can 'cast'/convert from the empty Element interface that Vector uses to store its data to the actual []byte array that I then want to use? Update (Go1): The vector package was removed on 2011-10-18. A: This works fine for me. Have you initialised the first 200 elements of your vector? If you didn't, they will probably be nil, which would be the source of your error.

package main

import vector "container/vector"
import "fmt"

func main() {
    vec := vector.New(0);
    buf := make([]byte, 10);
    vec.Push(buf);
    for i := 0; i < vec.Len(); i++ {
        el := vec.At(i).([]byte);
        fmt.Print(el, "\n");
    }
}
Q: How to run scheduled code only once in several Docker replicas? I have a container in Docker with three replicas. I also have a scheduler (library) which runs every minute and invokes a function in my code. Since there are two more replicas, the function is invoked three times :( What I want to achieve is: when the scheduler invokes the code, the first replica should execute the task, and the other two should detect that and skip the task. How can I achieve this? A: You should break those two out into their own services. Service 1 would run the app with replicas=3. Service 2 would just run the scheduler with replicas=1. For example, if you were in Laravel, you could have all your code in one image and just change the service command to run the scheduler in the scheduler service. If it is something where the app also starts a scheduler and you can't start the scheduler by itself, then the above still works; you just wouldn't send any connections to the 2nd service, since the 1st service is doing that work.
Q: function bounded by an exponential has a bounded derivative? Here's the question; I want to be sure about it. Let $v:[0,\infty) \rightarrow \mathbb{R}_+$ be a positive function satisfying $$\forall t \ge 0,\qquad v(t)\le kv(0) e^{-c t}$$ for some positive constants $c$ and $k$. Can I conclude that $$\dot{v}(t) \le -c v(t)$$ ? A: I don't think you can. Look at something like $v(t) = e^{-t}\sin(g(t) t)$. Then $$v'(t) = e^{-t}\Big(-\sin(g(t)t) + \big(g(t) + g'(t)t\big)\cos(g(t)t)\Big).$$ If you choose $g(t)$ so that it grows fast enough, your original function will be bounded from above by exponential decay, but you can make the derivative grow as fast as you like. Edit: This reply doesn't address the positivity condition, but that should be fixable by replacing $\sin(g(t) t)$ with something like $2 + \sin(g(t) t)$.
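The (positivity-corrected) counterexample can be checked numerically: take $g(t) = e^t$, so $v(t) = e^{-t}(2 + \sin(t e^t))$ is positive and bounded by $3e^{-t}$ (i.e. $k = 1.5$, $c = 1$, since $v(0) = 2$), yet $\dot{v}(t) \le -v(t)$ fails at many points:

```python
import math

def v(t):
    # Positive function bounded above by 3 * exp(-t): here k * v(0) = 3.
    return math.exp(-t) * (2 + math.sin(math.exp(t) * t))

def vprime(t, h=1e-7):
    # Central finite-difference approximation of v'(t).
    return (v(t + h) - v(t - h)) / (2 * h)

# If v'(t) <= -v(t) held everywhere, this maximum would be <= 0.
grid = [i * 0.001 for i in range(1, 5000)]
violation = max(vprime(t) + v(t) for t in grid)
print(violation > 0)  # True: the exponential bound does not bound v'
```

Here $v'(t) + v(t) = (1 + t)\cos(t e^t)$, which repeatedly reaches values well above zero, so the exponential bound on $v$ places no such bound on its derivative.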
Q: Including a custom class in a Concrete5 theme I'm learning Concrete5 and ran into something that has stumped me, but seems like it should be simple to fix. My class (which resides at \application\src\Derp\Derp.php):

namespace Application\Src\Derp;

use Concrete\Core\Database\Connection\Connection;

class Derp
{
    public function greet()
    {
        return 'Hello';
    }
}

I'm calling it from a page in my theme, like so:

use Application\Src\Derp;

$derp = new Derp();
var_dump($derp);

I keep getting 'An Unexpected Error Occurred'. Class 'Application\Src\Derp\Derp' not found. What have I missed? A: You should reference the path all the way to the class. So try

use Application\Src\Derp\Derp;
Q: Geometry('POINT') column being returned as str object I have a SQLAlchemy model object which has the following column:

gps = Column(Geometry('POINT'))

I have implemented a to_dict function in the model class, for which I need to deconstruct the gps object to give me lat and long. This successfully works for me in another model. But for some reason, in the class in question, the following piece of code results in an attribute error ('str' object has no attribute 'data'):

point = wkb.loads(bytes(self.gps.data))

I store the gps data like so:

gps = Point(longitude, latitude).wkt

Here's the table description from postgresql:

 Column | Type            | Modifiers                                         | Storage | Stats target | Description
--------+-----------------+---------------------------------------------------+---------+--------------+-------------
 id     | integer         | not null default nextval('pins_id_seq'::regclass) | plain   |              |
 gps    | geometry(Point) |                                                   | main    |              |

I am calling the as_dict method as soon as the Pin object gets created, like so:

gps = Point(
    float(data['longitude']),
    float(data['latitude'])
).wkt
pin = Pin(gps=gps)

# Commit pin to disk
# otherwise fields will
# not return properly
with transaction.manager:
    self.dbsession.add(pin)
    transaction.commit()

print (pin.as_dict())

What's driving me insane is the fact that the exact same code works for the other model. Any insight would be mucho appreciated. Edit: Following Ilja's comment, I understood that the issue is that the object isn't getting written to the disk, and apparently the Geometry column will get treated as a string till that happens. But I am getting the same error even now. Basically, at this point, the transaction.commit() function isn't doing what I think it is supposed to... Relevant to that is the configuration of the session object.
Since all this is under the Pyramid web framework, I am using the default session configuration, as described here (you can skip the first few paragraphs, until they start discussing the /models/__init__.py file. Ctrl + F if need be). In case I have left some important detail out, reproducing the problematic class here below: from geoalchemy2 import Geometry from sqlalchemy import ( Column, Integer, ) from shapely import wkb from .meta import Base class Pin(Base): __tablename__ = 'pins' id = Column(Integer, primary_key=True) gps = Column(Geometry('POINT')) def as_dict(self): toret = {} point = wkb.loads(bytes(self.gps.data)) lat = point.x lon = point.y toret['gps'] = {'lon': lon, 'lat': lat} return toret A: At first I thought that the cause of the Traceback (most recent call last): ... File "/.../pyramid_test/views/default.py", line 28, in my_view print(pin.as_dict()) File "/.../pyramid_test/models/pin.py", line 18, in as_dict point = wkb.loads(bytes(self.gps.data)) AttributeError: 'str' object has no attribute 'data' was that zope.sqlalchemy closes the session on commit, but leaves instances unexpired, but that was not the case. This was due to having used Pyramid some time ago when the global transaction would still affect the ongoing transaction during a request, but now the default seems to be an explicit transaction manager. The actual problem is that transaction.commit() has no effect on the ongoing transaction of the current session. Adding some logging will make this clear: with transaction.manager: self.dbsession.add(pin) transaction.commit() print("Called transaction.commit()") insp = inspect(pin) print(insp.transient, insp.pending, insp.persistent, insp.detached, insp.deleted, insp.session) which results in about: % env/bin/pserve development.ini 2018-01-19 14:36:25,113 INFO [shapely.speedups._speedups:219][MainThread] Numpy was not imported, continuing without requires() Starting server in PID 1081. Serving on http://localhost:6543 ... 
Called transaction.commit() False True False False False <sqlalchemy.orm.session.Session object at 0x7f958169d0f0> ... 2018-01-19 14:36:28,855 INFO [sqlalchemy.engine.base.Engine:682][waitress] BEGIN (implicit) 2018-01-19 14:36:28,856 INFO [sqlalchemy.engine.base.Engine:1151][waitress] INSERT INTO pins (gps) VALUES (ST_GeomFromEWKT(%(gps)s)) RETURNING pins.id 2018-01-19 14:36:28,856 INFO [sqlalchemy.engine.base.Engine:1154][waitress] {'gps': 'POINT (1 1)'} 2018-01-19 14:36:28,881 INFO [sqlalchemy.engine.base.Engine:722][waitress] COMMIT As can be seen no commit takes place and the instance is still in pending state, and so its gps attribute holds the text value from the assignment. If you wish to handle your serialization the way you do, you could first flush the changes to the DB and then expire the instance attribute(s): gps = Point( float(data['longitude']), float(data['latitude']) ).wkt pin = Pin(gps=gps) self.dbsession.add(pin) self.dbsession.flush() self.dbsession.expire(pin, ['gps']) # expire the gps attr print(pin.as_dict()) # SQLAlchemy will fetch the value from the DB On the other hand you could also avoid having to handle the (E)WKB representation in the application and request the coordinates from the DB directly using for example column_property() accessors: class Pin(Base): __tablename__ = 'pins' id = Column(Integer, primary_key=True) gps = Column(Geometry('POINT')) gps_x = column_property(gps.ST_X()) gps_y = column_property(gps.ST_Y()) def as_dict(self): toret = {} toret['gps'] = {'lon': self.gps_y, 'lat': self.gps_x} return toret With that the manual expire(pin) becomes unnecessary, since the column properties have to refresh themselves anyway in this case. 
And of course since you already know your coordinates when you're constructing the new Pin, you could just prefill them: lon = float(data['longitude']) lat = float(data['latitude']) gps = Point(lon, lat).wkt pin = Pin(gps=gps, gps_x=lat, gps_y=lon) and so no flushing, expiring, and fetching is even needed.
Q: scrolling a background image and floating buttons

I need a screen that is filled with an image as background and three buttons floating over the image. I created this fine in XML, and in vertical orientation it does what I want. However, in landscape it doesn't, so I have tried to add a ScrollView and am getting very confused as to how to combine these elements correctly. After various tries I reached the stage shown below:

    <?xml version="1.0" encoding="utf-8"?>
    <ScrollView
        android:layout_height="fill_parent"
        android:layout_width="fill_parent"
        android:id="@+id/home_container"
        xmlns:android="http://schemas.android.com/apk/res/android">
        <ImageView
            android:src="@drawable/alert"
            android:layout_height="fill_parent"
            android:layout_width="fill_parent"
            android:scaleType="fitCenter"
            android:clickable="true"
            android:enabled="true" />
        <LinearLayout
            android:layout_width="fill_parent"
            android:paddingBottom="25px"
            android:paddingLeft="25px"
            android:orientation="vertical"
            android:paddingTop="90px"
            android:id="@+id/toplinear"
            android:layout_height="fill_parent"
            android:paddingRight="25px"
            android:gravity="top">
            <Button
                android:layout_width="fill_parent"
                android:textSize="20dp"
                android:id="@+id/startprogrambutton"
                android:paddingBottom="12dp"
                android:text="@string/text1"
                android:layout_height="wrap_content"
                android:paddingTop="12dp"></Button>
            <Button
                android:layout_width="fill_parent"
                android:textSize="20dp"
                android:id="@+id/button1"
                android:paddingBottom="12dp"
                android:layout_marginTop="10dp"
                android:background="@drawable/webview_buttons"
                android:text="@string/text2"
                android:layout_height="wrap_content"
                android:paddingTop="12dp"></Button>
            <Button
                android:layout_width="fill_parent"
                android:textSize="20dp"
                android:id="@+id/button2"
                android:paddingBottom="12dp"
                android:layout_marginTop="10dp"
                android:background="@drawable/webview_buttons"
                android:text="@string/text3"
                android:layout_height="wrap_content"
                android:paddingTop="12dp"></Button>
            <Button
                android:layout_width="fill_parent"
                android:textSize="20dp"
                android:id="@+id/button3"
                android:paddingBottom="12dp"
                android:layout_marginTop="10dp"
                android:background="@drawable/webview_buttons"
                android:text="@string/text4"
                android:layout_height="wrap_content"
                android:paddingTop="12dp"></Button>
        </LinearLayout>
    </ScrollView>

Now it errors with "ScrollView can only contain one child". Is there some way to rearrange this so that I can have my image and the floating buttons all scroll together?

A: Put both the ImageView and the LinearLayout in another vertical LinearLayout. This solves your ScrollView child problem:

    <ScrollView>
        <LinearLayout>
            <ImageView />
            <LinearLayout />
        </LinearLayout>
    </ScrollView>

Alternatively, why don't you set the image as the background of the LinearLayout that holds the buttons? That would also serve your purpose.
Q: Does Watson Personality Insights use retweets?

Does Personality Insights use retweets when using a Twitter feed as input? And if so, is there a way to exclude them?

A: I think you misunderstand the way that the Watson Personality Insights service works. The way the API is structured, the end user passes in content for Watson to analyze. That content can be anything from a collection of tweets, to a chapter of a book, to any long-form body of writing. The only requirement is that the number of words is greater than 100 and the total size is less than 20 MB.

Looping back to your question: the Personality Insights API will use whatever content you pass it, so it will only analyze retweets if it is given them.

For more info on the API, check out the REST API documentation.
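One practical consequence of the answer above: excluding retweets is the caller's job, done before the text ever reaches the service. Here is a sketch of such a filter; the tweet shape (a 'text' field, plus a 'retweeted_status' field present only on native retweets, and the "RT @" prefix convention for manual retweets) follows the classic Twitter REST API and is an assumption, not part of the Personality Insights API itself:

```python
def strip_retweets(tweets):
    """Drop retweets before building the text corpus to send to the service.

    Assumes each tweet is a dict with a 'text' key. Native retweets carry a
    'retweeted_status' key; manual retweets conventionally start with 'RT @'.
    """
    return [
        t for t in tweets
        if 'retweeted_status' not in t
        and not t.get('text', '').startswith('RT @')
    ]


tweets = [
    {'text': 'I love hiking on weekends'},
    {'text': 'RT @friend: check this out'},               # manual retweet
    {'text': 'something', 'retweeted_status': {}},        # native retweet
]

# The surviving texts are joined into one body of writing for analysis.
corpus = ' '.join(t['text'] for t in strip_retweets(tweets))
```

The resulting corpus string (subject to the 100-word minimum and 20 MB maximum mentioned above) is what you would then pass to the API.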
Q: View with embedded template can't be rendered

Given a simple sample app with Ember 1.0.0-rc.1:

    App = Ember.Application.create({
        LOG_TRANSITIONS: true
    });

    App.ApplicationView = Ember.View.extend({
        templateName: 'application'
    });

    App.IndexView = Ember.View.extend({
        template: Ember.Handlebars.compile('Hello new Ember!')
    });

http://jsfiddle.net/nL5vf/

IndexView can't be rendered because during rendering its view is set to 'undefined'. Is this intentional?

A: It seems like you can't replace the template of any routed view in this fashion. You can do this for other views, however.

The following renders nothing:

    App = Ember.Application.create();

    App.IndexView = Ember.View.extend({
        template: Ember.Handlebars.compile('Hello new Ember!')
    });

But this does:

    App = Ember.Application.create();

    App.MyView = Ember.View.extend({
        template: Ember.Handlebars.compile('Hello new Ember!')
    });

(template)

    <script type="text/x-handlebars" data-template-name="index">
        {{view App.MyView}}
    </script>

I'll have to let someone more qualified weigh in as to whether or not this is intentional.
Q: Core Data - Large Datasets and Very Long Load Times

I've got about 5000-7000 objects in my Core Data store that I want to display in a table view. I'm using a fetched results controller and I haven't got any predicates on the fetch, just a sort on an integer field. The objects consist of a few ints and a few strings that hold about 10 to 50 chars each.

My problem is that it's taking a good 10 seconds to load the view. Is this normal? I believe that the FRC handles large datasets and uses batching and such to support them. Are there any common pitfalls here? I'm really stumped. I've stripped my app down to a single table view, yet it still takes around 10 seconds to load, and I'm leaving the table view as the default style and just displaying a string in each cell.

Any advice would be greatly appreciated!

A: Did you check the index checkbox for the integer you are sorting on in your Core Data model?

A: On your fetch request, have you used -setFetchBatchSize: to minimize the number of items fetched at once (generally, the number of items onscreen, plus a few for a buffer)? Without that, you won't see as much of a performance benefit from using an NSFetchedResultsController for your table view.

You could also limit the properties being fetched by using -setPropertiesToFetch: on your fetch request. It might be best to limit your fetch to only those properties of your objects that will influence their display in the table view. The remainder can be lazy-loaded later when you need them.
Q: How to animate the background color of a UILabel?

This looks like it should work, but doesn't. The color turns green at once.

    self.labelCorrection.backgroundColor = [UIColor whiteColor];
    [UIView animateWithDuration:2.0 animations:^{
        self.labelCorrection.backgroundColor = [UIColor greenColor];
    }];

A: I can't find it documented anywhere, but it appears the backgroundColor property of UILabel is not animatable, as your code works fine with a vanilla UIView. This hack appears to work, however, as long as you don't set the background color of the label view itself:

    #import <QuartzCore/QuartzCore.h>

    ...

    theLabel.layer.backgroundColor = [UIColor whiteColor].CGColor;

    [UIView animateWithDuration:2.0 animations:^{
        theLabel.layer.backgroundColor = [UIColor greenColor].CGColor;
    } completion:NULL];

A: Swift (important)

Set the UILabel's background color to clear (either in IB or in code). For example:

    override func viewDidLoad() {
        super.viewDidLoad()
        myLabel.backgroundColor = UIColor.clear
    }

Animate the layer background color:

    @IBAction func animateButtonTapped(sender: UIButton) {
        UIView.animate(withDuration: 1.0, animations: {
            self.myLabel.layer.backgroundColor = UIColor.red.cgColor
        })
    }

Note that CGColor is added after the UIColor.

Result: (animated screenshot in the original post)

A: Swift 3

To animate your label background color from white to green, set up your label like this:

    self.labelCorrection.backgroundColor = UIColor.clear
    self.labelCorrection.layer.backgroundColor = UIColor.white.cgColor

Animate like this:

    UIView.animate(withDuration: 2.0) {
        self.labelCorrection.layer.backgroundColor = UIColor.green.cgColor
    }

To go back to the original state, so you can animate again, make sure you remove animations:

    self.labelCorrection.layer.backgroundColor = UIColor.white.cgColor
    self.labelCorrection.layer.removeAllAnimations()
Q: How can I create a slice of a surface plot to create a line? (Matlab)

Given some function z = f(x,y), I'm interested in creating a (1D) line plot along an arbitrary cutting plane in x,y,z. How do I do this in Matlab? slice, for example, provides a higher-dimensional version (colormap of density data), but this is not what I'm looking for.

E.g.:

    z = peaks(50);
    surf(z);
    % -> plot z along some defined plane in x,y,z...

This has been asked before, e.g. here, but the answer given there is for reducing 3D data to 2D data, and there is no obvious answer on googling. Thanks.

A: If the normal vector of the plane you want to slice your surface with will always lie in the xy plane, then you can interpolate the data over your surface along the x,y coordinates that lie in the slicing line. For example, let the plane be defined as going from the point (0,15) to the point (50,35):

    % Create Data
    z=peaks(50);
    % Create x,y coordinates of the data
    [x,y]=meshgrid(1:50);
    % Plot Data and the slicing plane
    surf(z);
    hold on
    patch([0,0,50,50],[15,15,35,35],[10,-10,-10,10],'w','FaceAlpha',0.7);
    % Plot an arbitrary origin axis for the slicing plane, this will be relevant later
    plot3([0,0],[15,15],[-10,10],'r','linewidth',3);

Since it is a plane, it is relatively easy to obtain the x,y coordinates along the slicing line with linspace. I'll get 100 points, and then interpolate those 100 points into the original data:

    % Create x and y over the slicing plane
    xq=linspace(0,50,100);
    yq=linspace(15,35,100);
    % Interpolate over the surface
    zq=interp2(x,y,z,xq,yq);

Now that we have the values of z, we need something to plot them against; that's where you need to define an arbitrary origin axis for your slicing plane. I defined mine at (0,15) for convenience's sake. Then calculate the distance of every x,y pair to this axis, and plot the obtained z against this distance:

    dq=sqrt((xq-0).^2 + (yq-15).^2);
    plot(dq,zq)
    axis([min(dq),max(dq),-10,10]) % to maintain a good perspective
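For readers porting this approach to Python, the same idea (sample the line with linspace, bilinearly interpolate like interp2, plot against distance from the slicing plane's origin axis) can be sketched with numpy alone. Note that this sketch indexes the grid 0-based, unlike MATLAB's 1-based meshgrid(1:50), and clamps edge samples, so treat it as an illustrative analogue rather than a drop-in translation:

```python
import numpy as np

def slice_surface(z, p0, p1, num=100):
    """Sample the surface z along the segment p0 -> p1.

    z is a 2-D array sampled on an integer grid, indexed z[y, x] (0-based).
    p0 and p1 are (x, y) pairs. Returns (distance along the line, z values),
    using bilinear interpolation in the spirit of MATLAB's interp2.
    """
    x = np.linspace(p0[0], p1[0], num)
    y = np.linspace(p0[1], p1[1], num)
    # Lower-left corner of the grid cell containing each sample,
    # clamped so the four corner lookups stay in bounds.
    x0 = np.clip(np.floor(x).astype(int), 0, z.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, z.shape[0] - 2)
    tx, ty = x - x0, y - y0
    # Bilinear blend of the four surrounding grid values.
    zq = (z[y0, x0] * (1 - tx) * (1 - ty)
          + z[y0, x0 + 1] * tx * (1 - ty)
          + z[y0 + 1, x0] * (1 - tx) * ty
          + z[y0 + 1, x0 + 1] * tx * ty)
    # Distance from the p0 "origin axis", mirroring dq in the answer.
    dq = np.hypot(x - p0[0], y - p0[1])
    return dq, zq
```

The returned dq and zq can then go straight into a plotting call such as matplotlib's plot(dq, zq), mirroring the plot(dq,zq) line above.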
Q: Perform menu bar actions

I have a test program that I use to play around with menu bars. (There's supposed to be an image here; it's just a small window with a menu bar.)

My code is very simple and I have all I need design-wise. The only thing I need help with is performing actions when the menu items are clicked. Here's what I have:

    switch (message)
    {
        case WM_CREATE:
        {
            HMENU hMunubar = CreateMenu();
            HMENU hFile = CreateMenu();
            HMENU hEdit = CreateMenu();
            HMENU hHelp = CreateMenu();

            /* Create the "File" tab */
            AppendMenu(hMunubar, MF_POPUP, (UINT_PTR) hFile, "File");
            AppendMenu(hFile, MF_STRING, (UINT_PTR) 1, TEXT("Exit Alt+f4"));

            /* Create the "Edit" tab */
            AppendMenu(hMunubar, MF_POPUP, (UINT_PTR) hEdit, "Edit");
            AppendMenu(hEdit, MF_STRING, (UINT_PTR) 2, TEXT("Copy Ctrl+C"));
            AppendMenu(hEdit, MF_STRING, (UINT_PTR) 3, TEXT("Cut Ctrl+X"));
            AppendMenu(hEdit, MF_STRING, (UINT_PTR) 4, TEXT("Paste Ctrl+V"));

            /* Create the "Help" tab */
            AppendMenu(hMunubar, MF_POPUP, (UINT_PTR) hHelp, "Help");
            AppendMenu(hHelp, MF_STRING, (UINT_PTR) 5, TEXT("Visit Forum"));

            SetMenu(hwnd, hMunubar);
            break;
        }
        case WM_DESTROY:
            PostQuitMessage(0);
            break;
        default:
            return DefWindowProc(hwnd, message, wParam, lParam);
    }

I learned how to create the menu from this YouTube video. The guy explained in depth how to CREATE the menu, but not so much how to do anything with it. Right now, all I want is for the program to exit when I click File -> Exit. I tried adding this to my switch (message) statement:

    case WM_COMMAND:
    {
        if (LOWORD(wParam) == 1) {
            return 0;
        }
        break;
    }

But that did not work. How do I go about doing this?

A: The first if expression checks whether the incoming WM_COMMAND message refers to a menu command. The second if expression checks the menu identifier, so you can apply different actions to different menu options.

    case WM_COMMAND:
    {
        if (!HIWORD(wParam))
        {
            if (LOWORD(wParam) == 1) // Checks for the menu identifier of the Exit option
            {
                DestroyWindow(hwnd);
            }
        }
        return 0;
    }

I recommend you create macro definitions (or, since you tagged your question as C++, constant variables) to give the numeric constants representing the menu identifiers meaningful names.
Q: Iterable multiprocessing Queue not exiting

    import multiprocessing.queues as queues
    import multiprocessing

    class I(queues.Queue):
        def __init__(self, maxsize=0):
            super(I, self).__init__(maxsize)
            self.length = 0

        def __iter__(self):
            return self

        def put(self, obj, block=True, timeout=None):
            super(I, self).put(obj, block, timeout)
            self.length += 1

        def get(self, block=True, timeout=None):
            self.length -= 1
            return super(I, self).get(block, timeout)

        def __len__(self):
            return self.length

        def next(self):
            item = self.get()
            if item == 'Done':
                raise StopIteration
            return item

    def thisworker(item):
        print 'got this item: %s' % item
        return item

    q = I()
    q.put(1)
    q.put('Done')

    the_pool = multiprocessing.Pool(1)
    print the_pool.map(thisworker, q)

I'm trying to create an iterable queue to use with multiprocessing Pool.map. The idea is that the function thisworker would append some items to the queue until a condition is met, and then exit after putting 'Done' in the queue (I've not done it here in this code yet).

But this code never completes; it always hangs. I'm not able to debug the real cause. Requesting your help.

PS: I've used self.length because the map_async method called under the hood by the_pool.map needs the length of the iterable to compute a chunksize variable, which is used to hand out tasks from the pool.

A: The problem is that you're treating 'Done' as a special-case item in the Queue, which indicates that the iteration should stop. So, if you iterate over the Queue using a for loop with your example, all that will be returned is 1. However, you're claiming that the length of the Queue is 2. This is screwing up the map code, which relies on that length to accurately represent the number of items in the iterable, in order to know when all the results have returned from the workers:

    class MapResult(ApplyResult):

        def __init__(self, cache, chunksize, length, callback):
            ApplyResult.__init__(self, cache, callback)
            ...
            # _number_left is used to know when the MapResult is done
            self._number_left = length//chunksize + bool(length % chunksize)

So, you need to make the length actually be accurate. You can do that a few ways, but I would recommend not requiring a sentinel to be loaded into the Queue at all, and using get_nowait instead:

    import multiprocessing.queues as queues
    import multiprocessing
    from Queue import Empty

    class I(queues.Queue):
        def __init__(self, maxsize=0):
            super(I, self).__init__(maxsize)
            self.length = 0

        ... <snip>

        def next(self):
            try:
                item = self.get_nowait()
            except Empty:
                raise StopIteration
            return item

    def thisworker(item):
        print 'got this item: %s' % item
        return item

    q = I()
    q.put(1)
    the_pool = multiprocessing.Pool(1)
    print the_pool.map(thisworker, q)

Also, note that this approach isn't process-safe. The length attribute will only be correct if you only put into the Queue from a single process, and then never put again after sending the Queue to a worker process. It also won't work in Python 3 without adjusting the imports and implementation, because the constructor for multiprocessing.queues.Queue has changed.

Instead of subclassing multiprocessing.queues.Queue, I would recommend using the iter built-in to iterate over the Queue:

    q = multiprocessing.Queue()
    q.put(1)
    q.put(2)
    q.put(None)  # None is our sentinel, you could use 'Done', if you wanted
    the_pool.map(thisworker, iter(q.get, None))  # This will call q.get() until None is returned

This will work on all versions of Python, is much less code, and is process-safe.

Edit: Based on the requirements you mentioned in the comment to my answer, I think you're better off using imap instead of map, so that you don't need to know the length of the Queue at all. The reality is, you can't accurately determine that, and in fact the length may end up growing as you're iterating. If you use imap exclusively, then doing something similar to your original approach will work fine:

    import multiprocessing

    class I(object):
        def __init__(self, maxsize=0):
            self.q = multiprocessing.Queue(maxsize)

        def __getattr__(self, attr):
            if hasattr(self.q, attr):
                return getattr(self.q, attr)

        def __iter__(self):
            return self

        def next(self):
            item = self.q.get()
            if item == 'Done':
                raise StopIteration
            return item

    def thisworker(item):
        if item == 1:
            q.put(3)
        if item == 2:
            q.put('Done')
        print 'got this item: %s' % item
        return item

    q = I()
    q.put(1)
    q.put(2)
    q.put(5)
    the_pool = multiprocessing.Pool(2)  # 2 workers
    print list(the_pool.imap(thisworker, q))

Output:

    got this item: 1
    got this item: 5
    got this item: 3
    got this item: 2
    [1, 2, 5, 3]

I got rid of the code that worried about the length, and used delegation instead of inheritance, for better Python 3.x compatibility. Note that my original suggestion, to use iter(q.get, <sentinel>), still works here too, as long as you use imap instead of map.
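The iter(q.get, None) suggestion above relies on the two-argument form of the built-in iter: iter(callable, sentinel) calls the callable repeatedly and stops as soon as it returns the sentinel. A minimal, thread-free sketch of the same pattern, using the stdlib queue.Queue in place of multiprocessing.Queue:

```python
from queue import Queue

SENTINEL = None  # any unique marker value works

q = Queue()
for item in (1, 2, 5):
    q.put(item)
q.put(SENTINEL)  # tells the consumer to stop

# iter(callable, sentinel) calls q.get() until it returns SENTINEL;
# the sentinel itself is consumed but not yielded.
drained = [item for item in iter(q.get, SENTINEL)]
```

In the answers above, multiprocessing.Queue's get method plays the role of the callable, and the workers put further items (and eventually the sentinel) into the queue.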
Q: Ant not rebuilding Android application with `ant debug install`

Starting with a clean project created with:

    android create project -n something -t android-7 -p something -k com.example.something -a Something

When I run ant debug install and open the application in my emulator, I see the expected screen (screenshot omitted).

Here's where it goes bad. I now change something trivial in the application. In this example, I'm going to remove the setContentView call from the main activity so it looks like this:

    package com.example.something;

    import android.app.Activity;
    import android.os.Bundle;

    public class Something extends Activity
    {
        @Override
        public void onCreate(Bundle savedInstanceState)
        {
            super.onCreate(savedInstanceState);
            //setContentView(R.layout.main); REMOVED
        }
    }

Now I rebuild the application with ant debug install and run it in the emulator, and I see the same screen as before (screenshot omitted). This is wrong: I just removed the text with my previous edit.

If I do ant clean before ant debug install, I get the expected result (screenshot omitted).

I don't want to have to run ant clean before each ant debug install. How can I make ant actually rebuild the program without running ant clean each time?

Details: here's the output from the initial ant debug install:

    $ ant debug install
    Buildfile: /home/x/android/something/build.xml

    -set-mode-check:

    -set-debug-files:

    -set-debug-mode:

    -debug-obfuscation-check:

    -setup:
    [echo] Gathering info for something...
    [setup] Android SDK Tools Revision 16
    [setup] Project Target: Android 2.1
    [setup] API level: 7
    [setup]
    [setup] ------------------
    [setup] Resolving library dependencies:
    [setup] No library dependencies.
    [setup]
    [setup] ------------------
    [setup]
    [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions.

    -build-setup:
    [echo] Creating output directories if needed...
    [mkdir] Created dir: /home/x/android/something/bin
    [mkdir] Created dir: /home/x/android/something/bin/res
    [mkdir] Created dir: /home/x/android/something/gen
    [mkdir] Created dir: /home/x/android/something/bin/classes

    -pre-build:

    -code-gen:
    [echo] ----------
    [echo] Handling aidl files...
    [aidl] No AIDL files to compile.
    [echo] ----------
    [echo] Handling RenderScript files...
    [renderscript] No RenderScript files to compile.
    [echo] ----------
    [echo] Handling Resources...
    [aapt] Generating resource IDs...

    -pre-compile:

    -compile:
    [javac] Compiling 2 source files to /home/x/android/something/bin/classes

    -post-compile:

    -obfuscate:

    -dex:
    [dex] Converting compiled files and external libraries into /home/x/android/something/bin/classes.dex...

    -crunch:
    [crunch] Crunching PNG Files in source dir: /home/x/android/something/res
    [crunch] To destination dir: /home/x/android/something/bin/res
    [crunch] Crunched 0 PNG files to update cache

    -package-resources:
    [aapt] Creating full resource package...

    -package:
    [apkbuilder] Current build type is different than previous build: forced apkbuilder run.
    [apkbuilder] Creating something-debug-unaligned.apk and signing it with a debug key...

    -do-debug:
    [zipalign] Running zip align on final apk...
    [echo] Debug Package: /home/x/android/something/bin/something-debug.apk

    debug:
    [propertyfile] Creating new property file: /home/x/android/something/bin/build.prop
    [propertyfile] Updating property file: /home/x/android/something/bin/build.prop
    [propertyfile] Updating property file: /home/x/android/something/bin/build.prop
    [propertyfile] Updating property file: /home/x/android/something/bin/build.prop

    install:
    [echo] Installing /home/x/android/something/bin/something-debug.apk onto default emulator or device...
    [exec] 66 KB/s (4410 bytes in 0.065s)
    [exec]  pkg: /data/local/tmp/something-debug.apk
    [exec] Success

    BUILD SUCCESSFUL
    Total time: 5 seconds

Here's the output from the second ant debug install, after the edit:

    $ ant debug install
    Buildfile: /home/x/android/something/build.xml

    -set-mode-check:

    -set-debug-files:

    -set-debug-mode:

    -debug-obfuscation-check:

    -setup:
    [echo] Gathering info for something...
    [setup] Android SDK Tools Revision 16
    [setup] Project Target: Android 2.1
    [setup] API level: 7
    [setup]
    [setup] ------------------
    [setup] Resolving library dependencies:
    [setup] No library dependencies.
    [setup]
    [setup] ------------------
    [setup]
    [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions.

    -build-setup:
    [echo] Creating output directories if needed...

    -pre-build:

    -code-gen:
    [echo] ----------
    [echo] Handling aidl files...
    [aidl] No AIDL files to compile.
    [echo] ----------
    [echo] Handling RenderScript files...
    [renderscript] No RenderScript files to compile.
    [echo] ----------
    [echo] Handling Resources...
    [aapt] No changed resources. R.java and Manifest.java untouched.

    -pre-compile:

    -compile:
    [javac] Compiling 1 source file to /home/x/android/something/bin/classes

    -post-compile:

    -obfuscate:

    -dex:
    [dex] No new compiled code. No need to convert bytecode to dalvik format.

    -crunch:
    [crunch] Crunching PNG Files in source dir: /home/x/android/something/res
    [crunch] To destination dir: /home/x/android/something/bin/res
    [crunch] Crunched 0 PNG files to update cache

    -package-resources:
    [aapt] No changed resources or assets. something.ap_ remains untouched

    -package:
    [apkbuilder] No changes. No need to create apk.

    -do-debug:
    [zipalign] No changes. No need to run zip-align on the apk.
    [echo] Debug Package: /home/x/android/something/bin/something-debug.apk

    debug:
    [propertyfile] Updating property file: /home/x/android/something/bin/build.prop
    [propertyfile] Updating property file: /home/x/android/something/bin/build.prop
    [propertyfile] Updating property file: /home/x/android/something/bin/build.prop
    [propertyfile] Updating property file: /home/x/android/something/bin/build.prop

    install:
    [echo] Installing /home/x/android/something/bin/something-debug.apk onto default emulator or device...
    [exec] 88 KB/s (4410 bytes in 0.048s)
    [exec]  pkg: /data/local/tmp/something-debug.apk
    [exec] Success

    BUILD SUCCESSFUL
    Total time: 3 seconds

Notice that the -dex, -package, and -debug steps all seem to think that I didn't change anything.

A: I've just downgraded to SDK r15; there's no such bug in it. Downloads are still there:

    http://dl.google.com/android/android-sdk_r15-linux.tgz
    http://dl.google.com/android/android-sdk_r15-windows.zip
    http://dl.google.com/android/installer_r15-windows.exe
    http://dl.google.com/android/android-sdk_r15-macosx.zip

Most relevant issue in Android's bug tracker for the SDK r16 bug: http://code.google.com/p/android/issues/detail?id=23141

A: I was asking about this in #android-dev; apparently there is a bug in SDK r16 which breaks these build steps:

    21:25 < pfn> I have that exact problem with sdk r16
    21:25 < pfn> the answer is to delete classes.dex and yourapp-debug.apk before every ant debug

Unfortunately this fix doesn't seem to work, so it seems we're stuck with doing a clean build each time.
Q: Do I need to get a new AFSP approval if I change my instructor or aircraft?

I plan to remain with the same provider, though.

A: An AFSP application is valid for one provider, one aircraft category (TSA category, not FAA category), and one training event (single-engine, IR or multi-engine). If you change any of those things, you need to submit a new application.

If you're at a part 61 or 141 school that has multiple instructors, the approved provider is usually the school, not the individual instructor. So you can receive training from any instructor there without an issue. If you're getting training directly from a freelance part 61 instructor, then it's possible that the instructor himself is the approved provider. In that case, you would need to submit a new application to get training from a different instructor or school. In either case, your school or instructor should know exactly who's registered with the TSA as the provider.

As for the aircraft, you only need to submit a new request if you want to train in a different aircraft category. Initial training is usually done in category 3 aircraft (maximum MTOW of 12,500 lbs) and you can train in one or many physical aircraft, as long as they're all in the same category. But remember that SEL and MEL are different training events, so if your current approval is for SEL training you can't receive MEL training, even if the aircraft is under 12,500 lbs.

Links to specific pages on the AFSP website don't work properly, but their FAQ is reasonably clear and you can email their helpdesk at [email protected] if you need more information. AOPA has a useful guide, and the regulations (including category definitions) are in 49 CFR 1552.
Q: SoundPool play not working

I'm trying to play a .mp3 sound from my raw folder, but somehow the sound doesn't play. The code does execute the play method, but it doesn't produce sound. Here's my code:

    public SoundPlayer(Context context) {
        this.context = context;
        AudioManager audioManager = (AudioManager) context.getSystemService(Service.AUDIO_SERVICE);
        soundPool = new SoundPool(3, AudioManager.STREAM_MUSIC, 0);
        clickId = soundPool.load(context, R.raw.bubbleclick, -1);
        errId = soundPool.load(context, R.raw.bubbleclickerror, 1);
        countId = soundPool.load(context, R.raw.countdowntick, 1);
        float actualVolume = (float) audioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
        float maxVolume = (float) audioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
        volume = actualVolume / maxVolume;
    }

    public void playBubbleClick() {
        if (!isMuted()) {
            soundPool.play(clickId, volume, volume, 1, 0, 1);
        }
    }

I first instantiate the SoundPlayer class (a custom class), and then call playBubbleClick().

A: I didn't solve the issue, but I worked around it by calling the play method in the load-complete event (when setting an OnLoadCompleteListener). This does solve it, but it still shouldn't have to be done this way.
Q: Twig variable in external js file

I want to move my js code to an external file, but it contains a twig variable:

    team: {{ 'Select your team'|trans }}

What are your tricks?

Thanks,

A: I just set my twig vars as globals before requiring any javascript files.

    <!DOCTYPE html>
    <html>
    <head>
        <title></title>
    </head>
    <body>
        <script>
            var my_twig_var = {% if twig_var is defined %}'{{ twig_var }}'{% else %}null{% endif %}
        </script>
        <script src="scripts/functions.js"></script>
    </body>
    </html>

Another approach I use is to provide a javascript block in my main template:

base.twig.html

    <!DOCTYPE html>
    <html>
    <head>
        <title></title>
    </head>
    <body>
        {% block body %}
        {% endblock %}
        {% block javascript %}
        {% endblock %}
    </body>
    </html>

page.html.twig

    {% extends 'base.twig.html' %}

    {% block body %}
        <h1>Hello World</h1>
    {% endblock %}

    {% block javascript %}
        <script>
            alert('{{ twig_var|default('Hello World') }}');
        </script>
    {% endblock %}
Q: Where is the location of the schema file for http://schemas.microsoft.com/winfx/2006/xaml/presentation?

I've been googling for the schema file that describes WPF elements for XAML but cannot find it. The namespace declaration should have a list of all WPF features, for example types, attributes, or elements that it adds to standard XAML. I can find the schema file for XAML in the Visual Studio cache directory; the file is called xaml2006.xsd. There is a wpfe.xsd, but its target namespace is http://schemas.microsoft.com/client/2007. This may sound trivial, but I've spent hours trying to find this schema file. Where can I find a schema file (XSD file) with targetNamespace set to "http://schemas.microsoft.com/winfx/2006/xaml/presentation"? If it is hidden inside a DLL file, then perhaps there is an open source resource that hosts this schema file?

A: A little late; hope this will be helpful to whoever searches for this. The first namespace you mentioned is the default namespace, and no schema file exists at the URL provided; it is just a unique name. See http://video.ch9.ms/ch9/ff6f/e1477e72-b989-49c1-acdd-62802c36ff6f/ABSWP81Part3_mid.mp4 and skip to 14:00. He says basically to use http://schemas.microsoft.com/winfx/2006/xaml:

    <Page
        x:Class="HelloWorld.MainPage"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="using:HelloWorld"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d">
Q: How to execute multiple goals in one maven command, but with different arguments for each goal

I am trying to run 2 maven goals in one maven command, like:

    mvn release:prepare release:perform -Darguments='-Dmaven.test.skip=true'

but I would like the first goal to skip tests and the second one not to skip tests. It has to be a one-line command. Is there a way to do it other than executing them in 2 separate commands?

A: You can use the following:

    mvn -Dmaven.test.skip=true release:prepare release:perform

Within the release plugin, the arguments passed via -Darguments='....' go to the sub-process which is started by release:perform. The other arguments are passed to the process which is started by release:prepare.
Q: In Swift, how to flip a UITableViewCell?

I have a tableView where, when a cell is pressed, I need it to flip to another cell, one cell at a time. I saw a post on how to do it in Objective-C here: How can I flip an iOS UITableViewCell? Is there a way to do it in Swift?

A: In case you need the disclosure indicators: you will have to use a custom one in order to flip it, as far as I know. I found this function that will flip a cell's view horizontally, but I don't believe that is what you want:

    cell.contentView.layer.setAffineTransform(CGAffineTransformMakeScale(-1, 1))

I would just make both cells with custom disclosure indicators (save copies of the disclosure indicator images flipped), and then when you click the cell make them swap like so:

    var isFirstCell = true
    var indexOfSwappingCell: Int = 0

    func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
        isFirstCell = !isFirstCell
        tableView.reloadRowsAtIndexPaths([NSIndexPath(forRow: indexOfSwappingCell, inSection: 0)], withRowAnimation: .None)
    }

    func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
        if indexPath.row == indexOfSwappingCell {
            if isFirstCell {
                let cell = tableView.dequeueReusableCellWithIdentifier("customCell1", forIndexPath: indexPath) as! CustomCell1
                // setup cell
                return cell
            } else {
                let cell = tableView.dequeueReusableCellWithIdentifier("customCell2", forIndexPath: indexPath) as! CustomCell2
                // setup cell
                return cell
            }
        } else {
            // setup other cells
            return UITableViewCell()
        }
    }

In case you want the code those guys used, I just ran it through objectivec2swift:

    // flip the view to flipView
    @IBAction func flipButtonAction(sender: UIButton) {
        UIView.transitionWithView(self.contentView, duration: 0.6, options: .TransitionFlipFromRight, animations: {() -> Void in
            self.contentView.insertSubview(flipView, aboveSubview: normalView)
        }, completion: {(finished: Bool) -> Void in
        })
    }

    // flip the view back to normalView
    @IBAction func flipBackButtonAction(sender: AnyObject) {
        UIView.transitionWithView(self.contentView, duration: 0.6, options: .TransitionFlipFromLeft, animations: {() -> Void in
            self.contentView.insertSubview(normalView, aboveSubview: flipView)
        }, completion: {(finished: Bool) -> Void in
        })
    }

To respond to your comment, add these 2 above viewDidLoad:

    var isFirstLoad = true
    var contentViewToFlip: UIView!

    func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
        if indexPath.row == indexOfSwappingCell {
            if isFirstCell {
                let cell = tableView.dequeueReusableCellWithIdentifier("customCell1", forIndexPath: indexPath) as! CustomCell1
                if isFirstLoad == false {
                    UIView.transitionWithView(cell.contentView, duration: 0.6, options: .TransitionFlipFromRight, animations: {() -> Void in
                        contentViewToFlip.insertSubview(cell.contentView, aboveSubview: contentViewToFlip)
                    }, completion: {(finished: Bool) -> Void in
                    })
                }
                isFirstLoad = false
                contentViewToFlip = cell.contentView
                // setup cell
                return cell
            } else {
                let cell = tableView.dequeueReusableCellWithIdentifier("customCell2", forIndexPath: indexPath) as! CustomCell2
                UIView.transitionWithView(cell.contentView, duration: 0.6, options: .TransitionFlipFromLeft, animations: {() -> Void in
                    contentViewToFlip.insertSubview(cell.contentView, aboveSubview: contentViewToFlip)
                }, completion: {(finished: Bool) -> Void in
                })
                contentViewToFlip = cell.contentView
                // setup cell
                return cell
            }
        } else {
            // setup other cells
            return UITableViewCell()
        }
    }
Q: How should I validate the presence of fields that create an instance that doesn't have a model?

A sign-in form simply populates the session hash, and so doesn't have a model. As such, I don't know where to put validations for the sign-in form. How should I validate the presence of the username and password? Should I just do it with javascript on the client side? I suppose that makes sense, but how could I iterate through the errors hash if it were to fail validations?

A: Really simple: you can do it using ActiveModel. Just create a class as below and add some validations:

    class Session
      include ActiveModel::Validations
      extend ActiveModel::Naming

      attr_accessor :username, :password

      validates_presence_of :username, :password

      def initialize(attrs = {})
        attrs.each do |name, value|
          send("#{name}=", value)
        end
      end

      def persisted?
        false
      end
    end

    s = Session.new(username: 'Abc')
    => #<Session:0x000000058a3270 @username="Abc">
    s.valid?
    s.errors
    => #<ActiveModel::Errors:0x000000058ad900 ... @messages={:password=>["can't be blank"]}>

And your form should automatically show error messages if they exist.
Q: ANDROID Convert jpeg image stored in /res/drawable to Bitmap object

Possible Duplicate: How to convert a Drawable to a Bitmap?

I have just started learning Android, so I'm a newbie. I am trying to create an Android application for image processing for my project, and I came across the blog entry http://xjaphx.wordpress.com/2011/06/22/image-processing-brightness-over-image/

I am facing the following problem: I am unable to create a Bitmap object for my image stored as /res/drawable/pic_1.jpg. I tried using

    Bitmap myBitmap = BitmapFactory.decodeFile("/res/drawable/pic_1.jpg");

and called the function

    public static Bitmap doBrightness(Bitmap src, int value) { ... }

but that didn't work. Then I tried

    Bitmap myBitmap = BitmapFactory.decodeFile("/res/drawable/pic_2.jpg");
    imageview.setImageBitmap(myBitmap);

without calling the doBrightness() function; yet it displays nothing in the imageview, so I guess BitmapFactory.decodeFile() is returning null. What I wanted to know is: how is it possible to create a Bitmap object for an image stored in /res/drawable?

A: You should get the image via the resources, not by path:

    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.pic_1);

A: Use this:

    Bitmap bitmap = BitmapFactory.decodeResource(ctx.getResources(), R.drawable.icon);
Q: How can I read a line and split it to save it? And do the same for a second file

I have been trying to read my txt file into java and then split the two integer columns to be saved into a list or array. I need these two numbers separated because I will be uploading a second txt file from which I will have more numbers that I will need to add to or subtract from the first file's columns. Here is a sample of my txt files:

file 1:

    0033 2000
    2390 500
    etc.

file 2:

    0033 2 400
    3829 1 3020
    etc.

The first file has two columns and the second file has three columns. To be very honest I'm not good with java at all. So far I have only been able to read the files and print them as they are:

    import java.io.File;
    import java.io.FileReader;
    import java.io.BufferedReader;
    import java.io.IOException;

    public class test {

        public static void readlines(File f) throws IOException {
            FileReader fr = new FileReader(f);
            BufferedReader br = new BufferedReader(fr);
            String line;
            int numberOfLines = 0;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
                numberOfLines++;
            }
            System.out.println("Number of lines read: " + numberOfLines);
            br.close();
            fr.close();
        }

        public static void main(String[] args) {
            File f = new File("filename1");
            File s = new File("filename2");
            try {
                readlines(f);
                readlines(s);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

I know that I should be splitting the data using .split("\t") since the separator is a tab, but how can I save this into a column array so that I can later on, in a different class, add them together? Do I need to make two classes, one in which I read file 1 and a second for file 2, and then do all my adding in the main class? Any ideas will be nice here! Sorry for asking basic stuff, but switching from matlab to java is kind of difficult for me.

A: What I'd do is make the readlines(File f) method return an array. Arrays (in case you don't know) are lists of (commonly) similar elements. In your case, that method would return a String array. The split() method returns an array of String, where each element is one of the split parts. For example:

    String[] splitted = "Hello World!".split(" ");

In this case, splitted would be:

    ["Hello", "World!"]

If you make readlines(File f) return an array with the split content it read, all you have to do in your main (if you want to keep this "matrix" appearance) is push the returned value into another array. I hope I explained it clearly enough ^^
Q: New line javascript or asp.net

I want to add a new line in my code so that my files appear under each other. I will post the code from JS and ASP.NET with their link buttons:

    $(qFiles).each(function (i) {
        var a = document.createElement('a');
        var isSharePoint = this.isSharePoint;
        var img = document.createElement('img');
        if (isSharePoint) {
            a.target = "_blank";
            a.href = this.FilePath;
            img.src = appPath + '/Images/pagelink_16x16.png';
        } else {
            var filePath = this.FilePath;
            var fileName = this.FileName;
            a.onclick = function () {
                downloadFile(filePath + "\\" + fileName);
            };
            img.src = appPath + '/Images/download_16x16.png';
        }
        qDiv.appendChild(img);
        a.innerHTML += this.FileName;
        a.className = "linkSurrogate singleLine";
        qDiv.appendChild(a);
    });

    qDiv = $(config.jQuerySelectors.quoteFiles);
    var dialog = qDiv.dialog({
        autoOpen: true,
        height: config.dialog.height,
        width: config.dialog.width,
        closeOnEscape: true,
        resizable: false,
        title: "Quote " + quoteNo + " Files",
        modal: true,
        buttons: {
            "Close": function () {
                qDiv.dialog('close');
            }
        }
    });

As you can see, there is a validation of whether the file is SharePoint or not, and there is also the dialog where the two files appear. And here is the code with the link buttons:

    <div id="quoteFiles" style="display: none">
        <asp:LinkButton CssClass="linkButton" ID="downloadFile" ClientIDMode="Static" runat="server" OnClick="downloadFile_Click"></asp:LinkButton>
        <asp:LinkButton CssClass="linkButton" ID="sharePoint" ClientIDMode="Static" runat="server" OnClick="downloadFile_Click"></asp:LinkButton>
        <asp:Label runat="server" ID="quoteFiles" ClientIDMode="Static" Text="There are no files attached."></asp:Label>
    </div>

I already tried document.write with both \n and <br />, and it didn't work. I also tried to put a new line in the asp page, but nothing seemed to work.

A: After your code line

    qDiv.appendChild(a);

put this:

    var p = document.createElement('p');
    qDiv.appendChild(p);

This will then be written after the link (file -> link -> newline). It will create an empty p tag. You could also append a br tag, but sometimes this doesn't show and then you need 2 br's, which are shown as 2 newlines on some browsers and 1 newline on other browsers, so an empty p tag is what you want because it is always rendered correctly by all browsers.
Q: PHP static vs. object calls

So I have this problem where I can call an object method statically and vice versa. Is this supposed to happen, or am I doing something wrong? PHP version: 5.6.12; XAMPP version: 3.2.1.

    function endl() {
        echo "<br>";
    }

    class Base {

        public function objectFunc($msg) {
            echo "You called a non-static function from " . $msg;
            endl();
        }

        public static function staticFunc($msg) {
            echo "You called a static function from " . $msg;
            endl();
        }
    }

    Base::objectFunc("a static call");
    Base::staticFunc("a static call");

    $base = new Base;
    $base->objectFunc("a non-static call");
    $base->staticFunc("a non-static call");

Here are the results from running this:

    You called a non-static function from a static call
    You called a static function from a static call
    You called a non-static function from a non-static call
    You called a static function from a non-static call

A: This could help you:

"Declaring class properties or methods as static makes them accessible without needing an instantiation of the class. A property declared as static cannot be accessed with an instantiated class object (though a static method can)." (php.net)

"Because static methods are callable without an instance of the object created, the pseudo-variable $this is not available inside the method declared as static.

Caution: In PHP 5, calling non-static methods statically generates an E_STRICT level warning.

Warning: In PHP 7, calling non-static methods statically is deprecated, and will generate an E_DEPRECATED warning. Support for calling non-static methods statically may be removed in the future." (php.net)

Your code is going to work, but with warnings; it depends on the PHP version. For more, see http://php.net/manual/en/language.oop5.static.php
Q: How to set environment variable or system property in spring tests?

I'd like to write some tests that check the XML Spring configuration of a deployed WAR. Unfortunately some beans require that some environment variables or system properties are set. How can I set an environment variable before the spring beans are initialized when using the convenient test style with @ContextConfiguration?

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = "classpath:whereever/context.xml")
    public class TestWarSpringContext { ... }

If I configure the application context with annotations, I don't see a hook where I can do something before the spring context is initialized.

A: You can initialize the System property in a static initializer:

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = "classpath:whereever/context.xml")
    public class TestWarSpringContext {

        static {
            System.setProperty("myproperty", "foo");
        }
    }

The static initializer code will be executed before the spring application context is initialized.

A: The right way to do this, starting with Spring 4.1, is to use a @TestPropertySource annotation:

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = "classpath:whereever/context.xml")
    @TestPropertySource(properties = {"myproperty = foo"})
    public class TestWarSpringContext { ... }

See @TestPropertySource in the Spring docs and Javadocs.

A: One can also use a test ApplicationContextInitializer to initialize a system property:

    public class TestApplicationContextInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        public void initialize(ConfigurableApplicationContext applicationContext) {
            System.setProperty("myproperty", "value");
        }
    }

and then configure it on the test class in addition to the Spring context config file locations:

    @ContextConfiguration(initializers = TestApplicationContextInitializer.class, locations = "classpath:whereever/context.xml", ...)
    @RunWith(SpringJUnit4ClassRunner.class)
    public class SomeTest { ... }

This way code duplication can be avoided if a certain system property should be set for all the unit tests.
Q: What exactly do the universal variables x and z mean?

I have been studying orbital mechanics by myself, and when solving the Lambert problem it's common to use the universal variable approach. I understand the algorithm, but I haven't found any book that explains well the physical meaning of the universal anomaly $x$ and the dimensionless variable $z$. What would be the physical meaning of these two variables?

de la Torre Sangra & Fantino's Review of Lambert's Problem, as well as Izzo's Revisiting Lambert's problem (also on ArXiv), cite Bate 1971, i.e. Fundamentals of Astrodynamics by Roger R. Bate, Donald D. Mueller, Jerry E. White (Dover, 1971) (in Google Books, with PDFs out there as well), which introduces $x$ and $z$.

A: Following page 204 of Fundamentals of Astrodynamics: these are just convenience variables that depend on the change of eccentric anomaly from the initial to the final point of the motion analyzed (or predicted).

For an elliptical orbit:

$$x = \sqrt{a} ( E - E_0 )$$

or, for negative $a$ (hyperbolic orbit),

$$x = \sqrt{-a} ( F - F_0 )$$

For a parabolic orbit,

$$x = D-D_0$$

For an elliptical orbit:

$$z = (E - E_0)^2$$

For a hyperbolic orbit:

$$-z = (F - F_0)^2$$

For a parabolic orbit, $z = 0$ (also when there's no change in eccentric anomaly), where $E$ is the eccentric anomaly (page 183), $D$ is the "parabolic eccentric anomaly" and $F$ the "hyperbolic eccentric anomaly" (always an imaginary value), counterparts to $E$ for parabolic and hyperbolic trajectories. Subsequent pages explain the derivation of these.

As a side note, I think the eccentric anomaly deserves a better justification and explanation than what it usually gets, with arbitrary lines extended to randomly chosen circles for unknown purposes. As the standard ellipse equation is $({x\over a})^2 + ({y \over b})^2 = 1$ (that's a Cartesian coordinate $x$, not the universal variable $x$), the typical parametrization is:

$$ x = a \cos E $$
$$ y = b \sin E $$
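To make the elliptical-orbit relations above concrete, here is a small numeric sketch in Python (the function name and the sample numbers are mine, not Bate's; canonical units are assumed so $a$ is just a positive number):

```python
import math

def universal_x_z(a, E, E0):
    """Universal anomaly x and auxiliary variable z for an elliptical
    orbit with semi-major axis a > 0, given eccentric anomalies E and
    E0 in radians: x = sqrt(a)*(E - E0), z = (E - E0)**2."""
    dE = E - E0
    x = math.sqrt(a) * dE
    z = dE ** 2
    return x, z

# For the elliptic case the two variables are tied together by x**2 = a*z,
# which is one way to see why z collapses to 0 on a parabolic trajectory.
x, z = universal_x_z(a=1.5, E=1.0, E0=0.25)
print(x, z, abs(x * x - 1.5 * z) < 1e-12)
```

The check at the end verifies the identity $x^2 = a\,z$, which holds term by term from the two definitions.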
Q: Finding good pants for winter biking

Winter is soon upon us here, and I'm gearing up for the season. I currently have a pair of rain over-pants which I use during the other 3 seasons, but honestly I really dislike them. They're baggy, requiring the use of an ankle band, and not entirely warm. They do a decent job of keeping out water, but I don't know how they will perform with snow. Can someone recommend a type of pants which are more form-fitting and will do well in both rain and snow?

A: It really depends on your budget, but you should check this out (or anything similar) from Gore Bike Wear, 179.99 USD MSRP: http://www.gorebikewear.com/remote/Satellite/PROD_TULTRO?landingid=1208436873480O They aren't the tightest pants out there, but "real" tight pants are very rarely (trying not to say never) waterproof because of the type of fabric used to make them. An alternative idea would be a nylon pant (waterproof, windproof, but absolutely not warm) that's relatively tight, with a warm base layer.

A: I have a pair of Sugoi long tights for commuting in the winter. I add or subtract a base layer depending on conditions. I don't really care if it's Sugoi or not; other companies make bike tights compatible with base layers. Maybe I'll choose a different brand next year? Depends on what's on sale...

A: Foxwear makes great rain pants and jackets. They're a little baggy, but they close at the ankle, so no ankle clip is required. They're also good layered over tights for winter riding. I have two pairs, and will use them until they fall apart (which may take some time).
Q: How to Groupby and plot it

I have the following dataframe (with different campaigns). When I use groupby and try to plot, I get several graphs:

    df.groupby("Campaign").plot(y=["Visits"], x="Week")

I would like to have only one graph, with the visits of every campaign plotted together over the weeks. Also, because the graphs show up separated, I do not know which one belongs to which campaign. I would appreciate any tips regarding this.

A: You could do this:

    df.set_index(['Week','Campaign'])['Visits'].unstack().plot(title='Visits by Campaign')

For multiple values of Week/Campaign, let's aggregate them with sum, or you could use mean to average the values:

    df.groupby(['Week','Campaign'])['Visits'].sum().unstack().plot(title='Visits by Campaign')

Output:
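As a self-contained illustration of why the unstack step produces a single chart, here is a sketch with made-up numbers (the column names follow the question; any frame with Week/Campaign/Visits columns behaves the same):

```python
import pandas as pd

# Hypothetical data shaped like the question's frame.
df = pd.DataFrame({
    "Week":     [1, 1, 2, 2],
    "Campaign": ["A", "B", "A", "B"],
    "Visits":   [10, 20, 30, 40],
})

# Aggregate to one value per (Week, Campaign), then pivot Campaign into
# columns: one row per Week, one column per Campaign.  A single .plot()
# call on this wide table draws one labelled line per campaign.
wide = df.groupby(["Week", "Campaign"])["Visits"].sum().unstack()
print(wide)
# wide.plot(title="Visits by Campaign")  # needs matplotlib installed
```

The index of `wide` becomes the x-axis (Week) and each column becomes one legend entry, which also solves the "which graph belongs to which campaign" problem.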
Q: Converting a StreamReader(String) to be compatible with UWP API?

I'm running into a snag utilizing the StreamReader class. The StreamReader class documentation page states that it supports Universal Windows Platform (UWP) apps; under the Version Information header it says "Universal Windows Platform - Available since 8". Upon further inspection of its constructors, the StreamReader(Stream) constructors do support UWP apps, however the StreamReader(String) constructors do not. I'm currently using the StreamReader(String) constructor with the complete file path to be read:

    using (StreamReader sr = new StreamReader(path)) { ... }

I'm seeking to learn how to convert my code from a StreamReader(String) to a StreamReader(Stream).

A: In UWP, StreamReader accepts only a Stream (with additional options), not a String. So to use StreamReader on a particular path, you need to get the StorageFile:

    StorageFile file = await StorageFile.GetFileFromPathAsync(<Your path>);
    var randomAccessStream = await file.OpenReadAsync();
    Stream stream = randomAccessStream.AsStreamForRead();
    StreamReader str = new StreamReader(stream);
Q: What is the best way to update form controls from a worker thread?

I've done some research and I can't really find a preferred way to update form controls from a worker thread in C#. I know about the BackgroundWorker component, but what is the best way to do it without using the BackgroundWorker component?

A: There's a general rule of thumb that says don't update the UI from any thread other than the UI thread itself. Using the features of the BackgroundWorker is a good idea, but if you don't want to and something is happening on a different thread, you should do an Invoke or BeginInvoke to force the delegate to execute the method on the UI thread.

Edit: Jon B made this good point in the comments: keep in mind that Invoke() is synchronous and BeginInvoke() is asynchronous. If you use Invoke(), you have to be careful not to cause a deadlock. I would recommend BeginInvoke() unless you really need the call to be synchronous.

Some simple example code:

    // Updates the textbox text.
    private void UpdateText(string text)
    {
        // Set the textbox text.
        m_TextBox.Text = text;
    }

    public delegate void UpdateTextCallback(string text);

    // Then from your thread you can call this...
    m_TextBox.Invoke(new UpdateTextCallback(this.UpdateText),
        new object[] { "Text generated on non-UI thread." });

The code above is from a FAQ about it here, and a longer, more involved one here.

A: Why don't you want to do it using the BackgroundWorker? It has a fantastic callback event called ProgressChanged which lets the UI thread know about updates, perfect for progress-bar-type updates and the like. Link to details.
Q: Unable to load DLL python module in PyCharm. Works fine in IPython

When I use the IPython included with the Enthought Python Distribution, I can import the pyvision package just fine. However, when I try to import pyvision inside of PyCharm 1.2.1, I get the following errors:

    File "C:\Python27\lib\site-packages\pyvision\__init__.py", line 146, in <module>
        from pyvision.types.img import Image,OpenCVToNumpy,NumpyToOpenCV
    File "C:\Python27\lib\site-packages\pyvision\types\img.py", line 43, in <module>
        import numpy
    File "C:\Python27\lib\site-packages\numpy\__init__.py", line 142, in <module>
        import add_newdocs
    File "C:\Python27\lib\site-packages\numpy\add_newdocs.py", line 9, in <module>
        from numpy.lib import add_newdoc
    File "C:\Python27\lib\site-packages\numpy\lib\__init__.py", line 13, in <module>
        from polynomial import *
    File "C:\Python27\lib\site-packages\numpy\lib\polynomial.py", line 17, in <module>
        from numpy.linalg import eigvals, lstsq
    File "C:\Python27\lib\site-packages\numpy\linalg\__init__.py", line 48, in <module>
        from linalg import *
    File "C:\Python27\lib\site-packages\numpy\linalg\linalg.py", line 23, in <module>
        from numpy.linalg import lapack_lite
    ImportError: DLL load failed: The specified module could not be found.

Am I missing some path settings in Windows?

A: I had the same problem. I'm using WinPython 32 and trying to import win32com. It worked everywhere I tried except in PyCharm. sys.path and os.environ['PYTHONPATH'] had some extra entries inside PyCharm, but nothing was missing compared to when run elsewhere. The solution was to start PyCharm from within the WinPython console instead of using the shortcut. sys.path and os.environ['PYTHONPATH'] did not change, but os.environ['PATH'] had several additional entries set, all related to the Python installation. At this point I suspect it has to do with "non-standard" installations: WinPython 32 tries to be "portable", while other reports of similar problems occur when using Enthought or Python(x,y).

Manually adding:

    C:\WinPython-32\python-2.7.6\
    C:\WinPython-32\python-2.7.6\DLLs
    C:\WinPython-32\python-2.7.6\Scripts

to the system path (the global PATH environment variable in Windows) solved the problem without having to run PyCharm within the WinPython command line. Note: C:\WinPython-32\python-2.7.6\Scripts alone did not solve it.
Q: Modeling incoming solar radiation

I want to write a model for estimating incoming solar radiation for a specific latitude on Earth, but I am struggling to find an appropriate source which shows the required equations for doing so. Would anyone be able to provide me with a link to where I can find equations for estimating solar radiation (irradiance) given a specific cloud cover, latitude, time of day, and day of year?

A: Ok, I'm still not sure on what level you want to do this, but I will start you off with some basics. The most important factor is probably the solar elevation angle, $\theta$. As described on the wiki page, it can be calculated using this formula:

$$\sin\theta=\cos h\cos\delta\cos\Phi+\sin\delta\sin\Phi$$

where $h$ is the hour angle, $\delta$ is the solar declination and $\Phi$ is the latitude. The trickiest of these to calculate is the solar declination. A few different formulas for it can be found here; which formula you use will depend on the accuracy you need. I suggest starting with this one:

$$ \delta=-\arcsin(0.39789\cos(0.98565(N+10)+1.914\sin(0.98565(N-2)))) $$

where $N$ is the day of year beginning with $N=0$ at 00:00:00 UTC on January 1 (preferably calculate $N$ as a decimal number to increase accuracy). Note that this formula uses degree-based trigonometric functions.

Now, if we totally ignore atmospheric effects, the total solar irradiance (of all wavelengths) incident on a horizontal surface will be:

$$ E=A\sin\theta $$

where $A$ is the solar constant, which has a value of approximately 1360 W/m$^2$ (on average; it varies by roughly 7% over the year due to the ellipticity of Earth's orbit). Since this ignores atmospheric effects, the actual irradiance on the ground will be lower due to scattering and absorption. These effects will also depend on the solar elevation angle, since a lower angle gives a longer light path through the atmosphere. Maybe, starting from this, you can explain what further aspects you need to model.
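The formulas above translate directly into code; here is a minimal sketch (the function names are mine, and the declination approximation is the degree-based one quoted above):

```python
import math

SOLAR_CONSTANT = 1360.0  # W/m^2, the average value used above

def declination_deg(N):
    """Solar declination in degrees for (fractional) day-of-year N,
    using the degree-based approximation quoted above."""
    arg_deg = 0.98565 * (N + 10) + 1.914 * math.sin(math.radians(0.98565 * (N - 2)))
    return -math.degrees(math.asin(0.39789 * math.cos(math.radians(arg_deg))))

def toa_irradiance(lat_deg, decl_deg, hour_angle_deg):
    """Top-of-atmosphere irradiance E = A*sin(theta) on a horizontal
    surface, clipped to zero when the sun is below the horizon."""
    lat, dec, h = (math.radians(v) for v in (lat_deg, decl_deg, hour_angle_deg))
    sin_theta = math.cos(h) * math.cos(dec) * math.cos(lat) + math.sin(dec) * math.sin(lat)
    return max(0.0, SOLAR_CONSTANT * sin_theta)

# Sun directly overhead (equator, zero declination, local noon):
print(toa_irradiance(0.0, 0.0, 0.0))  # full solar constant
# Declination near the June solstice (N ~ 172) should be close to +23.4 degrees:
print(declination_deg(172))
```

Cloud cover is not in these equations; a common first-order treatment is to multiply $E$ by an empirical transmission factor, which is a separate modeling choice.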
A: If you are interested just in the direct irradiance, you can neglect the emission and scattering terms in the Radiative Transfer Equation (RTE), which in this case can be simplified to allow only for absorption and is known under the name Beer-Bougert-Lambert law of absorption:

$$ \cos\theta_{0}\,\frac{\partial}{\partial p}S^{i} = -\frac{\kappa^i}{g}\,S^{i} $$

Here I assumed pressure coordinates to parameterize the height above ground, $i$ is the index of the band if you have more than one, and $S$ is the solar flux in Wm$^{-2}$. The zenith angle $\theta_0(\lambda,\phi,t)$ varies with longitude, latitude, and time of the year, which can be taken into account by

$$ \cos\theta_0 = \cos\phi \cos\delta\cos h + \sin\phi \sin\delta $$

where $\delta$ is the declination angle and $h$ the hour angle. The absorption coefficient can be calculated, for example, as the mixing ratio $\rho_a/\rho$ of the absorber times the band strength $K^i$:

$$ \kappa^i = \frac{\rho_a}{\rho}K^i $$

The only thing left to do for a simple parameterization of the solar radiation is to specify the upper boundary condition for each band you want to consider, for example

$$ \begin{eqnarray} S^1(p=0) & = & C_{sun}\,UV_{O3}\,\cos \theta_{0}\\ S^2(p=0) & = & C_{sun}\,VIS_{O3}\,\cos \theta_{0}\nonumber \\ S^3(p=0) & = & C_{sun}\,VIS_{H2O}\,\cos \theta_{0} \nonumber \\ S^4(p=0) & = & C_{sun}\,UV_{O2}\,\cos \theta_{0} \nonumber \\ S^5 & = & C_{sun}\,\cos \theta_{0} \, - \,\sum\limits_{i=1}^4 S^i (p=0) \nonumber \end{eqnarray} $$

where $C_{sun}$ is the solar "constant" of about 1362 Wm$^{-2}$, $UV_{O3}$, for example, is the relative part of the incoming solar energy that goes into the ozone UV absorber band, and the fifth band contains the radiation that is directly transmitted to the surface. With this simple parameterization you can calculate the solar radiation at the surface by integrating the absorption law downward.

More comprehensive methods to solve the radiation problem in the shortwave regime include scattering on aerosols, clouds, and air molecules, and allow for reflection at the surface too. Non-LTE effects (most important for the longwave regime in the middle atmosphere, for example) can be accounted for in the SW regime by considering appropriate efficiencies for the conversion of absorbed solar radiation to kinetic energy of the atmospheric constituents. In the approach of discrete ordinates, the appropriate RTE is solved for different discrete zenith directions. The principle of invariance and the adding method apply a kind of ray tracing for the incoming solar beam inside an atmospheric layer in order to calculate the emerging radiation at the upper and lower boundaries of the layer. These more comprehensive methods are explained, for example, here, where you can find a more detailed explanation and treatment of both longwave and shortwave radiative transfer in the atmosphere.
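Integrating the absorption law downward becomes a one-liner when the mixing ratio is taken as constant, because the equation then collapses to an exponential in the band's vertical optical depth. A sketch (my notation: $\tau = \int_0^{p_s} \kappa^i/g\,dp$):

```python
import math

def direct_beam_at_surface(S_top, tau, cos_theta0):
    """Integrate cos(theta0) dS/dp = -(kappa/g) S from the top of the
    atmosphere (p = 0) down to the surface.  With tau the vertical
    optical depth of the band, the solution is
    S_surface = S_top * exp(-tau / cos(theta0))."""
    return S_top * math.exp(-tau / cos_theta0)

# A low sun (small cos(theta0)) sees a longer slant path, hence more
# absorption for the same vertical optical depth:
print(direct_beam_at_surface(1362.0, 0.3, 1.0))  # sun overhead
print(direct_beam_at_surface(1362.0, 0.3, 0.5))  # 60-degree zenith angle
```

The `1 / cos_theta0` factor is the plane-parallel air-mass approximation; it breaks down near the horizon, where a spherical-shell correction would be needed.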
Q: Loss of data after updating .Net TableAdapter

I had a program working with a database I created using SQL Server Compact Edition. Everything was updating and showing fine. Then I decided to change the Fill SQL statement to order by a field. After doing that, I lost most of the data, with only 2 records remaining. The 2 records were test records I had added manually in SQL Server Management Studio before starting to build the program, and I thought I had deleted them. Anybody have any ideas of what has happened?

A: Is your .sdf file set as a content file in VS? If so, the previous runs of your app may have worked with the bin\{configuration}\yourdb.sdf file. For some reason (clean, rebuild, ...), the .sdf file in your project may have been redeployed to the bin\{configuration} folder.
Q: Using DirectShow in WinForms I could run the "How To Play a File" code sample in a C++ Win32 console application, but when I try to implement it in WinForms I get these link errors:

Error 2 error LNK2020: unresolved token (0A000016) IID_IMediaEvent
Error 3 error LNK2020: unresolved token (0A000017) IID_IMediaControl

and some more link errors. Here is the code of the form:

#include <dshow.h>
#pragma comment(lib, "Strmiids.lib")

namespace Form1 {

    using namespace System;
    using namespace System::ComponentModel;
    using namespace System::Collections;
    using namespace System::Windows::Forms;
    using namespace System::Data;
    using namespace System::Drawing;

    /// <summary>
    /// Summary for Form1
    /// </summary>
    public ref class Form1 : public System::Windows::Forms::Form
    {
    public:
        Form1(void)
        {
            InitializeComponent();
            //
            //TODO: Add the constructor code here
            //
        }

    protected:
        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        ~Form1()
        {
            if (components)
            {
                delete components;
            }
        }

    private:
        /// <summary>
        /// Required designer variable.
        /// </summary>
        System::ComponentModel::Container ^components;

#pragma region Windows Form Designer generated code
        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        void InitializeComponent(void)
        {
            this->SuspendLayout();
            //
            // Form1
            //
            this->AutoScaleDimensions = System::Drawing::SizeF(6, 13);
            this->AutoScaleMode = System::Windows::Forms::AutoScaleMode::Font;
            this->ClientSize = System::Drawing::Size(284, 262);
            this->Name = L"Form1";
            this->Text = L"Form1";
            this->Load += gcnew System::EventHandler(this, &Form1::Form1_Load);
            this->ResumeLayout(false);
        }
#pragma endregion

    private:
        System::Void Form1_Load(System::Object^ sender, System::EventArgs^ e)
        {
            IGraphBuilder *pGraph = NULL;
            IMediaControl *pControl = NULL;
            IMediaEvent *pEvent = NULL;

            // Initialize the COM library.
            HRESULT hr = CoInitialize(NULL);

            // Create the filter graph manager and query for interfaces.
            hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IGraphBuilder, (void **)&pGraph);
            hr = pGraph->QueryInterface(IID_IMediaControl, (void **)&pControl);
            hr = pGraph->QueryInterface(IID_IMediaEvent, (void **)&pEvent);

            // Build the graph. IMPORTANT: Change this string to a file on your system.
            hr = pGraph->RenderFile(L"C:\\Example.avi", NULL);

            if (SUCCEEDED(hr))
            {
                // Run the graph.
                hr = pControl->Run();
                if (SUCCEEDED(hr))
                {
                    // Wait for completion.
                    long evCode;
                    pEvent->WaitForCompletion(INFINITE, &evCode);
                    // Note: Do not use INFINITE in a real application, because it
                    // can block indefinitely.
                }
            }
            pControl->Release();
            pEvent->Release();
            pGraph->Release();
            CoUninitialize();
        }
    };
}

How can I set up the build environment in WinForms to do DirectShow programming? I'm using Windows SDK v7.1 and VC++ 2010. A: You are not getting a great diagnostic. The problem is that DirectShow is native code, but you are letting the compiler think it is okay to compile it in managed mode. That works surprisingly well, until the linker takes a nosedive. You need to make it look like this:

#pragma once

#pragma managed(push, off)
#include <dshow.h>
#pragma managed(pop)

#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "ole32.lib")
// etc..

This probably generates a flurry of errors. Right-click the project in the Solution Explorer window, Properties, Configuration Properties, General. Change "Common Language Runtime support" from /clr:pure to /clr. This played a sample .avi file properly when I tried it, in a DirectShow window rather than the form; the sample code was designed only to work in a console application. You should also remove the calls to CoInitialize and CoUninitialize, since .NET already initializes COM. Improving the error handling is advisable. Consider embedding Windows Media Player instead.
Q: What does the '/=' operator mean in JavaScript? I came across this looking at the source for some physics animations in JavaScript, found here on GitHub, where he's written this

if (this._position < 0) this._position /= 3;

A quick Google yielded nothing; anyone know? A: The operator is the division assignment operator, a shorthand. The line is equivalent to

this._position = this._position / 3;

The division is performed first, and then the result is assigned back to the variable on the left. Quoting from MDN: "The division assignment operator divides a variable by the value of the right operand and assigns the result to the variable."
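The behavior is easy to verify in any language with compound assignment; here is a quick sketch in Python, where '/=' works the same way:

```python
position = -9.0
if position < 0:
    position /= 3      # same as: position = position / 3
# position is now -3.0: each pass moves a negative value a third
# of the way toward zero, which is why the animation code damps with it
```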