2018/03/19
<issue_start>username_0: I have a column in a dataframe that has datatype [timedelta64[ns]]. I'm trying to convert it to a float. This is a sample table: ``` ColA 227 days 00:00:00.000000000 316 days 00:00:00.000000000 226 days 00:00:00.000000000 153 days 00:00:00.000000000 ``` Below is my desired table with datatype as float: ``` ColA 227 316 226 153 ``` This is the code I tried: ``` df_EVENT5_24['ColA'] = df_EVENT5_24['ColA'].astype(float) ``` This is the error: TypeError: cannot astype a timedelta from [timedelta64[ns]] to [float64]<issue_comment>username_1: You can use `apply` and `lambda` to access the attribute `days` (<https://pandas.pydata.org/pandas-docs/stable/timedeltas.html>): ``` df_EVENT5_24['ColA'] = df_EVENT5_24.apply(lambda row: row.ColA.days, axis=1) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: We can use the `dt.days` attribute, where `ColA` is the name of your timedelta Series: ``` ColA = ColA.dt.days ``` To convert to a float, chain `astype` onto it: ``` ColA = ColA.astype(float) ***ColA = ColA.dt.days.astype(float)*** ``` Upvotes: 1
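The fix both answers converge on can be distilled into a dependency-free sketch: pandas' `Series.dt.days` vectorizes the plain `timedelta.days` attribute, shown here with stdlib values (the corresponding pandas one-liner is noted in the comment).

```python
from datetime import timedelta

# Stand-in for the question's ColA column of timedelta values.
col_a = [timedelta(days=d) for d in (227, 316, 226, 153)]

# Each timedelta exposes its day component as an int via .days;
# the pandas equivalent is df['ColA'].dt.days.astype(float), which
# sidesteps the "cannot astype a timedelta" TypeError.
as_float = [float(td.days) for td in col_a]
print(as_float)  # [227.0, 316.0, 226.0, 153.0]
```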
2018/03/19
<issue_start>username_0: I am trying to create a simple quadratic equation (x^2 + px + q = 0) solver but the answer I get is always wrong. My code looks like this ``` double p, q; Console.Write("Insert the value of p: "); int p = int.Parse(Console.ReadLine()); Console.Write("Insert the value of q: "); int q = int.Parse(Console.ReadLine()); Console.WriteLine("x1 = " + (-p/2 + Math.Sqrt((p/2) ^ 2 - q ))); Console.WriteLine("x2 = " + (-p/2 - Math.Sqrt((p/2) ^ 2 - q))); ``` My guess is that there is something wrong with the `"x1 = " + (-p/2 + Math.Sqrt((p/2) ^ 2 - q )));` and the `x2 = + (-p/2 - Math.Sqrt((p/2) ^ 2 - q)));` parts. Any help would be greatly appreciated.<issue_comment>username_1: **ax^2 + bx + c = 0** The formula for quadratics is: ``` (-b ± sqrt(b^2 - 4*a*c)) / 2a ``` So since your x^2 has no number in front, you can simplify to: ``` (-b ± sqrt(b^2 - 4*c)) / 2 ``` So for you: ``` (-p ± sqrt(p^2 - 4*q)) / 2 ``` Upvotes: 0 <issue_comment>username_2: > > My guess is that there is something wrong with the `x1 = ...` and the `x2 = ...` parts. > > > Here is what's wrong with them: * Both `p` and `q` are `int`; they should be `double`, otherwise division by 2 would truncate the result. * `n ^ 2` does not mean "squared" in C#; `^` is bitwise XOR. Use `Math.Pow(x, 2)` instead * You can keep `int.Parse` or change to `double.Parse` if you would like to allow fractional input for `p` and `q`. * You never check that the discriminant `(p/2)^2 - q` is non-negative. This is required to ensure that the square root is defined. Upvotes: 2 [selected_answer]
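To make the accepted points concrete, here is the same algebra as a sketch in Python (not the asker's C#): the real roots of x^2 + px + q = 0 are (-p ± sqrt(p^2 - 4q)) / 2, and they exist only when the discriminant is non-negative.

```python
import math

def solve_quadratic(p, q):
    """Real roots of x^2 + p*x + q = 0 via the quadratic formula."""
    disc = p * p - 4 * q      # discriminant; must be >= 0 for real roots
    if disc < 0:
        return None           # no real solutions
    root = math.sqrt(disc)
    return ((-p + root) / 2, (-p - root) / 2)

print(solve_quadratic(-3, 2))  # x^2 - 3x + 2 = 0 -> (2.0, 1.0)
```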
2018/03/19
<issue_start>username_0: So classmethods can be used as an alternative "constructor" in Python; they are bound to the class and not to an instance, pretty clear so far. But my question is: is it mandatory to have the same number of arguments in the class method's returned instance as in `__init__`? More exactly: ```py class MyClass(object): def __init__(self,name): self.name=name @classmethod def alternative_init(cls,name,surname): return cls(name,surname) ``` Creating an instance with `MyClass("alex")` works fine, but if I try to create an instance with `MyClass.alternative_init("alex","james")` I get a `TypeError`, because I pass too many arguments and `__init__` only takes two. Am I missing something?<issue_comment>username_1: You could do what you want like this: ``` class MyClass(object): def __init__(self,name): self.name=name @classmethod def alternative_init(cls,name,surname): new_instance = cls(name) new_instance.surname = surname return new_instance a = MyClass.alternative_init('Bob', 'Spongy') print(a.name, a.surname) # <NAME> ``` Upvotes: 2 <issue_comment>username_2: `__init__` only takes one parameter, the name. Thus, you can pass either `name` or `surname` to `cls`, but not both. However, you can create a class instance in the `classmethod` and add an additional parameter: ``` class MyClass(object): def __init__(self,name): self.name=name def __setattr__(self, name, val): self.__dict__[name] = val @classmethod def alternative_init(cls,name,surname): v = cls(name) v.surname = surname return v ``` Upvotes: 5 [selected_answer]<issue_comment>username_3: Because `MyClass.alternative_init("alex","james")` calls `cls(name, surname)`, which is the same as `MyClass(name,surname)`, which in turn is the same as `__init__(self,name,surname)`, but your `__init__` function doesn't have a `surname` parameter. 
You can make `surname` optional with `__init__(self,name,surname=None)`: ``` class MyClass(object): def __init__(self,name,surname=None): self.name=name self.surname=surname @classmethod def alternative_init(cls,name,surname): return cls(name,surname) ``` Upvotes: 2 <issue_comment>username_4: In Python, the first argument passed to a method is always the object itself. If you call your method with a name, you will get `self` as the first parameter and the name as the second. When you then call the `__init__` method from inside your classmethod, Python has no idea what it should do with the surname. Upvotes: 2 <issue_comment>username_5: ``` class MyClass(object): def __init__(self,name): self.name=name @classmethod def alternative_init(cls,name,surname): return cls(name) a = MyClass("alex") MyClass.alternative_init("alex","james") ``` Upvotes: 1
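The selected answer's pattern, condensed into a runnable sketch: the alternative constructor calls `cls(...)` with exactly the arguments `__init__` accepts and attaches the extra attribute afterwards (no `__setattr__` override is needed for that).

```python
class MyClass:
    def __init__(self, name):
        self.name = name

    @classmethod
    def alternative_init(cls, name, surname):
        # cls(...) forwards to __init__, which accepts only name,
        # so surname is attached to the new instance afterwards.
        obj = cls(name)
        obj.surname = surname
        return obj

a = MyClass.alternative_init("alex", "james")
print(a.name, a.surname)  # alex james
```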
2018/03/19
<issue_start>username_0: I've got a specific element I want to retrieve from an array, `{{page.myArray}}`. I have the index of the element in a variable, `{{my-index}}`, but I can't just plug in `{{page.myArray.[my-index]}}` or `{{page.myArray.[{{my-index}}]`. How do I go about doing this? I've tried out some things from SO and the Handlebars docs, but I can't for the life of me come up with a solution. Hopefully, a kind and generous soul can give me a helping hand here.<issue_comment>username_1: The answer is a 'no', as Handlebars syntax doesn't permit the nesting of statements. However, you can write a custom Handlebars helper to achieve the same. Assuming your JSON structure is: ``` { "my-index": 1, "page": { "myArray": [ "a", "b", "c" ] } } ``` your template can be written as below, which is the helper (`indexOf`) call itself: ``` {{#indexOf page.myArray my-index}}{{/indexOf}} ``` And the helper definition would be: ``` Handlebars.registerHelper('indexOf', function(array, value) { return array[value]; }); ``` This will print `b` as the output, as it's at index position `1` of the array `myArray`. Tested using <http://tryhandlebarsjs.com>. Hope this helps. Upvotes: 3 [selected_answer]<issue_comment>username_2: I know this is very late, but you can now do this in Handlebars with the **lookup** helper. You would use it like this: ``` {{lookup page.myArray my-index}} ``` Upvotes: 0
2018/03/19
<issue_start>username_0: I'm wondering what the best way is to lock a user account after X failed logins? I have a table where I keep track of users' failed login attempts. The table stores time stamp, username, IP address and browser type. After I detect incorrect login information, a cfquery will pull records from the failed login table based on username or IP address. If there are 5 or more invalid attempts I set the account to inactive. Now I would like to somehow set a timer that starts counting 5 minutes from the last invalid attempt for that user. Then the account status should change back to active. Here is the code that I have so far: ``` SELECT UserName, Password, Salt, LockedUntil FROM Users WHERE UserName = AND Active = 1 INSERT INTO FailedLogins( LoginTime, LoginUN, LoginIP, LoginBrowser )VALUES( CURRENT_TIMESTAMP, , , ) SELECT LoginTime FROM FailedLogins WHERE LoginUN = OR LoginIP = UPDATE Users SET LockedUntil = WHERE UserName = //Clear failed login attempts //Update lockedUntil field to NULL //User logged in authentication successful! ``` After the account is set to inactive / locked, what would be the best way to set up the countdown and change the flag status? I saw some people recommend a SQL Job but I'm not sure how often the job should run and how to create that statement. If anyone can provide an example please let me know. Thank you.<issue_comment>username_1: What you can do is add a condition to the `checkUser` query: ``` SELECT UserName, Password, Salt, Active FROM Users u WHERE UserName = -- AND Active = 1 AND NOT EXISTS ( SELECT 1 FROM FailedLogins fl WHERE fl.LoginUN = u.UserName AND DATEDIFF('ss', fl.LoginTime, CURRENT_TIMESTAMP) >= 300 ) ``` I've used `300` seconds instead of `5` minutes since `DATEDIFF()`, I believe, returns an `int`. I apologize in advance if this isn't quite the ideal syntax for SQL Server (I don't often work with it). 
Then, above, if `Active` is zero, you can (assuming the password is correct) update it to a value of `1` and either delete the failed logins associated with that account or somehow mark them inactive so they don't count against the 5 failed logins any more. **Query edited at the suggestion of a commenter below:** (good suggestion, by the way!) ``` SELECT UserName, Password, Salt, Active FROM Users u WHERE UserName = -- AND Active = 1 AND NOT EXISTS ( SELECT 1 FROM FailedLogins fl WHERE fl.LoginUN = u.UserName AND fl.loginTime < DATEADD(second, -300, CURRENT_TIMESTAMP) ) ``` Upvotes: 3 <issue_comment>username_2: I think you'd have better luck reversing your logic. Instead of having a column `status` with values `Active` or `Inactive`, consider having a column `locked_until` time instead. Initially the locked\_until value for a new user will be NULL (or 0), meaning it is not locked. When there is a series of failed logins, set this to be the current time + 5 minutes. For all actions for this user, check if the current time is > the locked\_until value. If not, the account is still inactive (locked). **Edit**: I decided to write out some code because I forgot to account for users successfully logging in. Please see below; I'm not sure what language the original question is in but this answer is pseudo-python. Assuming we have a database table similar to the following (ignoring salts etc.): ``` CREATE TABLE Users ( UserName TEXT PRIMARY KEY, Password TEXT NOT NULL, LockUntil TIMESTAMP, FailedLogins INT DEFAULT 0 ); ``` The login checking function is something like the following. Key points are: * Successful login sets FailedLogins to 0. * Set FailedLogins to 5 (along with LockUntil) when locking the account. * A new failed login where FailedLogins=5 is an attempt on a newly unlocked account. (i.e. The account was implicitly unlocked and the user is trying again). 
```py def try_login(username, password): row = execute("SELECT Password,LockUntil,FailedLogins FROM Users WHERE UserName=?", username); if row is None: print("Unknown username") return False if row.LockUntil is not None and current_time() < row.LockUntil: print("Account locked. Try again later.") return False if password == row.Password: print('Successful login') execute("UPDATE Users SET LockUntil=NULL, FailedLogins=0 WHERE UserName=?", username) return True if row.FailedLogins == 4: print("Too many failures; locking account for 5 mins") lock_until = current_time() + 300 execute("UPDATE Users SET LockUntil=?,FailedLogins=5 WHERE UserName=?", lock_until, username) return False failures = row.FailedLogins + 1 if failures == 6: # User had locked account, which is now unlocked again. # But they failed to login again, so this is failure 1. failures = 1 execute("UPDATE Users SET FailedLogins=? WHERE UserName=?", failures, username) return False ``` Upvotes: 3 [selected_answer]
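A runnable distillation of the selected answer's pseudo-python, swapping the database for an in-memory dict and simplifying the 5/6 failure bookkeeping by resetting the counter when the lock is set. The record layout, `LOCK_SECONDS`, and the injectable `now` parameter are illustrative choices, not the original schema:

```python
import time

LOCK_SECONDS = 300   # the 5-minute lockout window
MAX_FAILURES = 5

# In-memory stand-in for the Users table: username -> record.
users = {"alex": {"password": "s3cret", "lock_until": None, "failed": 0}}

def try_login(username, password, now=None):
    now = time.time() if now is None else now
    row = users.get(username)
    if row is None:
        return False
    if row["lock_until"] is not None and now < row["lock_until"]:
        return False                        # account is still locked
    if password == row["password"]:
        row["lock_until"], row["failed"] = None, 0
        return True
    row["failed"] += 1
    if row["failed"] >= MAX_FAILURES:       # lock and restart the counter
        row["lock_until"] = now + LOCK_SECONDS
        row["failed"] = 0
    return False
```

Passing `now` explicitly makes the lock expiry testable without sleeping: five wrong passwords at t=0 lock the account until t=300, after which the correct password succeeds again.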
2018/03/19
<issue_start>username_0: I can't get a `TabBarIcon` to appear in my `TabNavigator`. I've read a bunch of posts from people with a similar issue, but their solutions don't work for me. I'd really just like to be able to have a PNG image component for each `TabBarIcon` but it just won't appear. ``` const Tabs = TabNavigator({ Feed: { screen: FeedPage, navigationOptions: { tabBarLabel:"Feed", tabBarIcon: () => () } }, Me: { screen: MePage, navigationOptions: { tabBarLabel:"Me", } }, }, { initialRouteName: 'Me', tabBarOptions: { showIcon: true, showLabel: true } } ); const RootStack = StackNavigator( { //All Tabs Tabs: { screen: Tabs, }, //Other Pages Signup: { screen: SignupPage, }, ProfilePicUploader: { screen: ProfilePicUploaderPage, }, Login: { screen: LoginPage, }, User: { screen: UserPage, }, EasterEgg: { screen: EasterEggPage, }, }, { initialRouteName: 'Tabs', headerMode: 'none' } ); export default class App extends Component { render() { return ( ); } } ``` Anyone know what's going on?<issue_comment>username_1: Could you try this one? This is my setup: ``` const PrimaryNav = TabNavigator({ ['app/t1']: { screen: T1 }, ['app/t2']: { screen: T2 }, }, { headerMode: 'none', initialRouteName: 'app/t1', navigationOptions: ({ navigation }) => ({ tabBarIcon: ({ focused, tintColor }) => { const { routeName } = navigation.state; if (routeName === 'app/t1') { return ; } else if (routeName === 'app/t2') { return ; } return null; }, }), tabBarOptions: { showIcon: true, showLabel: false, activeTintColor: ApplicationStyles.screen.header.tintColor, inactiveTintColor: ApplicationStyles.screen.header.inactiveTintColor, style: { backgroundColor: ApplicationStyles.screen.header.backgroundColor, }, } }) ``` is the react-native-material-ui icon, but Image should work as well. Upvotes: 1 <issue_comment>username_2: You just need to make sure that your file name doesn't contain whitespace. So change `TabBar Icons_Feed.png` to `TabBar_Icons_Feed.png`. 
Because `android` or `ios` will look for a name like `/xxxx/Resources/Images/TabBar%20Icons_Feed.png`. This resolution is actually done by the `packager`. See [here](https://facebook.github.io/react-native/docs/images.html#static-image-resources): > > The image name is resolved the same way JS modules are resolved. In > the example above, the `packager` will look for `my-icon.png` in the same > folder as the component that requires it. Also, if you have > `my-icon.ios.png` and `my-icon.android.png`, the packager will pick the > correct file for the platform. > > > --- ### Check That the File Path Is Correct If you look at `.expo/packager-info.json`: ``` { "expoServerPort": 19000, "packagerPort": 19001, "packagerPid": 84703 } ``` As you can see, the `packager` port is `19001`, so you can access your file at `http://127.0.0.1:19001/Resources/Images/TabBar Icons_Feed.png` in a browser (make sure the app is already running). That way you can verify that all file paths are correct. --- ### Permissions on Android Also, if you are using `expo` for development, make sure you set the permission to draw over other apps on Android ([source](https://lastpass.com/support.php?cmd=showfaq&id=9932)). [![enter image description here](https://i.stack.imgur.com/tGIO9m.png)](https://i.stack.imgur.com/tGIO9m.png) --- ### Best Practice I usually `import` the image like this: ``` import Icon_Feed from './Resources/Images/TabBarIcons_Feed.png'; ... tabBarIcon: () => ( ) ``` It works and is more readable. Upvotes: 2 <issue_comment>username_3: I have set up the smallest possible example of showing an image on tabs instead of icons; have a look at [**this expo snack**](https://snack.expo.io/@shhzdmrz/showing-image-on-tab-navigator). Don't use `resizeMode` or spaces in the image file name, as in `TabBar Icons_Feed.png`. Just import the file and use it with proper height and width, e.g. 
``` import Icon_Feed from './Resources/Images/TabBar_Icons_Feed.png'; ``` and use it like this ``` tabBarIcon: () => ( ), ``` **OR** you can use `require` if you want to, e.g. ``` tabBarIcon: () => ( ), ``` [Read this image guide in the official React Native documentation](http://facebook.github.io/react-native/docs/images.html) to see the possible ways to manage images and display them properly according to your needs. Upvotes: 2
2018/03/19
<issue_start>username_0: I need to split a sentence based on an address pattern. Below is a reproducible sample of the problem I'm facing. ``` s <- c("Junipero Sierra Room 9001 coals ave","patio room2200 virginia beach ave") ``` Currently, this is what I'm using: ``` gsub(".*([A-z]{1,}[0-9]{2,6})|.*([A-z]{1,} [0-9]{2,6})", "\\1",s) ``` This is what I get: ``` [1] " coals ave" "m2200 virginia beach ave" ``` but this is what I want: ``` [1] "9001 coals ave" "2200 virginia beach ave" ```<issue_comment>username_1: ``` s <- c("Junipero Sierra Room 9001 coals ave","patio room2200 virginia beach ave") get.String=function(x){ sx=unlist(strsplit(x,"")) st=grep("[0-9]",sx)[1] x=substring(x,st,nchar(x)) return(x) } sapply(s,get.String) ``` Upvotes: 0 <issue_comment>username_2: It looks like you just want to cut off everything before `[0-9]{2,6}`: ``` > gsub(".*?([0-9]{2,6})", "\\1", s) [1] "9001 coals ave" "2200 virginia beach ave" ``` Upvotes: 2 [selected_answer]
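The accepted answer works because the lazy `.*?` stops at the first 2-6 digit run instead of greedily swallowing text up to the last one. The same idea, ported to Python's `re` purely as an illustration:

```python
import re

s = ["Junipero Sierra Room 9001 coals ave", "patio room2200 virginia beach ave"]

# Lazy .*? consumes as little as possible, so the capture group
# anchors at the first 2-6 digit run rather than the last one.
out = [re.sub(r".*?(\d{2,6})", r"\1", x, count=1) for x in s]
print(out)  # ['9001 coals ave', '2200 virginia beach ave']
```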
2018/03/19
<issue_start>username_0: How can I fix this warning? ``` et=edittext ``` [![enter image description here](https://i.stack.imgur.com/AwqyY.jpg)](https://i.stack.imgur.com/AwqyY.jpg) ***The error that displays:*** Custom view `EditText` has setOnTouchListener called on it but does not override performClick less... (Ctrl+F1) If a View that overrides onTouchEvent or uses an OnTouchListener does not also implement performClick and call it when clicks are detected, the View may not handle accessibility actions properly. Logic handling the click actions should ideally be placed in View#performClick as some accessibility services invoke performClick when a click action should occur.<issue_comment>username_1: **For your code it's better to just use `onClick` instead of** `setOnTouchListener`. Move your code from `onTouch` to `onClick`, then remove all `setOnTouchListener` calls: ``` etx.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // here etx is clicked add your code here } }); ``` **You can also use focus instead of click:** ``` etx.setOnFocusChangeListener(new OnFocusChangeListener() { @Override public void onFocusChange(View view, boolean hasFocus) { if (hasFocus) { // etx got focus (is selected) } else { // etx exited from focus (deselected) } } }); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Use this: ``` yourButton.setOnClickListener(new OnClickListener() { public void onClick(View v) { // Your code on click } }); ``` Upvotes: -1
2018/03/19
<issue_start>username_0: I'd like to know which event is nearer. Now, the code that I've written relies on a comparison and it doesn't seem to work every time. As always with everything that I try to do, I'd like a solution which is both clean and efficient (feel free to criticize my code). The events have a specific hour (ex. 16:00); a hint on how to add this level of detail would be nice. Here is the code: ``` public static int getNextEvent() { int event1 = 2; //Those events happen every week and have specific //hours (didn't implement those because I have no //idea how to, suggestions are greatly appreciated here) int event2 = 6; int result = 0; int currentDay = 6; //Days go from monday (1) to sunday (7), here this is hardcoded, in the actual program it is not. List<Integer> eventList = new ArrayList<>(); if (currentDay <= event1) //This is the comparison I was talking about { eventList.add((event2-currentDay)); // These two are in this specific order because I use the indexOf for later methods (yes, those events are hardcoded, that's fine) eventList.add((event1-currentDay)); // } else { currentDay = currentDay - 7; eventList.add((event2-currentDay)); // These two are in this // specific order because // I use the indexOf for // later methods (yes, // those events are // hardcoded, that's fine) eventList.add((event1-currentDay)); // } result = determineClosestEvent(eventList) + 1; // I add 1 because my other methods //do not start at 0 but at 1 //(see ^ that comment) return result; } private static int determineClosestEvent(List<Integer> eventList) { int closestDay = 10; int listGet; for (int i = 0; i < eventList.size(); i++) { listGet = eventList.get(i); if (closestDay > listGet) { closestDay = listGet; } else {} } closestDay = eventList.indexOf(closestDay); return closestDay; } ```<issue_comment>username_1: Design an `Event` class for your events. Use `DayOfWeek` (from java.time, the modern Java date and time API) for the event's day of week and `LocalTime` for the time of day. 
To find out how far into the past or future each event is, first use `ZonedDateTime.now(yourTimeZone)` to determine the current date and time. Given that `currentDateTime` is a `ZonedDateTime`, use `currentDateTime.toLocalDate().getDayOfWeek()` to get today's day of week. For each event, if it doesn't happen today, consider both the previous occurrence and the next one (if this is appropriate according to your requirements). Use `TemporalAdjusters.previous()` and `TemporalAdjusters.next()` to find the dates. Then use `LocalDate.atTime` and `LocalDateTime.atZone` to get a `ZonedDateTime` for when the event happened or happens. Now that you have a `ZonedDateTime` for the current time and one for the event time, use `Duration.between()` to find how far into the past or the future this event is. You may want `Duration.abs()` to make sure the duration is non-negative. And use `Duration.compareTo` to determine which duration is shorter. There you have the nearest event. **Links** * [Oracle Tutorial: Date Time](https://docs.oracle.com/javase/tutorial/datetime/) explaining how to use java.time * [API documentation](https://docs.oracle.com/javase/9/docs/api/java/time/package-frame.html) Upvotes: 1 <issue_comment>username_2: Without taking much care about public/private issues: ``` import java.util.*; ``` /\* You can test the code by copying it into one big file, named EventTest. You might indent the comments, or delete them. I unindent them here, for better readability. Have a look at the Interface Comparable, a very useful interface, because if you define such a method, you can sort a Collection of such things with the Java methods. \*/ ``` class MyDate implements Comparable<MyDate> { ``` /\* Just a simple int for the day of week, like in your attempt. \*/ int dow; ``` public MyDate (int dow) { this.dow = dow; } // if other date is over, next date is in next week - add 7. 
public int compareTo (MyDate other) { int diff = other.dow - dow; if (diff < 0) diff += 7; return diff; } public String toString () { return "" + dow; } } ``` /\* An event has a date - let's have a name too: \*/ ``` class Event { String name; public MyDate dow; public Event (String name, MyDate dow) { this.dow = dow; this.name = name; } ``` /\* For later usage, we supply a dateDiff method, which delegates the comparison of the date to the dow.compareTo method: \*/ ``` public int dateDiff (MyDate cmp) { return dow.compareTo (cmp); } public String toString () { return name + " " + dow; } } ``` /\* We need an extra Comparator, to compare different Events to an external given MyDate. \*/ ``` class EventComparator implements Comparator<Event> { MyDate crucialDate; public EventComparator (MyDate crucial) { crucialDate = crucial; } public int compare (Event e1, Event e2) { int diff = (e1.dateDiff (crucialDate)) - (e2.dateDiff (crucialDate)); return diff; } } ``` /\* Now a fat testing class. \*/ ``` public class EventTest { ``` /\* First try to understand how the test2 method works, before testing the testMany () method. \*/ ``` public static void main(String[] args) { test2 (); // testMany (); } ``` /\* Here the Comparator has its big moment. Well - it sorts the list of events for a given MyDate 'current'. If the list isn't empty, the first element is (one of) the closest. \*/ ``` static List<Event> findClosest (MyDate current, List<Event> events) { EventComparator ec = new EventComparator (current); events.sort (ec); return events; } ``` /\* As in your example, 2 dates (2, 6) and a currentDay (6). 
The List of Events is resorted in place and contains the same elements, probably in a different order: \*/ static void test2 () { List<Event> eventList = new ArrayList<> (); eventList.add (new Event ("Dow: 2", new MyDate (2))); eventList.add (new Event ("Dow: 6", new MyDate (6))); MyDate currentDay = new MyDate (6); System.out.println ("unsorted"); show (eventList); System.out.println ("sorted"); findClosest (currentDay, eventList); show (eventList); } ``` static void testMany () { Random r = new Random (); List<Event> eventList = new ArrayList<> (); for (int i = 0; i < 10; ++i) { int day = r.nextInt (7) + 1; eventList.add (new Event ("Dow: " + day, new MyDate (day))); } for (int today = 1; today <= 7; ++today) { MyDate currentDay = new MyDate (today); System.out.println ("unsorted " + currentDay); show (eventList); findClosest (currentDay, eventList); System.out.println ("sorted " + currentDay); show (eventList); } } public static void show (List<Event> events) { for (Event e : events) System.out.println (e); } } ``` You may later extend the MyDate class to include hours. Semantically, I didn't check whether the algorithm works as expected. You should write some test cases first: * 2 dates lower than today * first lower, second higher * first higher, second lower * both higher than today * different distances, same distance * same date as currentDay Make a small table, first write the expected result down, then check whether it works. Upvotes: 0
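Both answers ultimately rest on one piece of modular arithmetic: with days numbered 1 (Monday) through 7 (Sunday), the distance to the next occurrence of an event is `(event_dow - current_dow) % 7`. A compact sketch of just that idea, with hypothetical helper names not taken from either answer:

```python
def days_until(event_dow, current_dow):
    """Days from current_dow to the next occurrence of event_dow (1=Mon..7=Sun)."""
    return (event_dow - current_dow) % 7

def nearest_event(current_dow, event_days):
    # min() over the wrap-around distance picks the soonest upcoming event;
    # a distance of 0 means the event falls on the current day.
    return min(event_days, key=lambda d: days_until(d, current_dow))

# Events on day 2 and day 6, today is day 6 (the question's hard-coded values).
print(nearest_event(6, [2, 6]))  # 6
```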
2018/03/19
<issue_start>username_0: Why does TypeScript have no problem with me doing this: ``` return { Price: value.rawPrice === null ? null : value.rawPrice } ``` but has issues with me doing this: ``` return { Price: null } ``` The field `Price` is of type `number`.<issue_comment>username_1: Use **undefined** instead of **null** [here is a working example](http://www.typescriptlang.org/play/index.html#src=interface%20Pricey%20%7B%20Price%3A%20number%3B%20%7D%0D%0Aclass%20Whatever%20%7B%20getPrice(value%3A%20Pricey)%3A%20Pricey%20%7B%20return%20%7B%20Price%3A%20value%20%3D%3D%3D%20null%20%3F%20null%20%3A%20value.Price%20%7D%3B%20%7D%20makePrice()%3A%20Pricey%20%7B%20return%20%7B%20Price%3A%20null%20%7D%3B%20%7D%7D) Upvotes: -1 <issue_comment>username_2: It was because of `--strictNullChecks` in the compiler. Upvotes: 1 [selected_answer]
2018/03/19
<issue_start>username_0: I'm new to Ransack and I've run into what appears to be a case that isn't cleanly covered by Ransack. I'm basically trying to search a value, but the searched value is wrapped in an array. ### CODE: ``` <%= f.search_field :category_or_account_number_or_status_or_account_number_or_account_name_or_accounts_name_or_accounts_number_or_user_name_or_user_rep_code_list_cont_any %> ``` At the very end there is this piece `user_rep_code_list_cont`; it refers to an array attribute on users, which currently looks like this: `["al20", "b234"]`. So, when I type al20 in the Ransack search bar I get this error. ### ERROR: ``` PG::UndefinedFunction: ERROR: operator does not exist: character varying[] ~~* unknown LINE 1: ..."name" ILIKE '%al20%') OR "users"."rep_code_list" ILIKE '%al... ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. ``` ### CONTROLLER: ``` def index @q = Submission.submissions_for(user: current_user).ransack(params[:q]) @submissions = @q.result.includes(:user, :accounts).ordered(current_user).page(params[:page]).per(25) end ``` Again, I'm not a Ransack expert, but this seems like something that should be covered by now. I want to search an attribute on a model that is an Array.<issue_comment>username_1: Working with arrays and Ransack is not that simple: you have to do a lot of the work by hand, as the underlying queries quickly take you into advanced territory. 
Checking membership in an array is [relatively easy](https://stackoverflow.com/q/49340638/479863), checking every element of an array against a LIKE pattern is [somewhat more complicated](https://dba.stackexchange.com/a/117744/5359) as you need to do a LATERAL JOIN to an `unnest` function call to unwrap the array so that you can LIKE against its members: ``` select users.* from users, unnest(users.rep_code_list) c -- This is an implicit LATERAL JOIN where c ilike '%some-pattern%' ``` and you'll probably want to throw a `distinct on (users.id)` in the SELECT clause to clean up any duplicates that appear from other parts of the query: ``` select distinct on (users.id) users.* from users, unnest(users.rep_code_list) c -- This is an implicit LATERAL JOIN where c ilike '%some-pattern%' and ... ``` To get Ransack to use such a query requires you to add a scope (and tell Ransack that it can use the scope) or perhaps writing a custom [ransacker](https://github.com/activerecord-hackery/ransack/wiki/Using-Ransackers). Unfortunately, there doesn't seem to be any way to get Ransack to use a scope together with the usual `attr1_or_attr2_or...` parameter name parsing logic so your scope has to do it all: ``` class User < ApplicationRecord def self.ransackable_scopes(auth = nil) %i[things_like] end def self.things_like(s) select('distinct on (users.id) users.*') .joins(', unnest(users.rep_code_list) c') .where('c ilike :pat or users.category ilike :pat or ...', pat: "%#{s}%") end end ``` and then in the form: ``` <%= f.search_field :things_like %> ``` You might have better luck mashing that LATERAL JOIN logic into a custom ransacker or, better IMO, replacing the array with a separate table so that you can use Ransacker's association logic and treat the codes like first class entities in the database rather than just strings. Some sort of full test search (instead of Ransack) might be another option. 
You might be able to do something with PostgreSQL's [`array_to_string`](https://www.postgresql.org/docs/current/static/functions-array.html#ARRAY-FUNCTIONS-TABLE) function to flatten the array but then you'll have to deal with delimiters and you'll still be stuck with a "do it all" scope (or perhaps a custom ransacker). Upvotes: 1 <issue_comment>username_2: I ended up using a custom `Ransacker` for this case: ``` ransacker :rep_code_list do Arel.sql("array_to_string(rep_code_list, ',')") end ``` This will turn the array into a string so that Ransack can search with the `cont` predicate. Not sure if this is the best way to do it but it worked for my case. Upvotes: 3 [selected_answer]
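The accepted ransacker works because `array_to_string` turns the array into one searchable string for `ILIKE '%...%'`. The delimiter caveat mentioned above is easy to demonstrate in plain Python (illustrative values only): a pattern can match across the comma that separates two codes.

```python
rep_code_list = ["al20", "b234"]

# Mirrors array_to_string(rep_code_list, ','): one searchable string.
flat = ",".join(rep_code_list)          # "al20,b234"

def cont(needle):
    """Case-insensitive 'cont' predicate, like ILIKE '%needle%'."""
    return needle.lower() in flat.lower()

print(cont("al20"))   # True  -- the search the question wanted
print(cont("20,b"))   # True  -- false positive spanning the delimiter
```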
2018/03/19
<issue_start>username_0: my output of error: > > Invalid configuration object. Webpack has been initialised using a > configuration object that does not match the API schema. > - configuration.module has an unknown property 'loaders'. These properties are valid: object { exprContextCritical?, > exprContextRecursive?, exprContextRegExp?, exprContextRequest?, > noParse?, rules?, defaultRules?, unknownContextCritical?, > unknownContextRecursive?, unknownContextRegExp?, > unknownContextRequest?, unsafeCache?, wrappedContextCritical?, > wrappedContextRecursive?, wrappedContextRegExp?, > strictExportPresence?, strictThisContextOnImports? } -> Options > affecting the normal modules (`NormalModuleFactory`). > > > my webpack.config.js: ``` var webpack = require('webpack'); var path = require('path'); var BUILD_DIR = path.resolve(__dirname, 'src/client/public'); var APP_DIR = path.resolve(__dirname, 'src/client/app'); var config = { entry: APP_DIR + '/index.jsx', module : { loaders : [ { test : /\.jsx?/, include : APP_DIR, loader : 'babel-loader' } ] }, output: { path: BUILD_DIR, filename: 'bundle.js' } }; module.exports = config; ``` my webpack version: ``` [email protected] ```<issue_comment>username_1: You should change `loaders` to `rules` in webpack 4: change: ``` loaders ``` to: ``` rules ``` source: [Loaders](https://webpack.js.org/concepts/loaders/#example) Example: ``` module.exports = { module: { rules: [ { test: /\.css$/, use: 'css-loader' }, { test: /\.ts$/, use: 'ts-loader' } ] } }; ``` Upvotes: 10 [selected_answer]<issue_comment>username_2: Use `rules` in webpack 4 instead of `loaders`. <https://webpack.js.org/concepts/loaders/> Upvotes: 4 <issue_comment>username_3: Above given answers are working but we can resolve this issue by changing webpack and webpack-dev-server version to ``` "webpack": "3.8.1", "webpack-dev-server": "2.9.4" ``` It can also solve the issue. Hope it will help. 
Upvotes: 2 <issue_comment>username_4: You should use the [migration utility](https://github.com/webpack/webpack-cli/blob/master/MIGRATE.md) to migrate your webpack config files, it worked for me. The [migration documentation](https://webpack.js.org/migrate) is also useful. Upvotes: 2 <issue_comment>username_5: Working for me below webpack.config.js ``` module.exports = { entry: [ '.src/index.js' ], output:{ path: __dirname, filename: 'app/js/main.js' }, module:{ rules: [ { test: /\.css$/, use: 'css-loader' }, { test: /\.ts$/, use: 'ts-loader' } ] } } ``` Upvotes: 2 <issue_comment>username_6: I am using Webpack 5 and I removed below config from my webpack.config. It worked for me after removing. It may help some other people who still facing error > > > ``` > { > test: /\.css$/, > use: ['style-loader', 'css-loader'] > }, > > ``` > > Upvotes: 0
2018/03/19
661
2,021
<issue_start>username_0: ``` LinkedList ar[4]; for(int i=0;i<4;i++) { ar[i]=new LinkedList(); } ar[0].add(99); ar[1].add(60); ar[0].add(66); ar[0].add(61); // how to remove 66 from List 0 index ar[0].remove(66); //but this above statement shows error ```
2018/03/19
1,116
3,575
<issue_start>username_0: I am struggling with indexing using Solrj. I want to use SolrCloud and I set my connection like this : ```js SolrClient client = new CloudSolrClient.Builder().withSolrUrl("http://localhost:8983/solr/collectioname").build(); ``` And I have this error. I checked evruwhere before posting here but I can't resolve it ```js Exception in thread "main" java.lang.RuntimeException: Couldn't initialize a HttpClusterStateProvider (is/are the Solr server(s), [http://localhost:8983/solr/collectioname], down?) at org.apache.solr.client.solrj.impl.CloudSolrClient$Builder.build(CloudSolrClient.java:1496) at indexsolr.(indexsolr.java:29) at LoadData.toIndex(LoadData.java:100) at LoadData.loadDocuments(LoadData.java:72) at IndexLaunch.main(IndexLaunch.java:12) Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://localhost:8983/solr/collectioname: Expected mime type application/octet-stream but got text/html. Error 404 Not Found HTTP ERROR 404 -------------- Problem accessing /solr/collectioname/admin/collections. Reason: ``` Not Found ``` at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.client.solrj.impl.HttpClusterStateProvider.fetchLiveNodes(HttpClusterStateProvider.java:189) at org.apache.solr.client.solrj.impl.HttpClusterStateProvider.(HttpClusterStateProvider.java:64) at org.apache.solr.client.solrj.impl.CloudSolrClient$Builder.build(CloudSolrClient.java:1494) ... 
4 more ```
2018/03/19
2,367
7,998
<issue_start>username_0: I am trying to tune some params and the search space is very large. I have 5 dimensions so far and it will probably increase to about 10. The issue is that I think I can get a significant speedup if I can figure out how to multi-process it, but I can't find any good ways to do it. I am using `hyperopt` and I can't figure out how to make it use more than 1 core. Here is the code that I have without all the irrelevant stuff: ``` from numpy import random from pandas import DataFrame from hyperopt import fmin, tpe, hp, Trials def calc_result(x): huge_df = DataFrame(random.randn(100000, 5), columns=['A', 'B', 'C', 'D', 'E']) total = 0 # Assume that I MUST iterate for idx_and_row in huge_df.iterrows(): idx = idx_and_row[0] row = idx_and_row[1] # Assume there is no way to optimize here curr_sum = row['A'] * x['adjustment_1'] + \ row['B'] * x['adjustment_2'] + \ row['C'] * x['adjustment_3'] + \ row['D'] * x['adjustment_4'] + \ row['E'] * x['adjustment_5'] total += curr_sum # In real life I want the total as high as possible, but for the minimizer, it has to negative a negative value total_as_neg = total * -1 print(total_as_neg) return total_as_neg space = {'adjustment_1': hp.quniform('adjustment_1', 0, 1, 0.001), 'adjustment_2': hp.quniform('adjustment_2', 0, 1, 0.001), 'adjustment_3': hp.quniform('adjustment_3', 0, 1, 0.001), 'adjustment_4': hp.quniform('adjustment_4', 0, 1, 0.001), 'adjustment_5': hp.quniform('adjustment_5', 0, 1, 0.001)} trials = Trials() best = fmin(fn = calc_result, space = space, algo = tpe.suggest, max_evals = 20000, trials = trials) ``` As of now, I have 4 cores but I can basically get as many as I need. How can I get `hyperopt` to use more than 1 core, or is there a library that can multiprocess?<issue_comment>username_1: Just some side-notes on your question. I am recently working on hyperparameter search too, if you have your own reasons, please just ignore me. 
**Thing is, you should prefer random search over grid search.** Here is the [paper](http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf) where they proposed this. And here is some explanation: basically, random search spreads trials better across each individual dimension, whereas grid search only samples the joint feature space on a fixed lattice, which is why random search tends to be the way to go. [![http://cs231n.github.io/neural-networks-3/ this is where the image is from](https://i.stack.imgur.com/f9STt.jpg)](https://i.stack.imgur.com/f9STt.jpg) Image is from [here](http://cs231n.github.io/neural-networks-3/) Upvotes: 0 <issue_comment>username_2: You can use `multiprocessing` to run tasks that, by bypassing Python's Global Interpreter Lock, effectively run concurrently on the multiple processors available. To run a multiprocessing task, one must instantiate a `Pool` and have this object execute a `map` function over an iterable object. The function `map` simply applies a function over every element of an iterable, like a list, and returns another list with the results in it. As an example with search, this gets all items larger than five from a list: ```
from itertools import chain
from multiprocessing import Pool

def filter_gt_5(x):
    # keep every element of x that is larger than five
    return [i for i in x if i > 5]

if __name__ == '__main__':
    p = Pool(4)
    a_list = [6, 5, 4, 3, 7, 8, 10, 9, 2]
    # find a better way to split your list
    lists = p.map(filter_gt_5, [a_list[:3], a_list[3:6], a_list[6:]])
    # this will join the lists into one
    filtered_list = list(chain(*lists))
``` In your case, you would have to split your search space. Upvotes: 1 <issue_comment>username_3: If you have a Mac or Linux (or Windows Subsystem for Linux), you can add about 10 lines of code to do this in parallel with `ray`. If you install ray via the [latest wheels here](http://ray.readthedocs.io/en/latest/installation.html#trying-the-latest-version-of-ray), then you can run your script with minimal modifications, shown below, to do parallel/distributed grid searching with HyperOpt.
At a high level, it runs `fmin` with tpe.suggest and creates a Trials object internally in a parallel fashion. ``` from numpy import random from pandas import DataFrame from hyperopt import fmin, tpe, hp, Trials def calc_result(x, reporter): # add a reporter param here huge_df = DataFrame(random.randn(100000, 5), columns=['A', 'B', 'C', 'D', 'E']) total = 0 # Assume that I MUST iterate for idx_and_row in huge_df.iterrows(): idx = idx_and_row[0] row = idx_and_row[1] # Assume there is no way to optimize here curr_sum = row['A'] * x['adjustment_1'] + \ row['B'] * x['adjustment_2'] + \ row['C'] * x['adjustment_3'] + \ row['D'] * x['adjustment_4'] + \ row['E'] * x['adjustment_5'] total += curr_sum # In real life I want the total as high as possible, but for the minimizer, it has to negative a negative value # total_as_neg = total * -1 # print(total_as_neg) # Ray will negate this by itself to feed into HyperOpt reporter(timesteps_total=1, episode_reward_mean=total) return total_as_neg space = {'adjustment_1': hp.quniform('adjustment_1', 0, 1, 0.001), 'adjustment_2': hp.quniform('adjustment_2', 0, 1, 0.001), 'adjustment_3': hp.quniform('adjustment_3', 0, 1, 0.001), 'adjustment_4': hp.quniform('adjustment_4', 0, 1, 0.001), 'adjustment_5': hp.quniform('adjustment_5', 0, 1, 0.001)} import ray import ray.tune as tune from ray.tune.hpo_scheduler import HyperOptScheduler ray.init() tune.register_trainable("calc_result", calc_result) tune.run_experiments({"experiment": { "run": "calc_result", "repeat": 20000, "config": {"space": space}}}, scheduler=HyperOptScheduler()) ``` Upvotes: 3 <issue_comment>username_4: What you are asking can be achieved by using SparkTrials() instead of Trials() from hyperopt. Refer the document [here](http://hyperopt.github.io/hyperopt/scaleout/spark/). **SparkTrials API :** SparkTrials may be configured via 3 arguments, all of which are optional: `parallelism` The maximum number of trials to evaluate concurrently. 
Greater parallelism allows scale-out testing of more hyperparameter settings. Defaults to the number of Spark executors. **Trade-offs:** The `parallelism` parameter can be set in conjunction with the `max_evals` parameter in `fmin()`. Hyperopt will test `max_evals` total settings for your hyperparameters, in batches of size `parallelism`. If `parallelism = max_evals`, then Hyperopt will do Random Search: it will select all hyperparameter settings to test independently and then evaluate them in parallel. If `parallelism = 1`, then Hyperopt can make full use of adaptive algorithms like Tree of Parzen Estimators (TPE) which iteratively explore the hyperparameter space: each new hyperparameter setting tested will be chosen based on previous results. Setting `parallelism` in between `1` and `max_evals` allows you to trade off scalability (getting results faster) and adaptiveness (sometimes getting better models). **Limits:** There is currently a hard cap on parallelism of 128. `SparkTrials` will also check the cluster’s configuration to see how many concurrent tasks Spark will allow; if parallelism exceeds this maximum, `SparkTrials` will reduce parallelism to this maximum. **Code snippet:** ``` from hyperopt import SparkTrials, fmin, hp, tpe, STATUS_OK spark_trials = SparkTrials(parallelism= no. of cores) best_hyperparameters = fmin( fn=train, space=search_space, algo=algo, max_evals=32) ``` [Another useful reference:](http://yourdatatalks.com/scaling-hyperopt-to-tune-machine-learning-models-in-python/) Upvotes: 1
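The core idea shared by all the answers above — independent trials mapped onto a worker pool — can be sketched with only the Python standard library (no hyperopt, ray, or Spark). Everything here is illustrative: the objective function is a toy stand-in for the expensive `calc_result` in the question, and the parameter names are hypothetical.

```python
import random
from multiprocessing import Pool

def objective(params):
    # Toy stand-in for the expensive calc_result() in the question:
    # squared distance from a (hypothetical) known optimum.
    target = {'adjustment_1': 0.25, 'adjustment_2': 0.75}
    return sum((params[k] - target[k]) ** 2 for k in target)

def evaluate(seed):
    # Each trial gets its own seeded RNG, so runs are reproducible
    # and workers do not share random state.
    rng = random.Random(seed)
    params = {'adjustment_1': rng.random(), 'adjustment_2': rng.random()}
    return objective(params), params

def random_search(n_trials, n_workers=4):
    # Trials are independent, so they map cleanly onto a process pool.
    with Pool(n_workers) as pool:
        results = pool.map(evaluate, range(n_trials))
    return min(results, key=lambda r: r[0])

if __name__ == '__main__':
    best_loss, best_params = random_search(200)
    print(best_loss, best_params)
```

This trades the adaptiveness of TPE for pure random search, which (as noted above) is exactly what `parallelism = max_evals` does in SparkTrials.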
2018/03/19
1,004
3,929
<issue_start>username_0: Hello I am trying to split the results from a string into a dictionary so I can add the numbers together. This is information received from a texting api a client will text in an account + the amount they want to donate and multiple accounts are separated by commas ex th 20.00, bf 10.00 etc. When I run the code it worked find in windows form's but when i converted over to MVC I get the error "an item with the same key has already been added" which i know means its duplicating an key. I tried entering an if statement during the foreach loop: ``` if(!tester.containsKey(j){} ``` but that did not always solve the problem and created a new error about out of range. Below is my current code: ``` public ActionResult register(text2give reg) { string body = reg.body; try { var items = body.Split(','); Dictionary tester = new Dictionary(); var j = 0; var total = 0f; while (j < body.Length) { foreach (var item in items) { var s = item.Trim().Split(' '); tester.Add(s[0], float.Parse(s[1])); total += float.Parse(s[1]); j++; } } ViewBag.total = total; } catch (Exception ex) { Response.Write(ex.ToString()); } return View(reg); } ```<issue_comment>username_1: s[0] is the duplicate key not j. You would need to use the following ``` var s = item.Trim().Split(' '); if(!tester.containsKey(s[0]){ tester.Add(s[0], float.Parse(s[1])); total += float.Parse(s[1]); j++; } ``` You might be getting duplicate data, be careful ignoring the keys as you might actually need the data. I'm just showing you how to suppress the error. 
Upvotes: 2 [selected_answer]<issue_comment>username_2: Your code is OK, but it makes quite a few assumptions: * It assumes the body is split properly * It assumes all items are unique (apparently they aren't, hence the error) * It assumes there are two elements in each item (it isn't, hence the indexOutOfRangeException) Here's how I would write this code to make sure it correctly guards against these cases: ``` public ActionResult register(text2give reg) { string body = reg.body; try { var items = body.Split(','); var splitItems = items.Select(i => i.Split(' ')).ToList(); var itemsWithTwoValues = splitItems.Where(s => s.Length == 2); var uniqueItems = itemsWithTwoValues.GroupBy(s => s[0]) .Where(g => g.Count() == 1) .SelectMany(g => g); var tester = uniqueItems.ToDictionary(s => s[0], s => float.Parse(s[1])); var total = tester.Sum(s => s.Value); ViewBag.total = total; } catch (Exception ex) { Response.Write(ex.ToString()); } return View(reg); } ``` Or, the shorter, condensed version: ``` public ActionResult register(text2give reg) { string body = reg.body; try { var tester = body.Split(',') // Split the initial value into items .Select(i => i.Split(' ')) // Split each item into elements .Where(s => s.Length == 2) // Take only those that have 2 items .GroupBy(s => s[0]) // Group by the key .Where(g => g.Count() == 1) // Remove all those that have a duplicate key .SelectMany(g => g) // Ungroup them again .ToDictionary(s => s[0], s => float.Parse(s[1])); // Create a dictionary where the first item is the key and the second is the parsed float var total = tester.Sum(s => s.Value); ViewBag.total = total; } catch (Exception ex) { Response.Write(ex.ToString()); } return View(reg); } ``` Upvotes: 2
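For comparison, the same split → drop-malformed → drop-duplicate-keys → total pipeline from the answer above can be sketched in Python; `parse_donations` is a hypothetical helper written for illustration, not part of the C# controller.

```python
def parse_donations(body):
    # Split "code amount, code amount, ..." into [code, amount] pairs,
    # keeping only well-formed two-element items.
    items = [item.strip().split(' ') for item in body.split(',')]
    pairs = [s for s in items if len(s) == 2]

    # Count occurrences of each account code so codes appearing more than
    # once can be dropped entirely, mirroring the GroupBy/Count guard above.
    counts = {}
    for code, _ in pairs:
        counts[code] = counts.get(code, 0) + 1

    tester = {code: float(amount) for code, amount in pairs if counts[code] == 1}
    total = sum(tester.values())
    return tester, total

print(parse_donations("th 20.00, bf 10.00"))  # → ({'th': 20.0, 'bf': 10.0}, 30.0)
```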
2018/03/19
685
2,216
<issue_start>username_0: I'm looking to fill in this empty vector: ``` empty_vec <- rep("", times=length(number_vec)) ``` with sequential numbers from this loop: ``` for (numbers in number_vec) { sqrt <- sqrt(numbers) empty_vec[numbers] <- sqrt } ``` where `numbers_vec` is `c(16:49)`. However, when I do this, the first positions (1-15) in my `empty_vec` are not filled?<issue_comment>username_1: You can address this in two ways: First, you can create a counter, that will register which step of the loop you are, and use this as index to `empty_vect`, like this: ``` empty_vec <- rep("", times=length(number_vec)) counter=0 for (numbers in number_vec) { counter=counter+1 sqrt<-sqrt(numbers) empty_vec[counter]<-sqrt } ``` Or you can just create an empty vector and concatenate each new value, like this: ``` empty_vec <- c() for (numbers in number_vec) { sqrt<-sqrt(numbers) empty_vec <- c(empty_vec,sqrt) } ``` The way you were doing, is like you started to fill your vector in 16th position, that's way you had error. Upvotes: 3 [selected_answer]<issue_comment>username_2: First you need to understand how for loop works General express for for loop is ``` for(var in seq) expr var = A syntactical name for a variable seq = An expression evaluating to a vector (including a list and an expression) or to a pairlist or NULL. A factor value will be coerced to a character vector. ``` so note it , "seq" will be the value of the "var". In your example , you wrote ``` for (numbers in number_vec) where, numbers = Name of the variable number_vec = c(16:49) ``` So here , the initial value of "numbers" will be first value of "number\_vec" which is 16. in later step in loop, the expression ``` empty_vec[numbers]<-sqrt where , empty_vec[numbers] indicate the 16th position of the empty_vec as initially numbers started with value 16 ``` As you start with 16th position , the previous 15 position remain empty. 
Possible solution of your problem : ``` number_vec = c(16:49) empty_vec <- rep("", times=length(number_vec)) for (numbers in seq_along(number_vec)) { sqrt<-sqrt(number_vec[numbers]) empty_vec[numbers]<-sqrt } ``` Upvotes: 0
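The pitfall the answers describe — a loop variable that iterates over *values* but is then reused as an *index* — is not specific to R. A small Python illustration, where `enumerate()` plays the role of R's `seq_along()`:

```python
number_vec = list(range(16, 50))      # analogue of c(16:49)
out = [''] * len(number_vec)

# Buggy pattern: the loop variable is the value (16..49), so using it as
# an index leaves positions 0..15 empty and runs past the end:
#   for n in number_vec:
#       out[n] = n ** 0.5             # IndexError once n >= len(out)

# Correct pattern: enumerate() yields (position, value) pairs.
for i, n in enumerate(number_vec):
    out[i] = n ** 0.5

print(out[0], out[-1])                # → 4.0 7.0
```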
2018/03/19
1,357
5,596
<issue_start>username_0: First web scraping project! I've been copying various web scraping code from here but can't get around a > > run time error 13: Type Mismatch > > > on the `.document.getElementById("")` line I'm using to set a variable for a hyperlink I want to click. I figured it should be treated like the log-in button that I successfully coded. I'm not sure if I am missing a library I should be using, as pretty much every other post had different issues and solutions than what I'm running into. What am I doing wrong here? I'm using IE11 and Excel 2010. I started adding libraries I thought might provide a solution. The libraries I've activated are as follows: * Visual Basic For Applications * Microsoft Excel 14.0 Object Library * OLE Automation * Microsoft Office 14.0 Object Library * Microsoft HTML Object Library * Microsoft Internet Controls * Microsoft XML, v6.0 * Microsoft Shell Controls And Automation Here is the code and HTML DOM snippet: ``` Sub IEScrape() 'we define the essential variables Dim ie As Object Dim pwd, username Dim button Dim MemAss 'add the "Microsoft Internet Controls" reference in your VBA Project indirectly Set ie = New InternetExplorerMedium With ie .Visible = True .navigate ("internalwebsite.com") While ie.readyState <> 4 DoEvents Wend Set username = .document.getElementById("userid") 'id of the username control (HTML Control) Set pwd = .document.getElementById("password") 'id of the password control (HTML Control) Set button = .document.getElementById("loginbtn") 'id of the button control (HTML Control) username.Value = "username" pwd.Value = "<PASSWORD>" button.Click While ie.readyState <> 4 DoEvents Wend 'Run time error 13: Type mismatch on next line!!! 
Set MemAss = .document.getElementById("Menu:membershipassociation") 'id of the link (HTML Control) MemAss.Click While ie.readyState <> 4 DoEvents Wend End With Set ie = Nothing End Sub ``` [![td element info](https://i.stack.imgur.com/7eYYs.jpg)](https://i.stack.imgur.com/7eYYs.jpg)<issue_comment>username_1: I have no idea why this works, but I paused the process for 5 seconds and all of a sudden, it recognizes `.Document.getElementById("Menu:membershipassociation").Click`. If anyone has any critiques on my process, you can post an answer with the better code and I'll mark it correct. Code Below: ``` Option Explicit Sub IEScrape() 'we define the essential variables Dim ie As Object Dim pwd, username Dim button Dim MemAss 'add the "Microsoft Internet Controls" reference in your VBA Project indirectly Set ie = New InternetExplorerMedium With ie .Visible = True .Navigate ("internalwebsite.com") While ie.ReadyState <> 4 DoEvents Wend Set username = .Document.getElementById("userid") 'id of the username control (HTML Control) Set pwd = .Document.getElementById("password") 'id of the password control (HTML Control) Set button = .Document.getElementById("loginbtn") 'id of the button control (HTML Control) username.Value = "username" pwd.Value = "<PASSWORD>" button.Click While ie.ReadyState <> 4 DoEvents Wend Application.Wait (Now + TimeValue("0:00:05")) .Document.getElementById("Menu:membershipassociation").Click While ie.ReadyState <> 4 DoEvents Wend End With Set ie = Nothing End Sub ``` Upvotes: 0 <issue_comment>username_2: As you have mentioned that pausing the code for 5 seconds allows the code to function I would assume that there is something happening asynchronously to the HTML loading either a AJAX request or JavaScript editing the DOM. This would mean that once the HTML has loaded (Readystate = 4) the JavaScript could still be running or we could still be waiting for the AJAX response. 
Waiting the code as you have would allow internet explorer to finish all of its tasks before VBA can pick up the references. Although the drawback is that you are waiting for an arbitrary amount of time and there is a change it will not load in this interval. In order to build a more robust control (if needed) I would suggest loading the webpage outside of VBA and using the browsers debugger menu add a breakpoints on any DOM changes, then wait until you can see when your `"Menu:membershipassociation"` being defined. I would then pay attention to what process called this and see how you can tie your script in. An ideal outcome would be if this data is stored in the page when it loads or in another location you can reach your VBA to directly. Although when I have hit this roadblock in the past I have used an iterator to have a go at regular intervals which may be able to speed up your code at this section. I also like to use these iterators on any of the DOM that I am not 100% sure is available immediately. essentially just try to load the code every second or 0.5 seconds until it has loaded. The other suggestion that I would have if when you debug the webpage in a browser, if the data is available when page is loaded then the issue could be due to the fact that you are trying to call the click method straight away. You could try using the `IE_DocumentComplete` event to signal this is available. An Example has been posted [Here](https://stackoverflow.com/a/23300977/2502611) which may be of help. If you are able to update us with what you find from debugging the page as it loads we can point you in a better direction to solve the issue. Upvotes: 3 [selected_answer]
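The retry-at-intervals idea suggested above (instead of a single fixed `Application.Wait`) is language-agnostic. A hedged Python sketch of such a polling helper — `wait_until` is a made-up name, and in the VBA case the predicate would be "does `getElementById` return something yet":

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.5):
    """Call predicate() every `interval` seconds until it returns a truthy
    value or `timeout` seconds elapse. Returns True on success, False on
    timeout. The predicate is always checked at least once."""
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

This bounds the worst-case wait while returning as soon as the asynchronously loaded element appears, rather than always sleeping the full five seconds.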
2018/03/19
880
3,133
<issue_start>username_0: In the following code I don't understand why the forked processes (the ten of them) print the same time. As far as I understand it, each process should wait a random amount of time (up to 15 seconds) and then print the time as of their end. Can someone explain why they print the same time? ``` int main() { int x, i; for (i = 0; i < 9; i++) { x = fork(); if (x == 0) { sleep(rand() % 15); printf("%d ended: %ld\n", i, time(NULL)); exit(0); } } while (wait(NULL) != -1); exit(0); } ```<issue_comment>username_1: All the processes use the same seed value to generate the random numbers using `rand()`. Man page of [rand()](http://man7.org/linux/man-pages/man3/rand.3.html): > > If no seed value is provided, the rand() function is automatically seeded with a value of 1. > > > So `rand()` generates the same sequence of numbers because they're all seeded with the same value, C11, [7.22.2.2](http://port70.net/~nsz/c/c11/n1570.html#7.22.2.2p2) says: > > The srand function uses the argument as a seed for a new sequence of pseudo-random numbers to be returned by subsequent calls to rand. If srand is then called with the same seed value, the sequence of pseudo-random numbers shall be repeated. If rand is called before any calls to srand have been made, the same sequence shall be generated as when srand is first called with a seed value of 1. > > > You need to set a different seed value (by calling `srand()`) in different processes. For example, you can use `getpid()` in the child process to seed: ``` if (x == 0) { srand((unsigned int)getpid()); sleep(rand() % 15); ... ``` Upvotes: 2 <issue_comment>username_2: As was mentioned in the comments, all processes are printing at the same time because they are all returning the same value for `rand`. Each time the `rand` function is called, it performs some computation on the previous value that was returned to get the next value. 
Before this function is called for the first time, it needs to be seeded with a starting value. So given a particular seed value, `rand` will always return the same sequence of values. The seed value is set with `srand`, however if you don't do this then the first call to `rand` effectively calls `srand(1)`. So in your case, each process calls `rand` for the first time with an implicit seed of 1. As a result, each process returns the same value. You can get around this by generating the random sleep times in the parent process. That way, each value used by the child processes will be different. Also, while it's not the cause of your issue, you should also explicitly call `srand`: ``` int main() { int x, i; srand(time(NULL) ^ getpid()); for (i = 0; i < 9; i++) { int sleep_time = rand() % 15; // generate random number in parent x = fork(); if (x == 0) { sleep(sleep_time); printf("%d ended: %ld\n", i, time(NULL)); exit(0); } } while (wait(NULL) != -1); exit(0); } ``` Upvotes: 1
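The seeding behaviour both answers describe is easy to verify in any language; a quick Python demonstration that two generators given the same seed produce identical sequences (which is exactly what happens to every forked child using the implicit seed of 1):

```python
import random

# Two independent generators with the same seed...
a = random.Random(1)
b = random.Random(1)

seq_a = [a.randrange(15) for _ in range(5)]   # analogue of rand() % 15
seq_b = [b.randrange(15) for _ in range(5)]

print(seq_a == seq_b)   # → True: same seed, same sequence

# ...which is why each child must be seeded differently (e.g. with its
# PID), or the random values drawn in the parent before forking.
```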
2018/03/19
931
3,076
<issue_start>username_0: I'm building a simple app with react native using among other things Expo. After an update of react, expo, react-native from a previous version, I get an error I cannot seem to get rid of (see picture). I have looked in index.js, MainNavigator.js, LoginNavigator.js to ensure that I have `import React, {Component } from 'react';` in the preamble, but the problem persists. Can anyone guide me in a direction to help solve this problem? [![enter image description here](https://i.stack.imgur.com/lTVY7.png)](https://i.stack.imgur.com/lTVY7.png) Also tried: * Running `exp install:ios` to make sure that I have the latest version on the simulator * Several of the solutions mentioned [here](https://gist.github.com/jarretmoses/c2e4786fd342b3444f3bc6beff32098d), e.g. `watchman watch-del-all && react-native start --reset-cache` and `watchman watch-del-all && rm -rf $TMPDIR/react-* && rm -rf node_modules/ && npm cache verify && npm install && npm start --reset-cache` * Re-installing expo with `npm install exp --global` * Following the [upgrade guide](https://blog.expo.io/expo-sdk-v25-0-0-is-now-available-714d10a8c3f7) at from Expo for SDK v25.0.0, i.e. changing `app.json` `package.json`, deleting `node_modules` and then running `exp start -c`. * Running `react-native link`. * Using `import React from 'react';`and then `... 
extends React.Component` * Manually installing all packages except `react-native`, `expo` and `react` again using ´npm install [package] --save` Excerpt from package.json: [![Versions:](https://i.stack.imgur.com/7g8r9.png)](https://i.stack.imgur.com/7g8r9.png)<issue_comment>username_1: When upgrading expo versions, make sure you follow their upgrading guide and do it one version at a time: <https://blog.expo.io/expo-sdk-v25-0-0-is-now-available-714d10a8c3f7> Upvotes: 3 [selected_answer]<issue_comment>username_2: My solution was changing this line as: From ``` import React from 'react'; ``` To ``` import React, {Component} from 'react'; ``` Upvotes: 4 <issue_comment>username_3: I have fixed this issue in my project. First add react-native to your packages. ``` yarn add react-native ``` Then changing below line: **OLD** ``` import React from 'react'; ``` **NEW** ``` import React, {Component} from 'react'; ``` Upvotes: 2 <issue_comment>username_4: npm start -- --reset-cache and Invailidate Cache and Restart Upvotes: 1 <issue_comment>username_5: I've experienced this before, after searching all thorough the codebase, finally, I found it where I couldn't believe it'll be `App.js` file. So if you've tried restarting your bundler and running `npm start --reset-cache` or `yarn start --reset-cache`, and the issue still persist, check all your files and components you worked on, you imported `React` without importing `Component`, for instance: ``` import React, {Component} from 'react'; ``` Upvotes: 0 <issue_comment>username_6: see that if you are use classes in react-native then you should add {component} from react as given below. import {component} from React; Upvotes: -1
2018/03/19
669
2,706
<issue_start>username_0: I have created some code which reads through an array and saves data for each index into variables which I then pass onto to a created label. Below is the code: example of data arr content : `["2,5","5,1"]` two indexes inside array ``` for i in 0.. ``` how can I create it so that when the label is created the second time when it reads index[1] it creates a new label with same code but drop the label under the first label. I have tried to do : labelnum[i] to attempt to create a new label using the value of index for example labelnum1 when i is = 1. Any help will be Appreciated.<issue_comment>username_1: Create a variable to hold Y Position of The Label. and in each iteration add the height of previous label in Y Position variable to drop new label to below previous one. ``` class ViewController: UIViewController { let dataArr = ["2,5","5,1"] override func viewDidLoad() { super.viewDidLoad() var yPos = 90 for i in 0.. ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: There is *UIStackView* in iOS which lets you add elements dynamically at the bottom or top of the existing views. You can always add a new label which automatically appears at the bottom of the view. You can also accomplish this with *UITableView* or *UIScrollView*. Here is an example of *UIStackView*, dynamically appending new label below previous one. I hope you can infer this for your use case, ``` class ViewController: UIViewController { var lastLabelCount = 0 var stackView: UIStackView! 
override func viewDidLoad() { super.viewDidLoad() view.backgroundColor = UIColor.white let tap = UITapGestureRecognizer(target: self, action: #selector(tapped)) view.addGestureRecognizer(tap) createViews() } func createViews() { stackView = UIStackView(frame: .zero) stackView.translatesAutoresizingMaskIntoConstraints = false stackView.axis = .vertical stackView.alignment = .top view.addSubview(stackView) NSLayoutConstraint.activate([ stackView.leftAnchor.constraint(equalTo: view.leftAnchor), stackView.rightAnchor.constraint(equalTo: view.rightAnchor), stackView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor), ]) } @objc func tapped() { let label = UILabel(frame: .zero) label.translatesAutoresizingMaskIntoConstraints = false label.textColor = UIColor.black label.text = "Hi I am label \(lastLabelCount)" stackView.addArrangedSubview(label) lastLabelCount += 1 } } ``` Upvotes: 2
2018/03/19
755
2,656
<issue_start>username_0: Is there a possibility to one-hot encode the characters of a text in TensorFlow or Keras? * `tf.one_hot` seems to take only integers. * `tf.keras.preprocessing.text.one_hot` seems to one-hot encode sentences to words, but not to characters. Besides that, `tf.keras.preprocessing.text.one_hot` behaves really strangely, since the response does not really seem one-hot encoded. The following code: ``` text = "ab bba bbd" res = tf.keras.preprocessing.text.one_hot(text=text,n=3) print(res) ``` leads to this result: ``` [1,2,2] ``` Every time I run this program, the output is a different 3-element vector; sometimes it is `[1,1,1]` or `[2,1,1]`. The documentation says that unicity is not guaranteed, but this seems really senseless to me.<issue_comment>username_1: You can use keras `to_categorical` ```py import tensorflow as tf # define the document text = 'The quick brown fox jumped over the lazy dog.' # estimate the size of the vocabulary words = set(tf.keras.preprocessing.text.text_to_word_sequence(text)) vocab_size = len(words) print(vocab_size) # integer encode the document result = tf.keras.utils.to_categorical(tf.keras.preprocessing .text.one_hot(text, round(vocab_size*1.3))) print(result) ``` Result ``` [[1, 2, 3, 4, 5, 6, 1, 7, 8]] ``` Upvotes: 2 <issue_comment>username_2: I found a nice answer based on pure Python; unfortunately I can no longer find the source. It first converts every char to an int, and then replaces the int with a one-hot array. It has unicity over the whole program, even over all programs if the alphabet has the same length and the same order. 
``` # Is the alphabet of all possible chars you want to convert alphabet = "abcdefgh<KEY>" def convert_to_onehot(data): # Creates a dict that maps every char of alphabet to a unique int based on position char_to_int = dict((c,i) for i,c in enumerate(alphabet)) # Replaces every char in data with the mapped int encoded_data = [char_to_int[char] for char in data] print(encoded_data) # Prints the int encoded array # This part now replaces each int by a one-hot array with the size of the alphabet one_hot = [] for value in encoded_data: # At first, the whole array is initialized with 0 letter = [0 for _ in range(len(alphabet))] # Only at the position of the int, 1 is written letter[value] = 1 one_hot.append(letter) return one_hot print(convert_to_onehot("hello world")) ``` Upvotes: 3 [selected_answer]
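A vectorized variant of the same idea can be sketched with NumPy (assumed available here; the alphabet and the helper name `one_hot_chars` are made up for illustration): selecting rows of an identity matrix yields the one-hot rows directly.

```python
import numpy as np

# Example alphabet; substitute your own (26 letters + space = 27 symbols here)
alphabet = "abcdefghijklmnopqrstuvwxyz "

def one_hot_chars(text):
    # Map each character to its index in the alphabet
    char_to_int = {c: i for i, c in enumerate(alphabet)}
    indices = [char_to_int[ch] for ch in text]
    # Row i of the identity matrix is the one-hot vector for index i
    return np.eye(len(alphabet), dtype=int)[indices]

print(one_hot_chars("hello").shape)  # (5, 27)
```

Like the loop version, this has unicity as long as the alphabet keeps the same length and order.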
2018/03/19
790
2,532
<issue_start>username_0: There are numerous ways to do this, but using Java 8 streams (likely IntStream), how can I produce a dummy string that is N characters long? I've seen examples using IntStream.range(), and the various aggregator functions (sum, average), but I don't see a way to do this. My first random guess looks like this: ``` IntStream.range(1, 110).map(i -> "x").collect(Collectors.joining()); ``` But that's wrong in a couple of different ways.<issue_comment>username_1: You are actually almost there: ``` String s = IntStream.range(40, 110) .mapToObj(i -> Character.toString((char)i)) .collect(Collectors.joining()); System.out.println(s); ``` Produces: ``` ()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklm ``` If you want random ordering, with `N = 60` for instance: ``` Random r = new Random(); IntStream.generate(() -> 40 + r.nextInt(70)) .limit(60) .mapToObj(i -> Character.toString((char)i)) .collect(Collectors.joining()); ``` Produces ``` Z>fA+5OY@:HfP;(L:^WKDU21T(*1//@V,F9O-SA2;+),A+V/mLjm ``` Upvotes: 2 <issue_comment>username_2: If you really want to use a `Stream` for this, you can utilize `Stream#generate`, and limit it to `n` characters: ``` Stream.generate(() -> "x").limit(110).collect(Collectors.joining()); ``` Upvotes: 2 <issue_comment>username_3: You need to use `mapToObj()` and not `map()`, as you actually use an `IntStream` and `IntStream.map()` takes as parameter an `IntUnaryOperator`, that is, an (int->int) function. **For a same-character dummy (for example "x"):** ``` String collect = IntStream.range(1, 110) .mapToObj(i -> "x") .collect(Collectors.joining()); ``` **For a random dummy:** You could use [`Random.ints(long streamSize, int randomNumberOrigin, int randomNumberBound)`](https://docs.oracle.com/javase/8/docs/api/java/util/Random.html#ints-int-int-). > > Returns a stream producing the given streamSize number of pseudorandom > int values, each conforming to the given origin (inclusive) and bound > (exclusive). 
> > > To generate a String containing 10 random characters between the 65 and 100 ASCII code : ``` public static void main(String[] args) { String collect = new Random().ints(10, 65, 101) .mapToObj(i -> String.valueOf((char) i)) .collect(Collectors.joining()); System.out.println(collect); } ``` Upvotes: 4 [selected_answer]
2018/03/19
1,511
5,312
<issue_start>username_0: I have installed zookeeper and kafka, first step : running zookeeper by the following commands : ``` bin/zkServer.sh start bin/zkCli.sh ``` second step : running kafka server ``` bin/kafka-server-start.sh config/server.properties ``` kafka should run at localhost:9092 but I am getting the following error : ``` WARN Unexpected error from /0:0:0:0:0:0:0:1; closing connection (org.apache.kafka.common.network.Selector) org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600) ``` I am following the following link : [Link1](https://scotch.io/tutorials/build-a-distributed-streaming-system-with-apache-kafka-and-python) [Link2](https://www.tutorialspoint.com/apache_kafka/apache_kafka_installation_steps.htm) I am new to kafka ,please help me to set it up.<issue_comment>username_1: Try to reset `socket.request.max.bytes` value in `$KAFKA_HOME/config/server.properties` file to more than your packet size and restart kafka server. Upvotes: 4 <issue_comment>username_2: My initial guess would be that you might be trying to receive a request that is too large. The maximum size is the default size for `socket.request.max.bytes`, which is 100MB. So if you have a message which is bigger than 100MB try to increase the value of this variable under `server.properties` and make sure to restart the cluster before trying again. --- If the above doesn't work, then most probably you are trying to connect to a non-SSL-listener. If you are using the default broker of the port, you need to verify that :9092 is the SSL listener port on that broker. For example, ``` listeners=SSL://:9092 advertised.listeners=SSL://:9092 inter.broker.listener.name=SSL ``` should do the trick for you (Make sure you restart Kafka after re-configuring these properties). Upvotes: 3 <issue_comment>username_3: The answer is most likely in one of the 2 areas a. socket.request.max.bytes b. 
you are using a non-SSL endpoint to connect the producer and the consumer. *Note: the port you run it on really does not matter. Make sure that if you have an ELB, the ELB is returning all the health checks as successful.* In my case I had an AWS ELB fronting Kafka. I had specified the listener protocol as TCP instead of Secure TCP. This caused the issue. ``` #listeners=PLAINTEXT://:9092 inter.broker.listener.name=INTERNAL listeners=INTERNAL://:9093,EXTERNAL://:9092 advertised.listeners=EXTERNAL://:9092,INTERNAL://:9093 listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT sasl.enabled.mechanisms=PLAIN sasl.mechanism.inter.broker.protocol=PLAIN ``` Here is a snippet of my producer.properties and consumer.properties for testing externally: ``` bootstrap.servers=:9092 security.protocol=SASL_SSL sasl.mechanism=PLAIN ``` Upvotes: 1 <issue_comment>username_4: 1195725856 is `GET[space]` encoded as a big-endian, four-byte integer (see [here](https://chrisdown.name/2020/01/13/1195725856-and-friends-the-origins-of-mysterious-numbers.html) for more information on how that works). This indicates that HTTP traffic is being sent to Kafka port 9092, but Kafka doesn't accept HTTP traffic, it only accepts its own protocol (which takes the first four bytes as the receive size, hence the error). Since the error is received on startup, it is likely benign and may indicate a scanning service or similar on your network scanning ports with protocols that Kafka doesn't understand. In order to find the cause, you can find where the HTTP traffic is coming from using tcpdump: ``` tcpdump -i any -w trap.pcap dst port 9092 # ...wait for logs to appear again, then ^C... tcpdump -qX -r trap.pcap | less +/HEAD ``` Overall though, this is probably annoying but harmless. At least Kafka isn't actually allocating/dirtying the memory. 
:-) Upvotes: 6 [selected_answer]<issue_comment>username_5: This is how I resolved this issue after installing a Kafka, ELK and Kafdrop setup: 1. First stop every application one by one that interfaces with Kafka to track down the offending service. 2. Resolve the issue with that application. In my setup it was Metricbeats. It was resolved by editing the Metricbeats kafka.yml settings file located in the modules.d subfolder: 1. Ensuring the Kafka advertised.listener in server.properties was referenced in the hosts property. 2. Uncommenting the metricsets and `client_id` properties. The resulting kafka.yml looks like: ``` # Module: kafka # Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.6/metricbeat-module-kafka.html # Kafka metrics collected using the Kafka protocol - module: kafka metricsets: - partition - consumergroup period: 10s hosts: ["[your advertised.listener]:9092"] client_id: metricbeat ``` Upvotes: 2 <issue_comment>username_6: In my case, some other application was already sending data to port 9092, hence the server failed to start. Closing the application resolved this issue. Upvotes: 1 <issue_comment>username_7: Please make sure that you use `security.protocol=plaintext`, or check whether you have a mismatch between the server security protocol and the clients trying to connect. Upvotes: 0 <issue_comment>username_8: For us it was kube-prom-stack that tried to scrape metrics. Once we deleted it, we stopped receiving those messages. Upvotes: 0
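As a quick sanity check of the `GET[space]` explanation above: reading those four ASCII bytes as a big-endian four-byte integer reproduces the exact size from the error message.

```python
import struct

# Kafka reads the first four bytes of a new connection as a big-endian
# message size; an HTTP request starts with the bytes "GET ".
(size,) = struct.unpack(">I", b"GET ")
print(size)  # 1195725856
```

This is why stray HTTP traffic on port 9092 trips Kafka's `socket.request.max.bytes` check rather than producing a clearer "wrong protocol" error.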
2018/03/19
909
3,294
<issue_start>username_0: I have defined `product_id` in the `class sale_order_line` as below: ``` class sale_order_line(osv.osv): _inherit = 'sale.order.line' def _get_product_ids(self): return [('sale_ok', '=', True), ('state', 'in', ['sellable', 'custom']), ('id', '=', )] _columns = { 'product_id': fields.many2one('product.product', 'Product', domain=_get_product_ids, change_default=True), } ``` The form view of `sale.order` has the below snippet where `product_id` is shown: ``` ``` Initially in `Sale Orders(model: sale.order)` I select `pricelist_id` field. And then I click on 'Add an item' in Order Lines section to add a sale order line. In the form view of `sale.order.line`, I need to only show the products in the `product_id` based on the `pricelist_id` I selected earlier. In `product.product` class the `pricelist_id` is a "dummy" field. So I am not able to figure out how to add domain filter since it will always return null value. Could you please help me how to apply hard filter on `product_id` many2one field to show only the products based on selected `pricelist_id` in parent class?<issue_comment>username_1: So sorry, but I can't fully understand your issue. You need to add a hard domain to sale.order.line's product\_id field, based on the pricelist\_id value set in sale.order. I'll quote you: > > I need to only show the products in the 'product\_id' based on the > pricelist\_id I selected earlier > > > As far as I remember on Openerp 7 (and next versions) you have a product\_id field just on the pricelist version items records: that means you have to go through this class relation `product.pricelist` → `product.pricelist.version` → `product.pricelist.item` taking into account that each pricelist may have different version and each version different items. Am I right or did I get it wrong? It sounds a bit crazy to me managing all this mess :) (unless you create some function fields directly in product.pricelist). Can you explain it better? 
Apart from that, it seems to me that you might manage hard domains for field declarations using lambda functions. This can give you the chance to build more complex domains. Give it a try: ``` class sale_order_line(osv.osv): _inherit = 'sale.order.line' def _get_product_ids(self, cr, uid, ids, context=None): return [(...), (...)] _columns = { 'product_id': fields.many2one('product.product', 'Product', #domain=lambda self: self._get_product_ids(), domain=lambda self: self._get_product_ids(self._cr, self._uid, [], self._context), change_default=True ), } ``` Upvotes: 1 <issue_comment>username_2: In my understanding, you need a dependent many2one dropdown. For example, I have two many2one fields (campus_id and department_id), and we want to change the department based on the campus field. If you want to do this, then below is a code snippet: ``` @api.onchange('campus_id') def _campus_onchange(self): res = {} res['domain'] = {'department_id': [('campus_id', '=', self.campus_id.id)]} return res ``` Upvotes: 0
2018/03/19
1,274
4,541
<issue_start>username_0: For hours I've been struggling with getting a variable element of an enum. The "[Swifticons](https://github.com/ranesr/SwiftIcons "SwiftIcons")" - pod provides me with the following enum: ``` public enum WeatherType: Int { static var count: Int { return weatherIcons.count } public var text: String? { return weatherIcons[rawValue] } case alien, barometer, celsius, owm300, owm301, owm302, and200moreOfTheseNames } private let weatherIcons = ["\u{f075}", "\u{f079}", and202moreOfTheseFontCharacters] ``` From an external API ([openWeatherMap.org](https://openweathermap.org/forecast5 "openWeatherMap.org")) I just get a weather code (let's say "300") - and I want to access icon "owm300". But how do I access this element of the enum without knowing the rawValue (which would be - say - 198)?<issue_comment>username_1: Swift doesn't currently have enumerable sequences of enum cases. One option that you have is to copy the [list of icon names](https://github.com/ranesr/SwiftIcons/blob/420ccbddcdf548df43b60715bb3afefe978435dd/SwiftIcons/IconDetailViewController.swift#L155), then search for your icon's name, and use that index as the enum's rawValue: ``` let weatherIcons = [...] let iconName = "owm300" let possibleIconIndex = weatherIcons.index { $0.caseInsensitiveCompare(iconName) == .orderedSame } if let iconIndex = possibleIconIndex { let weatherIcon = WeatherIcon(rawValue: iconIndex)! // ... } else { // graceful fallback for when the weather icon is missing } ``` Of course, you need to figure out your own mapping between the data you get from the service and enum names, but that could be as simple as `"owm\(weatherCode)"`. When Swift 4.2 lands, you will be able to make your enums conform to a new protocol called [`CaseIterable`](https://github.com/apple/swift-evolution/blob/master/proposals/0194-derived-collection-of-enum-cases.md). Enums that conform to it get a synthesized implementation of an `allCases` static variable. 
You will then be able to use that enumeration to build a string-to-enum dictionary automatically: ``` let nameToEnum = WeatherType.allCases.map { (String(describing: $0), $0) } let mapping = Dictionary(uniqueKeysWithValues: nameToEnum) ``` That will however require `WeatherIcon` to be **declared** with the `CaseIterable` conformance, as adding it with an `extension` has no effect. Upvotes: 0 <issue_comment>username_2: One of the easiest ways I can think of is to create some kind of mapping dictionary, where you would keep track of the weather response code and the WeatherType that it maps to, like so: ``` let weatherCodeMapping: [Int: WeatherType] = [300: .owm300, 301: .owm301, 302: .owm302] ``` With this in place, you don't need to know any specific rawValue; you can simply get the case by: ``` let weatherType = weatherCodeMapping[weatherCode] ``` And then create some other mapping for your image based on the weatherType. ``` let weatherIcon = weatherIconMapping[weatherType] ``` or create a single mapping directly from weather code to icon. Upvotes: 1 <issue_comment>username_3: Here's the plan: 1. We need to enumerate all of the enum cases. We'll do that by iterating over raw values (luckily, `WeatherType` is backed by `Int`). 2. We will store a lazily initialized dictionary that maps `String` to `WeatherType`. 3. And finally, we declare a static function that returns an optional `WeatherType?` because we can encounter an unknown value. Here's the code: ``` extension WeatherType { // just some little convenience private typealias W = WeatherType // 1. define the sequence of all cases private static func allCases() -> AnySequence<W> { return AnySequence { () -> AnyIterator<W> in var raw = 0 return AnyIterator { // Iterates while raw value can be converted to an enum case if let next = W(rawValue: raw) { raw += 1 return next } return nil } } } // 2. 
Static properties are lazy so we'll use them to store the dictionary with String to WeatherType mapping private static let typeMap = W.allCases().reduce([String: W]()) { acc, next in var acc = acc acc[String(describing: next)] = next return acc } // 3. Declare the mapping function static func from(string: String) -> WeatherType? { return W.typeMap[string] } } ``` Here's a little test: ``` let str = "301" let type = WeatherType.from(string: "owm\(str)") print(type == .owm301) ``` Upvotes: 2 [selected_answer]
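For comparison only: the name-to-case lookup that the answers above build by hand in Swift is built into Python's `enum` module, which can be handy for prototyping the mapping logic. The case names and raw values below are an illustrative subset, not the real SwiftIcons list.

```python
from enum import IntEnum

class WeatherType(IntEnum):
    alien = 0
    barometer = 1
    owm300 = 2
    owm301 = 3

code = 300
# Subscripting the enum class looks a case up by name, no raw value needed
icon = WeatherType[f"owm{code}"]
print(icon.name, icon.value)  # owm300 2
```

The same subscript raises `KeyError` for an unknown code, which mirrors the optional-returning `from(string:)` function in the accepted answer.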
2018/03/19
538
1,820
<issue_start>username_0: I have the following filter: ``` const auxHash = {}; myArray.filter(house => house.members === 3 && auxHash[house.id] ? false : auxHash[house.id] = true ) ``` And I get a lint error saying that an arrow function should not return an assignment. I have tried this too: ``` const auxHash = {}; myArray.filter(house => house.members === 3 && auxHash[house.id] ? false : (auxHash[house.id] = true) ) ``` But same problem. How can I solve this? To clarify. I am trying to filter out elements of an array which attribute `members` is different than `3`, and also I am trying to remove duplicate elements based on the attribute `id` (that's why I am using the hash).<issue_comment>username_1: Before question edit. If you use `{}` body you need explicitly return a result from that function. ``` const auxHash = {}; myArray.filter(house => { return house.members === 3 && auxHash[house.id] ? false : auxHash[house.id] = true; }) ``` Also, if your statement is one line, you can omit the `{}` part and remove `return` ``` const auxHash = {}; myArray.filter(house => house.members === 3 && auxHash[house.id] ? false : auxHash[house.id] = true) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You're reformatting the exact same statement and ignoring the linter's error. You are returning an assignment. That is your problem. You're doing it right here - `auxHash[house.id] = true` Your ternary resolves to false or `auxHash[house.id] = true` So, refactor to return an actual value - ``` const auxHash = {}; myArray.filter(house => { if(house.members === 3 && auxHash[house.id]) { return false; } else { auxHash[house.id] = true; return true; } }); ``` You can write it more concisely but I stretched it out to make it clear. Upvotes: 0
2018/03/19
849
3,314
<issue_start>username_0: This seems like it should be a relatively easy thing, but I can't figure out why it is happening. I am trying to declare two values based on text input to use in further calculations, but every time the results are displayed it shows them as 0. ``` var valOne = document.getElementById("value1").value; var valTwo = document.getElementById("value2").value; document.getElementById("add").addEventListener("click", function add(){ var added = +valueOne + +valueTwo; document.getElementById("display").innerHTML = added; }, true); ``` When I place the first two lines within the add() function it works fine. This is a part of a larger issue where I am trying to have the results of calculations display as the two inputted values are typed, if that helps to clear things up as to what I'm searching for. I apologize if this is a duplicate question, and please let me know if you need any other information! I feel like I have tried everything to debug it, but I'm sure the answer is right in front of me and I'm just missing it.<issue_comment>username_1: When your script would be executed `valOne` and `valTwo` would be zero, since on the load of your html page both input elements, `value1` and `value2` would be empty. On the other hand, it's almost certain that when you click the add button, your input elements have values, so you read those values, if you place your code inside your event listener. That you can do is to hold a reference to your input elements: ``` var elmOne = document.getElementById("value1"); var elmTwo = document.getElementById("value2"); ``` and then read the `value` attribute inside your event listener. Upvotes: 0 <issue_comment>username_2: This is only storing the current value of those elements: ``` var valOne = document.getElementById("value1").value; var valTwo = document.getElementById("value2").value; ``` You need to store the elements rather than the value of each element. 
``` var elemOne = document.getElementById("value1"); // Store the element var elemTwo = document.getElementById("value2"); // Store the element ``` Use the object `Number` to convert to number those values. ``` document.getElementById("add").addEventListener("click", function add(){ // Now use the entered value as number. var added = Number(elemOne.value) + Number(elemTwo.value); document.getElementById("display").innerHTML = added; }, true); ``` **Aside note:** You don't need to set a name to the handler: ``` +--- This is unnecessary. | v function add(){...} ``` You can pass the function/handler as follow: ``` function(){...} ``` > > Get the innerHTML to update and display the new result as the input values are typed (without having to click a button or refresh) > > > Use the event `input` to capture the changes over those elements. ```js var elemOne = document.getElementById("value1"); var elemTwo = document.getElementById("value2"); var total = document.getElementById("total"); function calculate() { total.textContent = (Number(elemOne.value) + Number(elemTwo.value)).toFixed(2); } [elemOne, elemTwo].forEach(function(e) { e.addEventListener('input', calculate); }); ``` ```html Total: ========= --- 0 - ``` Upvotes: 2 [selected_answer]
2018/03/19
826
3,144
<issue_start>username_0: **Steps to Reproduce** 1. Android SDK, Git already installed. Flutter git was cloned successfully. 2. Running `flutter doctor` in PowerShell **Logs** ``` C:\flutter>flutter doctor Checking Dart SDK version... Downloading Dart SDK from Flutter engine ead227f... Start-BitsTransfer : flutter doctor - 69/5000 The resource loader cache doesn't have a loaded MUI entry . (HRESULT: 0x80073B01 Exception) En C:\flutter\flutter\bin\internal\update_dart_sdk.ps1: 47 Character: 1 Start-BitsTransfer -Source $dartSdkUrl -Destination $dartSdkZip + CategoryInfo : NotSpecified: (:) [Start-BitsTransfer], COMException + FullyQualifiedErrorId : System.Runtime.InteropServices.COMException,Microsoft.BackgroundIntelligentTransfer.Mana gement.NewBitsTransferCommand Error: Unable to update Dart SDK. Retrying... Waiting 2 seconds, press CTRL+C to exit ... ``` I'm using NTLM auth behind corporate proxy.<issue_comment>username_1: I experienced the same issue, what I did is uninstalling the flutter SDK by removing the directory where it installed and then installing it again, it worked as you can see below. I think the last update caused this issue. [![It worked](https://i.stack.imgur.com/OYbTl.png)](https://i.stack.imgur.com/OYbTl.png) Upvotes: 1 <issue_comment>username_2: This error occurs when you **disable** BITS [Background Task Infrastructure service] type "Services" on **Cortana** then click on services > then go to >Background Task Infrastructure service> Enable and start it > Then reboot your laptop It will help you to solve this error... BITS related errors can be solved by just enabling **Background Task Infrastructure service** Upvotes: 1 <issue_comment>username_3: This is how you can fix it : ---------------------------- Open Control Panel – System – Advanced system settings and click on the button ***Environment Variables***. 
--- If the environment variable **ComSpec** is set to anything other than **%SystemRoot%\system32\cmd.exe** (usually expanding to C:\Windows\system32\cmd.exe), then reset it to the Windows default **%SystemRoot%\system32\cmd.exe**. Upvotes: 1 <issue_comment>username_4: Try the git reset command if you are facing an issue with downloading the Dart SDK: ``` git reset HEAD --hard ``` After this, the flutter doctor command will work fine: ``` flutter doctor ``` Upvotes: 1 <issue_comment>username_5: Posting this as an answer to make it more accessible for those coming from Google. If you are on Windows 8, and this error is all you have a glimpse of before the terminal shuts down, see this answer [Flutter update causing error "Flutter requires PowerShell 5.0 or newer" irrecoverable](https://stackoverflow.com/questions/65652144/flutter-update-causing-error-flutter-requires-powershell-5-0-or-newer-irrecove) In a nutshell, locate your Flutter installation and set the required PowerShell version to 3 (the default that came with the OS). If you no longer recall the install location, run `where flutter dart` from your terminal. After this is revealed, navigate to "bin\internal\update\_dart\_sdk.ps1" and update the line "$psMajorVersionRequired = 5" to require version 3. Upvotes: 0
2018/03/19
693
2,257
<issue_start>username_0: I need to add a 'null' default value to a logicalType: 'date' in my Avro schema. The current definition I have is like this: ``` { "type": "record", "namespace": "my.model.package", "name": "Person", "version": "1", "fields": [ {"name":"birthday","type": { "type": "int", "logicalType": "date"}} ] } ``` When I populate 'birthday' field with a `org.joda.time.LocalDate` it does work but when I do leave it `null` I get the following exception: ``` org.apache.kafka.common.errors.SerializationException: Error serializing Avro message Caused by: java.lang.NullPointerException: null of int of my.model.package.Person at org.apache.avro.generic.GenericDatumWriter.npe(GenericDatumWriter.java:145) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:139) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:62) at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:92) at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53) at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:459) at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436) ... ``` I tried many ways to configure this 'logicalType' as nullable but could not get it working. How can I configure this field to be nullable?<issue_comment>username_1: Avro does not yet support unions of logical types. This is a known outstanding issue: <https://issues.apache.org/jira/browse/AVRO-1891> While not at all elegant, the way I have handled this is to use a sentinel value such as 1900-01-01 to represent null. 
**-- Update --** This issue seems to be fixed as of version 1.9.0 Upvotes: 4 [selected_answer]<issue_comment>username_2: declare it as: ``` { "name": "myOptionalDate", "type": ["null","int"], "logicalType": "date", "default" : "null" } ``` this should work Upvotes: -1 <issue_comment>username_3: This code worked for me: ``` { "name" : "birthday", "type" : ["null",{ "type" : "int", "logicalType": "date" }] } ``` Upvotes: 4
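For background on what the schema stores: Avro's `date` logical type is an `int` counting days since the Unix epoch (1970-01-01), and the `"null"` branch of the union simply lets that slot be empty. A stdlib-only sketch of that representation (the helper names are made up for illustration):

```python
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def encode_date(d):
    # Avro date logical type: days since 1970-01-01; None rides the null branch
    return None if d is None else (d - EPOCH).days

def decode_date(days):
    return None if days is None else EPOCH + timedelta(days=days)

print(encode_date(date(2018, 3, 19)))  # 17609
print(encode_date(None))               # None
```

This is why the logical type annotation has to sit on the `int` member of the union rather than on the union itself, as in the accepted schema above.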
2018/03/19
925
2,664
<issue_start>username_0: I have a React Native project that I worked on on my laptop. When I transfer the project to my PC and run yarn install (I installed some of the modules using yarn and some using npm) I get > > An unexpected error occurred: "<https://registry.yarnpkg.com/jest/-/jest-23.0.0-alpha.0.tgz>: Request failed \"404 Not Found\"". > > > The Yarn version on my PC and laptop is the same ``` 1.5.1 ``` The npm version on my PC and laptop is the same ``` 5.4.1 ``` node version ``` v 7.9.0 ``` In addition, I get a warning when running yarn install on my project ``` warning You are using Node "7.9.0" which is not supported and may encounter bugs or unexpected behavior. Yarn supports the following semver range: "^4.8.0 || ^5.7.0 || ^6.2.2 || >=8.0.0" ```<issue_comment>username_1: I found a solution ``` yarn config set registry https://registry.npmjs.org rm yarn.lock yarn ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: I had a similar issue, getting: `An unexpected error occurred: "https://registry.yarnpkg.com/es-abstract/-/es-abstract-1.14.0.tgz: Request failed \"404 Not Found\"".` Deleting `yarn.lock` and running `yarn` may result in a lot of packages being upgraded, and possibly other configuration issues. Instead, I just deleted the `es-abstract` resolution section from `yarn.lock`, e.g.: ``` es-abstract@^1.11.0, es-abstract@^1.5.1, es-abstract@^1.7.0: version "1.14.0" resolved "https://registry.yarnpkg.com/es-abstract/-/es-abstract-1.14.0.tgz#f59d9d44278ea8f90c8ff3de1552537c2fd739b4" integrity sha512-lri42nNq1tIohUuwFBYEM3wKwcrcJa78jukGDdWsuaNxTtxBFGFkKUQ15nc9J+ipje4mhbQR6JwABb4VvawR3A== dependencies: es-to-primitive "^1.2.0" function-bind "^1.1.1" has "^1.0.3" has-symbols "^1.0.0" is-callable "^1.1.4" is-regex "^1.0.4" object-inspect "^1.6.0" object-keys "^1.1.1" string.prototype.trimleft "^2.0.0" string.prototype.trimright "^2.0.0" ``` and run `yarn`. This solved the issue with a minimal number of upgrades in `yarn.lock`. 
Upvotes: 1 <issue_comment>username_3: If you are getting this with a **private** `npm` package, make sure you are logged in with the proper user: run `npm login` [Docs](https://docs.npmjs.com/logging-in-to-an-npm-enterprise-registry-from-the-command-line) Upvotes: 2 <issue_comment>username_4: I had this problem today in CircleCI; it turned out to be a cache problem: ``` yarn cache clean yarn install ``` Actually I had to run `yarn install` twice, but I have no idea why. Upvotes: 1 <issue_comment>username_5: The following worked for `npm`: ``` npm config set registry https://registry.npmjs.org rm package.json && rm -rf node_modules npm install ``` Upvotes: -1
2018/03/19
2,008
7,248
<issue_start>username_0: Since they have supposedly integrated most of the Power Tools functionality into VS2017, the new command "tf unshelve" does not work the same as the old command "tfpt unshelve" to move shelvesets to another branch, as you cannot add a source and a target using the new command. I really don't want to have to install VS2015 on my laptop just to move a shelveset. Does anyone know how to do this with VS2017? I'm having a hard time believing this is no longer possible. Any assistance is greatly appreciated!<issue_comment>username_1: Unfortunately, moving a shelveset to another branch with the command "`tfpt unshelve`" or "`tf unshelve`" is still not supported in VS 2017. There's already a [feedback submitted here](https://developercommunity.visualstudio.com/content/problem/56349/cant-unshelve-to-another-branch.html) to track that, just as **<NAME> [MSFT]** mentioned below (source [here](https://developercommunity.visualstudio.com/content/problem/56349/cant-unshelve-to-another-branch.html?childToView=140733#comment-140733)): > > For VS 2017 we do not have a power tools release. We are constantly > evaluating the features that are still missing in the product. The > three most notable ones are: > > > * TFS Windows Shell Extensions integration * Find by Status for server workspaces * **Unshelve /migrate** > > > So as a workaround, you can try the ways below: * Use the [tf unshelve](https://learn.microsoft.com/zh-cn/vsts/tfvc/unshelve-command) and [tf rename](https://learn.microsoft.com/zh-cn/vsts/tfvc/rename-command-team-foundation-version-control) commands to achieve this in two steps. 1). `c:\projects>tf unshelve` 2). `c:\projects>tf rename 314.c ..\newdir\1254.c` * Copy shelved files manually to target folders and do a compare against the server version as **<NAME>** mentioned [here](https://developercommunity.visualstudio.com/content/problem/56349/cant-unshelve-to-another-branch.html#reply-181754). 
* Install VS2015 and [Microsoft Visual Studio Team Foundation Server 2015 Power Tools.](https://marketplace.visualstudio.com/items?itemName=TFSPowerToolsTeam.MicrosoftVisualStudioTeamFoundationServer2015Power) Upvotes: 3 <issue_comment>username_2: Another WORKAROUND... 1. Check in the changes from the existing shelveset (Changeset 1) 2. Merge the changes to the other branch and check in (Changeset 2) 3. Roll back the changes from Changeset 1. This keeps the changes from Changeset 2 and rolls back the original branch. This process may not work for every company and their policies but it does work. I don't like it but sometimes it is necessary. Upvotes: 3 <issue_comment>username_3: Here's how I do it when I know the branch files can be overwritten directly by the shelveset. If you need to merge, then use WinMerge with directory compare to stitch things back together. * Close all instances of Visual Studio * Rename your local source directory to a temporary name * Open Visual Studio * Unshelve your changes normally - VS will recreate the directory structure with just your changes in it. * Copy all your changed files into your branch directory (overwrite read-only files) * Close Visual Studio * Rename your local source directory back to its original name (get rid of the one VS recreated) * Reopen your branch project * Two options now: you can go offline then back online again to let VS figure out what changed, or just check out your whole folder and then check in again. Just make sure you don't have anything else checked out before you do it! It's a shame that this hasn't been put into VS as a native feature yet, but when you consider the complexity of un-shelving into a code merge, and preserving history, etc, the task becomes complex. Good luck! Upvotes: 2 <issue_comment>username_4: Until now (Feb 2020) tfpt is not available in Visual Studio. 
The workaround I found is to create a new branch and use a tool like WinMerge (<http://winmerge.org>) to compare and merge changes from Main to the new branch; then, when it's all right, undo the changes on the main branch. Upvotes: 0 <issue_comment>username_5: I absolutely agree this is a greatly missed feature in 2017 and up. After too many times of doing this manually, I decided to add it to my (free) Visual Studio extension called 'MultiMerge'. You can now right-click a shelveset and choose your target branch. There's a [VS2017](https://marketplace.visualstudio.com/items?itemName=Jesusfan.MultiMerge2017) version and [VS2019](https://marketplace.visualstudio.com/items?itemName=Jesusfan.MultiMerge2019). Or you can simply choose this extension using the Tools and Extensions menu in Visual Studio. Feel free to rate the extension, and let others find it more quickly:) Upvotes: 5 <issue_comment>username_6: Another probably easier way working with VS2019 is to copy/paste your source folder to the branch target folder and use ``` tf reconcile /promote ``` to detect all added or changed files. Upvotes: 3 <issue_comment>username_7: The MultiMerge extension ([MultiMerge.2017](https://marketplace.visualstudio.com/items?itemName=Jesusfan.MultiMerge2017), [MultiMerge.2019](https://marketplace.visualstudio.com/items?itemName=Jesusfan.MultiMerge2019)) worked like a charm for me. (Or rather to the extent that TFVC tools as such cannot merge by reapplying a patch.) No `tfpt` for VS 2017 and VS 2019. The `unshelve /migrate` command has not been incorporated into `tf` either. So nothing that works out of the box as yet. (Last checked on v16.7.5) Upvotes: 2 <issue_comment>username_8: Chiming in late on this one, but I got around it by getting the changeset I needed in branch A. Renamed the folder to branch B. Now my new branch has the changeset I need. Then just force Get Latest under Advanced to re-get the original branch A back as it was. 
My branch B was brand new, so I didn't need anything else from it; not sure if that's the same situation for you, but it was a quick enough fix. Upvotes: 0 <issue_comment>username_9: You can still use/install the last (VS2015) version of tfpt without installing Visual Studio 2015. Download [Microsoft Download for Team Foundation Power Tools 2015](https://marketplace.visualstudio.com/items?itemName=TFSPowerToolsTeam.MicrosoftVisualStudioTeamFoundationServer2015Power) Run the installer, select `custom`, and unselect `visual studio integration`, then complete the installation (next, next, etc). After installation you can temporarily add tfpt to the PowerShell path for that PowerShell session: ``` [System.Environment]::SetEnvironmentVariable('PATH',$Env:PATH+';C:\Program Files (x86)\Microsoft Team Foundation Server 2015 Power Tools\') ``` You can then run tfpt from the workspace directory. I had to also update my workspaces from the server for the workspace to be recognised from the working folder (see [tfpt unable to determine workspace](https://stackoverflow.com/questions/6455942/is-there-workaround-for-when-tfpt-is-unable-to-determine-the-workspace-and-ref)): ``` tf workspaces /s:https://my.source.server.or.account.visualstudio.com ``` I could then run the unshelve command ``` cd c:\my\workspace\folder tfpt unshelve "ShelvesetName;UserName" /migrate /source:"$/Project/Source" /target:"$/Project/Target" ``` Upvotes: 0
2018/03/19
1,008
3,582
<issue_start>username_0: I want to pass some values to the frontend in the form of context variables in IBM Watson through my Node app. How can I achieve it? I tried to add the value I want to the current context variable object and sent that back. Still no help. Is there any way I can do it? --- **Edit:** Right now, I am adding the required value as a new key-value pair to the context object from the Node app as follows. ``` ... let user_name = "MJ" context.user_name = user_name response.send({ output: output, context: JSON.stringify(context) }) ... ``` And in the Watson Console, in one of the dialogue nodes, I have used it like, ``` Hey $user_name, How are you? ``` But the output I am getting is, ``` Hey , How are you? ``` I can see the `user_name` value in the context object, but I can't use it in the way I mentioned above. Is there any other way to do so? --- Any help is appreciated. Thanks in advance!<issue_comment>username_1: You can add any value to the Context object, which can be accessed in your Node.JS app, and if you send that value to the front-end, then it should be accessible in the UI as well. Below I've mentioned a sample welcome response from the Conversation service. You can access the Context object from the response of the Conversation service and add a new key-value pair to that object. In the response, you'll see that I'm accessing a context variable **username** that has the value **MJ**, which has been added dynamically to the context. ``` { "intents": [], "entities": [], "input": { "text": "" }, "output": { "text": ["Hello MJ! How can I help you today?"], "nodes_visited": ["Conversation Start"], "log_messages": [] }, "context": { "username": "MJ", "conversation_id": "5835fa3b-6a1c-4ec5-92f9-22844684670e", "system": { "dialog_stack": [{ "dialog_node": "Conversation Start" }], "dialog_turn_counter": 1, "dialog_request_counter": 1, "_node_output_map": { "Conversation Start": [0] } } } } ``` Now to update the context, fetch the response and add a new key-value pair: ``` var convResponse = conversationResponse; // the response object from your call to the Conversation service var context = convResponse.context; // add a new value to the context context["new_key"] = "new value"; ``` Now for the next call that you make to Conversation, use this updated context instead of the context you received from the previous call. You can send back the response from Conversation to the front-end as well, which can then be shown to the user. Upvotes: 1 <issue_comment>username_2: I was having the same problem. My solution was changing the code when you call the IBM server and request .json: ``` ... method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ text, context: {username : variable}, ... ``` The `username` I set in Watson Assistant as a context variable; for the variable I used a function to get the name through the query string (because in my application I am calling the chatbot through an IFrame), but you can use any variable set in JavaScript. Upvotes: 2 <issue_comment>username_3: ``` var payload = { assistantId: assistantId, sessionId: req.body.session_id, context: { skills: { "main skill": { user_defined: { username: '<NAME>' } } } }, input: { message_type: 'text', text: "blah", }, }; ``` works for me. Seen here: <https://medium.com/@pranavbhatia_26901/watson-assistant-v2-context-sharing-3ca18626ed0d> Upvotes: 1
2018/03/19
2,242
8,570
<issue_start>username_0: I have a sentence "**You could say that they regularly catch a shower , which adds to their exhilaration and joie de vivre.**" and I can't achieve to get the NLP parse tree like the following example: ``` (ROOT (S (NP (PRP You)) (VP (MD could) (VP (VB say) (SBAR (IN that) (S (NP (PRP they)) (ADVP (RB regularly)) (VP (VB catch) (NP (NP (DT a) (NN shower)) (, ,) (SBAR (WHNP (WDT which)) (S (VP (VBZ adds) (PP (TO to) (NP (NP (PRP$ their) (NN exhilaration)) (CC and) (NP (FW joie) (FW de) (FW vivre))))))))))))) (. .))) ``` I want to replicate the solution to this question <https://stackoverflow.com/a/39320379> but I have a string sentence instead of the NLP tree. BTW, I am using python 3<issue_comment>username_1: Use the `Tree.fromstring()` method: ``` >>> from nltk import Tree >>> parse = Tree.fromstring('(ROOT (S (NP (PRP You)) (VP (MD could) (VP (VB say) (SBAR (IN that) (S (NP (PRP they)) (ADVP (RB regularly)) (VP (VB catch) (NP (NP (DT a) (NN shower)) (, ,) (SBAR (WHNP (WDT which)) (S (VP (VBZ adds) (PP (TO to) (NP (NP (PRP$ their) (NN exhilaration)) (CC and) (NP (FW joie) (FW de) (FW vivre))))))))))))) (. 
.)))') >>> parse Tree('ROOT', [Tree('S', [Tree('NP', [Tree('PRP', ['You'])]), Tree('VP', [Tree('MD', ['could']), Tree('VP', [Tree('VB', ['say']), Tree('SBAR', [Tree('IN', ['that']), Tree('S', [Tree('NP', [Tree('PRP', ['they'])]), Tree('ADVP', [Tree('RB', ['regularly'])]), Tree('VP', [Tree('VB', ['catch']), Tree('NP', [Tree('NP', [Tree('DT', ['a']), Tree('NN', ['shower'])]), Tree(',', [',']), Tree('SBAR', [Tree('WHNP', [Tree('WDT', ['which'])]), Tree('S', [Tree('VP', [Tree('VBZ', ['adds']), Tree('PP', [Tree('TO', ['to']), Tree('NP', [Tree('NP', [Tree('PRP$', ['their']), Tree('NN', ['exhilaration'])]), Tree('CC', ['and']), Tree('NP', [Tree('FW', ['joie']), Tree('FW', ['de']), Tree('FW', ['vivre'])])])])])])])])])])])])]), Tree('.', ['.'])])]) >>> parse.pretty_print() (pretty_print() renders the same tree as a multi-line ASCII diagram of the sentence "You could say that they regularly catch a shower, which adds to their exhilaration and joie de vivre."; the drawing does not survive this transcript's line flattening, so it is omitted here) ``` Upvotes: 2 <issue_comment>username_2: I am going to assume there is a good reason as to why you *need* the dependency parse tree in that format. 
[Spacy](https://spacy.io/usage/linguistic-features#section-dependency-parse) does a great job by using a CNN (Convolutional Neural Network) to produce dependency parses, is production ready, and is super-fast. You can do something like the below to see for yourself (and then read the docs in the prior link): ``` import spacy nlp = spacy.load('en') text = 'You could say that they regularly catch a shower , which adds to their exhilaration and joie de vivre.' for token in nlp(text): print(token.dep_, end='\t') print(token.idx, end='\t') print(token.text, end='\t') print(token.tag_, end='\t') print(token.head.text, end='\t') print(token.head.tag_, end='\t') print(token.head.idx, end='\t') print(' '.join([w.text for w in token.subtree]), end='\t') print(' '.join([w.text for w in token.children])) ``` Now, you *could* make an algorithm to navigate this tree, and print accordingly (I couldn't find a quick example, sorry, but you can see the indexes and how to traverse the parse). Another thing you could do is to extract the CFG somehow, and then use [NLTK](http://www.nltk.org/book/ch08.html) to do the parsing and subsequent displaying in the format you desire. This is from the NLTK playbook (modified to work with Python 3): ``` import nltk from nltk import CFG grammar = CFG.fromstring(""" S -> NP VP VP -> V NP | V NP PP V -> "saw" | "ate" NP -> "John" | "Mary" | "Bob" | Det N | Det N PP Det -> "a" | "an" | "the" | "my" N -> "dog" | "cat" | "cookie" | "park" PP -> P NP P -> "in" | "on" | "by" | "with" """) text = 'Mary saw Bob' sent = text.split() rd_parser = nltk.RecursiveDescentParser(grammar) for p in rd_parser.parse(sent): print(p) # (S (NP Mary) (VP (V saw) (NP Bob))) ``` However, you can see that you need to define the CFG (so if you tried your original text in place of the example's, you saw that it didn't understand the tokens not defined in the CFG). It seems the easiest way to obtain your desired format is using Stanford's NLP parser. 
Taken from [this SO question](https://stackoverflow.com/q/44528306/2178980) (and sorry, I haven't tested it): ``` parser = StanfordParser(model_path='edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz') parsed = parser.raw_parse('Jack payed up to 5% more for each unit') for line in parsed: print(line, end=' ') # This will print all in one line, as desired ``` I didn't test this because I don't have the time to install the Stanford Parser, which can be a bit of a cumbersome process (relative to installing Python modules), that is, assuming you are looking for a Python solution. I hope this helps, and I'm sorry that it's not a direct answer. Upvotes: 2
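As an aside on the "navigate this tree" idea above: if all you need is to turn a bracketed parse string into nested Python lists, a tiny s-expression reader is enough. This is a sketch using only the standard library; `parse_sexpr` is a name invented here, not part of NLTK or spaCy:

```python
def parse_sexpr(s):
    """Read a Penn-Treebank-style bracketed string into nested lists."""
    # Pad parentheses with spaces so split() tokenizes them separately.
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()

    def read(pos):
        node = []
        while pos < len(tokens):
            tok = tokens[pos]
            if tok == '(':
                child, pos = read(pos + 1)   # recurse into a subtree
                node.append(child)
            elif tok == ')':
                return node, pos + 1         # close the current subtree
            else:
                node.append(tok)             # a label or a word
                pos += 1
        return node, pos

    tree, _ = read(0)
    return tree[0]  # unwrap the single top-level parse

print(parse_sexpr('(S (NP Mary) (VP (V saw) (NP Bob)))'))
# ['S', ['NP', 'Mary'], ['VP', ['V', 'saw'], ['NP', 'Bob']]]
```

The nested-list form is then easy to walk with ordinary recursion, which covers simple cases without installing a parsing library.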
2018/03/19
450
1,657
<issue_start>username_0: There are several modules that are connected to app.js, for example the code that is inside: ``` var test = "TEST"; ``` Here is my webpack.config: ``` module.exports = { entry: './src/app.js', output: { filename: './dist/bundle.js' } }; ``` The problem is that when I try to call my test variable in the developer console, I get an error: [![enter image description here](https://i.stack.imgur.com/fURV8.png)](https://i.stack.imgur.com/fURV8.png) Something about the scope, when I connect app.js directly - everything works, what's the problem and how to fix it?<issue_comment>username_1: The default functionality of webpack is to scope files that are passed in to its configuration, see documentation here: <https://webpack.js.org/configuration/output/> This means that if you set a `var` in the file, then bundle it with webpack, it will become available only within its scope which, in this case, is the app.js file. If you open the file in your browser by itself, no scoping will take place hence why you don't have any issues when viewing directly. If you need to access that test variable outside of the file, you'll have to turn it into a global variable, otherwise it will remain scoped within the bundle.js file created from webpack. Upvotes: 0 <issue_comment>username_2: Yes, this is a scope problem. There are three ways to fix this: 1. Instead of using `var`, use `window.`. (`window.test = "TEST";`) 2. Forget `var` (doesn't work in strict mode). `test = "TEST";` 3. Before the bundle's script tag, declare `test` (`var test;`) and then forget `var`. Hope this is the answer you're looking for. Upvotes: 2
2018/03/19
682
2,825
<issue_start>username_0: I have a Lookup transformation that does not seem to be finding obvious matches. I have an input file that has 43 records that include the same CustomerID, which is set as an 8-byte signed integer. I am using the Lookup to see if the CustomerID already exists in my destination table. In the destination table the CustomerID is defined as BigInt. For testing, I truncated the Lookup(destination) table. I have tried all three Cache settings with the same results. When I run the SSIS package, all 43 records are sent through the No Match Output side. I would think that only the 1st record should go that direction and all others would be considered as a match since they have the same CustomerID. Additionally, if I run the job a second time (without truncating the destination) then they are all flagged as Matched. It seems as if the cache is not being looked at in the Lookup. Ultimately I want the No Match records to be written to the destination table and the Matched records to have further processing. Any ideas?<issue_comment>username_1: The lookup can't be used this way. SSIS dataflows execute in a transaction. So while the package is running, no rows have been written to the destination until the entire dataflow runs. So regardless of the Cache setting, the new rows being sent to your destination table are not going to be considered by the Lookup while it's running. Then when you run it again, the rows will be considered. This is expected behavior. Upvotes: 0 <issue_comment>username_2: The `Lookup transformation` is working as expected. I am not sure what your understanding of the Lookup is, so I'll go point by point. 
After the truncate, the lookup is essentially trying to find those 43 records within your truncated destination table. Since it can't find any, it is flagging them as *new* records, i.e. the *No match output* side. > > if I run the job a second time(without truncating the destination) > then they are all flagged as Matched > > > In this case, all those 43 records from the file are matched within the destination table, hence the lookup treats them as *duplicates* and thus they are flagged as *Matched output* > > I am using the Lookup to see if the CustomerID already exist in my > destination table > > > To achieve this, all you need to do is send the *Matched output* to some staging table which can be periodically truncated (as they are duplicates), and all the *No match output* can be sent to your destination table. You can post a screenshot of your lookup as well in case you want further help. Upvotes: 1
2018/03/19
1,006
3,796
<issue_start>username_0: I have a tab in Excel that has about 50 columns. I export this tab as a .CSV file and upload it into a database. I am currently using this VBA code to export the .CSV file: ``` Sub ExportAsCSV() Dim MyFileName As String Dim CurrentWB As Workbook, TempWB As Workbook Set CurrentWB = ActiveWorkbook ActiveWorkbook.ActiveSheet.UsedRange.Copy Set TempWB = Application.Workbooks.Add(1) With TempWB.Sheets(1).Range("A1") .PasteSpecial xlPasteValues .PasteSpecial xlPasteFormats End With 'Dim Change below to "- 4" to become compatible with .xls files MyFileName = CurrentWB.Path & "\" & Left(CurrentWB.Name, Len(CurrentWB.Name) - 5) & ".csv" Application.DisplayAlerts = False TempWB.SaveAs Filename:=MyFileName, FileFormat:=xlCSV, CreateBackup:=False, Local:=True TempWB.Close SaveChanges:=False Application.DisplayAlerts = True End Sub ``` There are certain columns in this export tab that will have "null" (actual word null) if the connected cells in other sheets are not filled in. How can I add to this existing VBA code to not allow an export if there are any null (the word, not blanks) values? Also how can a box pop up telling you that it won't export due to nulls?<issue_comment>username_1: I fixed your code's structure and added a test at the start which checks to make sure you have no "null" values anywhere on your `ActiveSheet` - if you do, it will throw a pop-up then exit the macro. 
``` Sub ExportAsCSV() If Application.WorksheetFunction.CountIf(ActiveSheet.UsedRange, "null") > 0 Then MsgBox "Null values exist in the range - exiting sub.", vbExclamation Exit Sub End If Dim MyFileName As String Dim CurrentWB As Workbook, TempWB As Workbook Set CurrentWB = ActiveWorkbook ActiveWorkbook.ActiveSheet.UsedRange.Copy Set TempWB = Application.Workbooks.Add(1) With TempWB.Sheets(1).Range("A1") .PasteSpecial xlPasteValues .PasteSpecial xlPasteFormats End With 'Dim Change below to "- 4" to become compatible with .xls files MyFileName = CurrentWB.Path & "\" & Left(CurrentWB.Name, Len(CurrentWB.Name) - 5) & ".csv" Application.DisplayAlerts = False TempWB.SaveAs Filename:=MyFileName, FileFormat:=xlCSV, CreateBackup:=False, Local:=True TempWB.Close SaveChanges:=False Application.DisplayAlerts = True End Sub ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: It's a lot more elaborate, but I think it's the right way to do it. Plus, it activates the first "null" cell for the end user to look at. 
Add the following lines to the top of your code: ``` Sub ExportAsCSV() Dim NullAddress As String NullAddress = FindNull(ActiveSheet.UsedRange) If NullAddress <> vbNullString Then ActiveSheet.Range(NullAddress).Activate MsgBox "Cannot Export due to ""null"" value in cell" Exit Sub End If ' ' ' End Sub ``` which relies on the test function to do the heavy lifting: ``` Function FindNull(Target As Excel.Range) As String Const NullValue As String = "null" Dim vData 'As Variant Dim Row As Long, Col As Long If Not Target Is Nothing Then vData = Target If IsArray(vData) Then For Row = 1 To Target.Rows.Count For Col = 1 To Target.Columns.Count If vData(Row, Col) = NullValue Then ' Return the Address of the first Null value found & Exit FindNull = Target.Parent.Cells(Target.Cells(1).Row + Row - 1, Target.Cells(1).Column + Col - 1).Address(RowAbsolute:=False, ColumnAbsolute:=False) Exit Function End If Next Next Else If vData = NullValue Then FindNull = Target.Address End If End If End Function ``` Upvotes: 0
2018/03/19
755
2,834
<issue_start>username_0: ``` public class experiment3 { private static void mystery(String foo, String bar, String zazz) { System.out.println(zazz + " and " + foo + " like " + bar); } public static void main(String[] args) { String foo = "peanuts"; String bar = "foo"; mystery(bar, foo, "John"); } } ``` Can somebody explain to me how this result is formed when outputting it? The output will be: > > John and foo like peanuts > > > I understand that param. name zazz always is John; I don't understand how the last 2 params. were formed?! PS: Please help me to understand how these 2 last params were formed. If there is a possibility for a schematic representation for better understanding the way that the Java compiler works!<issue_comment>username_1: When you're making the call to the 'mystery' method you give the params : ``` bar(="foo"), foo(="peanuts"), "John". ``` the names of these variables have nothing to do with the way you declare the method, only their content, so, the method receives the params: ``` foo(="foo"), bar(="peanuts"), zazz(="John") ``` Upvotes: 0 <issue_comment>username_2: The Java compiler doesn't care about the names of the variables passed in as arguments of a method call as it pertains to the names of the parameters of the method that is called. Only the *position* of the values matters. 
If the same order had been respected in both directions with the same names of variables foo, bar and zazz; the output to the display would have been simply: ``` "john and peanuts like foo". ``` But because the order has been inverted, it is necessary to follow the position of each variable in the signature of the method to know the value that will be returned. So in the signature on: ``` foo = 1, bar = 2 and zazz = 3 ``` But in the call we have: ``` bar = 1 and its value = foo foo = 2 and its value = peanuts ``` the value of zazz = john from where the display ``` "john and foo like peanuts" ``` Upvotes: 0
2018/03/19
923
3,387
<issue_start>username_0: Every result i've found online has either been really old, specific to either Python 2.7 or 3.x, or is complicated enough that I can't understand it. My use case is very simple, but i'm having a hard time finding a simple solution. My existing code looks like this: ``` return_val = func_that_makes_a_request(val1, val2) lots = "of" other() code() foo = some_other_func_that_makes_a_request(val3, val4) something_else(foo, return_val) ``` The top line ends making an HTTP request. The next few lines compute some values that are then used to make another HTTP request in the 2nd to last line. These two sections (top line, everything after that except for the last line) are completely independent of each other. The final line though requires the result of both. All I really want is to make these two HTTP requests in parallel, since neither needs to wait for the other. What's the simplest way to do this, that works in Python 2.7+? **Edit for future readers** Went with this solution: ``` with multiprocessing.dummy.Pool(1) as thread_pool: return_async = thread_pool.apply_async(func_that_makes_a_request, (val1, val2)) lots = "of" other() code() foo = some_other_func_that_makes_a_request(val3, val4) return_val = return_async.get() something_else(foo, return_val) ```
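The pattern in the question's edit can also be written with `concurrent.futures` on Python 3 (or Python 2.7 with the `futures` backport). This is a sketch; the two request functions below are stand-ins for the real HTTP calls:

```python
from concurrent.futures import ThreadPoolExecutor

def func_that_makes_a_request(val1, val2):
    # stand-in for the first HTTP request
    return val1 + val2

def some_other_func_that_makes_a_request(val3, val4):
    # stand-in for the second, independent HTTP request
    return val3 * val4

with ThreadPoolExecutor(max_workers=1) as thread_pool:
    # start the first request in a worker thread...
    return_future = thread_pool.submit(func_that_makes_a_request, 1, 2)
    # ...while the main thread keeps doing the unrelated work
    foo = some_other_func_that_makes_a_request(3, 4)
    # block only at the point where both results are needed
    return_val = return_future.result()

print(foo, return_val)  # 12 3
```

On Python 2.7 itself the same shape works with `multiprocessing.dummy.Pool(1)` and `apply_async`/`get`, as in the question's edit; note that `Pool` only supports the `with` statement on Python 3.3+, so on 2.7 you would call `close()`/`join()` explicitly instead.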
2018/03/19
1,193
2,724
<issue_start>username_0: We all know that since Java 1.8 PermGen was removed and replaced by Metaspace. I have read a lot of topics about Metaspace and I am completely sure that it exists, but today I was asked for the reference to the Oracle JVM specification where Metaspace is described, and searching the whole spec I could not find any match for the word "metaspace". Can someone share a link to this information with me or tell me why I didn't find any info about it?<issue_comment>username_1: That's an implementation detail of [HotSpot](https://en.wikipedia.org/wiki/HotSpot), not something that is or should be in the specification of a generic [JVM](https://docs.oracle.com/javase/specs/jvms/se8/html/index.html). 
You can see it described in [JEP 122: Remove the Permanent Generation](http://openjdk.java.net/jeps/122) and some issues linked from it [JDK-8046112](https://bugs.openjdk.java.net/browse/JDK-8046112), [JDK-6964458](https://bugs.openjdk.java.net/browse/JDK-6964458). Upvotes: 4 [selected_answer]<issue_comment>username_2: The JVM specification describes the method area. The method area is like an interface; Metaspace is an implementation of it. So you can't find any match for the word "metaspace" in the JVM specification. Upvotes: -1
2018/03/19
2,516
9,587
<issue_start>username_0: I am running into a `globalKey` error after I navigate from `Screen A` to `Screen B` and click a "Cancel" button to go back to `Screen A`. It seems like the issue is that `Screen B` is either * A) Not being disposed of correctly * B) Is not doing something that it otherwise could And I don't actually know: * What bad things are happening if I just remove the use of a `globalKey`? (as to get a better understanding of the fundamentals) * How can I correctly resolve this issue? The [StatefulWidget documentation](https://docs.flutter.io/flutter/widgets/StatefulWidget-class.html) states: > > A StatefulWidget keeps the same State object when moving from one > location in the tree to another if its creator used a GlobalKey for > its key. Because a widget with a GlobalKey can be used in at most one > location in the tree, a widget that uses a GlobalKey has at most one > associated element. **The framework takes advantage of this property > when moving a widget with a global key from one location in the tree > to another by grafting the (unique) subtree associated with that > widget** from the old location to the new location (instead of > recreating the subtree at the new location). The State objects > associated with StatefulWidget are grafted along with the rest of the > subtree, which means the State object is reused (instead of being > recreated) in the new location. **However, in order to be eligible for > grafting, the widget must be inserted into the new location in the > same animation frame** in which it was removed from the old location. > > > Console Error Output: ``` ══╡ EXCEPTION CAUGHT BY WIDGETS LIBRARY ╞═══════════════════════════════════════════════════════════ The following assertion was thrown while finalizing the widget tree: Duplicate GlobalKey detected in widget tree. The following GlobalKey was specified multiple times in the widget tree. 
This will lead to parts of the widget tree being truncated unexpectedly, because the second time a key is seen, the previous instance is moved to the new location. The key was: - [LabeledGlobalKey>#3c76d] This was determined by noticing that after the widget with the above global key was moved out of its previous parent, that previous parent never updated during this frame, meaning that it either did not update at all or updated before the widget was moved, in either case implying that it still thinks that it should have a child with that global key. The specific parent that did not update after having one or more children forcibly removed due to GlobalKey reparenting is: - Column(direction: vertical, mainAxisAlignment: start, crossAxisAlignment: center, renderObject: RenderFlex#7595c relayoutBoundary=up1 NEEDS-PAINT) A GlobalKey can only be specified on one widget at a time in the widget tree. ``` So this part of the error output: > > previous parent never updated during this frame, meaning that it > either did not update at all or updated before the widget was moved > > > makes me think there was some opportunity for my old Stateful widget to do something (either reposition itself or release something as to be disposed correctly. This seems to be failing in `framework.dart` on `assert(_children.contains(child))`: ``` @override void forgetChild(Element child) { assert(_children.contains(child)); assert(!_forgottenChildren.contains(child)); _forgottenChildren.add(child); } ```<issue_comment>username_1: Thanks to Gunter's commments, I determined that this is because the Screens are not being properly disposed. Flutter's [pushReplacement](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/widgets/navigator.dart#L984-L1010) makes a call to `Route.dispose` which will ultimately dispose the screen. 
I am still unsure as to how this comes into play: > > widget must be inserted into the new location in the same animation > frame > > > I'm not sure what situation would benefit from such trickery. However, my problem is solved. I just need to make a call to pop or replace. Here are the available options: * Use `push` from `A` to `B` and just `Navigator.pop` from `B` * Use `pushReplacement` from `A` to `B` and from `B` to `A` I've recently started playing with [Fluro](https://github.com/goposse/fluro) for routing and there are a few more ways to handle these situations (Note the optional argument [replace](https://github.com/goposse/fluro/blob/master/lib/src/router.dart#L43-L59)): * Use `router.navigateTo(context, route, replace: false)` from `A` to `B` and `Navigator.pop` from `B` * Use `router.navigateTo(context, route, replace: true)` from `A` to `B` and the same from `B` to `A` (the key is `replace: true`) Upvotes: 5 <issue_comment>username_2: I also had a similar error. Mine appeared because, after I updated Flutter, some widgets no longer had child or children properties. In my case it was the CircleAvatar. The build doesn't error out initially, but when navigating back and forth through the app it will fail. \*Please review all widgets that require a child, then review the updated documentation and make sure your parameters are still correct. Upvotes: -1 <issue_comment>username_3: In my case I wanted to use the `static GlobalKey \_scaffoldKey` but when I used the same widget multiple times it gave this duplicate error. I wanted to give it a unique string and still use this *scaffold state*. So I ended up using: ``` static GlobalObjectKey \_scaffoldKey ``` and in the `initState`: ``` _scaffoldKey = new GlobalObjectKey(id); ``` Edit: Actually, silly me. I just simply removed the `static` and made it `GlobalKey` again :) Upvotes: 1 <issue_comment>username_4: In my case, it looks like a hot reload bug. Just restarting debugging works for me. 
Upvotes: 7 <issue_comment>username_5: Best way to solve that, which worked for me: ``` class _HomeScreenState extends State { GlobalKey \_homeKey = GlobalKey(debugLabel: '\_homeScreenkey'); @override Widget build(BuildContext context) { return Container( key: \_homeKey, ); } } ``` Upvotes: 3 <issue_comment>username_6: Wrap the body in a `SingleChildScrollView`; and if you use the bloc pattern, use streams with `broadcast`. The code is here: ``` body: Container( decoration: BoxDecoration( image: DecorationImage( image: AssetImage('assets/images/abcd.jpg'), fit: BoxFit.cover, ), ), child: Container(child:Form(key: _key, child: Padding( padding: EdgeInsets.symmetric(vertical: 100.0, horizontal: 20.0), child: SingleChildScrollView(child:Column( children: [ Padding( padding: const EdgeInsets.all(10.0), child: Image.asset('assets/images/logo.png', height: 80, width:80,), ), email(), Padding( padding: EdgeInsets.all(5.0), ), password(), row(context), ], ), ), ), ), ), ), resizeToAvoidBottomPadding: false, ); } ``` and the bloc pattern code is here: ``` final _email = StreamController.broadcast(); final \_password = StreamController.broadcast(); Stream get email => \_email.stream.transform(validateEmail); Stream get password=> \_password.stream.transform(validatepassword); Function(String) get changeEmail=> \_email.sink.add; Function(String) get changePassword => \_password.sink.add; dispose(){ \_email.close(); \_password.close(); } } final bloc=Bloc(); ``` Upvotes: 1 <issue_comment>username_7: Remove the `static` and `final` modifiers from the key variable, so if you have ``` static final GlobalKey \_abcKey = GlobalKey(); ``` change it to ``` GlobalKey \_abcKey = GlobalKey(); ``` Upvotes: 6 <issue_comment>username_8: I had this issue too. I had a four-screen bottom-tabbed application and a 'logout' method. However, that logout method was calling pushReplacementNamed. This prevented the class that held the global keys (different from the logout function) from calling dispose. 
The resolution was to replace pushReplacementNamed with popAndPushNamed to get back to my 'login' screen. Upvotes: 3 <issue_comment>username_9: Make sure that you don't have a `Form` parent and a `Form` child with the same `key`. Upvotes: 3 <issue_comment>username_10: This happened to me; what I did was enclose the whole view in a Navigator using an extension I made: ``` Widget addNavigator() => Navigator( onGenerateRoute: (_) => MaterialPageRoute( builder: (context2) => Builder( builder: (context) => this, ), ), ); ``` Upvotes: 1 <issue_comment>username_11: I also got this error. There was a static bloc object in a class, and removing the static keyword fixed the error. Events should be added by using the BlocProvider anyway. Upvotes: 0 <issue_comment>username_12: I had a similar issue on a StatelessWidget class; converting it to a StatefulWidget made the error go away. Upvotes: 1 <issue_comment>username_13: If you have multiple forms with different widgets, you must use a separate **GlobalKey** for each form. For example, I have two forms, one for Company signup & one for Profile. So, I declared ``` GlobalKey signUpCompanyKey = GlobalKey(); GlobalKey signUpProfileKey = GlobalKey(); ``` Upvotes: 2 <issue_comment>username_14: I was getting this error because I used multiple loaders on a single page, one for each delayed API response. I applied all of the solutions above with no luck; finally, I used a separate GlobalKey for each loader: ``` GlobalKey \_loaderDialog = GlobalKey(); GlobalKey \_loaderDialogForSubmit = GlobalKey(); ``` And that solved my issue. Upvotes: 0
2018/03/19
554
2,149
<issue_start>username_0: I am exploring `vscode` after using `atom` for a long while. One of the things I'm missing is an equivalent of the lovely package [`advanced-open-file`](https://atom.io/packages/advanced-open-file). Is there something similar to this in vscode? I found the [`advanced-new-file`](https://marketplace.visualstudio.com/items?itemName=patbenatar.advanced-new-file) extension, but it is only helpful when it comes to new files. I would like to be able to quickly open files from all over my local files (not only the workspace). **Edit:** I found the option of `workbench.action.quickOpen`; but it doesn't allow opening files from the whole file system.<issue_comment>username_1: Sorry, but **currently** the answer is no. The problem is that input box doesn't provide a way to listen to key events: [GitHub issue](https://github.com/Microsoft/vscode/issues/426), so even the extensions can't do that currently. Here's the comment from [advanced-new-file](https://marketplace.visualstudio.com/items?itemName=patbenatar.advanced-new-file#user-content-notes) extension creator: > > Because VSCode extensions don't yet have the ability to do type-ahead autocomplete within the text input box (See <https://github.com/Microsoft/vscode/issues/426>), we work around this limitation and provide autocomplete using a two-step workflow of selecting existing path, then providing new filename/path relative to the selection. > > > The **good news** is that there is [a new API](https://github.com/Microsoft/vscode/issues/426#issuecomment-403413948) addressing this issue, but it's currently in ['proposed'](https://code.visualstudio.com/updates/v1_25#_proposed-extension-apis) state and can't be used for published extensions. One workaround could be typing `code -r some/path` in integrated terminal and using 'tab' for autocomplete. 
Upvotes: 3 <issue_comment>username_2: The [Fuzzy search extension](https://marketplace.visualstudio.com/items?itemName=tatosjb.fuzzy-search) seems to work for me. It adds a new action to the command palette which allows you to search for files in the current project and open them. Upvotes: 1
2018/03/19
692
2,497
<issue_start>username_0: I am trying to create a simple javascript library to learn something new. I am trying to use es6 and webpack. Everything is working fine but I am stuck at one point, when I try to use it as standalone, I mean when I add to my HTML page and try to access `MyLibrary` variable. It gives me ReferenceError. Can someone please guide me how to properly setup and compile code so that it could be run without require.js etc.<issue_comment>username_1: To make your own js library you make a js file just like normal and attach it just as you seem to be doing. Your issue might be coming in where you are referencing `MyLibrary`. Are you referencing the variable before your js file loads? Upvotes: -1 <issue_comment>username_2: I understand your question as mainly being about getting from typescript to an importable library which can be included in HTML and used in your `..`. If this is correct you can use the below minimal setup. At least it should get you started. package.json: ``` { "name": "qlib", "version": "1.0.0", "scripts": { "build": "webpack --progress --colors --config webpack.config.js", "start": "webpack-dev-server --content-base ./ " }, "license": "ISC", "devDependencies": { "ts-loader": "^4.1.0", "typescript": "^2.7.2", "webpack": "^4.1.1", "webpack-cli": "^2.0.12", "webpack-dev-server": "^3.1.1" } } ``` tsconfig.json: ``` { "compilerOptions": { "module": "commonjs", "moduleResolution": "node", "target": "es6", "noImplicitAny": false, "lib": [ "es6", "dom" ] } } ``` webpack.config.js: ``` var webpack = require("webpack"); module.exports = { entry: './main.ts', module: { rules: [ { test: /\.tsx?$/, use: ['ts-loader'], exclude: /node_modules/ } ] }, resolve: { extensions: [".ts", ".js"], }, output: { filename: 'bundle.js', libraryExport: 'default', library: 'qlib' } }; ``` main.ts: (the library entry point) ``` export class QLib { public helloWorld() { console.log("hello world"); } } var QInstance = new QLib(); export default QInstance; ``` 
index.html: ``` MyLib Testing qlib.helloWorld(); testing ``` And finally install, build and start: ``` npm install && npm run build && npm start ``` Upvotes: 2 [selected_answer]
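To build intuition for the selected answer's webpack settings: with `output.library: 'qlib'` and `libraryExport: 'default'`, the emitted bundle behaves roughly like the sketch below. This is a simplified hand-written model of what webpack generates, not its literal output, and it reuses the `qlib`/`QLib` names from the answer.

```javascript
// Simplified model of webpack's `library: 'qlib'` + `libraryExport: 'default'`:
// the bundle assigns the entry module's default export to one global name.
var qlib = (function () {
  // body of main.ts after compilation
  class QLib {
    helloWorld() {
      console.log("hello world");
    }
  }
  var QInstance = new QLib();
  return QInstance; // the export selected by `libraryExport: 'default'`
})();

// Any plain <script> on the page can now reach the library via the global:
qlib.helloWorld(); // prints "hello world"
```

This is also consistent with the question's ReferenceError: without the `library` output option, webpack does not expose the bundle's exports on a global, so `MyLibrary` was never defined.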
2018/03/19
903
3,162
<issue_start>username_0: I just get a website to give some updates but I've been out of CSS coding for some years and still adapting to the new concepts. The idea is to give some extra parameters where the actual responsivity is not working well. So what I'm trying to do objectively is to centralize the logo and resize it to be a little bigger. this is the HTML block regarding this section ``` [![Brand](img/logo_nomesistema.png)](index.php) ``` What I need to modify is that img called "logo\_nomesistema.png". I tried several different parameters but it's not changing the img. My actual code for this section is: ``` @media screen and (min-width: 768px) { .navbar > .container .navbar-brand, .navbar > .container-fluid .navbar-brand > img { min-width: 200px; } } @media screen and (min-width: 768px) { .navbar-collapse > .navbar-header > .navbar-brand > img { min-width: 200px; } } ``` As you can see I have two different options because I'm not sure that I'm correctly adressing the div. If somebody could help me to solve this problem I'll be very glad.
2018/03/19
470
1,669
<issue_start>username_0: In react I have a code like this: ``` var myButtons=[]; /*Products is an array of objects where each object identifies a product*/ for (var p of Products) { var button = p.name myButtons.push(button) } ``` I will use this react array of buttons on a render command. The problem I have is that I do not know how to make one of these buttons show its label p.name through the onClickFunction.<issue_comment>username_1: You can add your label as a parameter:

```
<button onClick={onClickFunction(p.name)}>
  {p.name}
</button>
```

And:

```
onClickFunction = (label) => () => {
  console.log(label)
}
```

Upvotes: 1 <issue_comment>username_2: A simpler, more user-friendly way is to iterate the data with a function. (note that this does not take into account scope, so `this` may be needed if it's inside a component)

```
function makeButton(data) {
  return (
    <button onClick={() => onClickFunction(data.label)}> // pass parameter for callback here if binding isn't used
      {data.name}
    </button>
  );
}
```

Now you can simply use a binding map inside your div!

```
<div>{Products.map(makeButton, this)}</div>
```

Upvotes: 2 <issue_comment>username_3: The easiest way is to use ES6 syntax and array `map`. The `name` property should be unique, and don't forget to provide a key for each button:

```
const myButtons = Products.map(p => (
  <button onClick={(e) => { this.onClickFunction(e, p.name); }} key={p.name}>
    {p.name}
  </button>
));
```

Using an arrow function, so it doesn't require `.bind(this)`. Add `e.preventDefault()` to prevent default behavior, if the buttons are in a `form`.

```
onClickFunction = (e, name) => {
  e.preventDefault();
  // Your button behavior goes here.
}
```

Upvotes: 1
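All of the answers above hinge on the same JavaScript idea: map over `Products` and build one click-handler closure per item, each capturing its own `p.name`. Stripped of JSX and React, that closure behavior can be sketched like this (illustrative data, no real DOM events):

```javascript
const Products = [{ name: "Phone" }, { name: "Laptop" }];

// Curried handler, as in the first answer: calling it with a label
// returns the actual click handler that remembers that label.
const onClickFunction = (label) => () => "clicked " + label;

// One handler per product; each closure captures its own name.
const handlers = Products.map((p) => onClickFunction(p.name));

console.log(handlers[0]()); // "clicked Phone"
console.log(handlers[1]()); // "clicked Laptop"
```

In the React versions, each of these handlers is what gets attached to a button's `onClick`, which is why every button reports its own label.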
2018/03/19
451
1,843
<issue_start>username_0: I have 2 parameters (Detail, Summary) that I have created in a Crystal Report. The report is called from c# in a Windows Forms application. I am trying to pass the appropriate value to each parameter at runtime so the report can make some decisions based on the values. I have read many articles regarding this and I think I am using the best method to accomplish this? This is the simple code I have implemented after the report has been loaded and before the SetDataSoruce has been set: ``` crReportDocument.SetParameterValue("DetailView", false); crReportDocument.SetParameterValue("SummaryView", true); ``` For some reason the values are not getting to the report as the report is always prompting for the values to be set when it runs. Everything else about the report works correctly. I would appreciate any light someone can shed on this matter as it seems to be a simple task to do?<issue_comment>username_1: The problem is that the parameter must be passed using format `{?PARAMETER}`. It works. ``` crReportDocument.SetParameterValue("{?DetailView}", false); crReportDocument.SetParameterValue("{?SummaryView}", true); ``` Upvotes: -1 <issue_comment>username_2: Actually the problem was code placement. I was populating the parameters in the wrong place of code execution: This is how I had it when it was not working: ``` crReportDocument.SetParameterValue("FromDate", dtmFromDate); ReportViewer reportViewer = new ReportViewer(); reportViewer.ReportSource = crReportDocument; ``` To resolve I moved the code around as follows: ``` ReportViewer reportViewer = new ReportViewer(); reportViewer.ReportSource = crReportDocument; crReportDocument.SetParameterValue("FromDate", dtmFromDate); ``` That's all it took to get it to work correctly. Let me know if it does not work for you. Upvotes: 0
2018/03/19
288
932
<issue_start>username_0: I'm trying to create a razor page to create some users programmatically. I'm doing something wrong at this point (obviously). Inside of my pagemodel, I have the following:

```
[Required]
[Display(Name = "First Name")]
public string FirstName { get; set; }
```

In my cshtml, I have:

```
<tr>
  <td><label asp-for="@Model.FirstName"></label></td>
  <td><input asp-for="@Model.FirstName" /></td>
</tr>
```

I would expect to get a label that says "First Name" and a textfield that has the value (in this specific use case, I would not expect a value). Unfortunately, I am getting nothing. Any help is appreciated. Thanks<issue_comment>username_1: You don't need the `@Model` portion on the tag-helpers, you only need the property name:

```
<tr>
  <td><label asp-for="FirstName"></label></td>
  <td><input asp-for="FirstName" /></td>
</tr>
```

Upvotes: 2 <issue_comment>username_2: ok, so the problem was that i forgot to add the tag helpers to the page. problem solved. thanks @username_1 :-) Upvotes: 0
2018/03/19
240
888
<issue_start>username_0: **So I have a question about XSL:** I have a bunch of elements that contain a child element that gets renamed and moved with a certain update. I am trying to handle that transformation. The example below will give a better idea. **Before:** ``` name 20 route ``` **After:** ``` route name 20 ``` And I have to do this for a number of these individual elements. I have looked at a lot of similar questions on here about this but none of their solutions were doing what I wanted them to do. So any help would be appreciated.
2018/03/19
434
1,468
<issue_start>username_0: Initially Android Studio shows 'can not resolve ...' but after I clean and rebuild the project it can not resolve `R`. [![enter image description here](https://i.stack.imgur.com/0Bsa1.png)](https://i.stack.imgur.com/0Bsa1.png) [![enter image description here](https://i.stack.imgur.com/Z6ZQ7.png)](https://i.stack.imgur.com/Z6ZQ7.png) [![enter image description here](https://i.stack.imgur.com/WD08k.png)](https://i.stack.imgur.com/WD08k.png)<issue_comment>username_1: `Gradle build finished with 7 error(s) in 12s 393ms (2 minutes ago)` This means that your app failed to build. The way Android Studio works is that the `R` class is generated as part of the build - it was deleted when you cleaned the project, and since the build failed it could not be recreated. This is a red herring error - it's an actual build error, but it's caused by another. If you open the Messages tab, you'll see a few errors - the `Unresolved reference: R` ones will be there, but so will be others. If you manage to fix them, then the `R` ones will fix themselves. :) Upvotes: 1 <issue_comment>username_2: If you read the error thoroughly, you will find the following: ``` error:resource drawable/ic_launcher(aka com.example.paramjeet.service:/drawable/ic_launcher) not found ``` That means you don't have any image named `ic_launcher` in `res/drawable/` or `res/drawable***/`. You only have ic\_launcher in `/res/mipmap/`. So, reference it from `mipmap` instead of `drawable`. Upvotes: 0
2018/03/19
982
2,515
<issue_start>username_0: I would like to compare previous and next rows for a given row. I used the `diff` function for this purpose but unfortunately it does not give what I need. A similar post is [here](https://stackoverflow.com/questions/36895525/how-to-get-the-difference-between-next-and-previous-row-in-an-r-time-series) but my post is looking for something different! Here is what I mean

```
test = data.frame(y1=c(10,22,22,36,36,36,38), y2=c(12,22,18,21,14,17,15))
> test
  y1 y2
1 10 12
2 22 22
3 22 18
4 36 21
5 36 14
6 36 17
7 38 15
```

and I tried

```
test%>% mutate(diff_y1=c(NA,diff(y1)))
  y1 y2 diff_y1
1 10 12      NA
2 22 22      12
3 22 18       0
4 36 21      14
5 36 14       0
6 36 17       0
7 38 15       2
```

What I want is to assign the same difference value to rows that are part of a run of consecutive duplicates. See that the `diff` function gives different `diff_y1` values; that's normal. What I want to add is that the first row of such a run also gets the same value: if rows i and i+1, or i and i-1, are equal, I want a 0 on both of those rows. Something like this is the expected output

```
  y1 y2 diff_y1
1 10 12      NA
2 22 22       0
3 22 18       0
4 36 21       0
5 36 14       0
6 36 17       0
7 38 15       2
```
<issue_comment>username_1: If row i and i+1 are the same, you want a 0 on rows i and i+1. We first apply R's diff function. Whenever the difference is 0 at row i, it means that the "difference" at row i-1 should also be set to 0.
``` customdiff <- function(x){ res<-c(NA,diff(x)) res[which(res==0)-1]=0 res } test%>% mutate(diff_y1=customdiff(y1)) ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Just to provide an alternative, you could use the phantastic `dplyr` package, especially `case_when()`, `lead()` and `lag()`: ``` library(dplyr) test = data.frame(y1=c(10,22,22,36,36,36,38), y2=c(12,22,18,21,14,17,15)) test %>% mutate(prev_ = lag(y1), # the previous item next_ = lead(y1), # the next item diff_y1 = case_when( # ifelse y1 == next_ | y1 == prev_ ~ 0, TRUE ~ y1-prev_ )) %>% select(-prev_,-next_) # deselect prev & next ``` Which yields ``` y1 y2 diff_y1 1 10 12 NA 2 22 22 0 3 22 18 0 4 36 21 0 5 36 14 0 6 36 17 0 7 38 15 2 ``` Upvotes: 2
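The rule both answers implement (take successive differences, then force a 0 onto any row that equals a neighbor) is language-independent. Here is a sketch of the same logic in plain JavaScript; the function name is made up for illustration and is not part of either R answer:

```javascript
// Mirrors customdiff() above: successive differences, then wherever a
// difference is 0 at position i, also zero out position i - 1.
function diffWithZeroRuns(values) {
  const diffs = [null]; // null plays the role of R's NA in the first slot
  for (let i = 1; i < values.length; i++) {
    diffs.push(values[i] - values[i - 1]);
  }
  for (let i = 1; i < diffs.length; i++) {
    if (diffs[i] === 0) {
      diffs[i - 1] = 0; // the first row of a duplicate run also gets 0
    }
  }
  return diffs;
}

const y1 = [10, 22, 22, 36, 36, 36, 38];
console.log(diffWithZeroRuns(y1)); // [ null, 0, 0, 0, 0, 0, 2 ]
```

The result matches the question's expected `diff_y1` column: every row inside a run of equal values, including the run's first row, ends up with 0.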
2018/03/19
3,062
8,100
<issue_start>username_0: I have data table with hierarchy data model with tree structures. For example: Here is a sample data row: ``` ------------------------------------------- Id | name |parentId | path | depth ------------------------------------------- 55 | Canada | null | null | 0 77 | Ontario | 55 | /55 | 1 100| Toronto | 77 | /55/77 | 2 104| Brampton| 100 | /55/77/100 | 3 ``` I am looking to convert those rows into flattening version, sample output would be: ``` ----------------------------------- Id | name | parentId | depth ------------------------------------ 104| Brampton | Toronto | 3 100| Toronto | Ontario | 2 77 | Ontario | Canada | 1 55 | Canada | None | 0 100| Toronto | Ontario | 2 77 | Ontario | Canada | 1 55 | Canada | None | 0 77 | Ontario | Canada | 1 55 | Canada | None | 0 55 | Canada | None | 0 ``` I tried using cartesian or do like n2 search but none of them are working.<issue_comment>username_1: Below is one way: ``` //Creating DF with your data def getSeq(s:String): Seq[String] = { s.split('|').map(_.trim).toSeq } var l = getSeq("77 | Ontario | 55 | /55 | 1") :: Nil l :+= getSeq("55 | Canada | null | null | 0") l :+= getSeq("100| Toronto | 77 | /55/77 | 2") l :+= getSeq("104| Brampton| 100 | /55/77/100 | 3") val df = l.map(x => x match { case Seq(a,b,c,d,e) => (a,b,c,d,e) }).toDF("Id", "name", "parentId", "path", "depth") //original DF with parentName using a self join val dfWithPar = df.as("df1").join(df.as("df2"), $"df1.parentId" === $"df2.Id", "leftouter").select($"df1.Id",$"df1.name",$"df1.parentId",$"df1.path",$"df1.depth",$"df2.name".as("parentName")) // Split path as per requirement and get the exploded DF val dfExploded = dfWithPar.withColumn("path", regexp_replace($"path", "^/", "")).withColumn("path", split($"path","/")).withColumn("path", explode($"path")) //Join orig with exploded to get addendum of rows as per individual path placeholders val dfJoined = dfWithPar.join(dfExploded, dfWithPar.col("Id") === 
dfExploded.col("path")).select(dfWithPar.col("Id"), dfWithPar.col("name"), dfWithPar.col("parentId"), dfWithPar.col("path"), dfWithPar.col("depth"), dfWithPar.col("parentName")) //Get the final result by adding the addendum to orig dfWithPar.union(dfJoined).select($"Id", $"name", $"parentName", $"depth").show +---+--------+----------+-----+ | Id| name|parentName|depth| +---+--------+----------+-----+ | 77| Ontario| Canada| 1| | 55| Canada| null| 0| |100| Toronto| Ontario| 2| |104|Brampton| Toronto| 3| | 77| Ontario| Canada| 1| | 77| Ontario| Canada| 1| | 55| Canada| null| 0| | 55| Canada| null| 0| | 55| Canada| null| 0| |100| Toronto| Ontario| 2| +---+--------+----------+-----+ ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: **Self joins with conditions** and **selecting appropriate columns** should work for you. *The solution is a bit tricky as you need to find every parent names in path column including the papentId column* which would require `concat_ws`, `split` and `explode` *inbuilt functions*. The rest of the process is `joins`, `selects` and `fills`. 
Given dataframe : ``` +---+--------+--------+----------+-----+ |Id |name |parentId|path |depth| +---+--------+--------+----------+-----+ |55 |Canada |null |null |0 | |77 |Ontario |55 |/55 |1 | |100|Toronto |77 |/55/77 |2 | |104|Brampton|100 |/55/77/100|3 | +---+--------+--------+----------+-----+ ``` You can *generate temporary dataframe* for **final join** as ``` val df2 = df.as("table1") .join(df.as("table2"), col("table1.parentId") === col("table2.Id"), "left") .select(col("table1.Id").as("path"), col("table1.name").as("name"), col("table2.name").as("parentId"), col("table1.depth").as("depth")) .na.fill("None") // +----+--------+--------+-----+ // |path|name |parentId|depth| // +----+--------+--------+-----+ // |55 |Canada |None |0 | // |77 |Ontario |Canada |1 | // |100 |Toronto |Ontario |2 | // |104 |Brampton|Toronto |3 | // +----+--------+--------+-----+ ``` And the required dataframe can be achieved by doing ``` df.withColumn("path", explode(split(concat_ws("", col("parentId"), col("path")), "/"))) .as("table1") .join(df2.as("table2"), Seq("path"), "right") .select(col("table2.path").as("Id"), col("table2.name").as("name"), col("table2.parentId").as("parentId"), col("table2.depth").as("depth")) .na.fill("0") .show(false) // +---+--------+--------+-----+ // |Id |name |parentId|depth| // +---+--------+--------+-----+ // |55 |Canada |None |0 | // |55 |Canada |None |0 | // |55 |Canada |None |0 | // |55 |Canada |None |0 | // |77 |Ontario |Canada |1 | // |77 |Ontario |Canada |1 | // |77 |Ontario |Canada |1 | // |100|Toronto |Ontario |2 | // |100|Toronto |Ontario |2 | // |104|Brampton|Toronto |3 | // +---+--------+--------+-----+ ``` **Explanation** for `|104|Brampton|100 |/55/77/100|3 |` row `concat_ws("", col("parentId"), col("path"))` would generate `|104|Brampton|100 |100/55/77/100|3 |` as you can see *100 being concatenated* at the front `split(concat_ws("", col("parentId"), col("path")), "/")` would generate *array column* as `|104|Brampton|100 |[100, 55, 77, 
100]|3 |` and `explode(split(concat_ws("", col("parentId"), col("path")), "/"))` as a whole would *explode the array column into separate rows* as ``` |104|Brampton|100 |100 |3 | |104|Brampton|100 |55 |3 | |104|Brampton|100 |77 |3 | |104|Brampton|100 |100 |3 | ``` `joins` are much clearer to understand which doesn't need explanation ;) I hope the answer is helpful Upvotes: 1 <issue_comment>username_3: Here is another version: ``` val sparkConf = new SparkConf().setAppName("pathtest").setMaster("local") val spark = SparkSession.builder().config(sparkConf).getOrCreate() import org.apache.spark.sql._ import org.apache.spark.sql.functions._ import spark.implicits._ var dfA = spark.createDataset(Seq( (55, "Canada", -1, "", 0), (77, "Ontario", 55, "/55", 1), (100, "Toronto", 77, "/55/77", 2), (104, "Brampton", 100, "/55/77/100", 3)) ) .toDF("Id", "name", "parentId", "path", "depth") def getArray = udf((path: String) => { if (path.contains("/")) path.split("/") else Array[String](null) }) val dfB = dfA .withColumn("path", getArray(col("path"))) .withColumn("path", explode(col("path"))) .toDF() dfB.as("B").join(dfA.as("A"), $"B.parentId" === $"A.Id", "left") .select($"B.Id".as("Id"), $"B.name".as("name"), $"A.name".as("parent"), $"B.depth".as("depth")) .show() ``` I have 2 dataframes dfA and dfB which is generated from the first one. dfB is generated with an udf by exploding the array of path. Note that the trick for Canada is to return an empty Array otherwise explode will not generate a row. 
dfB looks like this: ``` +---+--------+--------+----+-----+ | Id| name|parentId|path|depth| +---+--------+--------+----+-----+ | 55| Canada| -1|null| 0| | 77| Ontario| 55| | 1| | 77| Ontario| 55| 55| 1| |100| Toronto| 77| | 2| |100| Toronto| 77| 55| 2| |100| Toronto| 77| 77| 2| |104|Brampton| 100| | 3| |104|Brampton| 100| 55| 3| |104|Brampton| 100| 77| 3| |104|Brampton| 100| 100| 3| +---+--------+--------+----+-----+ ``` And the final results as next: ``` +---+--------+-------+-----+ | Id| name| parent|depth| +---+--------+-------+-----+ | 55| Canada| null| 0| | 77| Ontario| Canada| 1| | 77| Ontario| Canada| 1| |100| Toronto|Ontario| 2| |100| Toronto|Ontario| 2| |100| Toronto|Ontario| 2| |104|Brampton|Toronto| 3| |104|Brampton|Toronto| 3| |104|Brampton|Toronto| 3| |104|Brampton|Toronto| 3| +---+--------+-------+-----+ ``` Upvotes: 0
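Setting Spark aside, the core transformation every answer above performs (explode each row's `path` into ancestor ids, then resolve each id, including the row's own, to a parent name) can be sketched in plain JavaScript. This is an illustration of the logic only, not a Spark translation, and row ordering may differ from the expected output:

```javascript
const rows = [
  { id: 55, name: "Canada", parentId: null, path: null, depth: 0 },
  { id: 77, name: "Ontario", parentId: 55, path: "/55", depth: 1 },
  { id: 100, name: "Toronto", parentId: 77, path: "/55/77", depth: 2 },
  { id: 104, name: "Brampton", parentId: 100, path: "/55/77/100", depth: 3 },
];
const byId = new Map(rows.map((r) => [r.id, r]));

// For one row: the row's own id plus every ancestor id found on its path.
function explodeIds(row) {
  const ancestors = row.path
    ? row.path.split("/").filter(Boolean).map(Number)
    : [];
  return [row.id, ...ancestors];
}

// Flatten: each exploded id becomes one (Id, name, parentName, depth) row.
const result = rows.flatMap(explodeIds).map((id) => {
  const r = byId.get(id);
  const parent = r.parentId === null ? "None" : byId.get(r.parentId).name;
  return [r.id, r.name, parent, r.depth];
});

console.log(result.length); // 10
```

The expected output's ten rows fall out of the path lengths: Brampton contributes four rows (itself plus three ancestors), Toronto three, Ontario two, and Canada one.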
2018/03/19
1,479
4,946
<issue_start>username_0: I followed the Relay tutorial for getting started after doing the create-react-app to get a new react up and running. I ran it both in TypeScript mode (from here: <https://github.com/Microsoft/TypeScript-React-Starter>) and also in the normal JavaScript mode and came to the same result initially. This is the error I'm getting when I try and run the app: > > Either the Babel transform was not set up, or it failed to identify > this call site. Make sure it is being used verbatim as `graphql` > > > My suspicion is that Babel was just not running at all, but I'm not sure if that's completely true. I followed this: <https://hackernoon.com/using-create-react-app-with-relay-modern-989c078fa892> to see if that would help get my babel executing within my new create-react-app + relay app with no luck. I even ejected the app from create-react-app and modified the webpack to get it working. Below are what I believe are the relevant files. I've done a ton of searching on this topic with no such luck and can't find an example that's using React + Relay Modern + Graphql. 
package.json ``` { "name": "testProj", "version": "0.1.0", "private": true, "metadata": { "graphql": { "schema": "./graphql/testProj.json" } }, "dependencies": { "@babel/polyfill": "^7.0.0-beta.42", "@babel/runtime": "^7.0.0-beta.42", "react": "^16.2.0", "react-dom": "^16.2.0", "react-relay": "^1.5.0", "react-scripts-ts": "2.14.0", "relay-runtime": "^1.5.0" }, "scripts": { "start": "node ./setup && react-scripts-ts start", "build": "node ./setup && react-scripts-ts build", "test": "node ./setup && react-scripts-ts test --env=jsdom", "relay": "relay-compiler --src ./src --schema graphql/implementato.graphql --extensions ts tsx" }, "devDependencies": { "@babel/register": "^7.0.0-beta.42", "@types/jest": "^22.2.0", "@types/node": "^9.4.7", "@types/react": "^16.0.40", "@types/react-dom": "^16.0.4", "@types/react-relay": "^1.3.4", "babel-plugin-relay": "^1.5.0", "babel-plugin-styled-components": "^1.5.1", "babel-relay-plugin-loader": "^0.11.0", "graphql": "^0.13.2", "relay-compiler": "^1.5.0", "typescript": "^2.7.2" } } ``` setup.js ``` const fs = require('fs'); const path = require('path'); const file = path.resolve('./node_modules/babel-preset-react-app/index.js'); let text = fs.readFileSync(file, 'utf8'); if (!text.includes('babel-plugin-relay')) { if (text.includes('const plugins = [')) { text = text.replace( 'const plugins = [', "const plugins = [\n require.resolve('babel-plugin-relay'),", ); fs.writeFileSync(file, text, 'utf8'); } else { throw new Error(`Failed to inject babel-plugin-relay.`); } } ``` App.tsx (or App.jsx) ``` import * as React from 'react'; import { QueryRenderer, graphql } from 'react-relay'; import environment from './environment'; const query = graphql` query AppQuery{ allAccounts { totalCount } }`; class App extends React.Component { render() { return ( { if (error) { return Error!; } if (!props) { return Loading...; } return Loaded!; }} /> ); } } export default App; ``` Please let me know if any more files or information would be helpful. 
I'd really appreciate any direction I can get. Thanks!<issue_comment>username_1: I ran into this exact issue on the same tutorial. Basically, this is a Babel configuration issue. There are ways to jump through hoops getting this to work with create react app but the easiest way it to just eject the app and do the following steps: Run `react-scripts eject` (make sure `react-scripts` is installed globally) Adjust your Webpack config to include `'babel-loader'`: ``` { test: /\.jsx$/, exclude: /node_modules/, loader: 'babel-loader' }, ``` Add `.babelrc` to you project's root directory: ``` { "presets": [ "env" ], "plugins": [ "relay" ] } ``` Install `babel-core`, `babel-preset-env`, and `babel-loader` as dev dependencies in your project. Once Babel is running properly you should no longer get the error you are seeing. Upvotes: 3 [selected_answer]<issue_comment>username_2: In case of `create-react-app`, you can avoid having to eject by installing `babel-plugin-relay/macro` and doing this: ``` import graphql from "babel-plugin-relay/macro"; ``` Instead of importing the `graphql` tag from `react-relay`. Upvotes: 1 <issue_comment>username_3: **React Native** In case of create-react-app, you can avoid having to eject by installing babel-plugin-relay/macro and doing this: import graphql from "babel-plugin-relay/macro"; Instead of importing the graphql tag from react-relay. Don't forget to add plugins: ['macros'] to your babel config. Using the package `babel-plugin-macros` to avoid the node crypto error. Upvotes: 0
2018/03/19
766
2,668
<issue_start>username_0: I am new to php and trying just to append these two variables that I'm getting from a form on another page to a csv file. ``` php $email = $_POST['Email_Address']; $full_name = $_POST['Full_Name']; $entry = $full_name; $entry .= ","; $entry .= $email; $file = fopen("contacts.csv","w"); fputcsv($file,explode(',',$entry)); fclose($file); ? ``` This script does run and the file does get written, but two problems: 1. It seems to write over the existing file instead of appending. 2. Clicking submit takes me to a blank page with Url = location of this php script. The form is in a pop up window and I just want to close that window onclick instead of it taking me somewhere else. Here's the form: ``` ``` There isn't any error that I can see (I enabled errors to check). How do I accomplish this?<issue_comment>username_1: You are using `'w'` instead of `'a'` in `fopen()`, that's why your file is always overridden. Then you could use an array to use `fputcsv()` instead of create a string and explode it, because if a string contains a comma, you will write three values, instead of two. From [`fopen()`](https://php.net/fopen) documentation: > > **a** > > Open for writing only; place the file pointer at the end of the file. If the file does not exist, attempt to create it. In this mode, fseek() has no effect, writes are always appended. > > **w** > > Open for writing only; place the file pointer at the beginning of the file and **truncate the file to zero length**. If the file does not exist, attempt to create it. > > > ``` // first, check the existance of your variables: if (isset($_POST['Email_Address']) && isset($_POST['Full_Name'])) { $email = $_POST['Email_Address']; $full_name = $_POST['Full_Name']; // create an array with your values $entry = [$full_name, $email]; // use 'a' to Append. 
$file = fopen('contacts.csv', 'a'); fputcsv($file, $entry); fclose($file); } // then redirect header("Location: /"); exit; ``` Note that you should check the content of the posted values before writing and using them. Upvotes: 3 [selected_answer]<issue_comment>username_2: 1) To append use the 'a' flag (<http://php.net/manual/en/function.fopen.php>) ``` $file = fopen("contacts.csv","a"); ``` 2) You get redirected to a blank page after submitting because when you post to that url, you execute the logic in the php file. As it probably doesn't output any html, it's a blank page. For this, you can post asynchronously with javascript. Look here [jQuery AJAX submit form](https://stackoverflow.com/questions/1960240/jquery-ajax-submit-form) Upvotes: 1
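The 'w' versus 'a' distinction is not specific to PHP; most languages expose the same open modes. A quick Python sketch of the difference (the file path is just an example):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "contacts.csv")

# "w" truncates on every open: only the last write survives
for name in ("alice", "bob"):
    with open(path, "w") as f:
        f.write(name + "\n")
with open(path) as f:
    print(f.read())   # only "bob"

# "a" appends: every write survives
for name in ("alice", "bob"):
    with open(path, "a") as f:
        f.write(name + "\n")
with open(path) as f:
    print(f.read())   # "bob", then "alice", "bob"
```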
2018/03/19
698
2,327
<issue_start>username_0: **Current code:** ``` import pyodbc connection = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password) if connection: print("Yes we connected\n") cur = connection.cursor() cur.execute("SELECT cost1 FROM tbl") data = cur.fetchall() for row in data: print(row) ``` I admit I have almost no idea what I am doing in python and I got slapped with a project a little while ago that is due ASAP. I need to be able to take the results from the SQL query and sum them together, or store them in variables so that I can refer back to them in an equation. **Results are:** 10, 431, 543, 1268, 1207 **Expected Result:** 3459 I am more than happy to give any more information, and I will be spending the interim continuing to research.<issue_comment>username_1: your SQL query should be modified to `SELECT sum(cost1) FROM tbl` or you can get the results in Python and sum them... otherwise you are just getting a list of values in the column cost1 or in python you can do: ``` import pyodbc connection = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password) if connection: print("Yes we connected\n") cur = connection.cursor() cur.execute("SELECT cost1 FROM tbl") data = cur.fetchall() total = 0 for row in data: total = total + row[0] print(total) ``` Upvotes: 1 <issue_comment>username_2: If you are only after the sum and not the individual numbers then the easiest way is to do that in SQL: ``` query = 'select sum(cost1) from tbl' ``` If you want the individual numbers also then: ``` cur.execute("SELECT cost1 FROM tbl") data = cur.fetchall() tot = 0 for row in data: print(row) tot += row[0] print(tot) ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: **You don't need to go adding row by row:** ``` import pyodbc connection = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password) if connection: print("Yes we connected\n") cur = connection.cursor() cur.execute('SELECT sum(cost1) FROM tbl;') x=cur.fetchone()[0] print(x) # x will hold the value of the sum ``` Have a great day. Upvotes: 0
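Both approaches above (summing in SQL versus fetching rows and summing in Python) give the same result. A runnable sketch of the two patterns using the question's numbers, with the standard-library sqlite3 module standing in for pyodbc since both follow the same DB-API cursor interface, so the `execute`/`fetchall`/`fetchone` calls carry over:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl (cost1 INTEGER)")
cur.executemany("INSERT INTO tbl VALUES (?)",
                [(10,), (431,), (543,), (1268,), (1207,)])

# Option 1: let the database do the work
cur.execute("SELECT SUM(cost1) FROM tbl")
print(cur.fetchone()[0])  # 3459

# Option 2: fetch the rows and total them in Python
cur.execute("SELECT cost1 FROM tbl")
total = 0
for row in cur.fetchall():
    total += row[0]  # each row is a 1-tuple, so index into it
print(total)  # 3459
```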
2018/03/19
728
2,416
<issue_start>username_0: Our company has used Centura since 1999. I'm a newbie at this, and there is not a lot of information on the web. Can somebody help me with good manuals? I find this language looks similar to Ada.<issue_comment>username_1: Good stuff. Moved on a lot since 1999. SQLWindows is now 64bit ( version 7 onwards ) , UNICODE ( v5 onwards ) and .Net enabled. Well there are many resources - you just need to know where to look. Also there are many, many manuals - I'll attach some here as a starter. If you need anything specific e.g. ReportWriter or Connectivity or DBA etc - just say and I will attach. 1) [Gupta / Centura Manuals (ALL versions !)](http://samples.tdcommunity.net/index.php?dir=Misc/TD_Books/) Here are some other links you MUST have: 2) [Gupta Global Community Forum - ( Sign-in and you get more options )](http://forum.tdcommunity.net/) 3) [Archived Global Community Forum ( Read Only)](http://support.guptatechnologies.com/supportforum/) 4) [Gupta Team Developer/SQLWindows Wiki](http://wiki.tdcommunity.net/index.php/Main_Page) 5) [Team Developer Sample Vault](http://samples.tdcommunity.net/) 6) [TekTips Centura Forum](http://www.tek-tips.com/threadminder.cfm?pid=430) and [(another) TekTips Gupta Forum](http://www.tek-tips.com/threadminder.cfm?pid=854) 7) [Gupta SQLWindows - SQLBase - TD.Net Users & Developers Network](https://www.linkedin.com/groups/144210) Upvotes: 4 [selected_answer]<issue_comment>username_2: [Gupta technologies TD support documents for version 5.1](https://support.guptatechnologies.com/Docs/TDDoc/wwhelp/wwhimpl/js/html/wwhelp.htm) The link contains features for version 5.1, but the core concepts remain the same for all versions. The link contains documentation for the modules mentioned below, ``` 1. Team Developer Web Services 2. Team Developer - API Reference 3. Connecting SQLWindows Objects to Databases 4. Extending the SQLWindows Development Environment. 5. Localizing and Customizing SQLWindows Applications 6.
Unify Report Builder - Business Reporting. 7. Managing Teams and Objects with Team Object Manager ``` Upvotes: 0 <issue_comment>username_3: I found a guy who made some videos on YouTube teaching Gupta. <https://youtube.com/playlist?list=PLNQmauumXnBc_kS9cyUJ3d3u2nNWXms0z> He also wrote a book, CDT2000, and shares an updated version for free on the web. <http://appstartup.blogspot.com/2013/05/?m=0> Tks LairtonJr. Upvotes: 0
2018/03/19
446
1,573
<issue_start>username_0: Given a finite number of sorted infinite streams. How to merge those streams into a single sorted infinite stream? e.g. ``` def merge[T](ss: List[Stream[T]]): Stream[T] ```<issue_comment>username_1: You could do something like this ``` def merge[T: Ordering](ss: List[Stream[T]]): Stream[T] = { val str = ss.minBy(_.head) val (containsMin, rest) = ss.partition(_.head == str.head) containsMin.map(_.head).toStream #::: merge(containsMin.map(_.tail) ++ rest) } ``` This would take from smaller to bigger and it is assuming that the streams are ordered in that same way Upvotes: 3 [selected_answer]<issue_comment>username_2: The sorted bit is probably not relevant as to be able to sort you need the stream to be strictly evaluated (then a Stream is not the type you want). But you can keep the "order" where the elements are evaluated. I would probably try something like this: ``` scala> val s1 = Stream.from(1) scala> val s2 = Stream.from(1000) scala> val streams = List(s1, s2) scala> val newStream: Stream[Int] = streams.foldRight(Stream.empty[Int])((xs, xss) => xss #::: xs) scala> newStream.headOption.foreach(println) 1 ``` Upvotes: 0 <issue_comment>username_3: It is impossible to create a method with this signature that supports infinite streams. You'll run out of memory because all elements of the original streams will be in scope during execution. Please read this article for explanations <http://blog.dmitryleskov.com/programming/scala/stream-hygiene-i-avoiding-memory-leaks/> . Upvotes: 1
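As an aside, Python's standard library ships this exact operation: `heapq.merge` lazily merges any number of sorted iterables, including infinite generators, keeping only one pending element per input stream. A sketch:

```python
import heapq
import itertools

def evens():
    n = 0
    while True:   # infinite sorted stream
        yield n
        n += 2

def odds():
    n = 1
    while True:   # infinite sorted stream
        yield n
        n += 2

merged = heapq.merge(evens(), odds())
print(list(itertools.islice(merged, 10)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Because only a prefix is ever forced, this sidesteps the memory concern raised in the last answer.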
2018/03/19
560
1,874
<issue_start>username_0: I'm new to jQuery. I'm trying to implement basic jQuery code but it's not working. Please help. ```js $(document).ready(function() { manipulateDOM(); }); function manipulateDOM() { var h1Headers = $('h1'); var h3Headers = $('h3'); } h1Headers.text('Hello World'); h3Headers.first().css('text-decoration', 'line-through'); h3Headers.css('color', '#37887D'); ``` ```html My HTML code: H1 Header ========= --- ### H3 Header --- ### Another H3 Header ```
2018/03/19
1,125
3,684
<issue_start>username_0: I am writing a Vue.js app with Bootstrap 4 and I can't get it to load even though I followed the documentation. Added to main.js `Vue.use(BootstrapVue);` Added to css file related to App.vue: ``` @import '../../node_modules/bootstrap/dist/css/bootstrap.css'; @import '../../node_modules/bootstrap-vue/dist/bootstrap-vue.css'; ``` Here is the template: ``` {{msg}} {{statusCode}} Players data Tournament data ``` Result: no CSS is rendered, but I do see Bootstrap in the CSS file from the dist directory. What am I missing? The project was created by vue-cli 3.0-beta<issue_comment>username_1: Try importing the files using JavaScript. ``` import 'bootstrap/dist/css/bootstrap.css' import 'bootstrap-vue/dist/bootstrap-vue.css' ``` --- On closer inspection it looks like you're also missing an element that is controlled by `v-model` ``` {{ msg }} {{ statusCode }} Players data Tournament data ``` That should fix the styling of the tabs. Upvotes: 4 <issue_comment>username_2: Solution: Import to App.vue: ``` '../../node_modules/bootstrap/dist/css/bootstrap.css'; '../../node_modules/bootstrap-vue/dist/bootstrap-vue.css'; ``` Upvotes: -1 <issue_comment>username_3: I came across this same issue, but luckily I found the cause: The loader is not loaded :) 1. Make sure you have both **vue-style-loader** and **css-loader** in `package.json` 2. Then in your webpack config, your **module.rules** should have this object: ``` { test: /\.css/, use: ['vue-style-loader', 'css-loader'] // BOTH are needed! } ``` 3. And in your `App.vue`, under the Upvotes: 4 <issue_comment>username_4: In my case I was working with vue-cli 4.3.1, and I was using this configuration by mistake. If you remove this configuration then everything works!
<https://cli.vuejs.org/guide/css.html#css-modules> > > If you want to drop the .module in the filenames, set css.requireModuleExtension to false in vue.config.js: > > > ``` // vue.config.js module.exports = { css: { requireModuleExtension: false } } ``` Upvotes: 0 <issue_comment>username_5: I had to do a combination/variation of some of the other answers. App.vue: ``` import { BootstrapVue, IconsPlugin } from 'bootstrap-vue' import Vue from 'vue' // Install BootstrapVue Vue.use(BootstrapVue) // Optionally install the BootstrapVue icon components plugin Vue.use(IconsPlugin) import 'bootstrap/dist/css/bootstrap.css' import 'bootstrap-vue/dist/bootstrap-vue.css' export default { name: 'App', components: { } } ``` Sources: <https://forum.vuejs.org/t/ui-library-styles-not-loading-in-web-components/77858/2> <https://bootstrap-vue.org/docs> Upvotes: 0 <issue_comment>username_6: For new people facing the above issue: none of the above answers worked for me. My problem was with the installed bootstrap version. npm installed the latest version `5.1.0`, which does not work with bootstrap-vue. I had to downgrade bootstrap to `4.5.3` [(recommended version from website)](https://bootstrap-vue.org/docs) by running the following: `npm install bootstrap@4.5.3 --save` Upvotes: 3 <issue_comment>username_7: `bootstrap-vue` requires a specific version of `bootstrap` in order to work correctly. Below is the recommendation from the [Bootstrap-vue doc](https://bootstrap-vue.org/docs) at the time of writing. ``` Vue.js v2.6 is required, v2.6.12 is recommended Bootstrap v4.3.1 is required, v4.5.3 is recommended Popper.js v1.16 is required for dropdowns (and components based on dropdown), tooltips, and popovers. v1.16.1 is recommended PortalVue v2.1 is required by Toasts, v2.1.7 is recommended jQuery is not required ``` Solution: Please install with the version `yarn add bootstrap@4.5.3` or `npm i bootstrap@4.5.3` Upvotes: 1
2018/03/19
385
1,577
<issue_start>username_0: I am experimenting with the ARCore SDK inside Unity to make an Augmented Reality app. As a start I ran the HelloAR Demo app where the ground is detected and onto which you can place multiple Andy's when tapping on the screen. I notice the Andy's are placed on top of the plane, as they should be. Now, I created a 3D Cube object and replaced the Andy prefab with the Cube. This places Cubes instead of Andy's in the app. However, I notice that the bottom of the cube is not touching the plane! The plane runs through the center of the cube, which is not an ideal scenario! I looked on the internet for a solution, but haven't found it yet. On another website there was someone with the same issue, but no solution was provided, only a response stating that it works only with the ARCore demo objects, not with custom objects. Can someone help me out on this one? Thanks!<issue_comment>username_1: I have found the solution to my problem. The pivot point of my Cube GameObject was in the center. All I did was wrap an empty GameObject around the Cube and reposition the pivot point to the bottom of the Cube. From this I created a Prefab and used it inside the app. Now the cubes are placed correctly on top of the plane. Upvotes: 2 [selected_answer]<issue_comment>username_2: One thing I do with objects whose centers aren't where they should be is to load the .obj into Blender and move it to the center of the world axes. I might also tell it "center to geometry" before centering on the world axis, then I export as .obj (selection only). Upvotes: 0
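The geometry behind the fix is simple: a cube whose pivot sits at its center needs that center raised by half the cube's height for its base to rest on the plane, which is what moving the pivot to the bottom (via the empty parent) achieves automatically. A small numeric sketch in plain Python, not Unity API:

```python
def resting_center_y(plane_y, cube_height):
    """Y coordinate a center-pivot cube needs so that its
    bottom face touches a horizontal plane at plane_y."""
    return plane_y + cube_height / 2.0

# a 1-unit cube on a plane at y = 0.25 must have its center at 0.75
print(resting_center_y(0.25, 1.0))  # 0.75
```

With the pivot at the bottom, the spawn position can be used directly and no offset is needed.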
2018/03/19
3,675
13,793
<issue_start>username_0: I have a spreadsheet that I've been working on for over a month to sort and optimize coordinates (sometimes exceeding 100,000 rows) and it is *UNBEARABLY* slow once I start importing files over 5,000 rows (it has taken several hours to complete the calculations and sorting process on data sets over 25,000 rows). The processing time grows exponentially with the number of coordinates imported. I've researched Stack Overflow to help me with some of the code and included a few safety nets for error handling and exiting the sub if there is no data. The bit of code I'm using to actually sort the coordinates to find nearest neighbor coordinates and that I need help with is under the remark `' Sort coordinates in Point List Data looking for shortest distance between points`, located at approximately line 58 of 109 in my code below. Simple quadrant coordinates (X, Y and Z) are in columns H, I , and J respectively, starting at row 6. The named range is **PosXYZ** and the formula for this named range is: ``` =INDEX(Optimizer!$H:$H, ROW(Optimizer!$H$5) + 1):INDEX(Optimizer!$L:$L, MATCH(bignum, Optimizer!$I:$I)). ``` **bignum** is defined as `=1E+307*17.9769313486231`. * `Column K` is populated with the Pythagorean Theorem to calculate the distance between the current data point X,Y and the previous data point X,Y in the list. * `Column L` is populated with a list of sequential row numbers created when the data is imported so that the original sort order of the data can be restored using a separate piece of VBA code. I'm trying to see if using an array would greatly speed up the time it takes to run this point list optimizer, and **I'm hoping that someone might be able to help me figure out how to get this portion of my code to run exponentially faster**. 
--- I found the following similar question, and I'm wondering if this approach is something that I could use to help speed up my processing time: [How do you speed up VBA code with a named range?](https://stackoverflow.com/questions/18644032/how-do-you-speed-up-vba-code-with-a-named-range) I've learned a lot from this site, and I'm hoping that someone has the patience and the knowledge to help me figure this one out. I don't have a lot of experience using arrays in VBA. Sample Excel file with 2904 data points and VBA code can be found [here](https://1drv.ms/x/s!Ak5_WPnQMSfLiYAxYqRQHWRjrXDl6A). ``` Sub Optimize_PL() ' Add an error handler On Error GoTo ErrorHandler ' Speed up sub-routine by turning off screen updating and auto calculating until the end of the sub-routine Application.ScreenUpdating = False Application.Calculation = xlCalculationManual ' Declare variable names and types Dim rInp As Range Dim rTmp As Range Dim i As Long Dim n As Long Dim sFrm As String Dim PosX As String Dim PosY As String Dim PosZ As String Dim SortOrder As String Dim LastRow As Long Dim hLastRow As Long Dim lLastRow As Long ' Find number of populated cells in Column H and Column L (not including the 5 column header rows) hLastRow = Cells(Rows.Count, "H").End(xlUp).Row - 5 lLastRow = Cells(Rows.Count, "L").End(xlUp).Row - 5 ' Check for existing Point List Data to avoid error If hLastRow < 2 Then MsgBox "Not enough data points are available to optimize." & vbNewLine & _ "" & vbNewLine & _ "Column H populated rows: " & hLastRow, vbInformation, "Error Message" GoTo ErrorHandler ElseIf lLastRow < 2 Then MsgBox "Original sort order row numbers not available in Column L," & vbNewLine & _ "" & vbNewLine & _ "Original sort order canot be restored without Row # data." 
& vbNewLine & _ "Column L populated rows: " & lLastRow, vbInformation, "Error Message" Err.Number = 0 GoTo ErrorHandler ElseIf hLastRow <> lLastRow Then MsgBox "The number of rows of coordinate data does not match the" & vbNewLine & _ "number of rows in the Row # column. There is no way to" & vbNewLine & _ "restore the original sort order." & vbNewLine & _ "" & vbNewLine & _ "Column H populated rows: " & hLastRow & vbNewLine & _ "Column L populated rows: " & lLastRow, vbInformation, "Error Message" Err.Number = 0 GoTo ErrorHandler End If ' Timer Start (calculate the length of time this VBA code takes to complete) StartTime = Timer ' Sort coordinates in Point List Data looking for shortest distance between points Set rInp = Range("PosXYZ").Resize(, 4) n = rInp.Rows.Count i = 0 For i = 1 To n - 1 Application.StatusBar = i + 1 & " of " & n & " Calculating for " & SecondsElapsed & " seconds" & " Estimated Time Remaining: " & TimeRemaining & " seconds" SecondsElapsed = Round(Timer - StartTime) ' Change to StartTime, 2) to display seconds two decimal places out TimeRemaining = Round((SecondsElapsed / (i + 1)) * (n - (i + 1))) ' Change to i + 1)),2) to display seconds two decimal places out Set rTmp = rInp.Offset(i).Resize(n - i, 5) With rTmp PosX = .Cells(0, 1).Address(ReferenceStyle:=xlR1C1) PosY = .Cells(0, 2).Address(ReferenceStyle:=xlR1C1) PosZ = .Cells(0, 3).Address(ReferenceStyle:=xlR1C1) SortOrder = .Cells(0, 5).Address(ReferenceStyle:=xlR1C1) sFrm = Replace(Replace(Replace(Replace("=SQRT((RC[-3] - PosX)^2 + (RC[-2] - PosY)^2)", "PosX", PosX), "PosY", PosY), "PosZ", PosZ), "SortOrder", SortOrder) sFrm = Replace(Replace(Replace(Replace(sFrm, "PosX", PosX), "PosY", PosY), "PosZ", PosZ), "SortOrder", SortOrder) .Columns(4).FormulaR1C1 = sFrm .Sort Key1:=.Range("D1"), Header:=xlNo End With Next i ' Timer Stop (calculate the length of time this VBA code took to complete) SecondsElapsed = Round(Timer - StartTime, 2) ' Turn screen updating and auto calculating back on 
since file processing is now complete Application.ScreenUpdating = True Application.Calculation = xlCalculationAutomatic ' Message to report VBA code processing time after file selection and number of data rows imported MsgBox "Calculated optimized travel path between coordinates in " & vbNewLine & _ "" & vbNewLine & _ " " & SecondsElapsed & " seconds" ' Reset to defaults in the event of a processing error during the sub-routine execution ErrorHandler: Application.StatusBar = True Application.ScreenUpdating = True Application.Calculation = xlCalculationAutomatic If Err.Number <> 0 Then ' Display a message to the user including the error code in the event of an error during execution MsgBox "An error number " & Err.Number & " was encountered!" & vbNewLine & _ "Part or all of this VBA code was not completed.", vbInformation, "Error Message" End If End Sub ```<issue_comment>username_1: Aside from switching to a different algorithm (e.g. the k-d tree), here are a few things that will speed up the code: 1. Convert formulas to values before sorting 2. Only update status bar periodically (e.g. every 100 loops) 3. **Delete** the dynamic named-range "PosXYZ" and use hLastRow that was already calculated. Dynamic named ranges are re-calculated when the sheet is calculated and thus can be costly.
Updated code: ``` Const HeaderRow = 5 Set rInp = Range(Cells(HeaderRow + 1, 8), Cells(hLastRow, 11)) n = rInp.Rows.Count For i = 1 To n - 1 If i Mod 100 = 0 Then Application.StatusBar = i + 1 & " of " & n & " Calculating for " & SecondsElapsed & " seconds" & " Estimated Time Remaining: " & TimeRemaining & " seconds" SecondsElapsed = Round(Timer - StartTime) ' Change to StartTime, 2) to display seconds two decimal places out TimeRemaining = Round((SecondsElapsed / (i + 1)) * (n - (i + 1))) ' Change to i + 1)),2) to display seconds two decimal places out End If Set rTmp = rInp.Offset(i).Resize(n - i, 5) With rTmp Dim TargetRow As Long TargetRow = HeaderRow + i sFrm = "=SQRT((RC[-3] - R" & TargetRow & "C[-3])^2 + (RC[-2] - R" & TargetRow & "C[-2])^2)" With .Columns(4) .FormulaR1C1 = sFrm .Calculate .Value = .Value End With .Sort Key1:=.Range("D1"), Header:=xlNo End With Next i ``` Upvotes: 2 <issue_comment>username_2: **Yes you can speed up this code a lot using arrays**: the code below is approx 20 times faster. ``` Sub Optimize_PL2() ' Add an error handler On Error GoTo ErrorHandler ' Speed up sub-routine by turning off screen updating and auto calculating until the end of the sub-routine Application.ScreenUpdating = False Application.Calculation = xlCalculationManual ' Define variable names and types Dim i As Long Dim j As Long Dim k As Long Dim hLastRow As Long Dim lLastRow As Long Dim varData As Variant Dim dData() As Double Dim dResult() As Double Dim jRow() As Long Dim dThisDist As Double Dim dSmallDist As Double Dim jSmallRow As Long ' Find number of populated cells in Column H and Column L (not including the 5 column header rows) hLastRow = Cells(Rows.Count, "H").End(xlUp).Row - 5 lLastRow = Cells(Rows.Count, "L").End(xlUp).Row - 5 ' Check for existing Point List Data to avoid error If hLastRow < 2 Then MsgBox "Not enough data points are available to optimize." 
& vbNewLine & _ "" & vbNewLine & _ "Column H populated rows: " & hLastRow, vbInformation, "Error Message" GoTo ErrorHandler ElseIf lLastRow < 2 Then MsgBox "Original sort order row numbers not available in Column L," & vbNewLine & _ "" & vbNewLine & _ "Original sort order canot be restored without Row # data." & vbNewLine & _ "Column L populated rows: " & lLastRow, vbInformation, "Error Message" Err.Number = 0 GoTo ErrorHandler ElseIf hLastRow <> lLastRow Then MsgBox "The number of rows of coordinate data does not match the" & vbNewLine & _ "number of rows in the Row # column. There is no way to" & vbNewLine & _ "restore the original sort order." & vbNewLine & _ "" & vbNewLine & _ "Column H populated rows: " & hLastRow & vbNewLine & _ "Column L populated rows: " & lLastRow, vbInformation, "Error Message" Err.Number = 0 GoTo ErrorHandler End If On Error GoTo 0 ' Timer Start (calculate the length of time this VBA code takes to complete) StartTime = Timer varData = Worksheets("Optimizer").Range("H6").Resize(hLastRow, 5).Value2 ReDim dResult(1 To hLastRow, 1 To 5) As Double ReDim dData(1 To hLastRow, 1 To 5) As Double ' ' copy vardata into data coercing to double ' (repeated arithmetic is faster on doubles than variants) ' For j = LBound(varData) To UBound(varData) For k = LBound(varData, 2) To UBound(varData, 2) dData(j, k) = CDbl(varData(j, k)) If j = 1 Then dResult(j, k) = dData(j, k) End If Next k Next j ' ' look for shortest distance row ' For i = LBound(dResult) To UBound(dResult) - 1 ' ' calc distance from this row to all remaining rows and find shortest ' jSmallRow = -1 dSmallDist = 1 * 10 ^ 307 For j = 2 To UBound(dData) If dData(j, 3) > -1 And j <> i Then dThisDist = Sqr((dResult(i, 1) - dData(j, 1)) ^ 2 + (dResult(i, 2) - dData(j, 2)) ^ 2) If dThisDist < dSmallDist Then jSmallRow = j dSmallDist = dThisDist End If End If Next j ' ' copy jsmallrow row to i+1 ' If jSmallRow > -1 Then For k = 1 To 2 dResult(i + 1, k) = dData(jSmallRow, k) Next k dResult(i + 1, 
4) = dSmallDist dResult(i + 1, 5) = jSmallRow ' ' set smallrow so it does not get used again ' dData(jSmallRow, 3) = -1 End If Next i ' ' put data back on sheet ' Worksheets("Optimizer").Range("H6").Resize(hLastRow, 5).Value2 = dResult ' Timer Stop (calculate the length of time this VBA code took to complete) SecondsElapsed = Round(Timer - StartTime, 2) ' Turn screen updating and auto calculating back on since file processing is now complete Application.ScreenUpdating = True Application.Calculation = xlCalculationAutomatic ' Message to report VBA code processing time after file selection and number of data rows imported MsgBox "Calculated optimized travel path between coordinates in " & vbNewLine & _ "" & vbNewLine & _ " " & SecondsElapsed & " seconds" ' Reset to defaults in the event of a processing error during the sub-routine execution ErrorHandler: Application.StatusBar = True Application.ScreenUpdating = True Application.Calculation = xlCalculationAutomatic If Err.Number <> 0 Then ' Display a message to the user including the error code in the event of an error during execution MsgBox "An error number " & Err.Number & " was encountered!" & vbNewLine & _ "Part or all of this VBA code was not completed.", vbInformation, "Error Message" End If End Sub ``` Upvotes: 4 [selected_answer]
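The accepted answer's in-memory loop is a greedy nearest-neighbor ordering: start at a point, then repeatedly jump to the closest unvisited point. The same idea in Python (an illustrative sketch, not a translation of the VBA) makes the O(n^2) structure easy to see:

```python
import math

def greedy_order(points):
    """Order points by repeatedly visiting the nearest unvisited one.
    O(n^2) like the VBA version; a k-d tree can do better for huge sets."""
    remaining = list(points)
    path = [remaining.pop(0)]          # start from the first point
    while remaining:
        last = path[-1]
        nearest = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nearest)
        path.append(nearest)
    return path

pts = [(0, 0), (5, 5), (1, 0), (1, 1), (6, 5)]
print(greedy_order(pts))  # [(0, 0), (1, 0), (1, 1), (5, 5), (6, 5)]
```

Working entirely on an in-memory list instead of re-sorting a worksheet range each pass is exactly why the array version above is so much faster.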
2018/03/19
792
3,188
<issue_start>username_0: I am in the process of finishing up a Windows 10 IOT Core app to be deployed on Raspberry Pi 3 devices for a single location beta test. The app is consumer facing and reads from a sensor so I am sure there are going to be at least 1 or 2 updates per week as we perfect the app. Everything I am reading says you can only achieve app updates from Windows Device Portal, Windows Store, or Azure. 1. Windows Device Portal works fine for local management/deployment right now but won't work remotely because the devices will eventually be installed all throughout the country behind different firewalls, etc. We don't want to have to keep up with customer firewall port forwarding. 2. Windows Store probably won't work because we don't want to have to wait for up to 48 hours for update approvals. Also not sure if Windows 10 IOT fully supports Windows Store yet. 3. I have not figured out how Azure IOT Hub manages app updates but I would really like to avoid having to use Azure if possible since we prefer to keep everything on our own servers and not pay cloud providers for stuff we can do on our own servers. Has anyone figured out how to run a background service to download an app update and apply it? Am I just going to have to deal with Azure IOT Hub?<issue_comment>username_1: There is another option: **[Using OMA-DM](https://learn.microsoft.com/en-us/windows-hardware/service/iot/updating-iot-core-apps#using-oma-dm)**. > > Using OMA-DM: The app is updated using an OMA-DM compliant device > management channel such as Intune or System Center Configuration > Manager (SCCM) > > > The OMA-DM interface is supported in Windows 10 IoT Core and any > OMA-DM compliant management solution can be used to install and update > applications. Read the documentation for [EnterpriseModernAppManagement > CSP](https://learn.microsoft.com/zh-cn/windows/client-management/mdm/enterprisemodernappmanagement-csp) for usage instructions.
> > > **NOTE: Some information of EnterpriseModernAppManagement CSP relates to prereleased product which may be substantially modified before it's commercially released.** Upvotes: 1 <issue_comment>username_2: You do need a type of (online/internet) service to connect to your IoT Core devices. [Reference](https://learn.microsoft.com/en-us/windows-hardware/service/iot/updating-iot-core-apps): 1. Microsoft Store: The app is published and updated from the Microsoft Store 2. Using Component Update Service: The app is published to Windows Update and updated like any other OEM package (driver package) **Coming soon** 3. Using Azure IoT Device Management: The app is published to Azure Storage and updated through the Azure DM channel New for Windows 10, version 1709 4. Using OMA-DM: The app is updated using an OMA-DM compliant device management channel such as Intune or System Center Configuration Manager (SCCM) With Store, you'll be using the Microsoft Store service; Azure IoT Device Management will be using Azure IoT hub; OMA-DM can use Intune/SCCM or *your own infrastructure ([OMA-DM Server](https://learn.microsoft.com/en-us/windows/client-management/mdm/server-requirements-windows-mdm))* Upvotes: 0
2018/03/19
365
1,240
<issue_start>username_0: I have a subproject which is built into a jar file. There is `src/main/resources/db/migration/V1_1__create_table.sql` in the project. How do I run the migration from this jar?

I tried this: `./flyway migrate -url=jdbc:postgresql://127.0.0.1:5432/myproject -user=myproject -password=<PASSWORD> -locations=classpath:~/src/myproject/target/myproject-0.0.1-SNAPSHOT.jar`, but I got the following error:

```
Database: jdbc:postgresql://127.0.0.1:5432/myproject (PostgreSQL 9.6)
WARNING: Unable to resolve location classpath:~/src/myproject/target/myprojectn-0/0/1-SNAPSHOT/jar
WARNING: Schema "public" has version 0.0.6, but no migration could be resolved in the configured locations !
```

How do I run the migration from the jar? Thanks.<issue_comment>username_1: You have to place your jar in the `/jars` directory and use `-locations=classpath:db/migration`. See <https://flywaydb.org/documentation/commandline/#installation> Upvotes: 2 <issue_comment>username_2: In my case the migrations were in the `BOOT-INF/classes/sql/migrations` folder of the jar file and Flyway couldn't find them with `-locations=classpath:sql/migrations`. But it can reach them with the full path, like `-locations=classpath:BOOT-INF/classes/sql/migrations`. Upvotes: 0
2018/03/19
331
1,365
<issue_start>username_0: I'm building a Spring Boot project with Gradle migrating it from Maven. I wonder what Gradle task is doing the same thing as the `package` phase in Maven. Thank you!<issue_comment>username_1: You can use `gradle assemble` or `gradle build`. Upvotes: 2 <issue_comment>username_2: Maven build is based on build cycle phases, Gradle build is based on tasks and tasks dependencies. Maven package phase can execute multiple plugin goals that is configured up to the package phase in the [lifecycle](https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html). The same thing can be achieve with gradle using tasks(and maybe creating task dependencies to integrate them in to default build) If you only care about running unit tests and creating jar file; gradle way would be `gradle build`. If other actions are also concern in the maven package phase, additional gradle tasks should be added. Upvotes: 4 [selected_answer]<issue_comment>username_3: See the diagram from the java plugin documentation [here](https://docs.gradle.org/current/userguide/java_plugin.html#sec:java_tasks) ![java plugin tasks](https://docs.gradle.org/current/userguide/img/javaPluginTasks.png) As username_1 said: * If you want to run tests you'd run `gradle build` * If you just want to build the jar you'd run `gradle assemble` Upvotes: 3
2018/03/19
432
1,526
<issue_start>username_0: We have an angular 5 project which has a problem with size. On load of the app, when there isn't much of anything, we have 95MB in memory. I'm trying to analyze it and use `webpack-bundle-analyzer`. I've read that adding `new webpack.optimize.ModuleConcatenationPlugin()` into `webpack.config` should help, so I did and it helped (from 95 to 76MB). But when I run build with stats after that `npm run build:stats > stats.json` (`"build:stats": "webpack --profile --json --config webpack.config.buildserver.js"`) I get this message **FATAL ERROR: CALL\_AND\_RETRY\_LAST Allocation failed - JavaScript heap out of memory** Before adding the plugin, the `stats.json` file had around 390 000 lines of json object. I found that there is a memory flag for node scripts `--max_new_space_size` but I can't find anything similar for webpack script. I'm not sure what to look for, could anyone help please?<issue_comment>username_1: It's a known bug, you can check this [issue](https://github.com/angular/angular-cli/issues/5618) for more details and also try to install [this npm package](https://www.npmjs.com/package/increase-memory-limit) and run before you build and after npm install. Upvotes: 3 [selected_answer]<issue_comment>username_2: To set Node.js flags one can use [`NODE_OPTIONS` env variable](https://nodejs.org/api/cli.html#node_optionsoptions). For example, building stats in webpack version 4: ```sh NODE_OPTIONS="--max-old-space-size=4096" webpack --json > stats.json ``` Upvotes: 1
2018/03/19
857
2,614
<issue_start>username_0: How to iterate over 2 loops in a List using a Java Stream?

```
public class ArrayStreams {

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        list.add(1);
        list.add(3);
        list.add(5);
        list.add(7);
        list.add(2);

        for (int i = 0; i < list.size(); i++) {
            for (int j = i + 1; j < list.size(); j++) {
                System.out.println("i :" + list.get(i) + "J :" + list.get(j));
            }
        }
    }
}
```

How can I convert this code into a Java Stream? Please help!<issue_comment>username_1: 

> How can I convert this code into a Java Stream?

You should not use Streams, for at least two reasons:

* you don't iterate over all elements in the second loop, so you would have to skip the first elements in the inner loop.
* and **above all** you use indexes of the list in your `println()`. Streams are not designed to maintain the index of the streamed elements.

Upvotes: 3 <issue_comment>username_2: The simplest approach is a 1:1 translation of the loop

```
IntStream.range(0, list.size())
         .forEach(i -> IntStream.range(i+1, list.size())
             .forEach(j -> System.out.println("i :"+list.get(i)+"J :"+list.get(j))));
```

You could also use

```
IntStream.range(0, list.size())
         .forEach(i -> list.subList(i+1, list.size())
             .forEach(o -> System.out.println("i :"+list.get(i)+"J :"+o)));
```

which would be the equivalent of

```
for(int i = 0; i < list.size(); i++) {
    for(Integer o: list.subList(i + 1, list.size())) {
        System.out.println("i :" + list.get(i) + "J :" + o);
    }
}
```

though it would be better to do

```
for(int i = 0; i < list.size(); i++) {
    Integer o = list.get(i);
    String prefix = "i :" + o + "J :";
    for(Integer p: list.subList(i + 1, list.size())) {
        System.out.println(prefix + p);
    }
}
```

reducing the redundant work.
A more declarative approach is

```
IntStream.range(0, list.size()).boxed()
         .flatMap(i -> IntStream.range(i+1, list.size())
             .mapToObj(j -> ("i :"+list.get(i)+"J :"+list.get(j))))
         .forEach(System.out::println);
```

Unfortunately, the alternative with the reduced redundant work can't be expressed as a Stream operation easily, due to the lack of a simple-to-use pair type. One solution would be:

```
IntStream.range(0, list.size())
         .mapToObj(i -> new Object(){ int index=i; String prefix="i :"+list.get(i)+"J :";})
         .flatMap( p -> list.subList(p.index+1, list.size()).stream().map(o -> p.prefix+o))
         .forEach(System.out::println);
```

Obviously, that's not more readable than the nested `for` loops…

Upvotes: 2
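As an aside, the `i < j` traversal in the question is the classic "combinations of size 2" pattern; the sketch below shows the same enumeration in Python (chosen here purely for illustration, not as part of the Java answer), where the standard library provides it directly.

```python
# Enumerating every unordered pair of distinct positions, as the nested
# loops in the question do, via itertools.combinations.
from itertools import combinations

data = [1, 3, 5, 7, 2]
pairs = list(combinations(data, 2))

# 5 elements yield 5 * 4 / 2 = 10 pairs, matching the loop's print count.
print(len(pairs))  # 10
print(pairs[0])    # (1, 3)
```

The pairs come out in the same order the nested loops visit them: first all pairs starting at index 0, then index 1, and so on.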
2018/03/19
808
2,862
<issue_start>username_0: Well, I have objects in array named **data**: ``` [ { title: 'Title', broadcast: true }, { title: 'Title', broadcast: false } ] ``` On one page I want to show only ones with `broadcast: true` and I want to use a mixin call for that. My mixin: ``` mixin techs(condition) - var data = trs.tech.data; ul.techs each item in data if condition li .h2= item.title ``` And my mixin call: ``` +techs('item.broadcast') ``` But (of course) this thing doesn't work as I want to. It shows all objects in array. Is there any way to get result I expect without writing condition into mixin?<issue_comment>username_1: I see multiple issues with your code. Your mixin definition is `techs` but you are trying to call `tech`. Secondly the indentation is incorrect after the mixin declaration. Also, the array should be passed as an object with an identifier. Therefore, consider restructuring your JSON onto, ``` { "tempArrayName": [ { "title": "Title1", "broadcast": true }, { "title": "Title2", "broadcast": false } ] } ``` And your JADE/PUG could be rewritten as, ``` mixin techs - var data = tempArrayName; ul.techs each item in data if item.broadcast li .h2= item.title +techs ``` Where `+techs` is the mixin call which can be reused in multiple places. It checks for the condition usin `broadcast` value (hope that is what you are trying to achieve) and prints, ``` * Title1 ``` Tested using - <http://naltatis.github.io/jade-syntax-docs> Hope this helps. Upvotes: 0 <issue_comment>username_2: From my point of view, regarding this given problem, the mixin should not at all contain any additional logic connected to the data it receives. It instead should be a straightforward render method that iterates a list. Thus, in this case, the render method exclusively processes a list of already filtered/sanitized/proven data items, passed as this method's sole argument. 
```js // running, js only, demo code var techList = [{ title: 'Title', broadcast: true }, { title: 'Title', broadcast: false }]; function isMarkedForBroadcast(type/*, idx, list*/) { return (type.broadcast === true); } var broadcastItemList = techList.filter(isMarkedForBroadcast); console.log('techList : ', techList); console.log('broadcastItemList : ', broadcastItemList); ``` ```css .as-console-wrapper { max-height: 100%!important; top: 0; } ``` ```html //- pug/view mixin renderTechList(list) ul.techs each item in list li .h2= item.title - function isMarkedForBroadcast(type/*, idx, list*/) { return (type.broadcast === true); } +renderTechList(trs.tech.data.filter(isMarkedForBroadcast)) ``` Upvotes: 2 [selected_answer]
2018/03/19
559
1,738
<issue_start>username_0: I have the following query that returns the results I am looking for, except for the fact that I am not able to get GROUP BY to work:

```
select EXTRACT(month from datetime_insert) m,
       EXTRACT(year from datetime_insert) y
from tst_result_main
order by datetime_insert
```

This returns the following:

```
m  y
1  2006
1  2006
2  2006
2  2006
2  2007
2  2007
```

Results I am looking for:

```
m  y
1  2006
2  2006
2  2007
```

I have tried the following, which throws an error:

```
select EXTRACT(month from datetime_insert) m,
       EXTRACT(year from datetime_insert) y
from tst_result_main
order by datetime_insert
group by y, m
```

error:

```
ORA-00933: SQL command not properly ended
00933. 00000 -  "SQL command not properly ended"
*Cause:
*Action:
Error at Line: 7 Column: 136
```

Any help would be appreciated<issue_comment>username_1: Did you try something like DISTINCT values:

```
select DISTINCT EXTRACT(month from datetime_insert) m,
       EXTRACT(year from datetime_insert) y
from tst_result_main
```

Upvotes: 3 <issue_comment>username_2: Try this:

```
select EXTRACT(month from datetime_insert) m,
       EXTRACT(year from datetime_insert) y
from tst_result_main
group by EXTRACT(year from datetime_insert), EXTRACT(month from datetime_insert)
order by y, m;
```

An alias can not be used for a `group by` expression. [Demo](http://sqlfiddle.com/#!4/d5caa/2) Upvotes: 1 [selected_answer]<issue_comment>username_3: Having no aggregation function, you should use a DISTINCT clause and not GROUP BY:

```
select DISTINCT EXTRACT(month from datetime_insert) m,
       EXTRACT(year from datetime_insert) y
from tst_result_main
order by y, m
```

Upvotes: 2
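For readers hitting the same month/year de-duplication problem from Python rather than Oracle SQL, here is a sketch of the equivalent of `SELECT DISTINCT ... ORDER BY y, m` with pandas (an aside for illustration only; column and frame names are invented for the example).

```python
import pandas as pd

# Sample data mirroring the rows in the question.
df = pd.DataFrame({"datetime_insert": pd.to_datetime([
    "2006-01-05", "2006-01-20", "2006-02-01",
    "2006-02-15", "2007-02-03", "2007-02-28",
])})

# Extract month/year, drop duplicate (m, y) pairs, then order by year, month.
out = (df.assign(m=df.datetime_insert.dt.month,
                 y=df.datetime_insert.dt.year)
         [["m", "y"]]
         .drop_duplicates()
         .sort_values(["y", "m"]))

print(out.values.tolist())  # [[1, 2006], [2, 2006], [2, 2007]]
```

`drop_duplicates` plays the role of `DISTINCT` here, and `sort_values` the role of `ORDER BY`.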
2018/03/19
496
1,588
<issue_start>username_0: Fiddle: <https://jsfiddle.net/uafaLstf/1/>

I have the following:

```
$(".span1").html("Warm Welcome To: " + " ");

arr = [];
arr.push(["john", "jdoe", "very nice"]);
arr.push(["mike", "mdone", "job well"]);
arr.push(["haan", "hgore", "creative"]);

var interval = 1000;
for (var f = 0; f < arr.length; f++) {
    SendWishes(arr[f][0]);
}

function SendWishes(name) {
    setTimeout(function () {
        $(".span1").html($(".span1").html() + name + " ");
    }, 1000);
}
```

What I am trying to do is display each name after every 1 second... however in my code the entire list is displayed without a delay for each entry. I think I am pretty close and missing something. Can someone please help with my code?
2018/03/19
438
1,364
<issue_start>username_0: 

```
^\$?(\d{1,3},?(\d{3},?)*\d{3}(\.\d{0,2})?|\d{1,3}(\.\d{0,2})?|\.\d{1,2}?)$
```

I actually found this to help me validate a dollar amount. The problem is that I want to limit the amount it validates to between 0$ and 99.99$. Also, amounts like 01.20 and 10.1 are not acceptable, but 1.20$ and 10.10 are. Is there something I could modify in this regex? Also, this is for use in my PHP code; I know I need to put one more backslash in the regex to make it work in PHP. Thanks.
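One possible modification (a sketch, not a definitive answer): restrict the integer part to 0–99 with no leading zero, and make the cents group exactly two digits. The check below is demonstrated in Python purely for convenience; in PHP the same pattern would go into `preg_match`, with the backslashes escaped as the asker notes, and the `$` kept in the leading position the original pattern used.

```python
import re

# 0-99 integer part without a leading zero, optional exactly-two-digit cents,
# optional leading dollar sign (mirroring the original pattern's `\$?`).
AMOUNT = re.compile(r"^\$?(0|[1-9]\d?)(\.\d{2})?$")

ok = ["0", "1.20", "10.10", "99.99", "$5.25"]
bad = ["01.20", "10.1", "100", "100.00", "99.999"]

print([bool(AMOUNT.match(s)) for s in ok])   # [True, True, True, True, True]
print([bool(AMOUNT.match(s)) for s in bad])  # [False, False, False, False, False]
```

The `(0|[1-9]\d?)` alternation is what rejects `01.20`: a value may start with `0` only when the whole integer part is `0`.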
2018/03/19
1,031
3,785
<issue_start>username_0: I'm unable to correctly define a generic method when using covariance on a generic class, if this is at all possible the way I intend to. I'll best explain the issue at hand by example. Say we have the following setup of interfaces for cars

```
interface Car { ... }
interface SportsCar extends Car { ... }
```

and such generic interfaces for car vendors returning a `Sale` object

```
interface CarVendor<C extends Car> {
    Sale sell(C car);
}

interface SportsCarVendor extends CarVendor<SportsCar> {
    @Override
    Sale sell(SportsCar car);
}
```

Let's now suppose we want our cars to be generic, e.g. regarding fuel type:

```
interface Car { ... }
interface SportsCar extends Car { ... }
class PetrolSportsCar extends SportsCar { ... }
class DieselSportsCar extends SportsCar { ... }
```

We run into problems when redefining our vendor interfaces if we want them to be able to sell cars for any kind of fuel. A generic method seems to be the answer; however, I'm unable to correctly define it since the generic `Car` is defined on the class but the generic `FuelType` should be defined on the method. To get the idea:

```
interface CarVendor> {
    Sale> sell(Car param);
}

interface SportsCarVendor extends CarVendor> {
    @Override
    Sale> sell(SportsCar param);
}
```

`SportsCarVendor` obviously doesn't compile since the signature `sell(SportsCar)` doesn't match with the expected type `SportsCar`. Can anybody offer a viable solution for this problem?<issue_comment>username_1: From what you have provided so far, I don't see any reason why most of your interfaces and classes *should* be generic, and I see reasons why most should not be generic.

`FuelType` sounds like it should be an *attribute*, not a type parameter, of a `Car`. Perhaps it could be declared as an enum, depending on your exact requirements.

```
enum FuelType {
    PETROL, DIESEL;
}

public class Car {
    private FuelType fuelType;
    // rest of implementation
}
```

Similarly, `Car` should be an attribute, not a type parameter, of a `Sale`.
```
public class Sale {
    private Car sold;
    // rest of implementation
}
```

You may still need `SportsCarVendor` to be generic so that you can narrow the type of `car` the implementation class can sell, but the `Sale` the `sell` method returns still doesn't need to be generic.

```
interface CarVendor<C extends Car> {
    Sale sell(C car);
}

interface SportsCarVendor extends CarVendor<SportsCar> {
    @Override
    Sale sell(SportsCar car);
}
```

Additionally, if you happen to need a specific subclass of `Sale`, e.g. `SportsCarSale`, then you can use return-type covariance, which is the ability for a subclass to narrow the return type without generics:

```
interface SportsCarVendor extends CarVendor<SportsCar> {
    @Override
    SportsCarSale sell(SportsCar car);
}
```

Upvotes: 2 <issue_comment>username_2: If you really have to handle fuel type separately for each car vendor (maybe tax reasons?), here is how you define your car vendor to allow this:

```
interface CarVendor> {
    Sale sell(C param);
}

interface SportsCarVendor extends CarVendor> {
}
```

Now for the concrete implementations:

```
class SportsCarVendorDiesel implements SportsCarVendor {
    @Override
    public Sale> sell(SportsCar param) {
        return null;
    }
}

class SportsCarVendorGas implements SportsCarVendor {
    @Override
    public Sale> sell(SportsCar param) {
        return null;
    }
}
```

The rest of the classes I used are as so:

```
interface FuelType {
    double burnRate();
}

class DieselFuel implements FuelType {
    @Override
    public double burnRate() {
        return 0;
    }
}

class GasolineFuel implements FuelType {
    @Override
    public double burnRate() {
        return 0;
    }
}

interface Car { }
interface SportsCar extends Car { }
class Sale { }
```

Upvotes: 0
2018/03/19
818
3,299
<issue_start>username_0: I am trying to run `php artisan migrate` to create my MySQL tables using Laravel. I got this error: "Foreign key constraint is incorrectly formed".

Users table:

```
Schema::create('users', function (Blueprint $table) {
    $table->increments('id');
    $table->string('name');
    $table->string('street');
    $table->string('city');
    $table->string('phone');
    $table->string('email')->unique();
    $table->string('password');
    $table->rememberToken();
    $table->timestamps();
});
```

password_resets table:

```
Schema::create('password_resets', function (Blueprint $table) {
    $table->string('email')->index();
    $table->string('token');
    $table->timestamp('created_at')->nullable();
});
```

products table:

```
Schema::create('products', function (Blueprint $table) {
    $table->increments('id');
    $table->string('product_type');
    $table->integer('quantity');
    $table->timestamps();
});
```

Shipments table:

```
Schema::create('shipments', function (Blueprint $table) {
    $table->increments('id');
    $table->integer('order_number')->unsigned();
    $table->integer('product_id')->unsigned();
    $table->foreign('order_number')->references('id')->on('orders');
    $table->foreign('product_id')->references('id')->on('products');
    $table->dateTime('chargecardtime');
    $table->dateTime('packingtime');
    $table->date('shiporderdate');
    $table->timestamps();
});
```

Orders table:

```
Schema::create('orders', function (Blueprint $table) {
    $table->increments('id');
    $table->integer('customer_id')->unsigned();
    $table->integer('product_id')->unsigned();
    $table->foreign('customer_id')->references('id')->on('users');
    $table->foreign('product_id')->references('id')->on('products');
    $table->string('name');
    $table->string('to_street');
    $table->string('to_city');
    $table->date('ship_date');
    $table->string('phone');
    $table->timestamps();
});
```

Exception trace:

```
1 PDOException::("SQLSTATE[HY000]: General error: 1005 Can't create table `ec`.`#sql-3664_86` (errno: 150 "Foreign key constraint is incorrectly formed")")
```

I guess there is a problem with the orders table, since after the error I can't see that table in the database; the others are created.<issue_comment>username_1: Since you're referencing an `id`, you need to make the foreign key column `unsigned`, as ids are by default unsigned (non-negative). So do this for all your foreign keys:

```
$table->integer('product_id')->unsigned();
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: You should add the `unsigned()` method to your foreign key columns, like `$table->integer('product_id')->unsigned()`, because they must have exactly the same column type as the key they reference. The referenced keys are probably unsigned integers, which is why you got the error. In MySQL, signed and unsigned integers are different types. Upvotes: 0 <issue_comment>username_3: Instead of

```
$table->integer('product_id')->unsigned();
```

try this:

```
$table->unsignedBigInteger('product_id');
```

Upvotes: 0
2018/03/19
8,169
29,125
<issue_start>username_0: I have a `pandas.Series` containing integers, but I need to convert these to strings for some downstream tools. So suppose I had a `Series` object:

```
import numpy as np
import pandas as pd

x = pd.Series(np.random.randint(0, 100, 1000000))
```

On StackOverflow and other websites, I've seen most people argue that the best way to do this is:

```
%% timeit
x = x.astype(str)
```

This takes about 2 seconds. When I use `x = x.apply(str)`, it only takes 0.2 seconds. Why is `x.astype(str)` so slow? Should the recommended way be `x.apply(str)`? I'm mainly interested in python 3's behavior for this.<issue_comment>username_1: **Performance**

It's worth looking at actual performance before beginning any investigation since, contrary to popular opinion, `list(map(str, x))` appears to be *slower* than `x.apply(str)`.

```
import pandas as pd, numpy as np

### Versions: Pandas 0.20.3, Numpy 1.13.1, Python 3.6.2 ###

x = pd.Series(np.random.randint(0, 100, 100000))

%timeit x.apply(str)          # 42ms   (1)
%timeit x.map(str)            # 42ms   (2)
%timeit x.astype(str)         # 559ms  (3)
%timeit [str(i) for i in x]   # 566ms  (4)
%timeit list(map(str, x))     # 536ms  (5)
%timeit x.values.astype(str)  # 25ms   (6)
```

Points worth noting:

1. (5) is marginally quicker than (3) / (4), which we expect as more work is moved into C [assuming no `lambda` function is used].
2. (6) is by far the fastest.
3. (1) / (2) are similar.
4. (3) / (4) are similar.
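One caveat on (6) that is easy to check directly (a sketch assuming only pandas and NumPy are installed): it does not return the same kind of object as the other options. NumPy produces a fixed-width unicode array, while the pandas methods produce a Series whose elements are Python strings.

```python
import numpy as np
import pandas as pd

x = pd.Series(np.random.randint(0, 100, 1000))

# Each element is a Python str (object dtype in the pandas versions discussed here).
as_series = x.astype(str)

# NumPy fixed-width unicode array, e.g. dtype('<U2') for two-digit ints.
as_array = x.values.astype(str)

print(type(as_series.iloc[0]).__name__)  # str
print(as_array.dtype.kind)               # U
```

So whether (6) is a fair comparison depends on whether downstream code can accept a NumPy unicode array instead of a Series.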
**Why is x.map / x.apply fast?**

This *appears to be* because it uses fast [compiled Cython code](https://github.com/pandas-dev/pandas/blob/699a48bcd71da54da05caee85e5d006afabc3df6/pandas/_libs/lib.pyx#L473-L483):

```
cpdef ndarray[object] astype_str(ndarray arr):
    cdef:
        Py_ssize_t i, n = arr.size
        ndarray[object] result = np.empty(n, dtype=object)

    for i in range(n):
        # we can use the unsafe version because we know `result` is mutable
        # since it was created from `np.empty`
        util.set_value_at_unsafe(result, i, str(arr[i]))

    return result
```

**Why is x.astype(str) slow?**

Pandas applies `str` to each item in the series, not using the above Cython. Hence performance is comparable to `[str(i) for i in x]` / `list(map(str, x))`.

**Why is x.values.astype(str) so fast?**

Numpy does not apply a function on each element of the array. [One description](https://github.com/pandas-dev/pandas/issues/8732) of this I found:

> If you did `s.values.astype(str)` what you get back is an object holding `int`. This is `numpy` doing the conversion, whereas pandas iterates over each item and calls `str(item)` on it. So if you do `s.astype(str)` you have an object holding `str`.

There is a technical reason [why the numpy version hasn't been implemented](https://github.com/pandas-dev/pandas/pull/8971) in the case of no-nulls.

Upvotes: 5 [selected_answer]<issue_comment>username_2: Let's begin with a bit of general advice: if you're interested in finding the bottlenecks of Python code, you can use a profiler to find the functions/parts that eat up most of the time. In this case I use a line-profiler because you can actually see the implementation and the time spent on each line. However, these tools don't work with C or Cython by default. Given that CPython (that's the Python interpreter I'm using), NumPy and pandas make heavy use of C and Cython, there will be a limit to how far I'll get with profiling.
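If `line_profiler` is not available, the standard library's `cProfile` gives a coarser, function-level starting point for the same investigation (a sketch; it shows which functions dominate cumulative time, not which individual lines do).

```python
import cProfile
import io
import pstats

import numpy as np
import pandas as pd

x = pd.Series(np.random.randint(0, 100, 100000))

# Profile only the call under investigation.
profiler = cProfile.Profile()
profiler.enable()
x.astype(str)
profiler.disable()

# Render the top entries sorted by cumulative time into a string.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(10)
report = buffer.getvalue()

# The astype machinery should dominate the cumulative-time ranking.
print("astype" in report)
```

This won't see inside the Cython/C frames either, but it confirms which Python-level entry point to hand to a line profiler next.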
Actually: one probably could extend profiling to the Cython code and probably also the C code by recompiling it with debug symbols and tracing, however it's not an easy task to compile these libraries so I won't do that (but if someone likes to do that the [Cython documentation includes a page about profiling Cython code](http://cython.readthedocs.io/en/latest/src/tutorial/profiling_tutorial.html)). But let's see how far I can get: Line-Profiling Python code -------------------------- I'm going to use [line-profiler](https://github.com/rkern/line_profiler) and a Jupyter Notebook here: ``` %load_ext line_profiler import numpy as np import pandas as pd x = pd.Series(np.random.randint(0, 100, 100000)) ``` ### Profiling `x.astype` ``` %lprun -f x.astype x.astype(str) ``` ```none Line # Hits Time Per Hit % Time Line Contents ============================================================== 87 @wraps(func) 88 def wrapper(*args, **kwargs): 89 1 12 12.0 0.0 old_arg_value = kwargs.pop(old_arg_name, None) 90 1 5 5.0 0.0 if old_arg_value is not None: 91 if mapping is not None: ... 118 1 663354 663354.0 100.0 return func(*args, **kwargs) ``` So that's simply a decorator and 100% of the time is spent in the decorated function. So let's profile the decorated function: ``` %lprun -f x.astype.__wrapped__ x.astype(str) ``` ```none Line # Hits Time Per Hit % Time Line Contents ============================================================== 3896 @deprecate_kwarg(old_arg_name='raise_on_error', new_arg_name='errors', 3897 mapping={True: 'raise', False: 'ignore'}) 3898 def astype(self, dtype, copy=True, errors='raise', **kwargs): 3899 """ ... 3975 """ 3976 1 28 28.0 0.0 if is_dict_like(dtype): 3977 if self.ndim == 1: # i.e. Series ... 
4001 4002 # else, only a single dtype is given 4003 1 14 14.0 0.0 new_data = self._data.astype(dtype=dtype, copy=copy, errors=errors, 4004 1 685863 685863.0 99.9 **kwargs) 4005 1 340 340.0 0.0 return self._constructor(new_data).__finalize__(self) ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/core/generic.py#L3896-L4005) Again one line is the bottleneck so let's check the `_data.astype` method: ``` %lprun -f x._data.astype x.astype(str) ``` ```none Line # Hits Time Per Hit % Time Line Contents ============================================================== 3461 def astype(self, dtype, **kwargs): 3462 1 695866 695866.0 100.0 return self.apply('astype', dtype=dtype, **kwargs) ``` Okay, another delegate, let's see what `_data.apply` does: ``` %lprun -f x._data.apply x.astype(str) ``` ```none Line # Hits Time Per Hit % Time Line Contents ============================================================== 3251 def apply(self, f, axes=None, filter=None, do_integrity_check=False, 3252 consolidate=True, **kwargs): 3253 """ ... 3271 """ 3272 3273 1 12 12.0 0.0 result_blocks = [] ... 3309 3310 1 10 10.0 0.0 aligned_args = dict((k, kwargs[k]) 3311 1 29 29.0 0.0 for k in align_keys 3312 if hasattr(kwargs[k], 'reindex_axis')) 3313 3314 2 28 14.0 0.0 for b in self.blocks: ... 3329 1 674974 674974.0 100.0 applied = getattr(b, f)(**kwargs) 3330 1 30 30.0 0.0 result_blocks = _extend_blocks(applied, result_blocks) 3331 3332 1 10 10.0 0.0 if len(result_blocks) == 0: 3333 return self.make_empty(axes or self.axes) 3334 1 10 10.0 0.0 bm = self.__class__(result_blocks, axes or self.axes, 3335 1 76 76.0 0.0 do_integrity_check=do_integrity_check) 3336 1 13 13.0 0.0 bm._consolidate_inplace() 3337 1 7 7.0 0.0 return bm ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/core/internals.py#L3251-L3337) And again ... 
one function call is taking all the time, this time it's `x._data.blocks[0].astype`: ``` %lprun -f x._data.blocks[0].astype x.astype(str) ``` ```none Line # Hits Time Per Hit % Time Line Contents ============================================================== 542 def astype(self, dtype, copy=False, errors='raise', values=None, **kwargs): 543 1 18 18.0 0.0 return self._astype(dtype, copy=copy, errors=errors, values=values, 544 1 671092 671092.0 100.0 **kwargs) ``` .. which is another delegate... ``` %lprun -f x._data.blocks[0]._astype x.astype(str) ``` ```none Line # Hits Time Per Hit % Time Line Contents ============================================================== 546 def _astype(self, dtype, copy=False, errors='raise', values=None, 547 klass=None, mgr=None, **kwargs): 548 """ ... 557 """ 558 1 11 11.0 0.0 errors_legal_values = ('raise', 'ignore') 559 560 1 8 8.0 0.0 if errors not in errors_legal_values: 561 invalid_arg = ("Expected value of kwarg 'errors' to be one of {}. " 562 "Supplied value is '{}'".format( 563 list(errors_legal_values), errors)) 564 raise ValueError(invalid_arg) 565 566 1 23 23.0 0.0 if inspect.isclass(dtype) and issubclass(dtype, ExtensionDtype): 567 msg = ("Expected an instance of {}, but got the class instead. " 568 "Try instantiating 'dtype'.".format(dtype.__name__)) 569 raise TypeError(msg) 570 571 # may need to convert to categorical 572 # this is only called for non-categoricals 573 1 72 72.0 0.0 if self.is_categorical_astype(dtype): ... 595 596 # astype processing 597 1 16 16.0 0.0 dtype = np.dtype(dtype) 598 1 19 19.0 0.0 if self.dtype == dtype: ... 
603 1 8 8.0 0.0 if klass is None: 604 1 13 13.0 0.0 if dtype == np.object_: 605 klass = ObjectBlock 606 1 6 6.0 0.0 try: 607 # force the copy here 608 1 7 7.0 0.0 if values is None: 609 610 1 8 8.0 0.0 if issubclass(dtype.type, 611 1 14 14.0 0.0 (compat.text_type, compat.string_types)): 612 613 # use native type formatting for datetime/tz/timedelta 614 1 15 15.0 0.0 if self.is_datelike: 615 values = self.to_native_types() 616 617 # astype formatting 618 else: 619 1 8 8.0 0.0 values = self.values 620 621 else: 622 values = self.get_values(dtype=dtype) 623 624 # _astype_nansafe works fine with 1-d only 625 1 665777 665777.0 99.9 values = astype_nansafe(values.ravel(), dtype, copy=True) 626 1 32 32.0 0.0 values = values.reshape(self.shape) 627 628 1 17 17.0 0.0 newb = make_block(values, placement=self.mgr_locs, dtype=dtype, 629 1 269 269.0 0.0 klass=klass) 630 except: 631 if errors == 'raise': 632 raise 633 newb = self.copy() if copy else self 634 635 1 8 8.0 0.0 if newb.is_numeric and self.is_numeric: ... 642 1 6 6.0 0.0 return newb ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/core/internals.py#L546-L642) ... okay, still not there. Let's check out `astype_nansafe`: ``` %lprun -f pd.core.internals.astype_nansafe x.astype(str) ``` ```none Line # Hits Time Per Hit % Time Line Contents ============================================================== 640 def astype_nansafe(arr, dtype, copy=True): 641 """ return a view if copy is False, but 642 need to be very careful as the result shape could change! """ 643 1 13 13.0 0.0 if not isinstance(dtype, np.dtype): 644 dtype = pandas_dtype(dtype) 645 646 1 8 8.0 0.0 if issubclass(dtype.type, text_type): 647 # in Py3 that's str, in Py2 that's unicode 648 1 663317 663317.0 100.0 return lib.astype_unicode(arr.ravel()).reshape(arr.shape) ... 
``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/core/dtypes/cast.py#L640-L704) Again one it's one line that takes 100%, so I'll go one function further: ``` %lprun -f pd.core.dtypes.cast.lib.astype_unicode x.astype(str) UserWarning: Could not extract a code object for the object ``` Okay, we found a `built-in function`, that means it's a C function. In this case it's a Cython function. But it means we cannot dig deeper with line-profiler. So I'll stop here for now. ### Profiling `x.apply` ``` %lprun -f x.apply x.apply(str) ``` ```none Line # Hits Time Per Hit % Time Line Contents ============================================================== 2426 def apply(self, func, convert_dtype=True, args=(), **kwds): 2427 """ ... 2523 """ 2524 1 84 84.0 0.0 if len(self) == 0: 2525 return self._constructor(dtype=self.dtype, 2526 index=self.index).__finalize__(self) 2527 2528 # dispatch to agg 2529 1 11 11.0 0.0 if isinstance(func, (list, dict)): 2530 return self.aggregate(func, *args, **kwds) 2531 2532 # if we are a string, try to dispatch 2533 1 12 12.0 0.0 if isinstance(func, compat.string_types): 2534 return self._try_aggregate_string_function(func, *args, **kwds) 2535 2536 # handle ufuncs and lambdas 2537 1 7 7.0 0.0 if kwds or args and not isinstance(func, np.ufunc): 2538 f = lambda x: func(x, *args, **kwds) 2539 else: 2540 1 6 6.0 0.0 f = func 2541 2542 1 154 154.0 0.1 with np.errstate(all='ignore'): 2543 1 11 11.0 0.0 if isinstance(f, np.ufunc): 2544 return f(self) 2545 2546 # row-wise access 2547 1 188 188.0 0.1 if is_extension_type(self.dtype): 2548 mapped = self._values.map(f) 2549 else: 2550 1 6238 6238.0 3.3 values = self.asobject 2551 1 181910 181910.0 95.5 mapped = lib.map_infer(values, f, convert=convert_dtype) 2552 2553 1 28 28.0 0.0 if len(mapped) and isinstance(mapped[0], Series): 2554 from pandas.core.frame import DataFrame 2555 return DataFrame(mapped.tolist(), index=self.index) 2556 else: 2557 1 19 19.0 0.0 return 
self._constructor(mapped, 2558 1 1870 1870.0 1.0 index=self.index).__finalize__(self) ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/core/series.py#L2426-L2558) Again it's one function that takes most of the time: `lib.map_infer` ... ``` %lprun -f pd.core.series.lib.map_infer x.apply(str) ``` ```none Could not extract a code object for the object ``` Okay, that's another Cython function. This time there's another (although less significant) contributor with ~3%: `values = self.asobject`. But I'll ignore this for now, because we're interested in the major contributors. Going into C/Cython ------------------- ### The functions called by `astype` This is the `astype_unicode` function: ``` cpdef ndarray[object] astype_unicode(ndarray arr): cdef: Py_ssize_t i, n = arr.size ndarray[object] result = np.empty(n, dtype=object) for i in range(n): # we can use the unsafe version because we know `result` is mutable # since it was created from `np.empty` util.set_value_at_unsafe(result, i, unicode(arr[i])) return result ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/_libs/lib.pyx#L859-L869) This function uses this helper: ```cython cdef inline set_value_at_unsafe(ndarray arr, object loc, object value): cdef: Py_ssize_t i, sz if is_float_object(loc): casted = int(loc) if casted == loc: loc = casted i = loc sz = cnp.PyArray\_SIZE(arr) if i < 0: i += sz elif i >= sz: raise IndexError('index out of bounds') assign\_value\_1d(arr, i, value) ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/_libs/src/util.pxd#L71-L91) Which itself uses this C function: ```c PANDAS_INLINE int assign_value_1d(PyArrayObject* ap, Py_ssize_t _i, PyObject* v) { npy_intp i = (npy_intp)_i; char* item = (char*)PyArray_DATA(ap) + i * PyArray_STRIDE(ap, 0); return PyArray_DESCR(ap)->f->setitem(v, item, ap); } ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/_libs/src/numpy_helper.h#L56-L61) ### Functions called by `apply` 
This is the implementation of the `map_infer` function: ```cython def map_infer(ndarray arr, object f, bint convert=1): cdef: Py_ssize_t i, n ndarray[object] result object val n = len(arr) result = np.empty(n, dtype=object) for i in range(n): val = f(util.get_value_at(arr, i)) # unbox 0-dim arrays, GH #690 if is_array(val) and PyArray_NDIM(val) == 0: # is there a faster way to unbox? val = val.item() result[i] = val if convert: return maybe_convert_objects(result, try_float=0, convert_datetime=0, convert_timedelta=0) return result ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/_libs/src/inference.pyx#L1457-L1497) With this helper: ```cython cdef inline object get_value_at(ndarray arr, object loc): cdef: Py_ssize_t i, sz int casted if is_float_object(loc): casted = int(loc) if casted == loc: loc = casted i = loc sz = cnp.PyArray\_SIZE(arr) if i < 0 and sz > 0: i += sz elif i >= sz or sz == 0: raise IndexError('index out of bounds') return get\_value\_1d(arr, i) ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/_libs/index.pyx#L45-L50) Which uses this C function: ```c PANDAS_INLINE PyObject* get_value_1d(PyArrayObject* ap, Py_ssize_t i) { char* item = (char*)PyArray_DATA(ap) + i * PyArray_STRIDE(ap, 0); return PyArray_Scalar(item, PyArray_DESCR(ap), (PyObject*)ap); } ``` [Source](https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/_libs/src/numpy_helper.h#L63-L66) ### Some thoughts on the Cython code There are some differences between the Cython codes that are called eventually. The one taken by `astype` uses `unicode` while the `apply` path uses the function passed in. 
Let's see if that makes a difference (again IPython/Jupyter makes it very easy to compile Cython code yourself): ``` %load_ext cython %%cython import numpy as np cimport numpy as np cpdef object func_called_by_astype(np.ndarray arr): cdef np.ndarray[object] ret = np.empty(arr.size, dtype=object) for i in range(arr.size): ret[i] = unicode(arr[i]) return ret cpdef object func_called_by_apply(np.ndarray arr, object f): cdef np.ndarray[object] ret = np.empty(arr.size, dtype=object) for i in range(arr.size): ret[i] = f(arr[i]) return ret ``` Timing: ``` import numpy as np arr = np.random.randint(0, 10000, 1000000) %timeit func_called_by_astype(arr) 514 ms ± 11.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit func_called_by_apply(arr, str) 632 ms ± 43.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` Okay, there is a difference but it's *wrong*, it would actually indicate that `apply` would be slightly **slower**. But remember the `asobject` call that I mentioned earlier in the `apply` function? Could that be the reason? Let's see: ``` import numpy as np arr = np.random.randint(0, 10000, 1000000) %timeit func_called_by_astype(arr) 557 ms ± 33.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit func_called_by_apply(arr.astype(object), str) 317 ms ± 13.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` Now it looks better. The conversion to an object array made the function called by apply much faster. There is a simple reason for this: `str` is a Python function and these are generally much faster if you already have Python objects and NumPy (or Pandas) don't need to create a Python wrapper for the value stored in the array (which is generally not a Python object, except when the array is of dtype `object`). However that doesn't explain the **huge** difference that you've seen. My suspicion is that there is actually an additional difference in the ways the arrays are iterated over and the elements are set in the result. 
Very likely the: ``` val = f(util.get_value_at(arr, i)) if is_array(val) and PyArray_NDIM(val) == 0: val = val.item() result[i] = val ``` part of the `map_infer` function is faster than: ``` for i in range(n): # we can use the unsafe version because we know `result` is mutable # since it was created from `np.empty` util.set_value_at_unsafe(result, i, unicode(arr[i])) ``` which is called by the `astype(str)` path. The comments of the first function seem to indicate that the writer of `map_infer` actually tried to make the code as fast as possible (see the comment about "is there a faster way to unbox?" while the other one maybe was written without special care about performance. But that's just a guess. Also on my computer I'm actually quite close to the performance of the `x.astype(str)` and `x.apply(str)` already: ``` import numpy as np arr = np.random.randint(0, 100, 1000000) s = pd.Series(arr) %timeit s.astype(str) 535 ms ± 23.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit func_called_by_astype(arr) 547 ms ± 21.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit s.apply(str) 216 ms ± 8.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit func_called_by_apply(arr.astype(object), str) 272 ms ± 12.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` Note that I also checked some other variants that return a different result: ``` %timeit s.values.astype(str) # array of strings 407 ms ± 8.56 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit list(map(str, s.values.tolist())) # list of strings 184 ms ± 5.02 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) ``` Interestingly the Python loop with `list` and `map` seems to be the fastest on my computer. 
I actually made a small benchmark including plot: ``` import pandas as pd import simple_benchmark def Series_astype(series): return series.astype(str) def Series_apply(series): return series.apply(str) def Series_tolist_map(series): return list(map(str, series.values.tolist())) def Series_values_astype(series): return series.values.astype(str) arguments = {2**i: pd.Series(np.random.randint(0, 100, 2**i)) for i in range(2, 20)} b = simple_benchmark.benchmark( [Series_astype, Series_apply, Series_tolist_map, Series_values_astype], arguments, argument_name='Series size' ) %matplotlib notebook b.plot() ``` [![enter image description here](https://i.stack.imgur.com/Nl3QZ.png)](https://i.stack.imgur.com/Nl3QZ.png) Note that it's a log-log plot because of the huge range of sizes I covered in the benchmark. However lower means faster here. The results may be different for different versions of Python/NumPy/Pandas. So if you want to compare it, these are my versions: ``` Versions -------- Python 3.6.5 NumPy 1.14.2 Pandas 0.22.0 ``` Upvotes: 5
2018/03/19
1,813
3,241
<issue_start>username_0: I have a list, which looks like this: ``` ['3.2323943e+00, 4.4316312e+00, 4.3174178e+00, 3.8661688e+00, 3.6366895e+00, 3.4324592e+00, 3.3091351e+00, 3.1746527e+00, 1.0588169e+00, 4.4036068e+00, 4.4692073e+00, 4.3857228e+00, 4.2660739e+00, 4.1388672e+00, 4.0061081e+00, 3.8303311e+00'] ``` How can I change it to be int (now shows me error, says it is str) in order to find mean and standard deviation?<issue_comment>username_1: Using list comprehension: ``` list = [float(x) for x in '3.2323943e+00, 4.4316312e+00'.split(',')] ``` Returns: ``` [3.2323943, 4.4316312] ``` Simply add the rest of your data. Upvotes: 0 <issue_comment>username_2: You can't; they are not ints, they are floating-point values. And your list is a 1-element list of one big string holding comma-separated floating-point values in scientific notation: ``` floats = list(map(float,'3.2323943e+00, 4.4316312e+00, 4.3174178e+00, 3.8661688e+00, 3.6366895e+00, 3.4324592e+00, 3.3091351e+00, 3.1746527e+00, 1.0588169e+00, 4.4036068e+00, 4.4692073e+00, 4.3857228e+00, 4.2660739e+00, 4.1388672e+00, 4.0061081e+00, 3.8303311e+00'.split(","))) mean = sum(floats)/len(floats) variance = sum((x-mean)**2 for x in floats) / len(floats) popul = variance**0.5 print(floats) print("Mean",mean) print("Variance",variance) print("Population",popul) ``` Output: ``` [3.2323943, 4.4316312, 4.3174178, 3.8661688, 3.6366895, 3.4324592, 3.3091351, 3.1746527, 1.0588169, 4.4036068, 4.4692073, 4.3857228, 4.2660739, 4.1388672, 4.0061081, 3.8303311] Mean 3.74745516875 Variance 0.6742259030611121 Population 0.8211126007199695 ``` Upvotes: 2 <issue_comment>username_3: Another way is like this: ``` old_list = ['3.2323943e+00, 4.4316312e+00, 4.3174178e+00, 3.8661688e+00, 3.6366895e+00, 3.4324592e+00, 3.3091351e+00, 3.1746527e+00, 1.0588169e+00, 4.4036068e+00, 4.4692073e+00, 4.3857228e+00, 4.2660739e+00, 4.1388672e+00, 4.0061081e+00, 3.8303311e+00'] new_list = [float(i) for i in
old_list[0].split(',')] >>> new_list [3.2323943, 4.4316312, 4.3174178, 3.8661688, 3.6366895, 3.4324592, 3.3091351, 3.1746527, 1.0588169, 4.4036068, 4.4692073, 4.3857228, 4.2660739, 4.1388672, 4.0061081, 3.8303311] ``` You can then get your mean and std of your new list using `numpy`: ``` import numpy as np mean_of_list = np.mean(new_list) std_of_list = np.std(new_list) ``` To explain, your values are currently in a list comprising of one long string (which I've called `old_list`). My list comprehension splits that at the comma (using `.split(',')`), and turns it into a float, rather than a string (using `float(...)`) **Note on ints vs. floats** As pointed out by username_2 in their post, it makes sense to cast to float, rather than int, because your values seem like floats (they have a seemingly relevant decimal part). If you actually wanted them as ints, simply do this: ``` new_list = [int(float(i)) for i in old_list[0].split(',')] ``` But your resulting list would be: ``` >>> new_list [3, 4, 4, 3, 3, 3, 3, 3, 1, 4, 4, 4, 4, 4, 4, 3] ``` Which is probably not what you're looking for. Upvotes: 2
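As a further sketch (not part of the answers above; it assumes only the standard library is wanted), the built-in `statistics` module can compute the mean and standard deviation once the single string has been parsed into floats:

```python
import statistics

# Same shape of input as in the question: a 1-element list of one big string.
raw = ['3.2323943e+00, 4.4316312e+00, 4.3174178e+00, 3.8661688e+00']
values = [float(v) for v in raw[0].split(',')]

mean = statistics.mean(values)
pstdev = statistics.pstdev(values)  # population standard deviation (divides by N)
stdev = statistics.stdev(values)    # sample standard deviation (divides by N - 1)
print(mean, pstdev, stdev)
```

Note that `pstdev` matches the manual `variance**0.5` calculation shown above (and `numpy.std`'s default `ddof=0`), while `stdev` is the sample form that divides by N - 1.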
2018/03/19
376
1,353
<issue_start>username_0: I am wanting to obtain a div value using a `data-attribute`. In this case specifically, `data-pdf`. As you can see within my example, after I click on either option, the value comes up undefined. What am I doing wrong? Also, will the value change based on which heading I click on? I only want one value stored at a time. ```js $(".pdfWrap").on("click", function (event) { let pdfChoice = $(this).find('.pdfWrap').data('pdf'); console.log(pdfChoice); }); ``` ```html ### Linear Structure ### Dynamic Structure ```<issue_comment>username_1: You are registering the click handler on `pdfWrap`, so `this` inside the callback will be the `pdfWrap` element itself. You don't need to do an extra `find`; just fetch the `data-attribute` value directly. ```js $(".pdfWrap").on("click", function(event) { let pdfChoice = $(this).data('pdf'); console.log(pdfChoice); }); ``` ```html ### Linear Structure ### Dynamic Structure ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: [Here's how you can do it without jQuery in the event handler](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/dataset): ```js $(".pdfWrap").on("click", function(event) { let pdfChoice = this.dataset.pdf; console.log(pdfChoice); }); ``` ```html ### Linear Structure ### Dynamic Structure ``` Upvotes: 0
2018/03/19
770
2,751
<issue_start>username_0: I'm working on a website, and I was tired of copy/pasting headers and footers to each page, because when you than need to change one letter, you have to change it on all pages using that header/footer. So I 'copied' a function I learnt from Laravel, partials. I write the HTML of the header in one file, and than load that to the page using PHP. Code of my header: ``` [![](images/logo.png)](index.php) * [Home](index.php) * [Over ons](over.php) * [Onze machines](catalogus.php) * [Portfolio](portfolio.php) * [Contact](contact.php) //Highlight current function highlight(obj) { var page = location.pathname.substring(location.pathname.lastIndexOf("/") + 1); if (page == obj.getAttribute("href")) { obj.classList.add("current"); } alert(page); alert(obj.getAttribute("href")); } ``` CSS: ``` .current { text-decoration: underline; } ``` And the header is included in pages like this: ``` php //Include header $f = fopen("partial/header.html", "r"); echo fread($f, 4096); ? ``` This is working, the only problem is that I want to show the current page the user is on in the header. This by just underlining the current page-name in the header. The function highlight() is working, but the onload isn't working. It seems that the anchor-tag isn't supported by the onLoad-function. How would I fix this? How to execute the highlight()-function on each anchor tag onLoad? Javascript and JQuery allowed, am not using a framework.<issue_comment>username_1: Thanks to Mike for pointing me in the right direction. I was able to fix it with a bit of JQuery. 
``` //Highlight current function highlight(obj) { var page = location.pathname.substring(location.pathname.lastIndexOf("/") + 1); if (page == obj.getAttribute("href")) { obj.classList.add("current"); } } //Launch highlight function on document load $(document).ready(function() { $(".header-wrapper ul li a").each(function() { highlight(this); }); }); ``` Note: you still have to handle the fact that www.mywebsite.com/ won't work, because it doesn't have a filename in the URL like www.mywebsite.com/**index.php** has. Upvotes: 0 <issue_comment>username_2: It looks like you're not using jQuery, so I'll keep it plain JavaScript: ``` var page = location.pathname.substring( location.pathname.lastIndexOf('/') + 1, ); document.querySelectorAll('li > a').forEach(el => { if (page == el.getAttribute('href')) { el.classList.add('current'); } }); ``` To explain what is going on here: we're selecting all anchor tags in a list item. I'd suggest giving the list a `class` so you can identify it better in the JavaScript, like so: ``` document.querySelectorAll('.nav a') ``` and add the class to the HTML. Upvotes: 1
2018/03/19
1,283
4,078
<issue_start>username_0: I'm trying to pass a parameter which is a list of values: ``` import argparse parser = argparse.ArgumentParser() parser.add_argument('--cb_ticks', required=False, default='') args = vars(parser.parse_args()) print(args['cb_ticks']) ``` For most cases, this code works as expected: * python test.py --cb\_ticks "1" -> 1 * python test.py --cb\_ticks "1,2" -> 1,2 * python test.py --cb\_ticks "-1" -> -1 But when I'm trying to pass more than one value, where the first is negative: * python test.py --cb\_ticks "-1,2" I'm getting the following error: test.py: > > error: argument --cb\_ticks: expected one argument > > ><issue_comment>username_1: The `add_argument` method allows you to tell the argument parser to expect multiple (or no) values: ``` import argparse parser = argparse.ArgumentParser() parser.add_argument('--cb_ticks', nargs='*') args = vars(parser.parse_args()) print(args['cb_ticks']) ``` but the values are expected to be space separated, so you'll have to execute your script as: ``` python test.py --cb_ticks -1 2 ``` See [reference](https://docs.python.org/3/library/argparse.html). Upvotes: 3 <issue_comment>username_2: `-1,2` is allowed as a optionals flag: ``` In [39]: parser.add_argument('-1,2') ... In [40]: parser.print_help() usage: ipython3 [-h] [--cb_ticks CB_TICKS] [-1,2 1,2] optional arguments: -h, --help show this help message and exit --cb_ticks CB_TICKS -1,2 1,2 In [44]: args=parser.parse_args(['--cb_ticks','foo','-1,2','bar']) In [45]: args Out[45]: Namespace(cb_ticks='foo', **{'1,2': 'bar'}) # curious display In [46]: vars(args) Out[46]: {'1,2': 'bar', 'cb_ticks': 'foo'} In [47]: getattr(args, '1,2') Out[47]: 'bar' ``` This is an edge case, a consequence of code that tries not to constrain what flags (and/or `dest`) the user can define. Upvotes: 0 <issue_comment>username_3: To accept the inputs as specified in the question, you must pre-process the arguments. 
I tried many things, from where the [negative number issue appears in the documentation](https://docs.python.org/3.2/library/argparse.html#arguments-containing), from here and from [Python Argparse: Issue with optional arguments which are negative numbers](https://stackoverflow.com/questions/9025204/python-argparse-issue-with-optional-arguments-which-are-negative-numbers), but those methods didn't work in this specific kind of case (my case was <https://github.com/poikilos/minetestmapper-python/blob/master/minetestmapper-numpy.py>). I solved the issue as follows. Step 1: Before using argparse, do the following: ``` cb_ticks_str = None i = 0 while i < len(sys.argv): if sys.argv[i] == "--cb_ticks": del sys.argv[i] cb_ticks_str = '' elif cb_ticks_str == '': cb_ticks_str = sys.argv[i] del sys.argv[i] break else: i += 1 i = None ``` Step 2: Use argparse as normal, except don't use it for any non-numerical argument that starts with a hyphen: ``` parser = argparse.ArgumentParser() # parser.add_argument('--cb_ticks', required=False, default='') args = vars(parser.parse_args()) ``` Step 3: Split your argument manually then add it to your args dict: ``` if cb_ticks_str is not None: args['cb_ticks'] = [int(v) for v in cb_ticks_str.split(",")] # ^ raises ValueError if a non-int is in the list if len(args['cb_ticks']) != 2: raise ValueError("cb_ticks must have 2 values separated by a comma.") ``` **Alternatively**: If you were using the parser directly instead of using vars like in the question (do #1 as I described), then in #2 change `args = vars(parser.parse_args())` to `args = parser.parse_args()`, then for #3 instead do: Step 3: Split your argument manually then add it to an args object: ``` if cb_ticks_str is not None: args.cb_ticks = [int(v) for v in cb_ticks_str.split(",")] # ^ raises ValueError if a non-int is in the list if len(args.cb_ticks) != 2: raise ValueError("cb_ticks must have 2 values separated by a comma.") ``` Upvotes: 1
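One more sketch, not taken from the answers above: argparse only misreads a leading `-` when the value is a separate token, so passing the option in the `--option=value` form together with a custom `type` callable avoids manual pre-processing entirely (the `int_list` helper name here is hypothetical):

```python
import argparse

def int_list(text):
    # Hypothetical helper: parses "-1,2" into [-1, 2].
    return [int(v) for v in text.split(",")]

parser = argparse.ArgumentParser()
parser.add_argument("--cb_ticks", type=int_list, default=[])

# With the "--option=value" form the value stays attached to the flag,
# so the leading minus sign is not mistaken for a new option string.
args = parser.parse_args(["--cb_ticks=-1,2"])
print(args.cb_ticks)  # [-1, 2]
```

Note that the space-separated form `--cb_ticks -1,2` would still raise the "expected one argument" error, so this only helps if you can require the `=` syntax on the command line.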
2018/03/19
1,217
3,975
<issue_start>username_0: I have defined some necessary fields that are indicated by a required (\*) label with a text input next to it. When I submit, I want to make sure that all of these required labels' text inputs are not empty after the submit button is clicked. This is what I have for HTML (<https://jsfiddle.net/mmsawjwr/5/>): ``` First Name: Last Name: Email: Phone number: Save ``` At first, I was going to hardcode and check each id individually, but I figured that there should be a dynamic way to check this. Any guidance/direction will be appreciated!
2018/03/19
259
924
<issue_start>username_0: How do I add a conditional to a single stored procedure to either return one item or to join a tvp to return many items? So I want to be able to do ``` Select * From table where id = @id ``` or ``` Select t.* From table t join tvp tvp on t.id = tvp.id ``` Would it be terrible to pass in a tvp for just one item every time so I can avoid the first query? Am I being lazy and should this just be two different stored procedures?<issue_comment>username_1: I've done this before with two procedures: the first procedure takes the TVP, the other procedure takes a single ID, packages it as a TVP and calls the first procedure. I did it for backwards compatibility purposes, but it's applicable in this case. Upvotes: 1 [selected_answer]<issue_comment>username_2: You can do: ``` Select t.* From table t where id = @id or (@id is null and t.id in (select tvp.id from tvp)); ``` Upvotes: 1
2018/03/19
500
1,720
<issue_start>username_0: As you can tell by this question I'm still learning SASS and Bootstrap 4. With BS4 I've had to setup some .scss partial files and then import them into my site.scss file. In order to make changes to Bootstrap I made changes in my \_my-theme.scss file. My question is when I want to make simple CSS changes, not dealing with BS, is it best to put them directly in the site.scss file or should I put them in the \_my-theme file? Ex: if I just want to make a page-title class I see that it will work if I put it in either my site.scss file or in the \_my-theme.scss file but which one is best practice and is there a reason why? I know I need to only have the class in one or the other file, I just don't know which one I should be putting it in? Here is my site.scss: ``` @import "scss/_custom-variables.scss"; @import "bootstrap/bootstrap.scss"; @import "scss/_my-theme.scss"; .page-title { color: red; font-weight: bold; } ``` Here is an example of \_my-theme.scss: ``` body { padding-top: 0px; padding-bottom: 20px; } /* Set padding to keep content from hitting the edges */ .body-content { padding-left: 15px; padding-right: 15px; } .page-title { color: red; font-weight: bold; } ```
2018/03/19
456
1,613
<issue_start>username_0: I tried everywhere but have not found a solution for my question, so I came here to ask you guys and hope to get a solution. I have an MDI form and a few child forms; on the MDI form I have a side-panel control with a few buttons, and I want to change the images of these buttons dynamically from a child form. ``` MDImenu frmMDImenu = new MDImenu(); frmMDImenu.btnConnect.Image = Properties.Resources.connect_icon; ``` All button images are stored in Properties.Resources. With this code I can set the image of the button on the MDI form from the child form, but the change is not applied to the button. Any suggestion where I am missing something?<issue_comment>username_1: There are some steps you need to follow to achieve this. 1. In your child form (e.g. Form1) create a static variable ``` public static MDImenu frmMDImenu = null; ``` 2. In your MDI form, when you are opening the child form ``` Form1 f1 = new Form1(); Form1.frmMDImenu = this; f1.MdiParent = this; f1.Show(); ``` 3. Now, where you want to change the image of the button (the button's Modifiers property should be public) ``` frmMDImenu.btnConnect.Image = Properties.Resources.connect_icon; ``` Maybe this is not the most efficient way to solve this, but it will solve your problem. Upvotes: -1 <issue_comment>username_2: @username_1, I have solved it by applying the following code.... ``` frmMDIParent mdiForm = (frmMDIParent)this.MdiParent; mdiForm.button1.Image = MyProject.Properties.Resources.OK_icon; ``` I didn't apply your code, but it should work too. Thanks everybody. Upvotes: 0
2018/03/19
543
1,719
<issue_start>username_0: I am using Wordpress and I am trying to change the background of this website: <http://evolum.org> (the grey one) to look like this one: <https://greatives.eu/themes/movedo/> (behind the "movedo" word). I am trying to put this in the Wordpress additional CSS editor: ``` element.style { background: linear-gradient(135deg, rgb(132, 84, 226) 0%, rgb(6, 82, 253) 100%); } ``` instead of: ``` element.style { background-image: url(http://evolum.org/wp-content/uploads/2000/01/grve-dummy-sample-image-large.png); ``` When I inspect the element and change it manually it works, but when I modify it in the Wordpress editor it doesn't work! I think it's maybe because there are multiple 'element.style' rules. How do I target the right element from the Wordpress editor?
2018/03/19
449
1,595
<issue_start>username_0: I have noticed that on Android versions 6.0+, when someone starts the app, after the login screen and submitting info a black screen appears and you have to wait 3-15 seconds depending on the device. I heard that this could be caused by heavy loading, but this app is nothing special; it just loads a ListView with a few images after login (I had 3 images while testing), so I'm not sure it is really that heavy. Plus, it only happens on newer versions of Android, so I'm not sure what to do. I can provide the whole code or some snippet. EDIT: I tried with various image sizes (fully loaded with big images, and only 1 small image) and it has no effect at all. Anyone???
2018/03/19
571
1,821
<issue_start>username_0: I'm trying to get data at monthly level. ``` SELECT c.Calendar_Month_Name, COUNT(*) FROM db1 AS c INNER JOIN (SELECT DISTINCT a.tel_num, b.postpaid_tel_num FROM db2 AS a INNER JOIN db3 AS b ON a.tel_num = b.tel_num WHERE a.hs_manufacturer = 'Samsung' AND b.postpaid_tel_num = 1) d ON c.Dim_Calendar_Dt = d.REPORT_DT WHERE c.Calendar_Year_Num = 2018 GROUP BY c.Calendar_Month_Name; ``` --- REPORT\_DT is present in db2 but I still get an error that says REPORT\_DT does not exist. If I change the position of the parentheses as follows, I get an error that says something is expected between 'REPORT\_DT' and the 'where' keyword. ``` SELECT c.Calendar_Month_Name, COUNT(*) FROM (db1 AS c INNER JOIN (SELECT DISTINCT a.tel_num, b.postpaid_tel_num FROM db2 AS a INNER JOIN db3 AS b ON a.tel_num = b.tel_num WHERE a.hs_manufacturer = 'Samsung' AND b.postpaid_tel_num = 1) d ON c.Dim_Calendar_Dt = d.REPORT_DT WHERE c.Calendar_Year_Num = 2018) GROUP BY c.Calendar_Month_Name; ```<issue_comment>username_1: In the first version, it looks like you need to add *REPORT\_DT* to the **select** clause of your subquery **d** Upvotes: 1 <issue_comment>username_2: FWIW, I think a formatted query should look something like this: ``` SELECT c.Calendar_Month_Name , COUNT(*) FROM db1 c JOIN ( SELECT DISTINCT a.tel_num , b.postpaid_tel_num FROM db2 a JOIN db3 b ON a.tel_num = b.tel_num WHERE a.hs_manufacturer = 'Samsung' AND b.postpaid_tel_num=1 ) d ON c.Dim_Calendar_Dt = d.REPORT_DT WHERE c.Calendar_Year_Num = 2018 GROUP BY c.Calendar_Month_Name ``` Upvotes: 0
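The "REPORT_DT does not exist" error matches what the first answer says: a derived table only exposes the columns in its own SELECT list, so `d.REPORT_DT` is invisible outside the subquery until it is selected inside it. A small reproduction in Python with sqlite3; the toy tables and column names below are illustrative stand-ins for the original schema, not the real one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE db2 (tel_num INTEGER, report_dt TEXT, hs_manufacturer TEXT);
CREATE TABLE db3 (tel_num INTEGER, postpaid_tel_num INTEGER);
INSERT INTO db2 VALUES (1, '2018-01-01', 'Samsung');
INSERT INTO db3 VALUES (1, 1);
""")

# A derived table only exposes the columns in its SELECT list, so
# d.report_dt cannot be referenced outside unless it is selected inside.
try:
    conn.execute(
        "SELECT d.report_dt FROM "
        "(SELECT DISTINCT a.tel_num "
        " FROM db2 a JOIN db3 b ON a.tel_num = b.tel_num) d"
    )
except sqlite3.OperationalError as err:
    print("error:", err)

# Adding report_dt to the subquery's SELECT list fixes it.
rows = conn.execute(
    "SELECT d.report_dt FROM "
    "(SELECT DISTINCT a.tel_num, a.report_dt "
    " FROM db2 a JOIN db3 b ON a.tel_num = b.tel_num) d"
).fetchall()
print(rows)  # [('2018-01-01',)]
```

The same rule applies in Teradata and Oracle: the outer query can only see what the subquery's SELECT list exports.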
2018/03/19
463
1,811
<issue_start>username_0: How to define post build actions for Jenkins multi pipeline project? There is a separate option available when you have a simple project but not for multipipeline. [![enter image description here](https://i.stack.imgur.com/YUdfE.png)](https://i.stack.imgur.com/YUdfE.png)<issue_comment>username_1: When you write a pipeline, you describe the whole flow yourself, which gives you great flexibility to do whatever you want, including running post-build steps. You can see an example of using post-build steps in a pipeline I wrote: <https://github.com/geek-kb/Android_Pipeline/blob/master/Jenkinsfile> Example from that code: ``` run_in_stage('Post steps', { sh """ # Add libCore.so files to symbols.zip find ${cwd}/Product-CoreSDK/obj/local -name libCore.so | zip -r ${cwd}/Product/build/outputs/symbols.zip -@ # Remove unaligned apk's rm -f ${cwd}/Product/build/outputs/apk/*-unaligned.apk """ }) ``` Upvotes: 0 <issue_comment>username_2: To add post build steps to a Multibranch Pipeline, you need to code these steps into the `finally` block, an example is below: ``` node { try { stage("Checkout") { // checkout scm } stage("Build & test") { // build & Unit test } } catch (e) { // fail the build if an exception is thrown currentBuild.result = "FAILED" throw e } finally { // Post build steps here /* Success or failure, always run post build steps */ // send email // publish test results etc etc } } ``` For most of the post-build steps you would want, there are online examples of how to write them in pipeline format. If you have any specific one, please list it here. Upvotes: 3 [selected_answer]
2018/03/19
1,013
2,925
<issue_start>username_0: I would like to utilize Postman to test a REST API that requires one of the input fields to be encrypted with RSA encryption. I see that Postman provides the functionality through `require('crypto-js')` to encrypt using AES encryption, but that library does not provide RSA encryption. How can I use Postman to automate RSA encryption? The flow would work like this: 1. Call a REST API that returns an RSA public key 2. Store the RSA public key in a variable 3. Utilize that public key to encrypt a value in the following request before sending<issue_comment>username_1: I did it. You need <https://github.com/digitalbazaar/forge>, and you must compile it and load it from your own script; loading it from a CDN did not work for me. ```js 1.load script var server = postman.getEnvironmentVariable("server"); if (!postman.getEnvironmentVariable("forgeJS")) { pm.sendRequest("http://" + server + "/res/forge.js", function(err, res) { if (err) { console.log(err); } else { postman.setEnvironmentVariable("forgeJS", res.text()); } }) } 2.eval script and encrypt var password = '<PASSWORD>'; var public_key = '-----BEGIN PUBLIC KEY-----\n' + pm.environment.get("rsaKey") + '\n' + '-----END PUBLIC KEY-----'; var jsscript = pm.environment.get("forgeJS"); eval(jsscript); console.info(public_key) var publicKey = forge.pki.publicKeyFromPem(public_key); var encryptedText = forge.util.encode64(publicKey.encrypt("123456", 'RSA-OAEP', { // server Java private decrypt md: forge.md.sha256.create(), mgf1: { md: forge.md.sha1.create() } })); postman.setEnvironmentVariable("password", encryptedText); ``` Upvotes: 0 <issue_comment>username_2: I have created a little "library" for using cryptographic methods in Postman Pre-request and Tests scripts. RSA is fully supported; have a look at the tutorial here, it is very easy to use. <https://joolfe.github.io/postman-util-lib/> Best Regards. 
Here is an example of how to RSA encrypt using the 'RSAOAEP' alg: ``` // RSAOAEP encryption example const pubkey = '-----BEGIN PUBLIC KEY-----\n' + '<KEY>' + '<KEY>' + '<KEY>' + '<KEY>' + '<KEY>' + '<KEY>' + 'DwIDAQAB\n' + '-----END PUBLIC KEY-----\n' const fileContent = 'My file content comes here...' const keyObj = pmlib.rs.KEYUTIL.getKey(pubkey) const encHex = pmlib.rs.KJUR.crypto.Cipher.encrypt(fileContent, keyObj, 'RSAOAEP') console.log(encHex) // returns the hexadecimal encoding; it can be converted to many formats using the library too ``` Upvotes: 2
2018/03/19
799
2,612
<issue_start>username_0: I followed this documentation to set up sessions in my Spring Boot application: <https://docs.spring.io/spring-session/docs/current/reference/html5/guides/boot-redis.html> application.properties ``` spring.session.store-type=redis server.session.timeout=10 ``` After sending in a request, I see the following records in redis: ``` 127.0.0.1:6379> keys * 1) "spring:session:sessions:4b524c1e-e133-4d04-8b5b-40ffc3685af3" 2) "spring:session:sessions:expires:c1e2792f-f001-4a02-b812-39ab68f719ea" 3) "spring:session:sessions:expires:4b524c1e-e133-4d04-8b5b-40ffc3685af3" 4) "spring:session:index:org.springframework.session.FindByIndexNameSessionRepository.PRINCIPAL_NAME_INDEX_NAME:105121963489487663346" 5) "spring:session:expirations:1521492480000" ``` I expect these records to disappear after 10 seconds (because of the server.session.timeout property), but the data stays even after several minutes. How do I set up the session timeout in Spring Session properly?<issue_comment>username_1: You are using a deprecated `server.session.timeout` which was replaced with `server.servlet.session.timeout` in Spring Boot 2.0. With the change from [PR I've opened against your sample repo](https://github.com/nagyzsolthun/spring-boot-session-timeout/pull/1) the desired session timeout is applied correctly. Please take time to get familiar with [Spring Boot 2.0 Migration Guide](https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.0-Migration-Guide) and consider using `spring-boot-properties-migrator` module. Upvotes: 2 [selected_answer]<issue_comment>username_2: Vedran's answer is not completely correct, because the Spring Boot common properties include both of these properties; both are currently active and not deprecated. [Spring Boot common properties](https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html) **For Spring Session:** ``` spring.session.timeout= # Session timeout. If a duration suffix is not specified, seconds will be used. ``` **For web properties** WEB PROPERTIES EMBEDDED SERVER CONFIGURATION (ServerProperties) ``` server.servlet.session.timeout= # Session timeout. If a duration suffix is not specified, seconds will be used. ``` Upvotes: 1 <issue_comment>username_3: I also encountered this problem, but unfortunately, I can't effectively control the session timeout by setting "spring.session.timeout" and "server.servlet.session.timeout". My solution is to configure the session timeout through annotations. ```java @EnableRedisHttpSession(maxInactiveIntervalInSeconds = 30) ``` Upvotes: 0
2018/03/19
654
2,163
<issue_start>username_0: I am trying to count the number of classes that start in each month. ``` select to_char(START_DATE_TIME,'MON'), count(START_DATE_TIME) from SECTION having count(START_DATE_TIME) > 1 group by START_DATE_TIME ``` **It gives me this output** ``` MAY 4 APR 3 MAY 2 JUN 2 APR 2 JUL 7 JUL 7 JUN 3 APR 4 MAY 2 APR 6 MAY 4 JUN 2 JUN 2 JUN 3 MAY 5 JUN 2 APR 3 MAY 3 JUN 3 MAY 2 APR 2 MAY 3 ``` **I need an output similar to this** ``` Start_Month Count July 14 June 17 April 21 May 26 ```
2018/03/19
603
2,251
<issue_start>username_0: I have the following query and I need unique values for a column ``` Select unique(t.id), log.* from tableA log inner join tableT t on t.id1=log.id1 where log.time>=(somedate) and log.time<(somedate) and ref-id=20 and ref-id=30 and t.id not in (select unique t.id from tableA log inner join tableT t on t.id1=log.id1 where log.time>=(somedate) and log.time<(somedate) and ref-id=20 and ref-id=30); ``` I am not getting unique values for t.id. Can anyone please help? I am using an Oracle database.
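One likely cause: `UNIQUE` (a synonym for `DISTINCT` in Oracle) applies to the *entire* select list, not just the column it is written next to, so `select unique(t.id), log.*` still repeats an id whenever any `log` column differs. (Separately, `ref-id=20 and ref-id=30` can never both be true for one row, which is worth double-checking.) A minimal illustration in Python with sqlite3; table and column names are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER, detail TEXT)")
conn.executemany("INSERT INTO logs VALUES (?, ?)",
                 [(1, "a"), (1, "b"), (2, "c")])

# DISTINCT (Oracle's UNIQUE) applies to the whole select list, so ids
# still repeat whenever any other selected column differs.
both = conn.execute(
    "SELECT DISTINCT id, detail FROM logs ORDER BY id, detail"
).fetchall()
print(both)    # [(1, 'a'), (1, 'b'), (2, 'c')] -- id 1 appears twice

# To get one row per id, aggregate the remaining columns instead.
per_id = conn.execute(
    "SELECT id, MIN(detail) FROM logs GROUP BY id ORDER BY id"
).fetchall()
print(per_id)  # [(1, 'a'), (2, 'c')]
```

In Oracle the equivalent of the second query would group by `t.id` and aggregate (or use an analytic `ROW_NUMBER()` filter) over the `log` columns.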
2018/03/19
788
2,685
<issue_start>username_0: I am having a problem with calling a default method from an interface. Here is my code: Interface: ``` public interface Pozdrav { public static void stat(){ System.out.println("Ja sam staticni metod"); } default void osn(){ System.out.println("Ja sam osnovni metod"); } } ``` Main class: ``` public class KonkretniPozdrav implements Pozdrav{ public static void main(String[] args) { Pozdrav.stat(); } } ``` Now, I need to call the default method `Pozdrav.osn();` but when I do that, I receive this error: > > Error:(8, 16) java: non-static method osn() cannot be referenced from a static context. > > > How do I call this default method?<issue_comment>username_1: You need an instance of `Pozdrav` to call an instance method on it. For example: ``` new Pozdrav() {}.osn(); ``` Upvotes: 2 <issue_comment>username_2: You need a concrete instance to invoke the method on. If you have not yet created a concrete type, you could do so anonymously. Like, ``` new Pozdrav() { }.osn(); ``` Outputs ``` Ja sam osnovni metod ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: ``` new KonkretniPozdrav().osn(); ``` In order to call `osn`, an instance of `Pozdrav` is required. A `default` method (instance member) doesn't mean a `static` method (class member). It means that the method has its body in the interface. `default` methods provide the default behaviour for implementations. That's why you didn't have to implement `osn` in `KonkretniPozdrav`. Upvotes: 2 <issue_comment>username_4: To call non-static methods, you should create an instance of the class using the keyword `new`, for example: `KonkretniPozdrav pozdrav = new KonkretniPozdrav();`. To call static methods, you don't need an instance. Just call them using **CLASS.Method()**. Your main class would look like this: ``` public class KonkretniPozdrav implements Pozdrav{ public static void main(String[] args) { Pozdrav.stat(); KonkretniPozdrav konkretnipozdrav = new KonkretniPozdrav(); konkretnipozdrav.osn(); } } ``` A consideration for your code: interfaces shouldn't have code implemented, except in `static` methods and `default` methods, which are allowed to have code in the body. Interfaces are contracts that classes implementing the interface must obey. By convention, an interface name often starts with the letter **I** to indicate an interface, for example: **IPozdrav**. [Here is a document about Java interfaces](https://www.tutorialspoint.com/java/java_interfaces.htm). You may also want to look at the difference between an **abstract class** and an **interface**. Upvotes: 1
2018/03/19
937
3,471
<issue_start>username_0: I have a simple controller method: ``` products = Product.where(name: name, color: color, size: size, available: true) ``` Which returns a Product::ActiveRecord\_Relation object. I want to take the first object and pull out a field e.g. `products.first.product_code` But this and some other methods that I tried re-query the database. I've tried: ``` products[0].product_code products.take.product_code ``` All of which re-query the database, hitting the database twice: once for the where and once to take the field. Is there a simple solution that won't hit the db? What does work is converting the ActiveRecord to an array (`products.to_a.[0].product_code`) but this seems inefficient. Below is the server log to show the two separate hits: [![enter image description here](https://i.stack.imgur.com/ZO8tf.png)](https://i.stack.imgur.com/ZO8tf.png) Here is my controller method for reference: ``` def update_selection size = params[:size] color = params[:color] name = params[:product_name] products = Product.where(name: name, color: color, size: size, available: true) product_code = products.empty? ? 'sold out' : products.first.product_code respond_to do |format| format.json { render json: { count: products.length, code: product_code}} end end ```<issue_comment>username_1: My best guess is that you're not actually hitting the database when assigning to `products`. ActiveRecord relations are lazy, meaning they don't query the database until the moment a result of that query is needed. To put it in terms of your issue - this line: `products = Product.where(name: name, color: color, size: size, available: true)` doesn't require any data to actually be pulled from the db, so the query isn't carried out. `products.first.product_code`, on the other hand, will cause a query to fire. EDIT: I believe the issue is coming from using `.exists?` (which is an ActiveRecord method that *always* fires a query). 
If you're checking to see if the query returned any results, try using `.present?` or `.any?` instead of `.exists?`. EDIT 2: OK, thanks for posting the code. You've got a couple of options here. 1. Use `.to_a`: `products = Product.where(name: name, color: color, size: size, available: true).to_a` This loads all of the `products` into memory, which makes sense if you expect there to be a small number of products. This only fires one query. 2. You can change this line to only fire one query: `product_code = products.first.try(:product_code) || 'sold out'` This is more memory-efficient (since you're loading a maximum of one `product` into memory), but uses two queries (`products.count` is the other one). Upvotes: 3 <issue_comment>username_2: You might find this to be efficient, as it gets a SQL-based count, and then reruns a product query if required to get a single product. Do you really want the "first" in the sense of the one with the lowest id? If so then use `first`, otherwise use `take`. ``` def update_selection size = params[:size] color = params[:color] name = params[:product_name] products = Product.where(name: name, color: color, size: size, available: true) product_count = products.count product_code = product_count.zero? ? 'sold out' : products.take.product_code respond_to do |format| format.json { render json: { count: product_count, code: product_code}} end end ``` Upvotes: 1
2018/03/19
539
1,916
<issue_start>username_0: In my Laravel model controller I have this: public function guestInfo($id) ``` { $guests = Person::where('id', $id) ->with('languages') ->get(); return view('layouts.visitor', ['guests' => $guests]); } ``` in my blade file I have this: ``` @foreach ($guests as $guest) some html stuff in here {!! Form::select('name', $guest->languages->pluck('name')->all(), ['class' => 'form-control']) !!} more html @endforeach ``` Person is a model for the db table "persons", and "languages" is a related model that "belongs to" Person. In my languages table I have different rows for languages: "english, spanish, etc"; each is in its own row with its own id. My goal is to get all the languages to show up. However, currently with the above code I only get one language to show up. Any ideas?<issue_comment>username_1: Your code will retrieve only the languages associated with the person. If you want to show all languages, just retrieve them as `id => name` pairs and pass them to your view. ``` $guests = Person::where('id', $id)->get(); $languages = Language::pluck('name', 'id'); ``` and then do a `foreach` with `$languages` ``` @foreach ($languages as $key => $val) {!! Form::select('name', $key, ['class' => 'form-control']) !!} @endforeach ``` Upvotes: 1 <issue_comment>username_2: I ended up doing it like this, with some help from @username_1. Not exactly as he suggested, but it gave me some direction. Controller: ``` public function guestInfo($id) { $guests = Person::where('id', $id) ->with('languages') ->get(); $languages = Language::pluck('name')->toArray(); return view('layouts.visitor', ['guests' => $guests, 'languages' => $languages]); } ``` Then in my blade: ``` {!! Form::select('name', $languages, 'name', ['placeholder' => '','class'=>'form-control']) !!} ``` Upvotes: 0
2018/03/19
221
854
<issue_start>username_0: I have been looking for a multi-core optimizer for Python that's equivalent to DEopt. I've been messing around with <https://github.com/hyperopt/hyperopt> but multi-core support is very lacking. Does anyone know of any good alternatives that support using multiple cores?<issue_comment>username_1: Check out Ray Tune - it supports multi-core along with distributed execution of random search, grid search, and evolutionary methods (specifically Population-Based Training), which seems to be what you're looking for. Here's the docs page - [ray.readthedocs.io/en/latest/tune.html](http://ray.readthedocs.io/en/latest/tune.html) Disclaimer: I work on this project! Upvotes: 2 <issue_comment>username_2: I have been using hyperopt; it's a direct replacement for DEoptim for Python. <https://github.com/hyperopt/hyperopt> Upvotes: 1
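Setting the libraries aside, it is worth noting that the simplest baseline, random search, parallelizes trivially with the standard library, since every trial is independent. The sketch below is a toy, not a tuned optimizer: the objective function is invented for the demo, and a real use would plug in the expensive evaluation instead.

```python
import multiprocessing as mp
import random

def objective(x):
    # Toy function to minimize; a real use case would evaluate a model here.
    return (x - 3.0) ** 2

def evaluate(seed):
    # One independent trial: draw a candidate and score it.
    rng = random.Random(seed)
    x = rng.uniform(-10.0, 10.0)
    return objective(x), x

def random_search(n_trials, workers=None):
    # Each trial runs in its own worker process, so all cores are used.
    with mp.Pool(workers) as pool:
        results = pool.map(evaluate, range(n_trials))
    return min(results)

if __name__ == "__main__":
    best_val, best_x = random_search(200)
    print(best_val, best_x)
```

Population-based methods like differential evolution need a bit more care (each generation depends on the previous one), but the per-candidate evaluations within a generation parallelize the same way.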
2018/03/19
534
1,458
<issue_start>username_0: Given a data frame: ``` id day value 01 4 abc 01 3 abc 01 2 y 01 1 y 02 3 abc 02 2 x 02 1 x 03 4 abc 03 3 abc 03 2 abc 03 1 z ``` I want to remove the duplicate rows with "abc", keeping the last one. The result would be: ``` id day value 01 3 abc 01 2 y 01 1 y 02 3 abc 02 2 x 02 1 x 03 2 abc 03 1 z ``` Right now I'm able to do this, but the method I used is somewhat complicated: I pull all the rows with "abc" out into another data frame, then remove duplicates keeping the last, then concat the 2 data frames together. Is there a simpler approach? Many thanks!<issue_comment>username_1: Here is one way which should be efficient. Note I keep the last one, in line with your desired output. ``` res = pd.concat([df[df['value']=='abc'].drop_duplicates('id', keep='last'), df[df['value']!='abc']]).sort_index() # id day value # 1 1 3 abc # 2 1 2 y # 3 1 1 y # 4 2 3 abc # 5 2 2 x # 6 2 1 x # 9 3 2 abc # 10 3 1 z ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This should work. There might be a nicer way to avoid the sort at the end. ``` df_abc = df[df.value=='abc'].drop_duplicates('id', keep='last') df_not_abc = df[df.value!='abc'] df2 = pd.concat([df_abc, df_not_abc]).sort_values(by=['id', 'day'], ascending=[True, False]) ``` Upvotes: 2
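The keep-last-per-id rule both answers rely on can be sanity-checked without pandas; a plain-Python pass over the sample rows reproduces the desired table:

```python
# The sample data from the question, as (id, day, value) tuples.
rows = [
    ("01", 4, "abc"), ("01", 3, "abc"), ("01", 2, "y"), ("01", 1, "y"),
    ("02", 3, "abc"), ("02", 2, "x"), ("02", 1, "x"),
    ("03", 4, "abc"), ("03", 3, "abc"), ("03", 2, "abc"), ("03", 1, "z"),
]

# Remember the position of the last "abc" row for each id.
last_abc = {}
for i, (id_, day, value) in enumerate(rows):
    if value == "abc":
        last_abc[id_] = i

# Keep every non-"abc" row, plus only the last "abc" row per id,
# preserving the original order (same effect as sort_index()).
kept = [row for i, row in enumerate(rows)
        if row[2] != "abc" or last_abc[row[0]] == i]
print(kept)
```

This is exactly what `drop_duplicates('id', keep='last')` on the "abc" subset plus `concat` and `sort_index()` computes.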
2018/03/19
556
1,642
<issue_start>username_0: I am trying to convert multiple rows of a CSV file into a single row stored in a variable, which will be used later as part of a JSON payload for a REST API. My csv file looks like: ``` Key Value Key1 Value1 Key2 Value2 Key3 Value3 ``` I need an output string like: ``` Json= "key1":"Value1","key2":"Value2","key3":"Value3" ``` So far I have tried this, based on the csv example in the Python docs: ``` import csv with open('file.csv', 'r') as csvfile: csvReader = csv.reader(csvfile) for row,column in csvReader: Json= '"'+row+'":'+column+'",' Print json ``` But this is not giving me the desired output; the output has a trailing comma, like: ``` Json= "key1":"Value1","key2":"Value2","key3":"Value3", ```<issue_comment>username_1: Only a small change is needed: ``` import csv with open('file.csv', 'r') as csvfile: csvReader = csv.reader(csvfile) for row,column in csvReader: json= '"'+row+'":'+column+'",' print json[:-1] ``` will remove the last comma and output: ``` Json= "key1":"Value1","key2":"Value2","key3":"Value3" ``` Upvotes: 1 <issue_comment>username_2: Use the `json` library. Try it like below: ``` import json csv_dict = dict() for row,column in csvReader: csv_dict[row] = column dump_variable = json.dumps(csv_dict) print(dump_variable) ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: Using pandas: ``` import pandas as pd df = pd.read_csv('file.csv',sep=',') # sep can be \s+ if multiple spaces jsonstr = df.set_index('Key')['Value'].to_dict() print(jsonstr) ``` Upvotes: 1
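A trailing comma never appears if the pairs are collected first and the string is built in one step, which is what the accepted answer's `json.dumps` approach amounts to. A self-contained stdlib sketch (Python 3; the in-memory sample here is comma-separated with a header row, which is an assumption about the real file):

```python
import csv
import io
import json

# Stand-in for open('file.csv'); assumes comma-separated data with a header.
data = io.StringIO("Key,Value\nKey1,Value1\nKey2,Value2\nKey3,Value3\n")

reader = csv.reader(data)
next(reader)            # skip the header row
pairs = dict(reader)    # accumulate all key/value rows first

# json.dumps handles the quoting and the commas, so there is no
# trailing comma to trim by hand; [1:-1] drops the surrounding braces.
body = json.dumps(pairs, separators=(",", ":"))[1:-1]
print(body)  # "Key1":"Value1","Key2":"Value2","Key3":"Value3"
```

If the braces are wanted (i.e. a complete JSON object rather than the bare pair list), skip the `[1:-1]` slice.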
2018/03/19
638
1,923
<issue_start>username_0: I have a 40 digit hex number stored in a string, and I have to store it inside a struct called Int40 that only contains a pointer to an int. ``` typedef struct Int40 { // a dynamically allocated array to hold a 40 // digit integer, stored in reverse order int *digits; } Int40; ``` This is what I have tried: ``` Int40 *parseString(char *str) { Int40 *value = malloc(sizeof(Int40) * MAX40); for (int i = 0; i < MAX40; i++) { value[i] = (int)str[i]; } return value; } int main() { Int40 *p; p = parseString("0123456789abcdef0123456789abcdef01234567"); printf("-> %d\n", *p); } ``` I know that an int can't contain 40 digits; that's why I tried to store each digit from the string in an array of integers, but my code doesn't seem to work. Edit: The number also contains letters because it is a hex number. Will I have to convert the ASCII value of each hex character to store it in the array of ints? How do I do that?
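Since the struct stores one numeric value per hex digit in reverse order, the parsing step reduces to mapping each character to its 0-15 value and reversing the result. The logic is sketched here in Python for clarity; in C, the per-character map would be `ch - '0'` for digits and `ch - 'a' + 10` for lowercase letters, with the `digits` member allocated as an array of 40 `int`s (not 40 `Int40` structs, as the code above does):

```python
def parse_hex_digits(s):
    # int(ch, 16) maps '0'-'9' and 'a'-'f' (or 'A'-'F') to 0..15,
    # the same result the ASCII arithmetic in C would produce.
    digits = [int(ch, 16) for ch in s]
    digits.reverse()    # least-significant digit first, as the struct requires
    return digits

d = parse_hex_digits("0123456789abcdef0123456789abcdef01234567")
print(len(d), d[:4])  # 40 [7, 6, 5, 4]
```

With the digits stored least-significant first, arithmetic such as addition with carry walks the array from index 0 upward.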
2018/03/19
619
2,465
<issue_start>username_0: I am a new employee at the company. The person before me had built some tables in BigQuery. I want to investigate the create-table query for a particular table. The things I would want to check using the query are: 1. What joins were used? 2. What other tables were used to make the table in question? I have not worked with BigQuery before, but I did my due diligence by reading tutorials and the documentation. I could not find anything related there.<issue_comment>username_1: Brief outline of your actions below: Step 1 - gather all query jobs of that user using [Jobs.list API](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/list) - you must have Is Owner permission for respective projects to get someone else's jobs Step 2 - extract only those jobs run by the [`user`](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/list#jobs.user_email) you mentioned and referencing your table of interest - using [`destination table`](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/list#jobs.configuration.query.destinationTable) attribute Step 3 - for those extracted jobs - just simply check respective [`queries`](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/list#jobs.configuration.query.query) which allow you to learn how that table was populated Hth! Upvotes: 4 [selected_answer]<issue_comment>username_2: I had been looking for an answer for a long time. Finally found it: go to the three-bars menu at the top left. From there, go to the Analytics tab. Select BigQuery, under which you will find the Scheduled queries option; click on that. In the filter tab you can enter keywords and get the required query for the table. Upvotes: 0 <issue_comment>username_3: For me, I was able to go through my query history and find the query I used. Step 1. Go to the BigQuery UI; at the bottom there are `personal history` and `project history` tabs. If you can use the same account that was used to execute the query, I recommend `personal history`. Step 2. Click on the tab and there will be a list of queries ordered from most recently run. Check the time the table was created and find a query that ran before the table creation time. Since the query runs first and then creates the table, there will be slight differences; for me it was within a few seconds. Step 3. After you find the query used to create the table, simply copy it. And you're done. Upvotes: 0
2018/03/19
801
2,843
<issue_start>username_0: I am having trouble solving a problem, in Scheme, that sums all elements in a list in which all elements themselves are lists. I wrote a function `(polyAdd l1 l2)` which takes two lists and sums them together. For example, `(polyAdd '(1 2) '(3 4 5))` will give `(4 6 5)`. The code is here: ``` (define (polyAdd l1 l2) (cond ((null? (and l1 l2))'()) ((null? l1) (cons (+ 0 (car l2))(polyAdd l1 (cdr l2)))) ((null? l2) (cons (+ 0 (car l1))(polyAdd l2 (cdr l1)))) (else (cons (+ (car l1) (car l2))(polyAdd (cdr l1) (cdr l2)))))) ``` Now I am trying to write a function `(polyAddList l)` which takes a list of lists and adds them together using my `polyAdd` function. For example, `(polyAddList '((1 2)(3 4)(5 6 7)))` will give `(9 12 7)`, but I have no idea how this should be implemented. Can someone help me? Thanks!
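`polyAddList` is a fold of `polyAdd` over the list: for a non-empty `l`, something like `(define (polyAddList l) (if (null? (cdr l)) (car l) (polyAdd (car l) (polyAddList (cdr l)))))`. Seeding the recursion with `(car l)` rather than `'()` matters here, because `'()` is truthy in Scheme, so `(null? (and l1 l2))` in the `polyAdd` above only ever tests `l2`, and `(polyAdd x '())` returns `'()`. The same fold is sketched below in Python, with list positions standing in for polynomial coefficients:

```python
from functools import reduce
from itertools import zip_longest

def poly_add(p1, p2):
    # Pairwise sum, padding the shorter list with zeros, the same
    # job the Scheme polyAdd does.
    return [a + b for a, b in zip_longest(p1, p2, fillvalue=0)]

def poly_add_list(polys):
    # polyAddList is a fold of polyAdd over the list of lists.
    return reduce(poly_add, polys, [])

print(poly_add_list([[1, 2], [3, 4], [5, 6, 7]]))  # [9, 12, 7]
```

The Python `poly_add` handles an empty argument correctly, so `[]` is a safe seed here; in the Scheme version the `(car l)` seed works around the empty-list quirk instead.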
2018/03/19
692
2,398
<issue_start>username_0: **Blade** ``` ``` **app.js code** ``` const app = new Vue({ el: '#app', data: { domain_Name: document.body.querySelector("div[id='Domain_Name']").innerHTML } }); ``` **Component Code** ``` export default { props: ['domain_Name'], methods: { ValidateUser() { debugger; //this.domain_Name is null here } } } ``` I can confirm that the code below holds a valid value and is not undefined. ``` document.body.querySelector("div[id='Domain_Name']").innerHTML ``` This code was present in many components, so I am trying to move it into a single centralized place.<issue_comment>username_1: When the data property is instantiated, the Domain_Name div is presumably not yet populated. Instead, you can set this up in the [mounted lifecycle hook](https://v2.vuejs.org/v2/guide/instance.html#Instance-Lifecycle-Hooks). ``` const app = new Vue({ el: '#app', data: { domain_Name: null }, mounted(){ this.domain_Name = document.body.querySelector("div[id='Domain_Name']").innerHTML } }); ``` Upvotes: 0 <issue_comment>username_2: You have to pass `domain_Name` to the component. You do that using props, but wait: HTML attributes are case-insensitive. This means that for `domain_Name` to work as it is, you would need to declare it like: ``` ``` which is **weird**. Vue does show suggestions: > > [Vue tip]: Prop "`domain_name`" is passed to component , but the declared prop name is "`domain_Name`". Note that **HTML attributes are case-insensitive and camelCased props need to use their kebab-case equivalents** when using in-DOM templates. You should probably use "`domain_-name`" instead of "`domain_Name`". > > > Although the above will work, to achieve cleaner code, my suggestions are: * change the `:domain_-name="domain_Name"` prop to `:domain-name="domain_Name"`; * change `props: ['domain_Name']` to `props: ['domainName'],` And, inside the component, use `this.domainName`. Demo below. 
```js Vue.component('login', { template: '#login-tpl', props: ['domainName'], methods: { ValidateUser() { console.log('domain name:', this.domainName); } } }); new Vue({ el: '#app', data: { domain_Name: document.body.querySelector("div[id='Domain_Name']").innerHTML } }) ``` ```html CLICK ME AND CHECK CONSOLE www.example.com ``` Upvotes: 1
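As an aside, the camelCase-to-kebab-case mapping quoted in the Vue tip above can be reproduced with a few lines of plain JavaScript. This is only an illustrative sketch that mirrors the behavior of Vue 2's internal `hyphenate` helper, not Vue's actual source:

```javascript
// Sketch of the camelCase -> kebab-case conversion Vue applies to prop
// names when matching in-DOM attributes: put a hyphen before each
// uppercase letter that is not at a word boundary, then lowercase.
function hyphenate(str) {
  return str.replace(/\B([A-Z])/g, '-$1').toLowerCase();
}

console.log(hyphenate('domainName'));  // "domain-name"
console.log(hyphenate('domain_Name')); // "domain_-name"
```

This also explains the odd `domain_-name` suggestion: the underscore is kept as-is and only the capital `N` is hyphenated and lowercased, which is why a plain camelCase name such as `domainName` is the cleaner choice.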
2018/03/19
645
2,288
<issue_start>username_0: I have a filter attribute in ASP.NET Core 2.0; see my code snippet below. The problem is that I always get status code 200. Even when the actual status code is 500, I still get 200. How do I get the actual status code? ``` public void OnActionExecuted(ActionExecutedContext context) { try { var controller = context.Controller as APIServiceBase; var statusCode = controller.Response.StatusCode; .. .. } catch { } } ```
2018/03/19
1,189
4,198
<issue_start>username_0: My view has some simple code: ``` HERE{{ files.length }}THERE #### Selected Files #####  (X) | File Name | File Size | Upload Status | Remove All | | --- | --- | --- | --- | | {{ file.relativePath }} | {{file.size}} | | | ``` My component is: ``` import { Component, OnInit } from '@angular/core'; import { UploadEvent, UploadFile, FileSystemFileEntry } from 'ngx-file-drop'; @Component({ selector: 'upload-modal', // providers: [ ], styleUrls: [ './upload.component.scss' ], templateUrl: './upload.component.html' }) export class UploadComponent implements OnInit { constructor() { } public files: UploadFile[] = []; public ngOnInit() { } dropFile(event) { let droppedFiles: UploadFile[] = [] if(this.files.length === 1) { return } const fileEntry = event.files[0].fileEntry as FileSystemFileEntry; fileEntry.file(fileData => { console.log('before', this.files) this.files.push({ name: fileEntry.name, size: fileData.size }) console.log('after', this.files) }) } handleFileInput() { alert('files') } removeAll(event) { event.stopPropagation() this.files = [] } } ``` When my component's `dropFile` function runs, the `console` output is correct, but the view doesn't show the updated files. I'm using `angular 5.2.0`. What am I doing wrong?<issue_comment>username_1: I think Angular is not aware that you changed the model behind the scenes. By default, Angular uses [zones](https://angular.io/guide/glossary#zone) to trigger its change detection mechanism after async events. Although this works fine for most async events you'll encounter, the zone implementation used by Angular, Zone.js, only supports a handful of non-standard APIs, and [FileSystemFileEntry](https://developer.mozilla.org/en-US/docs/Web/API/FileSystemFileEntry) isn't [one of them](https://github.com/angular/zone.js/blob/dffae8b6f528c84ec18a24a179b1d0760436e35b/NON-STANDARD-APIS.md). 
In cases like this, your best bet is to manually trigger change detection as explained [here](https://stackoverflow.com/questions/34827334/triggering-change-detection-manually-in-angular). Upvotes: 2 <issue_comment>username_2: You can manually detect changes using the change detector. I don't recommend using this method in all situations, but in this one it's probably the best way. Add ChangeDetectorRef to the import statement ``` import { Component, OnInit, ChangeDetectorRef } from '@angular/core'; ``` then add it to your constructor ``` constructor(private cd: ChangeDetectorRef) {} ``` Then, after you drop the files, you can use the change detector to trigger change detection. ``` dropFile(event) { // add this to the end of the dropFile method this.cd.detectChanges(); } ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: The change detector call needs to be the last line of the callback, as user4091335 pointed out, and not the last line of the `dropped` method. ``` dropped(event: any) { for (const droppedFile of event.files) { const fileEntry = droppedFile.fileEntry as FileSystemFileEntry; fileEntry.file((file: File) => { this.files.push(file); this.ref.detectChanges(); }) } } ``` Upvotes: 0 <issue_comment>username_4: I had the same issue trying to update the view, and I solved it using promises instead of triggering change detection with ChangeDetectorRef. Worked like a charm. Here's my solution: ``` triggerUpload(evt: UploadEvent) { let files = evt.files; let self = this; this.fillupFiles(files).then(function(response) { self.uploadAndRender(); }); } fillupFiles(files): Promise<boolean> { let self = this; let promise = new Promise<boolean>(function(resolve, reject) { var count = files.length; var index = 0; for (let droppedFile of files) { if (droppedFile.fileEntry.isFile) { let fileEntry = droppedFile.fileEntry as FileSystemFileEntry; fileEntry.file((file: File) => { index++; self.files.push(file); if (index == count) { resolve(true); } }); } } }); return promise; } ``` Upvotes: 0
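username_4's promise-based approach can be boiled down to a framework-free sketch: wrap the callback-only `file()` API in a Promise, so the follow-up work runs inside a `then` callback (which Zone.js patches, so Angular's change detection fires afterwards). The `fakeFileEntry` object below is a hypothetical stand-in for the browser's `FileSystemFileEntry`, which does not exist outside the browser:

```javascript
// Wrap a callback-only API (modeled on FileSystemFileEntry.file) in a Promise.
function readEntry(fileEntry) {
  return new Promise((resolve, reject) => {
    fileEntry.file(file => resolve(file), err => reject(err));
  });
}

// Hypothetical stand-in for a FileSystemFileEntry: its file() method
// delivers its result asynchronously through a success callback.
const fakeFileEntry = {
  file(onSuccess) {
    setTimeout(() => onSuccess({ name: 'photo.png', size: 1024 }), 0);
  }
};

const files = [];
readEntry(fakeFileEntry).then(file => {
  files.push(file);
  console.log(files.length); // 1
});
```

Inside Angular, the same wrapper lets a drop handler do `readEntry(fileEntry).then(f => this.files.push(f))` without an explicit `detectChanges()` call, because `then` callbacks on a zone-patched Promise run inside Angular's zone.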