| column | type | min | max |
| --- | --- | --- | --- |
| date | stringlengths | 10 | 10 |
| nb_tokens | int64 | 60 | 629k |
| text_size | int64 | 234 | 1.02M |
| content | stringlengths | 234 | 1.02M |
2018/03/19
569
2,102
<issue_start>username_0: I'm trying to uninstall the current version of Eclipse IDE in my RHEL machine by simply deleting all the files like: ``` sudo rm -rf ~/.eclipse sudo rm -rf ~/eclipse-workspace ``` I also tried ``` sudo yum remove 'eclipse*' ``` However, these didn't seem to solve the purpose. Any help will be appreciated, thanks!<issue_comment>username_1: ``` def clean(self): cleaned_data = super().clean() if not self.cleaned_data['learn1'] and not self.cleaned_data['teach1']: raise forms.ValidationError("Specify at least one") else: return cleaned_data def save(self, user): user.is_profile_to.learn1 = self.cleaned_data['learn1'] user.is_profile_to.teach1 = self.cleaned_data['teach1'] user.save() user.is_profile_to.save() ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: For completeness, you can also handle this at the view level (especially, class-based). You use `form.add_error()` and return `form_invalid()`: ``` class MyView( FormView): # or UpdateView, etc. # usual definitions def form_valid( self, form): if not form.cleaned_data['learn1'] and not form.cleaned_data['teach1']: form.add_error('', 'You must specify either "Learn" or "Teach"') return self.form_invalid( form) super().form_valid( form) # if you want the standard stuff, or # process the form # return a response ``` The first argument of `form.add_error` is the name of the form field to attach the error to. `''` specifies a non-form error. You might want the same error attached to both fields instead, in which case just attach two errors ``` form.add_error('learn1', 'You must specify either "Learn" or "Teach"') form.add_error('teach1', 'You must specify either "Learn" or "Teach"') return self.form_invalid( form) ``` Doing it this way allows you to access the view context in deciding whether or not something is an error. You can consult the url args or kwargs, or consider who is the user. Upvotes: 0
2018/03/19
1,469
4,573
<issue_start>username_0: I have a `dask dataframe` grouped by the index (`first_name`). ``` import pandas as pd import numpy as np from multiprocessing import cpu_count from dask import dataframe as dd from dask.multiprocessing import get from dask.distributed import Client NCORES = cpu_count() client = Client() entities = pd.DataFrame({'first_name':['Jake','John','Danae','Beatriz', 'Jacke', 'Jon'],'last_name': ['<NAME>', 'Foster', 'Smith', 'Patterson', 'Toro', 'Froster'], 'ID':['X','U','X','Y', '12','13']}) df = dd.from_pandas(entities, npartitions=NCORES) df = client.persist(df.set_index('first_name')) ``` (Obviously `entities` in the real life is several thousand rows) I want to apply a user defined function to each grouped dataframe. I want to compare each row with all the other rows in the group (something similar to [Pandas compare each row with all rows in data frame and save results in list for each row](https://stackoverflow.com/questions/35459316/pandas-compare-each-row-with-all-rows-in-data-frame-and-save-results-in-list-for)). The following is the function that I try to apply: ``` def contraster(x, DF): matches = DF.apply(lambda row: fuzz.partial_ratio(row['last_name'], x) >= 50, axis = 1) return [i for i, x in enumerate(matches) if x] ``` For the test `entities` data frame, you could apply the function as usual: ``` entities.apply(lambda row: contraster(row['last_name'], entities), axis =1) ``` And the expected result is: ``` Out[35]: 0 [0, 4] 1 [1, 5] 2 [2] 3 [3] 4 [0, 4] 5 [1, 5] dtype: object ``` When `entities` is huge, the solution is use `dask`. Note that `DF` in the `contraster` function must be the groupped dataframe. I am trying to use the following: ``` df.groupby('first_name').apply(func=contraster, args=????) ``` But How should I specify the grouped dataframe (i.e. 
`DF` in `contraster`?)<issue_comment>username_1: The function you provide to groupby-apply should take a Pandas dataframe or series as input and ideally return one (or a scalar value) as output. Extra parameters are fine, but they should be secondary, not the first argument. This is the same in both Pandas and Dask dataframe. ``` def func(df, x=None): # do whatever you want here # the input to this function will have all the same first name return pd.DataFrame({'x': [x] * len(df), 'count': len(df), 'first_name': df.first_name}) ``` You can then call df.groupby as normal ``` import pandas as pd import dask.dataframe as dd df = pd.DataFrame({'first_name':['Alice', 'Alice', 'Bob'], 'last_name': ['Adams', 'Jones', 'Smith']}) ddf = dd.from_pandas(df, npartitions=2) ddf.groupby('first_name').apply(func, x=3).compute() ``` This will produce the same output in either pandas or dask.dataframe ``` count first_name x 0 2 Alice 3 1 2 Alice 3 2 1 Bob 3 ``` Upvotes: 3 <issue_comment>username_2: With a little bit of guesswork, I think that the following is what you are after. ``` def mapper(d): def contraster(x, DF=d): matches = DF.apply(lambda row: fuzz.partial_ratio(row['last_name'], x) >= 50, axis = 1) return [d.ID.iloc[i] for i, x in enumerate(matches) if x] d['out'] = d.apply(lambda row: contraster(row['last_name']), axis =1) return d df.groupby('first_name').apply(mapper).compute() ``` Applied to your data, you get: ``` ID first_name last_name out 2 X Danae Smith [X] 4 12 Jacke Toro [12] 0 X Jake <NAME> [X] 1 U John Foster [U] 5 13 Jon Froster [13] 3 Y Beatriz Patterson [Y] ``` i.e., because you group by **first\_name**, each group only contains one item, which matches only with itself. 
If, however, you had some **first\_name** values that were in multiple rows, you would get matches: ``` entities = pd.DataFrame( {'first_name':['Jake','Jake', 'Jake', 'John'], 'last_name': ['<NAME>', 'Toro', 'Smith', 'Froster'], 'ID':['Z','U','X','Y']}) ``` Output: ``` ID first_name last_name out 0 Z Jake <NAME> [Z, U] 1 U Jake Toro [Z, U] 2 X Jake Smith [X] 3 Y John Froster [Y] ``` If you do not require *exact* matches on the **first\_name**, then maybe you need to sort/set index by the first\_name and use `map_partitions` in a similar way. In that case, you will need to rephrase your question. Upvotes: 4 [selected_answer]
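Since `fuzz.partial_ratio` comes from the third-party fuzzywuzzy package, the group-then-match idea can also be sketched with the standard library alone. Note this is only a sketch: `difflib`'s ratio is a rough stand-in for fuzzywuzzy's scoring (the numbers differ), and the helper name `match_ids` is illustrative, not from the question.

```python
from difflib import SequenceMatcher
from itertools import groupby
from operator import itemgetter

def partial_ratio(a, b):
    # rough stdlib stand-in for fuzz.partial_ratio (scoring is not identical)
    return int(SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100)

def match_ids(rows, threshold=50):
    # within one first_name group, list the IDs whose last_name matches each row
    return {row['ID']: [other['ID'] for other in rows
                        if partial_ratio(other['last_name'], row['last_name']) >= threshold]
            for row in rows}

entities = [
    {'first_name': 'Jake', 'last_name': 'Toro', 'ID': 'Z'},
    {'first_name': 'Jake', 'last_name': 'Toro', 'ID': 'U'},
    {'first_name': 'John', 'last_name': 'Froster', 'ID': 'Y'},
]
# groupby requires the data sorted by the grouping key, mirroring the set_index step
entities.sort(key=itemgetter('first_name'))
out = {}
for _, group in groupby(entities, key=itemgetter('first_name')):
    out.update(match_ids(list(group)))
print(out)  # {'Z': ['Z', 'U'], 'U': ['Z', 'U'], 'Y': ['Y']}
```

As in the accepted answer, each row only ever matches rows sharing its `first_name`, because the comparison happens inside the group.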
2018/03/19
230
728
<issue_start>username_0: I have an array of multiple types `( Int32 | Char | String )` and need to remove a specific element. Is there a simple way to do that?<issue_comment>username_1: You may use [Array(T).delete\_at(index)](https://crystal-lang.org/api/0.20.4/Array.html#delete_at%28index%3AInt%29-instance-method) to delete an element at a given index in your array, or [Array(T).delete(obj)](https://crystal-lang.org/api/0.20.4/Array.html#delete%28obj%29-instance-method) that deletes all elements in the array that are equal to *obj* Upvotes: 3 [selected_answer]<issue_comment>username_2: Inspired by Shree's now deleted answer `new_arr = arr.reject{ |element| element == "whatever"}` or could use `reject!` Upvotes: 1
2018/03/19
232
701
<issue_start>username_0: I want to convert C style `LOGD("hello");` to `LOGD<<"hello";` in eclipse find/replace. how can I do that?
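The substitution itself is a capture-group find/replace. As a sketch, here is the same pattern expressed with Python's stdlib `re` module; in Eclipse's Find/Replace dialog (with "Regular expressions" enabled) the replacement group would be written `$1` instead of `\1`.

```python
import re

# hypothetical log lines; the capture group keeps the quoted argument intact
src = 'LOGD("hello"); LOGD("world");'
out = re.sub(r'LOGD\((".*?")\);', r'LOGD<<\1;', src)
print(out)  # LOGD<<"hello"; LOGD<<"world";
```

The non-greedy `.*?` keeps each match from spanning across two separate `LOGD(...)` calls on the same line.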
2018/03/19
1,269
4,346
<issue_start>username_0: I have some xml like ``` ``` and I want to get grammar-tag, set my namespace and save all properties(prop1, prop2), children nodes and so on. I just move to grammar-tag and call `xmlNodePtr copied = xmlCopyNode(node, 1);`. After that I remove some properties, add new and so on(in copied). After that I want to replace namespace `"/path/to/namespace"` to `"/path/to/namespace2"`. There is no function like `xmlRemoveNs` or `xmlReplaceNs`, so I just free namespace and set new. ``` if (copied->ns) { xmlFree((void*)copied->ns->href); copied->ns->href = xmlStrdup((const xmlChar *)"/path/to/namespace2"); } ``` but it looks weird and a little awful. Is there way to replace namespace, copy without namespace or delete namespace and set new?<issue_comment>username_1: The function `xmlFree()` only free the memory allocated by some library function and that is not what are you searching for. Try to use for example `xmlSetNsProp()`: ``` xmlAttrPtr xmlSetNsProp(xmlNodePtr node, xmlNsPtr ns, const xmlChar * name, const xmlChar * value) ``` > > Set (or reset) an attribute carried by a node. The ns structure must be in scope, this is not checked > > > node: the node > > > ns: the namespace definition > > > name: the attribute name > > > value: the attribute value > > > Returns: the attribute pointer. > > > You will find more information here: <http://xmlsoft.org/html/libxml-tree.html> and I think you can find the function that best suits your needs. In the source code it seems that the namespace is intended as `ns->href` : ``` /** * xmlSetNsProp: * @node: the node * @ns: the namespace definition * @name: the attribute name * @value: the attribute value * * Set (or reset) an attribute carried by a node. * The ns structure must be in scope, this is not checked * * Returns the attribute pointer. 
*/ xmlAttrPtr xmlSetNsProp(xmlNodePtr node, xmlNsPtr ns, const xmlChar *name, const xmlChar *value) { xmlAttrPtr prop; if(ns && (ns->href == NULL)) return (NULL); prop = xmlGetPropNodeInternal(node, name, (ns != NULL) ? ns->href : NULL, 0); if(prop != NULL) { /* * Modify the attribute's value. */ if(prop->atype == XML_ATTRIBUTE_ID) { xmlRemoveID(node->doc, prop); prop->atype = XML_ATTRIBUTE_ID; } if(prop->children != NULL) xmlFreeNodeList(prop->children); prop->children = NULL; prop->last = NULL; prop->ns = ns; if(value != NULL) { xmlNodePtr tmp; if(!xmlCheckUTF8(value)) { xmlTreeErr(XML_TREE_NOT_UTF8, (xmlNodePtr)node->doc, NULL); if (node->doc != NULL) node->doc->encoding = xmlStrdup(BAD_CAST "ISO-8859-1"); } prop->children = xmlNewDocText(node->doc, value); prop->last = NULL; tmp = prop->children; while(tmp != NULL) { tmp->parent = (xmlNodePtr)prop; if(tmp->next == NULL) prop->last = tmp; tmp = tmp->next; } } if(prop->atype == XML_ATTRIBUTE_ID) xmlAddID(NULL, node->doc, value, prop); return (prop); } /* * No equal attr found; create a new one. */ return (xmlNewPropInternal(node, ns, name, value, 0)); } ``` Upvotes: 2 <issue_comment>username_2: This seems to work ``` xmlSetNs(copied, nullptr); ``` Upvotes: 1 <issue_comment>username_3: I'm going to hang my answer off the title > > libxml2 ... /remove namespace > > > and I think it will also help with the general case of replacing a namespace. To completely wipe all namespace info from a libxml2 doc, I used the following code: ``` // remove xmlns="..." void remove_NsDef(xmlNodePtr node) { if (node->nsDef) { ::xmlFreeNsList(node->nsDef); node->nsDef = nullptr; } } void remove_ns_fully(xmlNodePtr node) { remove_NsDef(node); ::xmlSetNs(node, nullptr); // set NS empty for unprefixed XPath lookup } void remove_ns_fully_recursive(xmlNodePtr node) { remove_ns_fully(node); for (auto child = node->children; child != nullptr; child = child->next) { remove_ns_fully_recursive(child); } } ``` Upvotes: 0
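The strip-all-namespaces idea in the last answer is not specific to libxml2. As a sketch, here is a stdlib-Python version with `xml.etree.ElementTree`, which stores tags as `{uri}local`, so removing a namespace amounts to rewriting the tags; the element names follow the question's grammar example.

```python
import xml.etree.ElementTree as ET

def strip_namespaces(root):
    # ElementTree keeps tags as '{uri}local'; drop the '{uri}' part everywhere
    for node in root.iter():
        if '}' in node.tag:
            node.tag = node.tag.split('}', 1)[1]
    return root

doc = ET.fromstring('<g:grammar xmlns:g="/path/to/namespace"><g:rule/></g:grammar>')
strip_namespaces(doc)
print(ET.tostring(doc, encoding='unicode'))  # <grammar><rule /></grammar>
```

Reapplying the new namespace from the question would then be a matter of rewriting each tag back to `{/path/to/namespace2}local` in the same loop.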
2018/03/19
1,350
5,289
<issue_start>username_0: I want to use batch statement to delete a row from 3 tables in my database to ensure atomicity. The partition key is going to be the same in all the 3 tables. In all the examples that I read about batch statements, all the queries were for a single table? In my case, is it a good idea to use batch statements? Or, should I avoid it? I'm using Cassandra-3.11.2 and I execute my queries using the C++ driver.<issue_comment>username_1: Yes, you can use batch to ensure atomicity. Single partition batches are faster (same table and same partition key) but only for a limited number of partitions (in your case three) it is okay. But don't use it for performance optimization (Ex: reduce of multiple requests). If you need atomicity you can use it. You can check below links: [Cassandra batch query performance on tables having different partition keys](https://stackoverflow.com/questions/42929928/cassandra-batch-query-performance-on-tables-having-different-partition-keys/42946757#42946757) [Cassandra batch query vs single insert performance](https://stackoverflow.com/questions/42930498/cassandra-batch-query-vs-single-insert-performance/42947125#42947125) [How single parition batch in cassandra function for multiple column update?](https://stackoverflow.com/questions/39121092/how-single-parition-batch-in-cassandra-function-for-multiple-column-update) **EDITED** > > > > > > In my case, the tables are different but the partition key is the same in all 3 tables. So is this a special case of single partition batch or is it something entirely different. > > > > > > > > > For different tables partitions are also different. So this is a multi partition batch. **LOGGED** batches are used to ensure atomicity for different partitions (different tables or different partition keys). **UNLOGGED** batches are used to ensure atomicity and isolation for single partition batch. If you use **UNLOGGED** batch for multi partition batch atomicity will not be ensured. 
Default is **LOGGED** batch. For a single partition batch the default is **UNLOGGED**, because a single partition batch is considered a single row mutation. For a single row update, there is no need to use a **LOGGED** batch. To learn about **LOGGED** vs **UNLOGGED** batches, I have shared a link below. > > Multi partition batches should only be used to achieve atomicity for a few writes on different tables. Apart from this they should be avoided because they’re too expensive. > > > Single partition batches can be used to achieve atomicity and isolation. They’re not much more expensive than normal writes. > > > But you can use a multi partition **LOGGED** batch as long as the partitions are limited. A very useful doc on batches, with all the details, is linked below; if you read it, the confusion should be cleared up. [Cassandra - to BATCH or not to BATCH](https://inoio.de/blog/2016/01/13/cassandra-to-batch-or-not-to-batch/) **Partition Key tokens vs row partition** Table partitions and partition key tokens are different. The partition key is used to decide on which node the data resides. For the same row key, partition tokens are the same, so the rows reside on the same node. For different partition keys, or for the same key in different tables, they are different row mutations. You cannot get data with one query for different partition keys or from different tables, even for the same key. Coordinator nodes have to treat them as different requests or mutations and request the actual data from the replicated nodes separately. It's the internal structure of how C\* stores data. > > Every table even has its own directory structure, making it clear that a partition from one table will never interact with the partition of another. 
> > > [Does the same partition key in different cassandra tables add up to cell theoretical limit?](https://stackoverflow.com/questions/36700859/does-the-same-partition-key-in-different-cassandra-tables-add-up-to-cell-theoret) To know details how C\* maps data check this link: [Understanding How CQL3 Maps to Cassandra's Internal Data Structure](https://www.slideshare.net/DataStax/understanding-how-cql3-maps-to-cassandras-internal-data-structure) Upvotes: 4 [selected_answer]<issue_comment>username_2: Yes, this is a good use-case for `BATCH` according to the Cassandra documentation. See the "Note:" on <https://docs.datastax.com/en/dse/6.0/cql/cql/cql_using/useBatchGoodExample.html> > > If there are two different tables in the same keyspace and the two tables have the same partition key, this scenario is considered a single partition batch. There will be a single mutation for each table. This happens because the two tables could have different columns, even though the keyspace and partition are the same. Batches allow a caller to bundle multiple operations into a single batch request. All the operations are performed by the same coordinator. The best use of a batch request is for a single partition in multiple tables in the same keyspace. Also, batches provide a guarantee that mutations will be applied in a particular order. > > > Specifically, if they have the same partition key, this will be considered a single-partition batch. Hence: "The best use of a batch request is for a single partition in multiple tables in the same keyspace." Upvotes: 0
2018/03/19
662
2,472
<issue_start>username_0: Why can't I just write class.`kotlin` instead of writing class.java. Because `AndroidMeActivity` is a kotlin class and I am getting an error `("Unsoloved refrence: java")` when I write this, How can I fix it. ``` val intent = Intent(this, AndroidMeActivity::class.java) ```<issue_comment>username_1: On the Java platform, the runtime component required for using the reflection features is distributed as a separate JAR file `(kotlin-reflect.jar)`. This is done to reduce the required size of the runtime library for applications that do not use reflection features. If you do use reflection, please make sure that the .jar file is added to the classpath of your project. see link <https://kotlinlang.org/docs/reference/reflection.html#class-references> Upvotes: 0 <issue_comment>username_2: > > "Unsoloved refrence: java" > > > Read [**`Reflection`**](https://kotlinlang.org/docs/reference/reflection.html#class-references) > > Reflection is a set of language and library features that allows for > introspecting the structure of your own program at runtime. > > > Make sure, You added **`org.jetbrains.kotlin:kotlin-gradle-plugin`** ``` buildscript { ext.kotlin_version = '1.2.30' ext.gradle_version = '3.0.1' repositories { mavenCentral() } dependencies { classpath "com.android.tools.build:gradle:$gradle_version" classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" } } ``` And check you added below in your module level **`build.gradle`** ``` apply plugin: 'com.android.application' apply plugin: 'kotlin-android' apply plugin: 'kotlin-android-extensions' ``` Upvotes: 1 <issue_comment>username_3: if your AndroidMeActivity class is java class then used below code .. ``` val intent=Intent(this,MainActivity::class.java) startActivity(intent) ``` if you are used android studio 3.0 above then is working if below then you can add kotlin plugins in android studio and setup configratution. 
I suggest you update to Android Studio 3.0 or above, which supports Kotlin automatically; when you start a new project, the new-project dialog (where you define your app name) provides an "Include Kotlin support" option. Upvotes: 0 <issue_comment>username_4: > > You can use intent like this. > > > ``` val button = findViewById(R.id.btn\_login); button.setOnClickListener{ val intent = Intent(this, OTPActivity::class.java) startActivity(intent) } ``` Upvotes: 0
2018/03/19
813
2,954
<issue_start>username_0: I have a store procedure in SQL Server. I have a filtering page in my application, I want to select all `CDT_ID`, if I do not input the name and lower\_age and upper\_age, it will show all the `CDT_ID`. But if I input the name and age, it will show all the `CDT_ID` where `CDT_NAME` AND `CDT_AGE` range is like I input in the filtering column. My query is like in this below: ``` select CDT_ID from CANDIDATE where CDT_NAME LIKE iif(@name is null, '', @name) AND CDT_NAME between (iif(@lower_age is null, '', @lower_age) and iif(@upper_age is null, '', @upper_age)) ``` The problem is my query result show nothing when I execute my store procedure. And if I run the query without where it shows a lot of `CDT_ID`. Do you know how to fix my 'where' clauses?
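A common fix for optional filters is the `(@param IS NULL OR predicate)` pattern, so a missing input disables its predicate instead of comparing against an empty string (note also that the original query compares `CDT_NAME`, not `CDT_AGE`, against the age range, which is one reason nothing matches). A minimal sketch of the pattern, using Python's stdlib `sqlite3` in place of SQL Server but keeping the question's table and column names; the sample rows are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE CANDIDATE (CDT_ID INTEGER, CDT_NAME TEXT, CDT_AGE INTEGER)")
conn.executemany("INSERT INTO CANDIDATE VALUES (?, ?, ?)",
                 [(1, 'Alice', 25), (2, 'Bob', 40), (3, 'Alina', 30)])

# each (:param IS NULL OR ...) clause turns itself off when the input is missing
QUERY = """
SELECT CDT_ID FROM CANDIDATE
WHERE (:name IS NULL OR CDT_NAME LIKE '%' || :name || '%')
  AND (:lower_age IS NULL OR CDT_AGE >= :lower_age)
  AND (:upper_age IS NULL OR CDT_AGE <= :upper_age)
"""

# no inputs -> every row comes back
all_ids = [r[0] for r in conn.execute(QUERY, {'name': None, 'lower_age': None, 'upper_age': None})]
# name + age range inputs -> only matching rows
some_ids = [r[0] for r in conn.execute(QUERY, {'name': 'Ali', 'lower_age': 20, 'upper_age': 35})]
print(all_ids, some_ids)  # [1, 2, 3] [1, 3]
```

In T-SQL the same clauses would read `(@name IS NULL OR CDT_NAME LIKE '%' + @name + '%')` and `(@lower_age IS NULL OR CDT_AGE >= @lower_age)`, and so on.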
2018/03/19
822
3,119
<issue_start>username_0: I have read a study about TDD and one of the common issues (a survey among developers) stated that they are not really letting the test fail first. The authors then state: > > If a new test does not fail, programmers receive an indication that > the production code was not working as they thought it was and a code > revision might be necessary. Another problem that might occur is that > programmers cannot be sure about what made the test pass; nothing > ensures the new code was actually responsible for it. The test > implementation might have been wrong since the beginning. > > > I wonder, how can a TDD test ever pass first **(because of the production code, like they mention)**, if it is at the unit level? I mean, if everything is mocked (stubbed...), it is always isolated and thus should never be able to pass first.<issue_comment>username_1: Let's assume you have two classes `Calculator` and `Formatter`. `Calculator` calculates some value based on input and `Formatter` converts the value to a string for displaying. You already have some tests in your `FormatterTest`: * `test_value_is_formatted_as_number` * `test_empty_is_formatted_as_NA` Now you implement the new feature `Show zero values as N/A`. Following TDD, you will add a test to `Formatter`, `test_zero_is_formatted_as_NA`, that checks this first, and you expect it to fail: ```py def test_zero_is_formatted_as_NA(self): assert formatter.format(0) == 'N/A' ``` But it happens that it passes, and the reason is that `Formatter` already does this but `Calculator` returns a floating-point zero which has limited precision. 
```py def format(value): if value == 0 or value is None: return 'N/A' return format_as_string(value) ``` So the test passes, but if you write another test it would fail: ```py def test_very_small_number_is_treated_as_zero_and_formatted_as_NA(self): assert formatter.format(0.00000001) == 'N/A' ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Usually, situations like the one you describe happen when something is already implemented but another part of the system (using this implemented part) is somehow limiting it, for example by stronger preconditions. Then, if you do not know the code well, you might encounter such a surprise. Consider this example: ``` public string ShowScoreEvaluation(byte points) { switch(points) { case 3: return "You are good!"; case 2: return "Not bad!"; case 1: return "Quite bad"; case 0: return "You suck!"; } } //caller code if (Points>0) ShowScoreEvaluation(points) ``` In the code above, the calling code does not expect to call the method when Points=0. Maybe during the implementation of that method, the programmer just put something there (as a joke or a placeholder) even for the case when points=0. And now imagine that you join the project and get a new request: "When the player has 0 points, show an encouraging message blabla". You write a unit test with Points=0 expecting a string with length>0... and it does not fail, although you would expect it to. Upvotes: 0
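The accepted answer's Formatter example can be condensed into a runnable sketch showing both halves of the surprise: the new test that passes immediately, and the neighbouring case that still fails. Here `format_value` is a hypothetical stand-in for the answer's `formatter.format`.

```python
def format_value(value):
    # hypothetical formatter: exact zero and None render as 'N/A'
    if value == 0 or value is None:
        return 'N/A'
    return str(value)

# the TDD test you expected to fail first... already passes:
assert format_value(0) == 'N/A'

# ...while the case that motivated the feature still fails:
assert format_value(0.00000001) != 'N/A'  # tiny floats slip through as '1e-08'
```

Seeing the first assertion pass without any new production code is exactly the signal the quoted study describes: the behaviour already existed, just not for the inputs you cared about.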
2018/03/19
499
1,839
<issue_start>username_0: I have a MySQL database table with 1,000,000 records. I want to update one column in each row. **I tried**: ``` public function methodWorking() { $properties = Property::with('requests')->get(); foreach ($properties as $property) { $property->number_of_request = count($property->requests); $property->save(); } } ``` This one works, but its performance is very bad. **I want to write code like this:** ``` public function methodExpect() { $properties = Property::with('requests')->get(); $property_array = []; foreach ($properties as $property) { $property->number_of_request = count($property->requests); $property_array[] = $property; } Property::save($property_array); } ``` Is it possible with Laravel? Thanks.<issue_comment>username_1: As far as I know, there is no way to do bulk updates in one query with MySQL. You can use something inside a loop, like putting `Property::save($property_array);` inside your `foreach`. For more details see [this](https://laracasts.com/discuss/channels/eloquent/bulk-update?page=1) Upvotes: 1 <issue_comment>username_2: Changing all (or most) rows often indicates a poor schema design. Would you like to tell us what the column contains and why it needs a mass change? Consider removing that column from the table -- then provide the value some other way. Yes, an `UPDATE` of a million-row table takes a long time. This is because the old copy of each row is held on to, just in case there needs to be a `ROLLBACK`. To chunk the table, doing it in manageable pieces, see <http://mysql.rjweb.org/doc.php/deletebig#deleting_in_chunks> Upvotes: 0 <issue_comment>username_3: You can use [LaravelBatch](https://github.com/mavinoo/laravelBatch), it's very helpful (tested on 10,000 records) Upvotes: 0
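One alternative to looping over models, sketched here rather than given as Laravel code: push the counting into a single set-based `UPDATE`, so the database computes `number_of_request` for every row in one statement instead of a million `save()` round-trips. The sketch uses Python's stdlib `sqlite3` as a stand-in for MySQL; in Laravel a raw statement like this could be issued with `DB::statement(...)`. Table and column names mirror the question.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE properties (id INTEGER PRIMARY KEY, number_of_request INTEGER DEFAULT 0);
CREATE TABLE requests (id INTEGER PRIMARY KEY, property_id INTEGER);
INSERT INTO properties (id) VALUES (1), (2);
INSERT INTO requests (property_id) VALUES (1), (1), (2);
""")

# one statement updates every row: the correlated subquery does the counting,
# so no ORM loop and no per-row save()
conn.execute("""
UPDATE properties
SET number_of_request = (SELECT COUNT(*) FROM requests
                         WHERE requests.property_id = properties.id)
""")
counts = dict(conn.execute("SELECT id, number_of_request FROM properties"))
print(counts)  # {1: 2, 2: 1}
```

This sidesteps the N-queries problem entirely, at the cost of bypassing Eloquent model events for the updated rows.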
2018/03/19
547
1,857
<issue_start>username_0: I have this code from a tutorial and i was wondering how can i convert it to laravel eloquent method because currently it is in DB raw method. ``` // $match = DiraChatLog::select(DB::raw("SUM(numberofview) as count")) // ->orderBy("created_at") // ->groupBy(DB::raw("year(created_at)")) // ->get()->toArray(); // $match = array_column($match, 'count'); // $missing = DiraChatLog::select(DB::raw("SUM(numberofclick) as count")) // ->orderBy("created_at") // ->groupBy(DB::raw("year(created_at)")) // ->get()->toArray(); // $missing = array_column($missing, 'count'); // $noAnswer = DiraChatLog::select(DB::raw("SUM(numberofclick) as count")) // ->orderBy("created_at") // ->groupBy(DB::raw("year(created_at)")) // ->get()->toArray(); // $noAnswer = array_column($noAnswer, 'count'); ```<issue_comment>username_1: as I know, No way to do bulk updates in one query with MySQL. You can use something inside a loop like putting `Property::save($property_array);` inside your `foreach`. for more details see [this](https://laracasts.com/discuss/channels/eloquent/bulk-update?page=1) Upvotes: 1 <issue_comment>username_2: Changing all (or most) rows often indicates a poor schema design. Would you like to tell us what the column contains and why it needs a mass change? Consider removing that column from the table -- then provide the value some other way. Yes, `UPDATE` of a million row table takes a long time. This is because the old copy of each row is held on to, just in case there needs to be a `ROLLBACK`. To chunk the table, doing it in manageable pieces, see <http://mysql.rjweb.org/doc.php/deletebig#deleting_in_chunks> Upvotes: 0 <issue_comment>username_3: You can use [LaravelBatch](https://github.com/mavinoo/laravelBatch) it's very helpful (tested on 10 000 records ) Upvotes: 0
2018/03/19
1,060
3,107
<issue_start>username_0: How to achieve `groupBy` with native javascript? 【Definition of `groupBy`】 Creates an object composed of keys generated from the results of running each element of collection thru iteratee. The order of grouped values is determined by the order they occur in collection. The corresponding value of each key is an array of elements responsible for generating the key. The iteratee is invoked with one argument: (value). 【Expect Output】 `groupBy([6.1, 4.2, 6.3], Math.floor); // => { '4': [4.2], '6': [6.1, 6.3] }` `groupBy(['one', 'two', 'three'], 'length'); // => { '3': ['one', 'two'], '5': ['three'] }`<issue_comment>username_1: You can use [`Array.reduce()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce) with an object to collect the items. The key is created by applying the iteratee to the item. The iteratee can be a string or a function, so we need to check the type, and if it's a string create function that extracts the property from the item. The collection can be an array or an object, and we can use [`Object.values()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_objects/Object/values) to get an array. ```js const groupBy = (collection, iteratee = (x) => x) => { const it = typeof iteratee === 'function' ? iteratee : ({ [iteratee]: prop }) => prop; const array = Array.isArray(collection) ? collection : Object.values(collection); return array.reduce((r, e) => { const k = it(e); r[k] = r[k] || []; r[k].push(e); return r; }, {}); }; console.log(groupBy([6.1, 4.2, 6.3], Math.floor)); // => { '4': [4.2], '6': [6.1, 6.3] } console.log(groupBy(['one', 'two', 'three'], 'length')); // => { '3': ['one', 'two'], '5': ['three'] } console.log(groupBy({ a: 6.1, b: 4.2, c: 6.3 }, Math.floor)); // => { '4': [4.2], '6': [6.1, 6.3] } ``` Upvotes: 1 <issue_comment>username_2: You can use `reduce` and expose a **generic *group-by-key* function**. 
``` function groupBy(arr, groupByKeyFn) { return arr.reduce( (acc, c) => { var key = groupByKeyFn(c); acc[key] = acc[key] || []; acc[key].push(c); return acc; }, {}) } ``` Now you can use this function as ``` var arr1 = [6.1, 4.2, 6.3]; var arr2 = ['one', 'two', 'three']; console.log( groupBy(arr1, s => Math.floor(s) ) ); console.log( groupBy(arr2, s => s.length ) ); ``` **Demo** ```js function groupBy(arr, groupByKeyFn) { return arr.reduce((acc, c) => { var key = groupByKeyFn(c); acc[key] = acc[key] || []; acc[key].push(c); return acc; }, {}) } var arr1 = [6.1, 4.2, 6.3]; var arr2 = ['one', 'two', 'three']; console.log( groupBy(arr1, s => Math.floor(s) ) ); console.log( groupBy(arr2, s => s.length ) ); ``` Upvotes: 1 <issue_comment>username_3: The modern version of lodash's groupBy in "native" JS looks like this. ``` const groupBy = (xs, key) => { return xs.reduce((rv, x) => { (rv[x[key]] = rv[x[key]] || []).push(x); return rv; }, {}); }; ``` works exactly like lodash itself. Upvotes: 0
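The reduce-based answers all implement the same "fold items into a map of lists" idea. For comparison, a minimal Python sketch of that pattern, with `math.floor` and `len` playing the role of the iteratee from the question's examples:

```python
import math

def group_by(collection, iteratee):
    # same shape as the JS reduce versions: accumulate items into a dict of lists
    groups = {}
    for item in collection:
        groups.setdefault(iteratee(item), []).append(item)
    return groups

print(group_by([6.1, 4.2, 6.3], math.floor))   # {6: [6.1, 6.3], 4: [4.2]}
print(group_by(['one', 'two', 'three'], len))  # {3: ['one', 'two'], 5: ['three']}
```

`dict.setdefault` plays the role of the `acc[key] = acc[key] || []` line, creating the bucket on first sight of a key.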
2018/03/19
480
1,760
<issue_start>username_0: I want to enable 'Update' button if i select 'System' from dropdown and should be disabled if I select other value from dropdown. By default 'Update' button should be disabled'.When I select value other than 'System' from dropdown the button should be disabled. ``` DataSync.jsp <%@ taglib prefix="s" uri="/struts-tags"%> <%@ page import="java.util.ArrayList" %> <%@ page import="java.util.HashMap" %> function getProjectDetails() { alert('inside getProjectDetails()') window.open(document.dataSyncForm.action="ProjectDetails.jsp","mywindow","menubar=0,resizable=1,scrollbars=1,width=500,height=400"); } #### Data Sync Select Warranty AMC System Price list ```<issue_comment>username_1: Try this code. ``` function getComboA(selectObject) { var value = selectObject.value; if(value == "System") { document.getElementsByTagName('Update').style.display='inline'; } } ``` Your HTML --------- ``` ``` By default Update button will be `display:none` and onChange of select it will be `display:inline`. Also display:none and disable are different things. Disable will show the button on view page but user won't be able to click it while display:none will hide the button from page. Upvotes: -1 <issue_comment>username_2: You could use this: ``` $('select').change(function () { if (this.value === 'System') { $('#btnUpdate').attr('disabled', false); } else { $('#btnUpdate').attr('disabled', true); } }); ``` To set `Update` button disabled as default, you could put the following statement into the `ready` of `document`: ``` $( document ).ready(function() { $('#btnUpdate').attr('disabled', true); }); ``` Hope this help you sorting the issue out. Upvotes: 0
2018/03/19
1,092
4,104
<issue_start>username_0: After update the chrome version 65, application is showing the splash screen again when taping some click event, it's a hybrid app Sencha touch and Cordova android.<issue_comment>username_1: Edit: this is a [known chrome 65 bug](https://bugs.chromium.org/p/chromium/issues/detail?id=819189) which is marked to be fixed in chrome 67. Edit 2: Confirmed to be fixed on Chrome 67. You will need to update "Android System WebView" on Android devices to get the fix. I believe this is a Chrome 65 bug. I have a deployed Sencha Touch application using version 2.4.2. About a week ago I started getting complaints that the application freezes. After debugging I found that this quick fix bypasses the issue by disabling animations on the message boxes (add to your app init, like in app.js): ``` Ext.Msg.defaultAllowedConfig.showAnimation = false; Ext.Msg.defaultAllowedConfig.hideAnimation = false; ``` I still haven't given up on my nice animations, so I kept debugging. After several hours, turns out the problem seems to stem from Chrome 65 behaving differently around `window.getComputedStyle()` under very specific conditions. Sencha Touch uses a hidden iframe with a hidden div inside to apply styles and get the computed values of the applied values. It then uses those computed values to apply the animation style string for the message boxes. You can see it for yourself, add `console.log(value)` before the return of the `getCssStyleValue` function in `touch/src/fx/runner/CssTransition.js`, and then show a message box (Ext.Msg.alert) and clicking "OK" on it. Chrome 65 will output "none", while Chromium 64 outputs `matrix(1, 0, 0, 1, 0, 0)`. I tested this using [Chromium 64.0.3282.0 (Developer Build) (64-bit)](https://commondatastorage.googleapis.com/chromium-browser-snapshots/index.html?prefix=Win_x64/520847/). Note that if you debug line-by-line, the bug will not appear. This seems to be a race condition on Chromium's side. 
I was able to reproduce the issue directly on the browser without using Sencha Touch ([JsFiddle](https://jsfiddle.net/vgaL9sy9/5/)): ``` var iframe = document.createElement('iframe'); var iframeStyle = iframe.style; iframeStyle.setProperty('visibility', 'hidden', 'important'); iframeStyle.setProperty('width', '0px', 'important'); iframeStyle.setProperty('height', '0px', 'important'); iframeStyle.setProperty('position', 'absolute', 'important'); iframeStyle.setProperty('border', '0px', 'important'); iframeStyle.setProperty('zIndex', '-1000', 'important'); document.body.appendChild(iframe); var iframeDocument = iframe.contentDocument; iframeDocument.open(); iframeDocument.writeln(''); iframeDocument.close(); var testElement = iframeDocument.createElement('div'); testElement.style.setProperty('position', 'absolute', 'important'); iframeDocument.body.appendChild(testElement); testElement.style.setProperty("transform", "translateX(0) translateY(0) translateZ(0) rotate(0) rotateX(0) rotateY(0) rotateZ(0) skewX(0) skewY(0) scaleX(1) scaleY(1) scaleZ(1)"); var computed = window.getComputedStyle(testElement).getPropertyValue("transform"); alert(computed); ``` If you play around with this, you'll see it only happens when the DIV is inside of an iframe, and in these specific conditions. As I said, my temporary solution is to disable the animations, but I will go ahead now and try to file a bug report with the Chromium project. Unfortunately I'm not thrilled with poking around this Sencha Touch code to try to find another way to get the computed values. I think Sencha did a lot of work to make sure all of this stuff works cross-browser, so I really hope this will be fixed in one of Chrome's coming versions. I think this is in addition to Android 8 user agent header bug mentioned by Grigoriy, since it happens on desktop versions of Chrome as well. I learned my lesson, make sure to test regularly on Chrome Beta or Dev releases... Hope this helps. 
Upvotes: 2 <issue_comment>username_2: Replaced all animateActiveItem with setActiveItem and disabled animation then it started to work again. Upvotes: 0
2018/03/19
906
3,430
<issue_start>username_0: Uncaught Error: Objects are not valid as a React child (found: object with keys {titlesCollection}). If you meant to render a collection of children, use an array instead I am getting this error while trying to render array of object, what I am doing wrong here? ``` import * as React from 'react'; import * as FontAwesomeIcon from 'react-fontawesome'; const data = { otherTitles:[ { titleHeading:"X", titles:["A","B","C","D"] }, { titleHeading:"Y Z", titles:["E","F","G","H"] } ] } export class OtherTitlesCollection extends React.Component{ render() { const titlesCollection = data.otherTitles.map((othertitle)=>{ let dataId = othertitle.titleHeading.replace(' ',''); return( {othertitle.titleHeading} { othertitle.titles.map((title)=>{ return({title}); }) } ) }); return ( {titlesCollection} ); } }; ```<issue_comment>username_1: Here is working example ```js import React from 'react'; const data = { otherTitles:[ { titleHeading:"X", titles:["A","B","C","D"] }, { titleHeading:"Y Z", titles:["E","F","G","H"] } ] } class TestJS extends React.Component { constructor(props) { super(props); } render() { let sample = []; let sampleData = data.otherTitles; for (let i = 0; i < sampleData.length; i++) { sample.push( {sampleData[i].titleHeading} - {sampleData[i].titles} ) } return( Hello world {sample} ); } } export default TestJS; ``` Upvotes: 0 <issue_comment>username_2: ``` import * as React from 'react'; import * as FontAwesomeIcon from 'react-fontawesome'; const data = { otherTitles:[ { titleHeading:"X", titles:["A","B","C","D"] }, { titleHeading:"Y Z", titles:["E","F","G","H"] } ] } export class OtherTitlesCollection extends React.Component{ render() { return data.otherTitles.map((othertitle)=>{ let dataId = othertitle.titleHeading.replace(' ',''); return( {othertitle.titleHeading} { othertitle.titles.map((title)=>{ return({title}); }) } ) }); } }; ``` Please try upper code this will work for you and issue here is you try to return titlesCollection 
variable, which will not contain the proper data when it is returned, so we return the mapped data directly instead. Upvotes: 0 <issue_comment>username_3: Just put around `{titlesCollection}`. ``` return ( {titlesCollection} ); ``` Upvotes: 0 <issue_comment>username_4: The problem occurs because you try to render ``` return ( {titlesCollection} ); ``` Since you did not wrap `titlesCollection` within a `div`, `span` or `Fragment`, it is assumed to be an object literal created via property shorthand, like ``` return ( {titlesCollection: titlesCollection} ); ``` and hence you get the error. Now, since `titlesCollection` is an array, you can use `React.Fragment` like ``` return ( {titlesCollection} ); ``` or you can add a `div` around `titlesCollection` like ``` return ( {titlesCollection} ); ``` or simply return `titlesCollection` like ``` return titlesCollection; ``` Upvotes: 3 [selected_answer]
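The accepted answer's point about property shorthand can be seen in plain JavaScript, outside React: parenthesized braces around an identifier build an object wrapping the array, not the array itself.

```javascript
const titlesCollection = ['a', 'b'];

// Braces in expression position form an object literal via property
// shorthand -- this is NOT a grouping of the array.
const wrapped = ( { titlesCollection } );

// wrapped is { titlesCollection: ['a', 'b'] }, an object with one key,
// which is exactly the shape React refuses to render as a child.
```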
2018/03/19
169
584
<issue_start>username_0: Is it possible to have the code of [this sonata app](http://demo.sonata-project.org) ?<issue_comment>username_1: You should read, install and configure all bundle that you need from sonata documentation: <https://sonata-project.org/bundles/admin/3-x/doc/index.html> Official sonata github repo: <https://github.com/sonata-project> Upvotes: 0 <issue_comment>username_2: The code can be found [here](https://github.com/sonata-project/sandbox) it is a bit outdated but there is an activity lately <https://github.com/sonata-project/sandbox/pull/608> Upvotes: 2
2018/03/19
447
1,798
<issue_start>username_0: My android app is running great; now in order to increase virality I am planning to read the phonebook contacts of a user and upload them to my server! Then I plan to process the contacts and give relevant invite suggestions to my existing users on my application! Is this ethical? Is there any official documentation regarding this?<issue_comment>username_1: As far as I know (not a lawyer!) in Germany (possibly all of Europe), that would even be illegal! To save someone's private data, like a phone number, you need THAT PERSON'S permission! So, even if you think saving the data of people who never gave you permission to do so is ethical, the law says: No, don't do it! PS: Taken from this [page](https://www.wirtschaftswissen.de/unternehmensgruendung-und-fuehrung/datenschutz/kundendatenschutz/bundesdatenschutzgesetz-wann-sie-personenbezogene-daten-erheben-speichern-und-nutzen-duerfen/) regarding the situation in Germany: - All storage of private data is forbidden unless explicitly permitted. - If you save data, you must have a valid reason, and may not use the data for anything else. - You have to answer requests from people whose data you have about what data you have and how you use it. - It is your duty to only collect and store private data when it is absolutely necessary. Upvotes: 3 [selected_answer]<issue_comment>username_2: Ethical or not, think about other possible outcomes of your decision. There are privacy-sensitive users that will not use an app if it wants too much information or they can't understand how and why their data is used. In some countries there are laws describing what information can be requested and processed, and for what purposes. By ignoring those you can get yourself in serious trouble. Upvotes: 1
2018/03/19
617
2,113
<issue_start>username_0: I have a table (redshift db) with the following sample: ``` product_id | date | is_unavailable 1 | 1st Jan | 1 1 | 2nd Jan | 0 1 | 3rd Jan | 0 1 | 4rd Jan | 1 ``` Here , a combination of `date` and `product_id` is `unique`. I need to have a 4th column: "Days since last unavailable". Here is the output required: ``` product_id | date | is_unavailable | days_since_last_unavailable 1 | 1st Jan | 1 | - 1 | 2nd Jan | 0 | 1 1 | 3rd Jan | 0 | 2 1 | 4rd Jan | 1 | 0 ``` I thought of using `lag` window function with `partition over product_id` , however, an additional condition of `unavailable_flag` has to be checked here which I cannot accommodate in my query. select \*, date-lag(date) over (partition by product\_id order by date) as days\_since\_last\_unavailbale from mytable order by product\_id However, I can't figure out how to use unavailable\_flag since it is required to find the last date with unavailable\_flag=1<issue_comment>username_1: No LAG, but a simple MAX over a CASE: ``` max(case when is_unavailable = 1 then date end) -- previous unavailable date over (partition by product_id order by date rows unbounded preceding) ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: try this: ``` create table #tmp (product_id INT,[date] DATETIME ,is_unavailable BIT) INSERT INTO #tmp SELECT 1,'2018-01-01',1 union SELECT 1,'2018-01-02',0 union SELECT 1,'2018-01-03',0 union SELECT 1,'2018-01-04',1 select product_id, date ,is_unavailable, DATEDIFF(d, CASE WHEN is_unavailable = 1 THEN date ELSE MIN(case when is_unavailable = 1 then date end) over (partition by product_id) END, date) as days_sice_last_unavailable FROM #tmp drop table #tmp ``` Upvotes: 0
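Both answers compute a running "last unavailable date" per product. The same logic can be sketched procedurally in plain JavaScript; this is only an illustration of the window computation, not Redshift code, and the names are invented. Like the accepted answer's frame (`rows unbounded preceding` includes the current row), an unavailable day yields 0 here; the question's `-` for the very first row would need an extra null check on top.

```javascript
// rows must be pre-sorted by date ascending, for one product at a time.
// Dates are plain day numbers here to keep the arithmetic obvious.
function daysSinceLastUnavailable(rows) {
  let lastUnavailable = null;
  return rows.map(row => {
    // The window frame includes the current row, so update first.
    if (row.isUnavailable) lastUnavailable = row.day;
    return {
      ...row,
      daysSince: lastUnavailable === null ? null : row.day - lastUnavailable,
    };
  });
}
```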
2018/03/19
619
2,243
<issue_start>username_0: In a PrimeNG TurboTable, I have an edit function. After saving the data on the server I reload the grid, but I cannot retain the current page. I found a way to get the current page number using this function ``` paginate(event) { let pageIndex = event.first/event.rows + 1; } ``` and by adding this attribute to the table tag: `(onPage)="paginate($event)"`. How can I set the page number on the grid?<issue_comment>username_1: It looks like rather than having direct control over the page number, per se, you have control over the first row displayed: ``` ``` In the above case, with 10 rows per page, you'd set `first` to 0 to get to the first page, 10 for the second page, 20 for the third page, etc. Update: Since the above didn't work for changing the page after the fact (perhaps it only works for the initial set-up of the table), you could try something like the following, which works for the now-deprecated DataTable: In the HTML: ``` ``` Then, in your component: ``` @ViewChild('myTable') myTable: TurboTable; ... this.myTable.first = desiredFirstRow; ``` I'll take this as an occasion to update my old table code to TurboTable, and I'll know soon enough if this works for sure. Upvotes: 2 <issue_comment>username_2: @username_1's answer did work for me. HTML ``` ``` TypeScript ``` onRecordsPerPageCountChange() { this.totalRecords = 0; this.pageNavLinks = 0; this.myTable.first = 0; // <-- this did the work.
this.lazyLoadNextBatch({ 'first': 0 }); } lazyLoadNextBatch(event: LazyLoadEvent) { // calculate the page number from event.first this.loading = true; const pageNumber = Math.round(event.first / this.recordsPerPageCount.value) + 1; this.bmService.getBatchList(this.batchListStatus, pageNumber, this.recordsPerPageCount.value) .subscribe(response => { this.batchList = response.batches; // totalRecords is used to find the navigation links this.totalRecords = response.batches[0].TotalRowCount; this.pageNavLinks = Math.round(this.totalRecords / this.recordsPerPageCount.value); this.loading = false; }); } ``` Upvotes: 0
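The page arithmetic running through the question and both answers is just a pair of inverse formulas; a small sketch (function names are mine, not PrimeNG's API):

```javascript
// PrimeNG reports the index of the first visible row; pages are 1-based.
function pageFromFirst(first, rows) {
  return first / rows + 1;
}

// The inverse: to jump to a page, set `first` to this value on the table.
function firstFromPage(page, rows) {
  return (page - 1) * rows;
}
```

Round-tripping through both functions returns the original value, which is why storing either `first` or the page number is enough to restore the grid position.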
2018/03/19
464
1,670
<issue_start>username_0: I have three Controls:a slider, a button, and a textbox. And what I want to achieve is that When I drag the slider to change it's value,the content of textbox changes.The content of textbox is the value of the slider.However, when I click the button, the value of the slider adds 1 but the content of textbox doesn't change.The textbox shows the value of the Slider.But it changes it's content only when I change the value of slider by dragging the slider. So how can I make this work in code? [![enter image description here](https://i.stack.imgur.com/cCTHC.png)](https://i.stack.imgur.com/cCTHC.png)<issue_comment>username_1: Bind the textbox TEXT property to the slider's VALUE property using Element Name ``` ``` The textbox will auto update when the slider value change. Upvotes: 0 <issue_comment>username_2: [Here](https://stackoverflow.com/questions/723502/wpf-slider-with-an-event-that-triggers-after-a-user-drags) is where I found the solution. And my code is: In Xaml: ``` ``` In code: ``` private bool isdragging = false; private void btnAdd_Click(object sender, RoutedEventArgs e) { mySlider.Value += 1; } private void mySlider_ValueChanged(object sender, RoutedPropertyChangedEventArgs e) { if (isdragging) tboxValue.Text = mySlider.Value.ToString(); } private void mySlider\_DragStarted(object sender, System.Windows.Controls.Primitives.DragStartedEventArgs e) { isdragging = true; } private void mySlider\_DragCompleted(object sender, System.Windows.Controls.Primitives.DragCompletedEventArgs e) { isdragging = false; } ``` It works perfectly for me. Upvotes: 2 [selected_answer]
2018/03/19
418
1,511
<issue_start>username_0: I have tried to build apk for android in IONIC however everytime I do the build using command: `ionic cordova build android` it will always result to BUILD FAILED. The error is so generic it only says DeprecationWarning: Unhandled promise rejections are deprecated. Below image is the full response: [![enter image description here](https://i.stack.imgur.com/W6mD6.png)](https://i.stack.imgur.com/W6mD6.png) Thank you in advance for the help.
903
3,338
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/W2Ylr.png)](https://i.stack.imgur.com/W2Ylr.png)I have a table view called `challengeTable`. This table contains an image, some text and a button. These values are retrieved from the RESTFUL API. ``` var Sec1 = [[String]]() public func tableView(_ tableView: UITableView, titleForHeaderInSection section: Int) -> String?{ return sectionTitles[section] } public func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int{ switch section { case 0 : return Sec1.count default: return Sec2.count } } public func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell{ let cell = challengeTable.dequeueReusableCell(withIdentifier: cellID) as! CustomisedCell cell.selectionStyle = .none switch indexPath.section { case 0: cell.CAChallengeTitle.text = Sec1[indexPath.row][0] cell.CAChallengeTitle.font = UIFont.boldSystemFont(ofSize: 13) cell.CAChallengeDescription.text = "Starting Date: \(Sec1[indexPath.row][1]) \n\n Ending Date: \(Sec1[indexPath.row][2])" cell.CAChallengeDescription.numberOfLines = 3 cell.CAChallengeIMG.image = UIImage(named : "Step" ) cell.CAChallengeButt.tag = indexPath[1] cell.CAChallengeButt.setTitle("Detsils >", for: .normal) print("Done with sectoin 0") default: cell.CAChallengeTitle.text = Sec2[indexPath.row] cell.CAChallengeDescription.text = "\(Sec2[indexPath.row]) " cell.CAChallengeButt.tag = indexPath[1] cell.CAChallengeButt.setTitle("I am interested >", for: .normal) print("Done with sectoin 1") } return cell } ``` I can reload the data of the table by calling the following: ``` Sec1.removeAll() challengeTable.reloadData() ``` Currently I am using the navigation controller. Thus, I can update my table simply by navigating back and forth. What I am trying to do however, is that I want to reload my table data simply when the user scrolls to the top. Any idea how can I possibly do that? 
Thanks :)
2018/03/19
533
2,436
<issue_start>username_0: I am using this code: ``` FirebaseUser user = FirebaseAuth.getInstance().getCurrentUser(); user.updateEmail("<EMAIL>") .addOnCompleteListener(new OnCompleteListener() { @Override public void onComplete(@NonNull Task task) { if (task.isSuccessful()) { Log.d(TAG, "User email address updated."); } } }); ``` But still I am not able to update user Email ID for logged in person. Other things working fine but not this.<issue_comment>username_1: You need to re-authenticate your user. As according to documentation changing primary email address is a sensitive action. Re-Authentication : ``` FirebaseUser user = FirebaseAuth.getInstance().getCurrentUser(); // Get auth credentials from the user for re-authentication AuthCredential credential = EmailAuthProvider .getCredential("<EMAIL>", "<PASSWORD>"); // Current Login Credentials \\ // Prompt the user to re-provide their sign-in credentials user.reauthenticate(credential) .addOnCompleteListener(new OnCompleteListener() { @Override public void onComplete(@NonNull Task task) { Log.d(TAG, "User re-authenticated."); //Now change your email address \\ //----------------Code for Changing Email Address----------\\ FirebaseUser user = FirebaseAuth.getInstance().getCurrentUser(); user.updateEmail("<EMAIL>") .addOnCompleteListener(new OnCompleteListener() { @Override public void onComplete(@NonNull Task task) { if (task.isSuccessful()) { Log.d(TAG, "User email address updated."); } } }); //----------------------------------------------------------\\ } }); ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: For Kotlin ``` // need to sign user in immediately before updating the email auth.signInWithEmailAndPassword("currentEmail","currentPassword") .addOnCompleteListener(this) { task -> if (task.isSuccessful) { // Sign in success now update email auth.currentUser!!.updateEmail(newEmail) .addOnCompleteListener{ task -> if (task.isSuccessful) { // email update completed }else{ // email update failed } } } 
else { // sign in failed } } ``` Upvotes: 1
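The essential shape of both answers, re-authenticate first and only then perform the sensitive update, can be sketched with plain promises. `reauthenticate` and `updateEmail` below are hypothetical stand-ins so the sequencing itself can be shown; they are not the Firebase SDK.

```javascript
// Hypothetical stand-ins; a call log lets us observe the order.
const calls = [];
const reauthenticate = (email, password) => {
  calls.push('reauthenticate');
  return Promise.resolve();
};
const updateEmail = (newEmail) => {
  calls.push('updateEmail');
  return Promise.resolve();
};

// The sensitive operation must only run after re-authentication succeeds;
// a rejection from reauthenticate() skips updateEmail() entirely.
function changeEmail(currentEmail, currentPassword, newEmail) {
  return reauthenticate(currentEmail, currentPassword)
    .then(() => updateEmail(newEmail));
}
```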
2018/03/19
768
2,438
<issue_start>username_0: I am using mongoDB version 3.4 . I have a collection where date is stored in String format `YYYY-mm-dd`. I need to get last week data from this collection. ``` new Date(new Date().setDate(new Date().getDate()-7)) ``` returns me the date in ISO date format , which cannot be used to compare with date in string format. I cannot use date formatter as it is not supported by version 3.4. Is there a way to get current date is in string format and use it in the query? And also , I need only date , without timestamp.<issue_comment>username_1: There are couple of ways I can think of. Most easiest will be calculating different dates in the last week, converting them to the `YYYY-MM-DD` format and then just comparing strings. ``` var listOfDates = ["2018-02-19","2018-02-18","2018-02-17","2018-02-16"] collection.find(date:{$in:listOfDates}) ``` **Alternative:** Convert all dates to standard ISODate type and then use normal queries. The conversion will only be one time, and it would be the right thing to do. You will be even able to use indexes to get faster results. ``` db.mycol.find({}).forEach(function(doc){ doc.date = new Date(doc.date) db.mycol.save(doc); }) ``` This will convert `{"date" : "2018-02-19"}` to `{"date" : ISODate("2018-02-19T00:00:00Z")}` Please note that `"2018-02-19"` will be the UTC date, so you might have to do some conversion based on your timezone. 
Upvotes: 0 <issue_comment>username_2: Check this ``` var today = new Date(); var first = today.getDate() - today.getDay(); var firstDayWeek = new Date(today.setDate(first)); var lastDayWeek = new Date(today.setDate(first + 6)); db.getCollection('Collection').aggregate([{ $project: { date: { $dateFromString: { dateString: '$date' } } } }, { $match: { "date": { $lt: lastDayWeek, $gt: firstDayWeek } } }]) ``` **Output** ``` { "_id" : ObjectId("5aaf77510f1ae49ecac9c1a7"), "date" : ISODate("2018-03-20T00:00:00.000Z") } ``` Upvotes: 1 <issue_comment>username_3: ``` var week_1 = new Date(); // current date ``` //here you can use your date instate of current date ``` var pastDate = week_1.getDate() - 7; week_1.setDate(pastDate); collection.find({"createdAt": { $gte: week_1 }}) ``` Upvotes: 0
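Worth noting alongside the answers: because `YYYY-MM-DD` strings sort lexicographically in date order, the cutoff itself can stay a string. A sketch of computing the "seven days ago" boundary (the mongo shell is JavaScript, but this runs anywhere; UTC is assumed, as the first answer cautions):

```javascript
// Format a Date as the same YYYY-MM-DD string stored in the collection (UTC).
function toDateString(d) {
  return d.toISOString().slice(0, 10);
}

// Cutoff string for "last 7 days"; usable directly in a query such as
//   db.col.find({ date: { $gte: cutoff } })
// because the stored strings and the cutoff compare in date order.
function lastWeekCutoff(now) {
  const d = new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);
  return toDateString(d);
}
```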
2018/03/19
798
2,516
<issue_start>username_0: So basically I'm trying to download a pdf file and upload it on an amazon bucket, is it possible to do it without creating a temp file ? To download the pdf im doing this: ``` RestClient::Request.execute( :method => :get, :url => "#{@url}/test/#{id}/pdf", :headers => json_headers.merge(jwt_headers(access_token)) ) do |response| disconnect if response.code == 401 return unless response.code == 200 response.body end ``` But then can I upload the response.body to amazon directly as a pdf ? Im kinda new to rails so if there is a better way or if this is just plain wrong, please let me know
2018/03/19
829
3,236
<issue_start>username_0: I have a model ``` class Foo { private int id; private String name; private String city; } ``` setting ``` Foo foo = new Foo(); foo.setId(1); foo.setName("<NAME>"); ``` Now I want a generic method that return not null Map from any object. for example, in case of foo , it return Map that contains `id` and `name`. any update ?<issue_comment>username_1: If i have understood your question clearly, you want a method that operates on your input and provides a map in return. However i am not sure why would you want to do this on a single object. I think you want to operate on list of such objects instead. I am providing answers for single object however you can extend the solution to support lists or arrays as well. For my first solution, i am not going to use reflection, as it's use in non framework situations for normal production code is discouraged. ``` public static Map getGenericMap(Supplier keySupplier, Supplier valueSupplier) { Map returnMap = new HashMap(); returnMap.put(keySupplier.get(), valueSupplier.get()); return returnMap; } ``` you will have to call above like below: ``` Foo foo = new Foo(); foo.setId(1); foo.setName("<NAME>"); Map map=getGenericMap(foo::getId,foo::getName); ``` however if you are really inclined towards reflective solution check my **not recommended** solution below: ``` @SuppressWarnings("unchecked") public static Map getGenericMap(Object obj, String keyGetterName, String valueGetterName) throws IllegalAccessException, IllegalArgumentException, InvocationTargetException, NoSuchMethodException, SecurityException { Map returnMap = new HashMap(); Class extends Object clazz = obj.getClass(); Method keyGetter = clazz.getDeclaredMethod(keyGetterName); Method valueGetter = clazz.getDeclaredMethod(valueGetterName); returnMap.put((K) keyGetter.invoke(obj), (V) valueGetter.invoke(obj)); return returnMap; } ``` the above has to be called like below: ``` Foo foo = new Foo(); foo.setId(1); foo.setName("<NAME>"); Map 
map1=getGenericMap(foo, "getId","getName"); ``` I hope this answers your question. Upvotes: 0 <issue_comment>username_2: You can use `com.fasterxml.jackson.databind.ObjectMapper` to get this thing done. First convert the foo object into the jsonString using `Gson`. Then pass it to the `ObjectMapper` asking for the map of Json key/value properties. It will filter out null values for you. The code would be something like this. ``` import java.io.IOException; import java.util.HashMap; import java.util.Map; import com.fasterxml.jackson.core.JsonParseException; import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.JsonMappingException; import com.fasterxml.jackson.databind.ObjectMapper; import com.google.gson.Gson; public class MapTest { public static void main(String[] args) throws JsonParseException, JsonMappingException, IOException { Foo foo = new Foo(); foo.setId(1); foo.setName("<NAME>"); final Gson gson = new Gson(); final HashMap result = new ObjectMapper().readValue(gson.toJson(foo), new TypeReference>() { }); System.out.println(result); } } ``` Upvotes: 2 [selected_answer]
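As a side-by-side illustration in JavaScript (not the Jackson approach from the accepted answer), the "map of non-null properties" idea is just a filter over an object's entries; the field names mirror the `Foo` model and the sample values are invented:

```javascript
// Build a plain key/value map from an object, dropping null/undefined fields.
function toNonNullMap(obj) {
  return Object.fromEntries(
    Object.entries(obj).filter(([, value]) => value !== null && value !== undefined)
  );
}

// Mirrors the Foo model: id and name set, city never set.
const foo = { id: 1, name: 'Ada', city: null };
const map = toNonNullMap(foo);
// map contains only id and name, as asked for in the question
```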
2018/03/19
907
2,902
<issue_start>username_0: I have this code that generates a date and time, ``` ZoneId z = ZoneId.of( "Africa/Nairobi" ); Instant instant = Instant.now(); ZonedDateTime zdt = instant.atZone(z); return zdt.toString(); //2018-03-19T09:03:22.858+03:00[Africa/Nairobi] ``` Is there a lib like chrono - <https://docs.oracle.com/javase/8/docs/api/java/time/temporal/ChronoField.html> field that I can use to get the date, hour and minute? Chrono fields does not extract the complete date.<issue_comment>username_1: `ZonedDateTime` has methods such as `getHour()`, `getMinute()`, and such which should suffice. Upvotes: 1 <issue_comment>username_2: Since you seem to have been confused about how to get the date from your `ZonedDateTime`, I should like to supplement [username_1’a good and correct answer](https://stackoverflow.com/a/49357185/5772882). ``` ZoneId z = ZoneId.of("Africa/Nairobi"); ZonedDateTime zdt = ZonedDateTime.now(z); System.out.println("Date " + zdt.toLocalDate()); System.out.println("Year " + zdt.getYear()); System.out.println("Month " + zdt.getMonth()); System.out.println("Day of month " + zdt.getDayOfMonth()); ``` This just printed: ``` Date 2018-03-19 Year 2018 Month MARCH Day of month 19 ``` Please check the documentation for more methods including `getMonthValue` for the number of the month (1 through 12). I include a link at the bottom. Since `ZonedDateTime` class has a `now` method, you don’t need `Instant.now()` first. If you wanted an old-fashioned `java.util.Date` object — first answer is: don’t. The modern API you are already using is much nicer to work with. 
Only if you need a `Date` for some legacy API that you cannot change or don’t want to change just now, get an `Instant` and convert it: ``` Instant instant = Instant.now(); Date oldfashionedDateObject = Date.from(instant); System.out.println("Old-fashioned java.util.Date " + oldfashionedDateObject); ``` This printed: ``` Old-fashioned java.util.Date Mon Mar 19 12:00:05 CET 2018 ``` Even though it says `CET` for Central European Time in the string, the `Date` does not contain a time zone (this confuses many). Only its `toString` method (called implicitly when I append the `Date` to a `String`) grabs the JVM’s time zone setting and uses it for generating the `String` while the `Date` stays unaffected. In the special case where you just want a `Date` representing the date-time now, again, it’s ill-advised unless you have a very specific need, it’s very simple: ``` Date oldfashionedDateObject = new Date(); ``` The result is the same as above. Links ----- * [`ZonedDateTime` documentation](https://docs.oracle.com/javase/9/docs/api/java/time/ZonedDateTime.html) * [All about java.util.date](https://codeblog.jonskeet.uk/2017/04/23/all-about-java-util-date/) Upvotes: 6 [selected_answer]
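For comparison with the `java.time` calls above, the JavaScript equivalent of extracting date fields for a specific zone leans on `Intl.DateTimeFormat`, which does the zone math (this assumes a runtime with ICU time-zone data, as in standard Node builds; the helper name is mine):

```javascript
// Extract numeric year/month/day for a given IANA zone from an instant.
function datePartsIn(zone, date) {
  const parts = new Intl.DateTimeFormat('en-US', {
    timeZone: zone,
    year: 'numeric',
    month: 'numeric',
    day: 'numeric',
  }).formatToParts(date);
  const get = type => Number(parts.find(p => p.type === type).value);
  return { year: get('year'), month: get('month'), day: get('day') };
}

// Nairobi is UTC+3, so the Unix epoch instant is already Jan 1 1970 there.
```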
2018/03/19
1,211
4,469
<issue_start>username_0: I have a view in the project but it doesn't show anything, I want to show data in the table but the page is blank.it was working yesterday but its just a blank page,don't know what happened, please help.. 1.is it problem of route 2.database issue 3.html error route ``` php Route::get('/', function () { return view('welcome'); }); Route::resource('books','BookController'); Route::resource('news','NewsController'); //Auth::routes(); Route::get('/home', 'HomeController@index')-name('home'); Route::get('my-home', 'HomeController@myHome'); Route::get('my-users', 'HomeController@myUsers'); Route::get('/news','NewsController@index'); Route::get('/news','NewsController@create'); ``` Index.blade.php ``` @extends('theme.default') @section('content') ADVERTISEMENT DETAILS ===================== [Create New News](#) Added News | Slno | News Name | News Details | News Link | News Status | EDIT | DELETE | | --- | --- | --- | --- | --- | --- | --- | php $i = 1; ? @foreach($news as $news) | {{ $news->$id }} | {{ $news->name }} | {{ $news->news }} | {{ $news->alink }} | @if($news->status==0) ACTIVE | @else INACTIVE | @endif | | @endforeach ``` @endsection my controller function ``` php namespace App\Http\Controllers; use App\News; use Illuminate\Http\Request; class NewsController extends Controller { public function index() { $news=News::all(); return view('news.index',['news'=$news]); } /** * Show the form for creating a new resource. * * @return \Illuminate\Http\Response */ public function create() { // return view('news.create'); } /** * Store a newly created resource in storage. * * @param \Illuminate\Http\Request $request * @return \Illuminate\Http\Response */ public function store(Request $request) { // } /** * Display the specified resource. * * @param \App\News $news * @return \Illuminate\Http\Response */ public function show(News $news) { // } /** * Show the form for editing the specified resource. 
* * @param \App\News $news * @return \Illuminate\Http\Response */ public function edit(News $news) { // } /** * Update the specified resource in storage. * * @param \Illuminate\Http\Request $request * @param \App\News $news * @return \Illuminate\Http\Response */ public function update(Request $request, News $news) { // } /** * Remove the specified resource from storage. * * @param \App\News $news * @return \Illuminate\Http\Response */ public function destroy(News $news) { // } } ```<issue_comment>username_1: If it doesn't show anything Laravel's error reporting features are probably turned off. Because almost always it says what's wrong. Maybe you should look in to that first. Upvotes: 0 <issue_comment>username_2: Problem with your route.Because once you use **resource** and then use declare the route as a get, First you clear what route you are using. Here below is the when **resource** using , example below. <http://www.expertphp.in/article/laravel-5-5-crud-create-read-update-delete-example-from-scratch> Upvotes: 0 <issue_comment>username_3: You have overlapping route definitions: ``` Route::get('/news','NewsController@index'); Route::get('/news','NewsController@create'); ``` The latter is being used. Probably change the latter to something like: ``` Route::get('/news/create', 'NewsController@create'); ``` Upvotes: 3 [selected_answer]<issue_comment>username_4: `Remove` these lines as you have already declared `resource` routing: ``` Route::get('/news','NewsController@index'); Route::get('/news','NewsController@create'); ``` Upvotes: 0 <issue_comment>username_5: Your route are replaced by the second route thats why its show empty page because your create() dosn't do anything i.e empty page. 
I would suggest you to rename create route as below Try below code : ``` Route::get('/news','NewsController@index'); Route::get('/news/create','NewsController@create'); ``` Upvotes: 0 <issue_comment>username_6: Try to delete the cache stored in the `storage -> framework -> cache folder` if still fails, Try to call `composer dump-autoload` via your cmd to reload your soure code. Upvotes: 0
2018/03/19
408
1,013
<issue_start>username_0: I am having trouble initializing a 2D int array. The structure of my program is: ``` int arr[2][2]; if(val==1) arr = {{1,1}, {2,2}}; else if (val==2) arr = {{3,3}, {4,4}}; ... ... int x = arr[1][1]; ... ``` I am getting an error "Expression must be a modifiable lvalue" Thanks.<issue_comment>username_1: In your code, `arr = {{1,1}, {2,2}};` is **not** initialization. If you insist on the native array, I'm afraid you have to manually set each element. However you can switch to use `std::array`, which gives what you want: ``` array<array<int, 2>, 2> arr; if (val == 1) arr = { { { 1,1 }, { 2,2 } } }; else if (val == 2) arr = { { { 3,3 }, { 4,4 } } }; int x = arr[1][1]; ``` Note the extra braces (see [here](https://stackoverflow.com/questions/17759757/multidimensional-stdarray)). Upvotes: 4 [selected_answer]<issue_comment>username_2: Initializing ``` int arr[2][2] = {{3,3}, {4,4}}; ``` modifying ``` arr[0][0] = 3; arr[0][1] = 3; arr[1][0] = 4; arr[1][1] = 4; ``` Upvotes: 2
2018/03/19
499
1,843
<issue_start>username_0: I am getting this error when I try to build the app on App Center from Microsoft. > > Errors in packages.config projects > https://{myDomainOnVSTS}.com/\_packaging/CustomNugetPackages/nuget/v3/index.json: Unable to load the service index for source https://{myDomainOnVSTS}.pkgs.visualstudio.com/\_packaging/CustomNugetPackages/nuget/v3/index.json. > The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters. > <https://api.nuget.org/v3/index.json>: Package 'CalendarWithNoDeselect.1.0.0' is not found on source > > > The strange part is that, the app builds fine on VSTS and on my local machine using the private feed. Here is the Nuget.Config file ``` xml version="1.0" encoding="utf-8"? ``` Can someone Kindly help me with this issue. EDIT1: The issue is with Environment Variables since when I don't used them the package is restored as the following.<issue_comment>username_1: Hey your issue with the environment variables seems to be because of syntax error in the way you've configured them in the Nuget.Config file. Try editing them to: ``` ``` Just for reference - <https://learn.microsoft.com/en-us/appcenter/build/custom/variables/> Upvotes: 2 <issue_comment>username_2: I still had this error without using any variables. My problem was I was using a plain text password and it was expecting base 64 encoded. Maybe that's obvious to everyone else but it wasn't to me. I switched to using an API key for the user instead. Alternatively, there's a `cleartextpassword` that you can use, [documentation](https://learn.microsoft.com/en-us/nuget/reference/nuget-config-file#packagesourcecredentials). But you know, you really shouldn't be using these kinds of things. =D Upvotes: 1
2018/03/19
1,664
6,608
<issue_start>username_0: I'm trying to return an Object as JSON. Using the `/user/id` endpoint, I want to display a User based on his Id. When calling this controller method I get the following Exception: ``` InvalidDefinitionException: No serializer found for class org.hibernate.proxy.pojo.javassist.JavassistLazyInitializer and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: com.sample.scrumboard.models.User_$$_jvsta02_1["handler"]) ``` My controller class looks like this: ``` @RestController @RequestMapping(path="/user") @JsonIgnoreProperties(ignoreUnknown = true) public class UserRestController { private UserRepository repository; @Autowired public UserRestController(UserRepository repository){ this.repository = repository; } @GetMapping(value = "/list") public List<User> getUsers(){ return repository.findAll(); } @GetMapping(value = "/{id}") public @ResponseBody User getUserById(@PathVariable Long id, User user){ user = repository.getOne(id); return user; } } ``` I checked that all fields have a public getter and tried various options with @JsonIgnoreProperties, but I can't find it. Displaying all users as a JSON list does work with `/user/list`. So the problem is only there when trying to display one Object, not a list of Objects. From the repository it does find the User, but it's unable to serialize that Object and put it on the screen.
The User class itself looks like this: ``` @Entity public class User { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "userId", nullable = false, updatable = false) private Long id; @NotNull @Size(min=2, max=20) private String firstName; @NotNull @Size(min=2, max=30) private String lastName; @NotNull @Size(min=2, max=20) private String userName; @NotNull @Size(min=2, max=30) private String passWord; @NotNull @Email private String email; //the mappedBy element must be used to specify the relationship field or property of the entity that is the owner of the relationship @OneToMany(mappedBy = "owner", cascade = CascadeType.ALL, fetch = FetchType.LAZY) @JsonIgnore private List userStoryList; public User() { } public User(String firstName, String lastName, String userName, String passWord, String email) { this.firstName = firstName; this.lastName = lastName; this.userName = userName; this.passWord = passWord; this.email = email; } @Override public String toString() { return "User{" + "id=" + id + ", firstName='" + firstName + '\'' + ", lastName='" + lastName + '\'' + ", userName='" + userName + '\'' + ", passWord='" + passWord + '\'' + ", email='" + email + '\'' + '}'; } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getUserName() { return userName; } public void setUserName(String userName) { this.userName = userName; } public String getPassWord() { return passWord; } public void setPassWord(String passWord) { this.passWord = passWord; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public List getUserStoryList() { return userStoryList; } public void setUserStoryList(List userStoryList) { 
this.userStoryList = userStoryList; } } ``` How can I display my User returned from `/user/id`? **A Solution?** As suggested below, I made it work using a Dto and ModelMapper. I added ``` @Bean public ModelMapper modelMapper(){ return new ModelMapper(); } ``` ControllerMethod ``` @GetMapping(value = "/{id}") public UserDTO getUserById(@PathVariable Long id, User user, ModelMapper modelMapper){ user = repository.getOne(id); return modelMapper.map(user, UserDTO.class); } ``` And UserDto ``` public class UserDTO { private Long id; private String firstName; private String lastName; private String userName; private String passWord; private String email; private List userStoryList; //getters and setters ``` Now I'm able to show a User on the screen. Still I'm wondering if there is no solution using Jackson and without modelmapper and dto?<issue_comment>username_1: Maybe it's not a good idea to use your entity (User) to expose the data about user via REST? Can you create UserDTO for your user that will implement Serializable and send this DTO via REST? In this case it should be necessary to convert User object that you've retrieved from the db to UserDTO. Upvotes: 3 [selected_answer]<issue_comment>username_2: Annotation `@JsonIgnoreProperties` should not be on `UserRestController`, what you need to serialize is `User`, remove annotation is just ok. Jackson will help to do all the work, if you want to ignore some fields on class `User`, move `@JsonIgnoreProperties` to User class, and add `@JsonProperty` on filed need to display on the page. Upvotes: 0 <issue_comment>username_3: Serializability of your orm classes implementing the java.io.Serializable interface You are using @RestController, there is no need of @ResponseBody Also there is no need to create a DTO class to transform the ORM class for the response json. 
OBS: For a bidirectional relationship you will use @JsonManagedReference, @JsonBackReference Upvotes: 0 <issue_comment>username_4: Don't use `@JsonIgnoreProperties(ignoreUnknown = true)` in the controller class. Use the below annotation in the entity class. That solved that issue. ``` @JsonIgnoreProperties({"hibernateLazyInitializer", "handler"}) ``` Upvotes: 2 <issue_comment>username_5: Try adding the following line to your application.properties file : `spring.jackson.serialization.fail-on-empty-beans=false` Upvotes: 2 <issue_comment>username_6: There is a difference between spring repository.getOne() and repository.findById(). With getOne() you can get a reference (proxy), e.g. if the object is already read in the same transaction. With findById() you always get the User as expected. Upvotes: 0 <issue_comment>username_7: I applied the @JsonIgnoreProperties({"hibernateLazyInitializer", "handler"}) annotation and spring.jackson.serialization.fail-on-empty-beans=false in my case; it was still not working. Upvotes: 0
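As for the closing question: ModelMapper is convenient but not essential. The field-by-field copy it performs can be written by hand with no extra dependency, and plain Jackson will happily serialize the resulting DTO. A minimal plain-Java sketch of that idea (entity and DTO cut down to three fields; the class names mirror the ones above but are otherwise illustrative):

```java
public class ManualDtoMapping {
    // Cut-down stand-ins for the entity and the DTO shown above.
    static class User {
        Long id;
        String userName;
        String email;
        User(Long id, String userName, String email) {
            this.id = id; this.userName = userName; this.email = email;
        }
    }

    static class UserDTO {
        Long id;
        String userName;
        String email;
    }

    // The same work modelMapper.map(user, UserDTO.class) does, spelled out by hand.
    static UserDTO toDto(User user) {
        UserDTO dto = new UserDTO();
        dto.id = user.id;
        dto.userName = user.userName;
        dto.email = user.email;
        return dto;
    }

    public static void main(String[] args) {
        UserDTO dto = toDto(new User(7L, "jdoe", "jdoe@example.com"));
        System.out.println(dto.id + " " + dto.userName + " " + dto.email); // prints: 7 jdoe jdoe@example.com
    }
}
```

Independently of the mapping, loading the entity with `repository.findById(id)` instead of `getOne(id)` sidesteps the lazy proxy in the first place, as the `findById` answer above points out.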
2018/03/19
898
3,042
<issue_start>username_0: I want to know Is there any way through which I can avoid mentioning size of array in function. Below is my simple code. Every time I create an array in main I have to change size of reference array of function passbyref. Thanks alot. ``` #include #include #include #include using namespace std; class GradeBook { public: void changevalues() { cout<& refvar) //here 5 I have to mention myself { refvar[2]=2; } private: array arr2; }; int main() { array grades1{1,1,1,1,1}; GradeBook obj1; cout<<"grades[2] before change =" < ```<issue_comment>username_1: The array size can be specified as a template parameter in `passbyref` function. ``` template void passbyref(array& refvar) { } ``` The value of `N` will be deducted automatically, so there is no need to specify that in caller. In this way if you change the size in `main` there won't be any change needed in `passbyref`. Upvotes: 2 <issue_comment>username_2: It may be better for you to use a *dynamic array* - one whose length can change at runtime. The easiest way to work with dynamic arrays is using the [std::vector](http://en.cppreference.com/w/cpp/container/vector) which manages an array internally. ``` void passbyref(std::vector& refvar) // no need to mention size { if(refvar.size() > 2) refvar[2] = 2; } // ... int main() { std::vector grades1 {1, 1, 1, 1, 1}; // any length you want passbyref(grades1); } ``` Upvotes: 0 <issue_comment>username_3: The idea of `std::array` is that the size is an integral part of a specific array as is the type, which makes it cleaner than the c-style array, but it means that in functions, the expected size must be present. Possible solutions: * templates, but is `std::array` the correct container then? * typedef (or using): `typedef std::array MySpecificArray` * `std::vector` Upvotes: 0 <issue_comment>username_4: That's a common problem in C++: there is no way to declare a sequential container for which the size is constant and only known at run time. 
That's one of the reasons (along with C compatibility) while some compilers notably gcc and clang allow Variable Length Arrays in C++ as a compiler extension.. If only a few sizes will be used and will be compile time expressions, you can use a template integer value for the size (`std::array`). In any other case you will have to rely on `std::vector` for a C++ conformant way. Upvotes: -1 <issue_comment>username_5: Templates are your friends. The function below works with any array type that can be indexed with square brackets and has a value-type convertible to int. ``` #include #include #include #include using namespace std; class GradeBook { public: void changevalues() { cout< void passbyref(Arr& refvar) //here 5 I have to mention myself { auto N = distance(begin(refvar), end(refvar)); cout << "size is " << N << '\n'; refvar[2]=2; } private: array arr2; }; int main() { array grades1{1,1,1,1,1}; GradeBook obj1; cout<<"grades[2] before change =" < ``` Upvotes: 1 [selected_answer]
2018/03/19
362
1,360
<issue_start>username_0: I source a dotcshrc file in my python script with :os.system(‘/bin/csh dotcshrc’) and it works,but when I want to use the command I have just put into the env by the source command,like os.system(‘ikvalidate mycase ‘),linux complaints:command not found. But when I do it all by hand,everything go well. Where is problem?<issue_comment>username_1: If you have a command in linux like `ls` and you want to use it in your python code do like this: ``` import os ls = lambda : os.system('ls') # This effectively turns that command into a python function. ls() # skadoosh! ``` Output is : ``` FileManip.py Oscar MySafety PROJECT DOCS GooSpace Pg Admin l1_2014 PlatformMavenRepo l1_2015 R l1_201617 R64 l2_2014 Resources ``` Upvotes: 1 <issue_comment>username_2: `os.system` runs each command in its own isolated environment. If you are sourcing something in an `os.system` call, subsequent calls will not see that because they are starting with a fresh shell environment. If you have dependencies like the above, you might be able to combine it into one call: ``` os.system(‘/bin/csh "dotcshrc; ikvalidate mycase"’) ``` Upvotes: 0
2018/03/19
519
1,949
<issue_start>username_0: The workflow suddenly stopped working on a site that had been operating for about a year. This is on Office 365 SharePoint; the workflow was made with SharePoint Designer and calls an HTTP web service to change permissions on list items. I tried getting the list item in a Workflow 2013 on a test site as well, and it stopped working the same way, so I am stuck because I do not know the cause. The message is below. > > Activity in progress > > > Retrying last request. Next attempt scheduled 2018/03/19 >13:18. Last request details: https: // 'site' / \_ api / web / lists (guid >'GUID') / Items (242)? % 24 select = ID% 2 HTTP Unauthorized for CID > > > We granted the application Full Control and activated "Use application privilege in workflow" in the site feature management. Even if you do not have a solution, if a similar problem is occurring for you, please share the information.
2018/03/19
1,431
4,880
<issue_start>username_0: I am making a Complex number class in order to work on overloading operators. ``` #include class Complex { double real; double imaginary; public: Complex(double real, double imaginary) : real(real), imaginary(imaginary) {} ~Complex() = default; friend std::ostream &operator<<(std::ostream &out, Complex &source) { out << "(" << source.real << " + " << source.imaginary << ")"; return out; } friend Complex operator+(const Complex &a, const Complex &b) { return Complex(a.real + b.real, a.imaginary + b.imaginary); } }; int main() { Complex c1(3, 2.25); Complex c2(2.25, 3); Complex res = c1 + c2; std::cout << res; return 0; } ``` The class definition is not finished as I need to overload a few operators more. However if I compile and run the project I get the result printed on my screen as expected though if I don't use a result variable in order to print for cout `cout<< c1+c2;`I am getting the following error: ``` error: no match for 'operator<<' (operand types are 'std::ostream {aka std::basic_ostream}' and 'cmp::Complex') ``` If I try to use `cout<< &(c1+c2);` I get the error message: ``` error: taking address of temporary [-fpermissive] ``` and it was not my intention to have to write it like that. I am under the impression that it fails because c1+c2 is not taken as a reference since it is a temporary object that is not saved anywhere and since I cannot take a reference of a temporary object according to the second error, it fails. This explains why when I save the result of c1+c2 on result I can execute the program without errors. In the video I was watching, Eclipse was used and in my case I am using Codeblocks with GNU GCC compiler. Could you help me understand what am I doing wrong ? Why isn't it working in my case but works with the same syntax on the video ? EDIT: Solution: The << operator function should be taking a const reference of Complex type instead. 
A temporary object can only be bound to a const reference. Thus the prototype of it should look something like this... ``` friend ostream &operator<<(ostream &out,const Complex &source); ```<issue_comment>username_1: `c1+c2` produces a temporary object, which can't be bound to the non-const reference in your stream operator. You need to change it to a const reference Upvotes: 3 [selected_answer]<issue_comment>username_2: ### Problem: ``` ... friend ostream &operator<<(ostream &out,Complex &source); ... cout<< c1+c2; // error or warning ... ``` ### Why isn't it working? The expression `c1+c2` evaluates to a temporary object of category [prvalue](http://en.cppreference.com/w/cpp/language/value_category#prvalue). `std::cout << c1+c2` is trying to bind a prvalue to an lvalue reference. > > All temporary objects are destroyed as the last step in evaluating the > full-expression that (lexically) contains the point where they were > created, and if multiple temporary objects were created, they are > destroyed in the order opposite to the order of creation. This is true > even if that evaluation ends in throwing an exception. > > > ### Solution?
> > When used as a function argument and when two overloads of the > function are available, one taking rvalue reference parameter and the > other taking lvalue reference to const parameter, an rvalue binds to > the rvalue reference overload (thus, if both copy and move > constructors are available, an rvalue argument invokes the move > constructor, and likewise with copy and move assignment operators). > > > ### Example: ``` #include ... int main() { Complex c1(3, 2.25); Complex c2(2.25, 3); auto lvalue = c1 + c2; auto && rlvalueRef = c1 + c2; const auto& constlvalueRef = c1 + c2; const auto constlvalue = c1 + c2; std::cout << constlvalue << lvalue << rlvalueRef << constlvalueRef << c1 + c2 << std::endl; return 0; } ``` ### Output: ``` non modified: (5.25 + 5.25) non modified: (5.25 + 5.25) non modified: (5.25 + 5.25) non modified: (5.25 + 5.25) modified output: (105.25 + 5.25) ``` Upvotes: 1
2018/03/19
1,222
3,736
<issue_start>username_0: I have data like this, below are the 3 rows from my data set: ``` total=7871MB;free=5711MB;used=2159MB;shared=0MB;buffers=304MB;cached=1059MB; free=71MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;cached=1059MB; cached=1059MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;free=109MB; ``` Expected output as below, ``` total free used shared buffers cached 7871MB 5711MB 2159MB 0MB 304MB 1059MB 5751MB 71MB 5MB 3159MB 30MB 1059MB 5751MB 109MB 5MB 3159MB 30MB 1059MB ``` and the problem here is I want to make different columns using above data like `total value`, `free value`, `used value`, `shared value`. I can do that by splitting using `;` but in other rows values are getting shuffled, like first value coming as free then total followed by other values, Is there any way using REGEX in , if we find total get value till `;` and put into one column, if we find free get value till `;` and put into another column?
2018/03/19
327
998
<issue_start>username_0: I am new to JavaScript, I want to get the right page count. if one page the item count is `20`, and the page count is 23, the page should be `2`. ``` var count = 23 var per_page_count = 20 ``` If in other language we can use: ``` count / per_page_count + 1 ``` to get the page count, but in JavaScript we can not get it. I also tried use `Math.round`, still not work ``` console.log(Math.round(count/per_page_count)) // there I want to get 2, but get 1 ```<issue_comment>username_1: You can use ``` Math.ceil(count/per_page_count) ``` > > The Math.ceil() function returns the smallest integer greater than or equal to a given number. > > > from [Math.ceil document](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/ceil). Upvotes: 4 [selected_answer]<issue_comment>username_2: I think you are trying to implement some sort of pagination. So I would suggest : ``` Maths.ceil(count/per_page_count) ``` Upvotes: -1
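For what it's worth, the same page count can be computed without `Math.ceil` at all, using only integer arithmetic; here is that idiom sketched in Java, where `/` on `int`s truncates. It also avoids the bug in the `count / per_page_count + 1` formula from other languages, which over-counts when `count` is an exact multiple of the page size:

```java
public class PageCount {
    // Ceiling division for non-negative operands: add (divisor - 1)
    // before dividing, so any remainder pushes the result up by one.
    static int pages(int itemCount, int perPageCount) {
        return (itemCount + perPageCount - 1) / perPageCount;
    }

    public static void main(String[] args) {
        System.out.println(pages(23, 20)); // prints: 2
        System.out.println(pages(40, 20)); // prints: 2 (exact multiple, no extra page)
        System.out.println(pages(41, 20)); // prints: 3
    }
}
```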
2018/03/19
903
3,620
<issue_start>username_0: I have been having problems with this lab assignment for class. I want to ensure that my input is the specified enumerate, and not any other input. But for some reason I can't seem to get it to work properly, even when I put in an incorrect answer it just runs through the loop until I input the right answer. I am not experienced to know what I'm not seeing. ``` questions = [ {'question': '\nOverall what is your response?', 'answers': ['Calmly look around?', 'Panic!!', 'Roll over and cry?'], 'correct': '1'}, {'question': '\nYou notice a door. Do you try it?', 'answers': ['Yes', 'No'], 'correct': '1'}, {'question': '\nYou smell something strange in the corner. What do you do?', 'answers': ['Investigate?', 'Do you poke it?', 'Leave it alone?', 'Vomit'], 'correct':'1'}, {'question': '\nA light flickerss above you. What do you do?', 'answers': ['Break it!!', 'Tighten it.', 'Leave it alone.', 'Roll over'], 'correct': '2'}, {'question': '\nThe bed you woke up in seems strange..', 'answers': ['Roll it over.', 'Roll over and cry.', 'Go to bed.'], 'correct': '1'}, {'question': '\nIn one corner a is playing Tiny Tim. What do you do?', 'answers': ['Turn it off', 'Mess with the dials', 'Sing along', 'Watch the show, continuously'], 'correct': '1'}, {'question': '\nThe door knob creeks', 'answers': ['Jump through the roof', 'Run to it', 'Wait calmly', 'Yell!'], 'correct': '3'}, {'question': '\nYou notice a cellar door in the corner', 'answers': ['Try to open it', 'Cautiously approach it', 'Forcfully open', 'Yell at it'], 'correct': '1'}, {'question': '\nA loud speaker is on the ceiling, starts playing Tiny Tim', 'answers': ['Sing along', 'Panic even more', 'Embrace the Tiny Tim', 'Try and destroy the loud speaker'], 'correct': '4'}, {'question': '\nYou see a toilet of needeles. 
What do you do?', 'answers': ['Reach your arm into it.', 'Question the needles in the toilet.', 'Check the tank.', 'Roll over.'], 'correct': '3'} ] score = 0 for question in questions: print(question['question']) for i, choice in enumerate(question['answers']): print(str(i + 1) + '. ' + choice) answer = '' while answer not in range(1, len(question['answers'])): answer = input('Choose a numerical answer: ') if answer == question['correct']: score = score break elif answer in question['answers']: break if answer == question['correct']: score = score + 1 else: print('That\'s one way to try it...') ```
2018/03/19
375
1,493
<issue_start>username_0: I have a multi module Spring Boot project with the following structure: * MyAppModule + src/main/java/Application.java + ... * IntegrationTests + src/test/java/integrationTest.java The *IntegrationTest* module is testing the *MyAppModule*. The *IntegrationTest* module does not have a main package. Therefore there is no Spring Application. It has just the test package. Nevertheless I would like to read in the application.yaml for some properties but I'm not able to because the attributes are always null: ``` @Configuration @PropertySource("classpath:application.yaml") public class IntegrationTest { @Value("${app.url}") private String appUrl; } ``` Isn't it possible to use the `@Value` annotation without having a **Spring Application** (main with `SpringApplication.run` etc.)?<issue_comment>username_1: Why not just @ConfigurationProperties? ``` @Configuration @ConfigurationProperties(prefix = "app") public class IntegrationTest { private String url; } ``` Upvotes: 1 <issue_comment>username_2: Add Integration tests base package for component scanning in spring configuration. ``` @ComponentScan("basePackage1,basePackage2....etc") ``` Upvotes: 0 <issue_comment>username_3: Thanks for all responses. Solved it with the following code and a SpringBootApplication was necessary. ``` @RunWith(SpringJUnit4ClassRunner.class) @SpringBootTest public class IntegrationTest { @Value("${app.url}") private String url; } ``` Upvotes: 0
2018/03/19
668
2,808
<issue_start>username_0: I have a single collection into which I am inserting documents of different types. I use the type parameter to distinguish between different datatypes in the collection. When I am inserting a document, I have created an Id field for every document, but Cosmosdb has a built-in id field. How can I insert a new document and retrieve the id of the created Document all in one query?<issue_comment>username_1: The **`CreateDocumentAsync`** method returns the created document so you should be able to get the document id. ``` Document created = await client.CreateDocumentAsync(collectionLink, order); ``` Upvotes: 3 <issue_comment>username_2: I think you just need to `.getResource()` method to get the create document obj. Please refer to the ***java code***: ``` DocumentClient documentClient = new DocumentClient(END_POINT, MASTER_KEY, ConnectionPolicy.GetDefault(), ConsistencyLevel.Session); Document document = new Document(); document.set("name","aaa"); document = documentClient.createDocument("dbs/db/colls/coll",document,null,false).getResource(); System.out.println(document.toString()); //then do your business logic with the document..... ``` ***C# code:*** ``` Parent p = new Parent { FamilyName = "Andersen.1", FirstName = "Andersen", }; Document doc = client.CreateDocumentAsync("dbs/db/colls/coll",p,null).Result.Resource; Console.WriteLine(doc); ``` Hope it helps you. Upvotes: 3 [selected_answer]<issue_comment>username_3: Sure, you could always fetch the `id` from creation method response in your favorite API as already shown in other answers. You may have reasons why you want to delegate key-assigning to DocumentDB, but to be frank, I don't see any good ones. If inserted document would have no `id` set DocumentDB would generate a GUID for you. There wouldn't be any notable difference compared to **simply generating a new GUID yourself and assign it into id-field before save**. 
Self-assigning the identity would let you simplify your code a bit and also let you use the identity not only after persisting but also BEFORE, which could simplify a lot of scenarios you may have or run into in the future. Also, note that you don't have to use a GUID as the `id` and could use any unique value you already have. Since you mentioned you have an `Id` field (which, by name, I assume to be a primary key), you should consider reusing it instead of introducing another set of keys. A self-assigned non-GUID key is usually a better choice since it can be designed to match your data and application needs better than a GUID. For example, in addition to being just unique, it may also be a natural key, narrower, human-readable, ordered, etc. Upvotes: 1
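As the last answer notes, the self-assignment boils down to producing the identifier client-side before the insert call ever happens. A minimal sketch of that idea (plain Python with the standard `uuid` module — illustrative only, not the Cosmos DB SDK, and the helper name is made up):

```python
import uuid

def with_client_assigned_id(document: dict) -> dict:
    """Return a copy of the document with an 'id' assigned client-side.

    If the caller already has a natural unique key (e.g. an existing
    'Id' field), reuse it; otherwise fall back to a fresh GUID -- the
    same kind of value the server would have generated anyway.
    """
    doc = dict(document)
    if "id" not in doc:
        doc["id"] = str(doc.get("Id") or uuid.uuid4())
    return doc

order = with_client_assigned_id({"type": "order", "total": 42})
# The id is known *before* any insert call is made.
print(order["id"])
```

Because the id exists before the document is persisted, it can be referenced in related records or log messages without waiting for the server's response.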
2018/03/19
1,334
4,701
<issue_start>username_0: Say we have two log files with comma-separated values. `file1.txt` represents the `employee id` and `employee name`; `file2.txt` represents the `employee id` and the `projects` he is associated with. `file1` has unique entries. `file2` has a many-to-many relation. New employees don't have any entry in `file2.txt` if they haven't been assigned any projects. ``` File1.txt:(EmpId, EmpName) 1,abc 2,ac 3,bc 4,acc 5,abb 6,bbc 7,aac 8,aba 9,aaa File2.txt: (EmpId, ProjectId) 1,102 2,102 1,103 3,101 5,102 1,103 2,105 2,200 9,102 Find the number of projects each employee has been assigned. For new employees without any projects, print 0. Output: 1=3 2=3 3=1 4=0 5=1 6=0 7=0 8=0 9=1 ``` I used BufferedReader to read a line from `file1` and compare it with each line from `file2`. Below is my code, ``` public static void main(String[] args) throws IOException { // TODO Auto-generated method stub BufferedReader file1 = new BufferedReader(new FileReader("file1.txt")); BufferedReader file2 = new BufferedReader(new FileReader("file2.txt")); BufferedReader file3 = new BufferedReader(new FileReader("file2.txt")); HashMap empProjCount = new HashMap(); int lines = 0; while (file2.readLine() != null) lines++; String line1 = file1.readLine(); String[] line_1 = line1.split(","); String line2 = file3.readLine(); String[] line_2 = line2.split(","); while(line1 != null && line2 != null) { int count = 0; for(int i=1;i<=lines+1 && line2 != null;i++) { if(line_1[0].equals(line_2[0])) { count++; } line2 = file3.readLine(); if(line2 != null){ line_2 = line2.split(","); } } file3 = new BufferedReader(new FileReader("file2.txt")); empProjCount.put(line_1[0], count); line1 = file1.readLine(); if(line1 != null) line_1 = line1.split(","); line2 = file3.readLine(); if(line2 != null) line_2 = line2.split(","); } System.out.println(empProjCount); ``` My questions are, 1. Is there any way to optimize it to less than O(n^2), without using any extra space? 2.
I used 3 BufferedReader to read a `file2.txt`, as once we read a line, it moves to next line. Is there any other option to mark the current line? 3. If we considered this as a table, what is the best way to query the above scenario?
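For reference, the quadratic rescan of `file2` in the question's code can be avoided by reading `file2` once into a hash map of counts keyed by employee id, then making a single pass over `file1` — O(n + m) time, at the cost of O(k) extra memory for the distinct employee ids (so it does give up the "no extra space" part of question 1). A language-neutral sketch of that approach (shown in Python purely to illustrate the algorithm; the same shape works in Java with `HashMap` and `getOrDefault`):

```python
# Single-pass counting sketch. The file contents from the question are
# inlined here so the example is self-contained; in the real program
# they would be read line by line from file1.txt / file2.txt.
file1_lines = ["1,abc", "2,ac", "3,bc", "4,acc", "5,abb",
               "6,bbc", "7,aac", "8,aba", "9,aaa"]
file2_lines = ["1,102", "2,102", "1,103", "3,101", "5,102",
               "1,103", "2,105", "2,200", "9,102"]

def project_counts(emp_lines, proj_lines):
    # One pass over file2: count projects per employee id.
    counts = {}
    for line in proj_lines:
        emp_id = line.split(",")[0]
        counts[emp_id] = counts.get(emp_id, 0) + 1
    # One pass over file1: every employee gets an entry, defaulting to 0.
    return {line.split(",")[0]: counts.get(line.split(",")[0], 0)
            for line in emp_lines}

print(project_counts(file1_lines, file2_lines))
# {'1': 3, '2': 3, '3': 1, '4': 0, '5': 1, '6': 0, '7': 0, '8': 0, '9': 1}
```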
2018/03/19
677
2,682
<issue_start>username_0: I have a list of strings and I want to read the strings one by one and convert each one into a list of ints. Is there a way to convert each character into a new list? ``` ["123","456","789"] to [[1,2,3],[4,5,6],[7,8,9]] stringToInt :: [String] -> [[Int]] ```
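The transformation being asked for is a map within a map: convert every character of every string to its digit value. In Haskell the natural building blocks would be `map` and `Data.Char.digitToInt`; as a quick executable sketch of the same shape (written in Python here purely as an illustration of the idea, not as Haskell):

```python
def string_to_int(strings):
    # Outer loop: one list per input string.
    # Inner loop: one int per character of that string.
    return [[int(ch) for ch in s] for s in strings]

print(string_to_int(["123", "456", "789"]))
# [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```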
2018/03/19
617
2,027
<issue_start>username_0: **Edit--** **I guess the structure I had in mind was a bit weird, so I decided not to code it like that and to just follow the rules the library lays out (use 1 fragment and that's it). --** I'm using Board Library (Reference : <https://github.com/woxblom/DragListView> ) **board_layout.xml (Root Fragment)** ``` xml version="1.0" encoding="utf-8"? ``` **fragment_adapter.xml** ``` xml version="1.0" encoding="utf-8"? ``` **recycler_item.xml** ``` xml version="1.0" encoding="utf-8"? ``` **vertical_recycler_item.xml** ``` xml version="1.0" encoding="utf-8"? ``` and I'm going to make a ViewPager that extends FragmentStatePagerAdapter with multiple fragments, but I got an **error**: ``` Caused by: android.view.InflateException: Binary XML file line #0: HorizontalScrollView can host only one direct child Caused by: java.lang.IllegalStateException: HorizontalScrollView can host only one direct child ``` I don't know what the problem is here. Please help me, thank you.<issue_comment>username_1: > > `Caused by: android.view.InflateException: Binary XML file line #0: > HorizontalScrollView can host only one direct child` > > > From the [**source code**](https://github.com/woxblom/DragListView/blob/master/library/src/main/java/com/woxthebox/draglistview/BoardView.java) > > **`BoardView extends HorizontalScrollView`**, so you cannot add multiple children to it > > > A `HorizontalScrollView` can host only one direct child, > so you need to use a single layout as the child of your `BoardView`, e.g. a `LinearLayout` or `RelativeLayout` > > > **SAMPLE CODE** ``` xml version="1.0" encoding="utf-8"? //add your control here ``` Upvotes: 0 <issue_comment>username_2: > > HorizontalScrollView can host only one direct child - meaning, you have to add the components to the LinearLayout/RelativeLayout. > > > ``` HorizontalScrollView -> LinearLayout/RelativeLayout -> then components ``` i.e. **BoardView -> LinearLayout/RelativeLayout -> ViewPager** Upvotes: 1
2018/03/19
4,499
15,429
<issue_start>username_0: I am trying to read a csv file present on a Google Cloud Storage bucket into a pandas dataframe. ``` import pandas as pd import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline from io import BytesIO from google.cloud import storage storage_client = storage.Client() bucket = storage_client.get_bucket('createbucket123') blob = bucket.blob('my.csv') path = "gs://createbucket123/my.csv" df = pd.read_csv(path) ``` It shows this error message: ``` FileNotFoundError: File b'gs://createbucket123/my.csv' does not exist ``` What am I doing wrong? I am not able to find any solution that does not involve Google Datalab.<issue_comment>username_1: `read_csv` does not support `gs://` From the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html): > > The string could be a URL. Valid URL schemes include http, ftp, s3, > and file. For file URLs, a host is expected. For instance, a local > file could be file ://localhost/path/to/table.csv > > > You can [download the file](https://googlecloudplatform.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.download_to_file) or [fetch it as a string](https://googlecloudplatform.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.download_as_string) in order to manipulate it. Upvotes: 2 <issue_comment>username_2: There are *three* ways of accessing files in GCS: 1. Downloading the client library (**this one for you**) 2. Using Cloud Storage Browser in the Google Cloud Platform Console 3. Using gsutil, a command-line tool for working with files in Cloud Storage. Using Step 1, [set up](https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/setting-up-cloud-storage) GCS for your work.
After which you have to: ``` import cloudstorage as gcs from google.appengine.api import app_identity ``` Then you have to specify the Cloud Storage bucket name and create read/write functions to access your bucket: You can find the remaining read/write tutorial [here](https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/read-write-to-cloud-storage): Upvotes: 2 <issue_comment>username_3: If I understood your question correctly then maybe this link can help you get a better **URL** for your **read_csv()** function: <https://cloud.google.com/storage/docs/access-public-data> Upvotes: 1 <issue_comment>username_4: UPDATE ------ As of version 0.24 of pandas, `read_csv` supports reading directly from Google Cloud Storage. Simply provide a link to the bucket like this: ``` df = pd.read_csv('gs://bucket/your_path.csv') ``` `read_csv` will then use the `gcsfs` module to read the Dataframe, which means it has to be installed (or you will get an exception pointing at the missing dependency). I leave three other options for the sake of completeness. * Home-made code * gcsfs * dask I will cover them below. The hard way: do-it-yourself code --------------------------------- I have written some convenience functions to read from Google Storage. To make it more readable I added type annotations. If you happen to be on Python 2, simply remove these and the code will work all the same. It works equally on public and private data sets, assuming you are authorised. In this approach you don't need to first download the data to your local drive.
How to use it: ``` fileobj = get_byte_fileobj('my-project', 'my-bucket', 'my-path') df = pd.read_csv(fileobj) ``` The code: ``` from io import BytesIO, StringIO from google.cloud import storage from google.oauth2 import service_account def get_byte_fileobj(project: str, bucket: str, path: str, service_account_credentials_path: str = None) -> BytesIO: """ Retrieve data from a given blob on Google Storage and pass it as a file object. :param path: path within the bucket :param project: name of the project :param bucket_name: name of the bucket :param service_account_credentials_path: path to credentials. TIP: can be stored as env variable, e.g. os.getenv('GOOGLE_APPLICATION_CREDENTIALS_DSPLATFORM') :return: file object (BytesIO) """ blob = _get_blob(bucket, path, project, service_account_credentials_path) byte_stream = BytesIO() blob.download_to_file(byte_stream) byte_stream.seek(0) return byte_stream def get_bytestring(project: str, bucket: str, path: str, service_account_credentials_path: str = None) -> bytes: """ Retrieve data from a given blob on Google Storage and pass it as a byte-string. :param path: path within the bucket :param project: name of the project :param bucket_name: name of the bucket :param service_account_credentials_path: path to credentials. TIP: can be stored as env variable, e.g. 
os.getenv('GOOGLE_APPLICATION_CREDENTIALS_DSPLATFORM') :return: byte-string (needs to be decoded) """ blob = _get_blob(bucket, path, project, service_account_credentials_path) s = blob.download_as_string() return s def _get_blob(bucket_name, path, project, service_account_credentials_path): credentials = service_account.Credentials.from_service_account_file( service_account_credentials_path) if service_account_credentials_path else None storage_client = storage.Client(project=project, credentials=credentials) bucket = storage_client.get_bucket(bucket_name) blob = bucket.blob(path) return blob ``` gcsfs ----- [gcsfs](http://gcsfs.readthedocs.io/en/latest/) is a "Pythonic file-system for Google Cloud Storage". How to use it: ``` import pandas as pd import gcsfs fs = gcsfs.GCSFileSystem(project='my-project') with fs.open('bucket/path.csv') as f: df = pd.read_csv(f) ``` dask ---- [Dask](https://dask.pydata.org/) "provides advanced parallelism for analytics, enabling performance at scale for the tools you love". It's great when you need to deal with large volumes of data in Python. Dask tries to mimic much of the `pandas` API, making it easy to use for newcomers. Here is the [read\_csv](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.read_csv) How to use it: ``` import dask.dataframe as dd df = dd.read_csv('gs://bucket/data.csv') df2 = dd.read_csv('gs://bucket/path/*.csv') # nice! # df is now Dask dataframe, ready for distributed processing # If you want to have the pandas version, simply: df_pd = df.compute() ``` Upvotes: 7 <issue_comment>username_5: Another option is to use TensorFlow which comes with the ability to do a streaming read from Google Cloud Storage: ``` from tensorflow.python.lib.io import file_io with file_io.FileIO('gs://bucket/file.csv', 'r') as f: df = pd.read_csv(f) ``` Using tensorflow also gives you a convenient way to handle wildcards in the filename. 
For example: Reading wildcard CSV into Pandas -------------------------------- Here is code that will read all CSVs that match a specific pattern (e.g: gs://bucket/some/dir/train-\*) into a Pandas dataframe: ``` import tensorflow as tf from tensorflow.python.lib.io import file_io import pandas as pd def read_csv_file(filename): with file_io.FileIO(filename, 'r') as f: df = pd.read_csv(f, header=None, names=['col1', 'col2']) return df def read_csv_files(filename_pattern): filenames = tf.io.gfile.Glob(filename_pattern) dataframes = [read_csv_file(filename) for filename in filenames] return pd.concat(dataframes) ``` usage ===== ``` DATADIR='gs://my-bucket/some/dir' traindf = read_csv_files(os.path.join(DATADIR, 'train-*')) evaldf = read_csv_files(os.path.join(DATADIR, 'eval-*')) ``` Upvotes: 5 <issue_comment>username_6: As of `pandas==0.24.0` this is supported natively if you have `gcsfs` installed: <https://github.com/pandas-dev/pandas/pull/22704>. Until the official release you can try it out with `pip install pandas==0.24.0rc1`. Upvotes: 3 <issue_comment>username_7: One will still need to use `import gcsfs` if loading compressed files. 
Tried `pd.read_csv('gs://your-bucket/path/data.csv.gz')` with `pd.__version__` => 0.25.3 and got the following error, ``` /opt/conda/anaconda/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds) 438 # See https://github.com/python/mypy/issues/1297 439 fp_or_buf, _, compression, should_close = get_filepath_or_buffer( --> 440 filepath_or_buffer, encoding, compression 441 ) 442 kwds["compression"] = compression /opt/conda/anaconda/lib/python3.6/site-packages/pandas/io/common.py in get_filepath_or_buffer(filepath_or_buffer, encoding, compression, mode) 211 212 if is_gcs_url(filepath_or_buffer): --> 213 from pandas.io import gcs 214 215 return gcs.get_filepath_or_buffer( /opt/conda/anaconda/lib/python3.6/site-packages/pandas/io/gcs.py in 3 4 gcsfs = import_optional_dependency( ----> 5 "gcsfs", extra="The gcsfs library is required to handle GCS files" 6 ) 7 /opt/conda/anaconda/lib/python3.6/site-packages/pandas/compat/_optional.py in import_optional_dependency(name, extra, raise_on_missing, on_version) 91 except ImportError: 92 if raise_on_missing: ---> 93 raise ImportError(message.format(name=name, extra=extra)) from None 94 else: 95 return None ImportError: Missing optional dependency 'gcsfs'. The gcsfs library is required to handle GCS files Use pip or conda to install gcsfs. ``` Upvotes: 0 <issue_comment>username_8: Since Pandas 1.2 it's super easy to load files from google storage into a DataFrame. If you work on **your local machine** it looks like this: ``` df = pd.read_csv('gcs://your-bucket/path/data.csv.gz', storage_options={"token": "credentials.json"}) ``` It's important that you add the credentials.json file from Google as the token.
If you work on google cloud do this: ``` df = pd.read_csv('gcs://your-bucket/path/data.csv.gz', storage_options={"token": "cloud"}) ``` Upvotes: 3 <issue_comment>username_9: I was taking a look at this question and didn't want to have to go through the hassle of installing another library, `gcsfs`, which literally says in the documentation, `This software is beta, use at your own risk`... but I found a great workaround that I wanted to post here in case this is helpful to anyone else, using just the google.cloud storage library and some native python libraries. Here's the function: ``` import pandas as pd from google.cloud import storage import os import io os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'path/to/creds.json' def gcp_csv_to_df(bucket_name, source_file_name): storage_client = storage.Client() bucket = storage_client.bucket(bucket_name) blob = bucket.blob(source_file_name) data = blob.download_as_bytes() df = pd.read_csv(io.BytesIO(data)) print(f'Pulled down file from bucket {bucket_name}, file name: {source_file_name}') return df ``` Further, although it is outside of the scope of this question, if you would like to upload a pandas dataframe to GCP using a similar function, here is the code to do so: ``` def df_to_gcp_csv(df, dest_bucket_name, dest_file_name): storage_client = storage.Client() bucket = storage_client.bucket(dest_bucket_name) blob = bucket.blob(dest_file_name) blob.upload_from_string(df.to_csv(), 'text/csv') print(f'DataFrame uploaded to bucket {dest_bucket_name}, file name: {dest_file_name}') ``` Hope this is helpful! I know I'll be using these functions for sure.
Upvotes: 4 <issue_comment>username_10: Using [pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) and [google-cloud-storage](https://cloud.google.com/storage/docs/reference/libraries#using_the_client_library) python packages: First, we upload a file to the bucket in order to get a fully working example: ```py import pandas as pd from sklearn.datasets import load_iris dataset = load_iris() data_df = pd.DataFrame( dataset.data, columns=dataset.feature_names) data_df.head() ``` ```py Out[1]: sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) 0 5.1 3.5 1.4 0.2 1 4.9 3.0 1.4 0.2 2 4.7 3.2 1.3 0.2 3 4.6 3.1 1.5 0.2 4 5.0 3.6 1.4 0.2 ``` Upload a csv file to the bucket (GCP credentials setup is required, read more in [here](https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable)): ```py from io import StringIO from google.cloud import storage bucket_name = 'my-bucket-name' # Replace it with your own bucket name. data_path = 'somepath/data.csv' # Get Google Cloud client client = storage.Client() # Get bucket object bucket = client.get_bucket(bucket_name) # Get blob object (this is pointing to the data_path) data_blob = bucket.blob(data_path) # Upload a csv to google cloud storage data_blob.upload_from_string( data_df.to_csv(), 'text/csv') ``` Now that we have a csv on the bucket, use `pd.read_csv` by passing the content of the file. ```py # Read from bucket data_str = data_blob.download_as_text() # Instanciate dataframe data_dowloaded_df = pd.read_csv(StringIO(data_str)) data_dowloaded_df.head() ``` ```py Out[2]: Unnamed: 0 sepal length (cm) ... petal length (cm) petal width (cm) 0 0 5.1 ... 1.4 0.2 1 1 4.9 ... 1.4 0.2 2 2 4.7 ... 1.3 0.2 3 3 4.6 ... 1.5 0.2 4 4 5.0 ... 
1.4 0.2 [5 rows x 5 columns] ``` When comparing this approach with `pd.read_csv('gs://my-bucket/file.csv')` approach, I found that the approach described in here makes more explicit that `client = storage.Client()` is the one taking care of the authentication (which could be very handy when working with multiple credentials). Also, `storage.Client` comes already fully installed if you run this code on a resource from Google Cloud Platform, when for `pd.read_csv('gs://my-bucket/file.csv')` you'll need to have installed the package `gcsfs` that allow pandas to access Google Storage. Upvotes: 2 <issue_comment>username_11: Google Cloud storage has a method [download\_as\_bytes()](https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.blob.Blob#google_cloud_storage_blob_Blob_download_as_bytes), and then, from that you can read a csv from the bytes HT to [NEWBEDEV](https://newbedev.com/how-to-convert-bytes-data-into-a-python-pandas-dataframe), the code would look like this: ``` import pandas as pd from io import BytesIO blob = storage_client.get_bucket(event['bucket']).get_blob(event['name']) blobBytes = blob.download_as_bytes() df = pd.read_csv(BytesIO(blobBytes)) ``` My `event` comes from a [cloud storage example](https://cloud.google.com/functions/docs/tutorials/storage#functions-deploy-command-python) Upvotes: 1
2018/03/19
4,621
15,971
<issue_start>username_0: I wanna do something before `User` is saved in `Users::RegistrationsController < Devise::RegistrationsController`. I can use callback methods like `before_save`, but it does too many things because I want it only in RegistrationsController. See the code below: (This is `Devise::RegistrationsController#create`) ``` def create build_resource(sign_up_params) resource.do_something_i_want # <= HERE resource.save yield resource if block_given? if resource.persisted? if resource.active_for_authentication? set_flash_message! :notice, :signed_up sign_up(resource_name, resource) respond_with resource, location: after_sign_up_path_for(resource) else set_flash_message! :notice, :"signed_up_but_#{resource.inactive_message}" expire_data_after_sign_in! respond_with resource, location: after_inactive_sign_up_path_for(resource) end else clean_up_passwords resource set_minimum_password_length respond_with resource end end ``` It seems the only way to do this is copy & paste the `create` method of devise into `Users::RegistrationsController`. Can I do this more easily?<issue_comment>username_1: `read_csv` does not support `gs://` From the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html): > > The string could be a URL. Valid URL schemes include http, ftp, s3, > and file. For file URLs, a host is expected. For instance, a local > file could be file ://localhost/path/to/table.csv > > > You can [download the file](https://googlecloudplatform.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.download_to_file) or [fetch it as a string](https://googlecloudplatform.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.download_as_string) in order to manipulate it. Upvotes: 2 <issue_comment>username_2: There are *three* ways of accessing files in the GCS: 1. Downloading the client library (**this one for you**) 2. 
Using Cloud Storage Browser in the Google Cloud Platform Console 3. Using gsutil, a command-line tool for working with files in Cloud Storage. Using Step 1, [setup](https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/setting-up-cloud-storage) the GSC for your work. After which you have to: ``` import cloudstorage as gcs from google.appengine.api import app_identity ``` Then you have to specify the Cloud Storage bucket name and create read/write functions for to access your bucket: You can find the remaining read/write tutorial [here](https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/read-write-to-cloud-storage): Upvotes: 2 <issue_comment>username_3: If i understood your question correctly then maybe this link can help u get a better **URL** for your **read\_csv()** function : <https://cloud.google.com/storage/docs/access-public-data> Upvotes: 1 <issue_comment>username_4: UPDATE ------ As of version 0.24 of pandas, `read_csv` supports reading directly from Google Cloud Storage. Simply provide link to the bucket like this: ``` df = pd.read_csv('gs://bucket/your_path.csv') ``` The `read_csv` will then use `gcsfs` module to read the Dataframe, which means it had to be installed (or you will get an exception pointing at missing dependency). I leave three other options for the sake of completeness. * Home-made code * gcsfs * dask I will cover them below. The hard way: do-it-yourself code --------------------------------- I have written some convenience functions to read from Google Storage. To make it more readable I added type annotations. If you happen to be on Python 2, simply remove these and code will work all the same. It works equally on public and private data sets, assuming you are authorised. In this approach you don't need to download first the data to your local drive. 
How to use it: ``` fileobj = get_byte_fileobj('my-project', 'my-bucket', 'my-path') df = pd.read_csv(fileobj) ``` The code: ``` from io import BytesIO, StringIO from google.cloud import storage from google.oauth2 import service_account def get_byte_fileobj(project: str, bucket: str, path: str, service_account_credentials_path: str = None) -> BytesIO: """ Retrieve data from a given blob on Google Storage and pass it as a file object. :param path: path within the bucket :param project: name of the project :param bucket_name: name of the bucket :param service_account_credentials_path: path to credentials. TIP: can be stored as env variable, e.g. os.getenv('GOOGLE_APPLICATION_CREDENTIALS_DSPLATFORM') :return: file object (BytesIO) """ blob = _get_blob(bucket, path, project, service_account_credentials_path) byte_stream = BytesIO() blob.download_to_file(byte_stream) byte_stream.seek(0) return byte_stream def get_bytestring(project: str, bucket: str, path: str, service_account_credentials_path: str = None) -> bytes: """ Retrieve data from a given blob on Google Storage and pass it as a byte-string. :param path: path within the bucket :param project: name of the project :param bucket_name: name of the bucket :param service_account_credentials_path: path to credentials. TIP: can be stored as env variable, e.g. 
os.getenv('GOOGLE_APPLICATION_CREDENTIALS_DSPLATFORM') :return: byte-string (needs to be decoded) """ blob = _get_blob(bucket, path, project, service_account_credentials_path) s = blob.download_as_string() return s def _get_blob(bucket_name, path, project, service_account_credentials_path): credentials = service_account.Credentials.from_service_account_file( service_account_credentials_path) if service_account_credentials_path else None storage_client = storage.Client(project=project, credentials=credentials) bucket = storage_client.get_bucket(bucket_name) blob = bucket.blob(path) return blob ``` gcsfs ----- [gcsfs](http://gcsfs.readthedocs.io/en/latest/) is a "Pythonic file-system for Google Cloud Storage". How to use it: ``` import pandas as pd import gcsfs fs = gcsfs.GCSFileSystem(project='my-project') with fs.open('bucket/path.csv') as f: df = pd.read_csv(f) ``` dask ---- [Dask](https://dask.pydata.org/) "provides advanced parallelism for analytics, enabling performance at scale for the tools you love". It's great when you need to deal with large volumes of data in Python. Dask tries to mimic much of the `pandas` API, making it easy to use for newcomers. Here is the [read\_csv](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.read_csv) How to use it: ``` import dask.dataframe as dd df = dd.read_csv('gs://bucket/data.csv') df2 = dd.read_csv('gs://bucket/path/*.csv') # nice! # df is now Dask dataframe, ready for distributed processing # If you want to have the pandas version, simply: df_pd = df.compute() ``` Upvotes: 7 <issue_comment>username_5: Another option is to use TensorFlow which comes with the ability to do a streaming read from Google Cloud Storage: ``` from tensorflow.python.lib.io import file_io with file_io.FileIO('gs://bucket/file.csv', 'r') as f: df = pd.read_csv(f) ``` Using tensorflow also gives you a convenient way to handle wildcards in the filename. 
For example: Reading wildcard CSV into Pandas -------------------------------- Here is code that will read all CSVs that match a specific pattern (e.g: gs://bucket/some/dir/train-\*) into a Pandas dataframe: ``` import tensorflow as tf from tensorflow.python.lib.io import file_io import pandas as pd def read_csv_file(filename): with file_io.FileIO(filename, 'r') as f: df = pd.read_csv(f, header=None, names=['col1', 'col2']) return df def read_csv_files(filename_pattern): filenames = tf.io.gfile.Glob(filename_pattern) dataframes = [read_csv_file(filename) for filename in filenames] return pd.concat(dataframes) ``` usage ===== ``` DATADIR='gs://my-bucket/some/dir' traindf = read_csv_files(os.path.join(DATADIR, 'train-*')) evaldf = read_csv_files(os.path.join(DATADIR, 'eval-*')) ``` Upvotes: 5 <issue_comment>username_6: As of `pandas==0.24.0` this is supported natively if you have `gcsfs` installed: <https://github.com/pandas-dev/pandas/pull/22704>. Until the official release you can try it out with `pip install pandas==0.24.0rc1`. Upvotes: 3 <issue_comment>username_7: One will still need to use `import gcsfs` if loading compressed files. 
Tried `pd.read_csv('gs://your-bucket/path/data.csv.gz')` in `pd.__version__` => 0.25.3 and got the following error, ``` /opt/conda/anaconda/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds) 438 # See https://github.com/python/mypy/issues/1297 439 fp_or_buf, _, compression, should_close = get_filepath_or_buffer( --> 440 filepath_or_buffer, encoding, compression 441 ) 442 kwds["compression"] = compression /opt/conda/anaconda/lib/python3.6/site-packages/pandas/io/common.py in get_filepath_or_buffer(filepath_or_buffer, encoding, compression, mode) 211 212 if is_gcs_url(filepath_or_buffer): --> 213 from pandas.io import gcs 214 215 return gcs.get_filepath_or_buffer( /opt/conda/anaconda/lib/python3.6/site-packages/pandas/io/gcs.py in 3 4 gcsfs = import\_optional\_dependency( ----> 5 "gcsfs", extra="The gcsfs library is required to handle GCS files" 6 ) 7 /opt/conda/anaconda/lib/python3.6/site-packages/pandas/compat/\_optional.py in import\_optional\_dependency(name, extra, raise\_on\_missing, on\_version) 91 except ImportError: 92 if raise\_on\_missing: ---> 93 raise ImportError(message.format(name=name, extra=extra)) from None 94 else: 95 return None ImportError: Missing optional dependency 'gcsfs'. The gcsfs library is required to handle GCS files Use pip or conda to install gcsfs. ``` Upvotes: 0 <issue_comment>username_8: Since Pandas 1.2 it's super easy to load files from google storage into a DataFrame. If you work on **your local machine** it looks like this: ``` df = pd.read_csv('gcs://your-bucket/path/data.csv.gz', storage_options={"token": "<PASSWORD>"}) ``` It's important that you supply the credentials.json file from Google as the token.
If you work on google cloud do this: ``` df = pd.read_csv('gcs://your-bucket/path/data.csv.gz', storage_options={"token": "cloud"}) ``` Upvotes: 3 <issue_comment>username_9: I was taking a look at this question and didn't want to have to go through the hassle of installing another library, `gcsfs`, which literally says in the documentation, `This software is beta, use at your own risk`... but I found a great workaround that I wanted to post here in case this is helpful to anyone else, using just the google.cloud storage library and some native python libraries. Here's the function: ``` import pandas as pd from google.cloud import storage import os import io os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'path/to/creds.json' def gcp_csv_to_df(bucket_name, source_file_name): storage_client = storage.Client() bucket = storage_client.bucket(bucket_name) blob = bucket.blob(source_file_name) data = blob.download_as_bytes() df = pd.read_csv(io.BytesIO(data)) print(f'Pulled down file from bucket {bucket_name}, file name: {source_file_name}') return df ``` Further, although it is outside of the scope of this question, if you would like to upload a pandas dataframe to GCP using a similar function, here is the code to do so: ``` def df_to_gcp_csv(df, dest_bucket_name, dest_file_name): storage_client = storage.Client() bucket = storage_client.bucket(dest_bucket_name) blob = bucket.blob(dest_file_name) blob.upload_from_string(df.to_csv(), 'text/csv') print(f'DataFrame uploaded to bucket {dest_bucket_name}, file name: {dest_file_name}') ``` Hope this is helpful! I know I'll be using these functions for sure.
Upvotes: 4 <issue_comment>username_10: Using [pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) and [google-cloud-storage](https://cloud.google.com/storage/docs/reference/libraries#using_the_client_library) python packages: First, we upload a file to the bucket in order to get a fully working example: ```py import pandas as pd from sklearn.datasets import load_iris dataset = load_iris() data_df = pd.DataFrame( dataset.data, columns=dataset.feature_names) data_df.head() ``` ```py Out[1]: sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) 0 5.1 3.5 1.4 0.2 1 4.9 3.0 1.4 0.2 2 4.7 3.2 1.3 0.2 3 4.6 3.1 1.5 0.2 4 5.0 3.6 1.4 0.2 ``` Upload a csv file to the bucket (GCP credentials setup is required, read more in [here](https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable)): ```py from io import StringIO from google.cloud import storage bucket_name = 'my-bucket-name' # Replace it with your own bucket name. data_path = 'somepath/data.csv' # Get Google Cloud client client = storage.Client() # Get bucket object bucket = client.get_bucket(bucket_name) # Get blob object (this is pointing to the data_path) data_blob = bucket.blob(data_path) # Upload a csv to google cloud storage data_blob.upload_from_string( data_df.to_csv(), 'text/csv') ``` Now that we have a csv on the bucket, use `pd.read_csv` by passing the content of the file. ```py # Read from bucket data_str = data_blob.download_as_text() # Instanciate dataframe data_dowloaded_df = pd.read_csv(StringIO(data_str)) data_dowloaded_df.head() ``` ```py Out[2]: Unnamed: 0 sepal length (cm) ... petal length (cm) petal width (cm) 0 0 5.1 ... 1.4 0.2 1 1 4.9 ... 1.4 0.2 2 2 4.7 ... 1.3 0.2 3 3 4.6 ... 1.5 0.2 4 4 5.0 ... 
1.4 0.2 [5 rows x 5 columns] ``` When comparing this approach with the `pd.read_csv('gs://my-bucket/file.csv')` approach, I found that the approach described here makes it more explicit that `client = storage.Client()` is the one taking care of the authentication (which could be very handy when working with multiple credentials). Also, `storage.Client` comes preinstalled if you run this code on a resource from Google Cloud Platform, whereas for `pd.read_csv('gs://my-bucket/file.csv')` you'll need to have installed the package `gcsfs`, which allows pandas to access Google Storage. Upvotes: 2 <issue_comment>username_11: Google Cloud storage has a method [download\_as\_bytes()](https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.blob.Blob#google_cloud_storage_blob_Blob_download_as_bytes), and then, from that, you can read a csv from the bytes. HT to [NEWBEDEV](https://newbedev.com/how-to-convert-bytes-data-into-a-python-pandas-dataframe), the code would look like this: ``` import pandas as pd from io import BytesIO blob = storage_client.get_bucket(event['bucket']).get_blob(event['name']) blobBytes = blob.download_as_bytes() df = pd.read_csv(BytesIO(blobBytes)) ``` My `event` comes from a [cloud storage example](https://cloud.google.com/functions/docs/tutorials/storage#functions-deploy-command-python) Upvotes: 1
2018/03/19
485
2,100
<issue_start>username_0: Well I'm trying to understand how and at which point in an algorithm to apply Kfold CV and GridSearchCV. Also, if I understand correctly, GridSearchCV is used for hyperparameter tuning, i.e. finding which values of the arguments will give the best result, while Kfold CV is used to improve generalization, so that we train on different folds and hence reduce bias if the data is ordered in some particular way. Now the question is, isn't GridSearchCV doing the cross validation too with its CV parameter? So why do we require Kfold CV, and if we do, do we do it before GridSearchCV? A little outline of the process would be extremely helpful.<issue_comment>username_1: [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) is a higher-level construct than [`KFold`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html). The former uses the latter (or others like it). `KFold` is a relatively low-level construct that gives you a sequence of train/test indices. You can use these indices to do several things, including finding the OOB performance of a model, and/or tuning hyperparameters (which basically searches somehow for hyperparameters based on OOB performance). `GridSearchCV` is a higher-level construct that *takes* a CV engine like `KFold` (in its `cv` argument). It uses the CV engine to search over hyperparameters (in this case, using grid search over the parameters). Upvotes: 2 <issue_comment>username_2: Grid Search is used to choose the best combination of hyper-parameters of predictive algorithms (tuning the hyper-parameters of an estimator), whereas KFold provides train/test indices to split data into train/test sets. It splits the dataset into k consecutive folds (without shuffling by default). Each fold is then used once as a validation set while the k - 1 remaining folds form the training set.
It's used to get a better measure of prediction accuracy (which we can use as a proxy for goodness of fit of the model). Upvotes: 1
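To make the relationship between the two constructs concrete, here is a minimal, self-contained sketch; the synthetic dataset and the particular `C` values are arbitrary illustrations, not taken from the question:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold

X, y = make_classification(n_samples=200, random_state=0)

# KFold is the low-level CV engine: it only yields train/test index splits.
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# GridSearchCV consumes that engine via its `cv` argument and evaluates
# every hyperparameter combination on every fold.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.1, 1.0, 10.0]},
    cv=cv,
)
search.fit(X, y)
print(search.best_params_)
```

So there is no separate "KFold step" before the grid search: passing the `KFold` object (or just `cv=5`) is how the grid search performs its cross validation.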
2018/03/19
998
2,861
<issue_start>username_0: If we want to search for the optimal parameters theta for a linear regression model by using the normal equation with: **theta = inv(X^T \* X) \* X^T \* y** one step is to calculate inv(X^T\*X). Therefore numpy provides [np.linalg.inv()](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.inv.html) and [np.linalg.pinv()](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.pinv.html) Though this leads to different results: ``` X=np.matrix([[1,2104,5,1,45],[1,1416,3,2,40],[1,1534,3,2,30],[1,852,2,1,36]]) y=np.matrix([[460],[232],[315],[178]]) XT=X.T XTX=XT@X pinv=np.linalg.pinv(XTX) theta_pinv=(pinv@XT)@y print(theta_pinv) [[188.40031946] [ 0.3866255 ] [-56.13824955] [-92.9672536 ] [ -3.73781915]] inv=np.linalg.inv(XTX) theta_inv=(inv@XT)@y print(theta_inv) [[-648.7890625 ] [ 0.79418945] [-110.09375 ] [ -74.0703125 ] [ -3.69091797]] ``` The first output, that is the output of pinv is the correct one and additionally recommended in the [numpy.linalg.pinv()](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.pinv.html) docs. But why is this and where are the differences / Pros / Cons between inv() and pinv().<issue_comment>username_1: If the determinant of the matrix is zero it will not have an inverse and your inv function will not work. This usually happens if your matrix is singular. But pinv will. This is because pinv returns the inverse of your matrix when it is available and the pseudo inverse when it isn't. The different results of the functions are because of rounding errors in floating point arithmetic You can read more about how pseudo inverse works [here](https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) Upvotes: 6 [selected_answer]<issue_comment>username_2: `inv` and `pinv` are used to compute the (pseudo)-inverse as a standalone matrix. Not to actually use them in the computations. 
For such linear system solutions the proper tool to use is [`numpy.linalg.lstsq`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.lstsq.html#numpy.linalg.lstsq) (or from scipy) if you have a non-invertible coefficient matrix or [`numpy.linalg.solve`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.solve.html#numpy.linalg.solve) (or from scipy) for invertible matrices. Upvotes: 4 <issue_comment>username_3: Inverting X^T X is not as numerically stable as computing the pseudo-inverse using numpy's function when X^T X has relatively very small singular values, which can occur when a matrix is "almost" rank-deficient but isn't due to noise. This occurs in particular when the [condition number, that is the ratio of max sv to min sv, is large](https://math.stackexchange.com/questions/1622610/when-is-inverting-a-matrix-numerically-unstable). Upvotes: 1
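To illustrate that advice with the data from the question: `X` has 4 rows but 5 columns, so `X.T @ X` (5x5) has rank at most 4 and is singular, which is exactly why `inv()` returned garbage while `pinv()` did not. A sketch of the `lstsq` route:

```python
import numpy as np

# Design matrix and targets from the question (4 samples, 5 columns).
X = np.array([[1, 2104, 5, 1, 45],
              [1, 1416, 3, 2, 40],
              [1, 1534, 3, 2, 30],
              [1,  852, 2, 1, 36]], dtype=float)
y = np.array([460.0, 232.0, 315.0, 178.0])

# lstsq minimizes ||X @ theta - y|| via SVD and, for rank-deficient
# systems, returns the minimum-norm solution -- the same one pinv gives.
theta, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)

theta_pinv = np.linalg.pinv(X) @ y
print(np.allclose(theta, theta_pinv))
print(np.allclose(X @ theta, y))  # underdetermined but consistent: exact fit
```

`np.linalg.solve(X.T @ X, X.T @ y)` would be the stable choice only for a well-posed problem where `X.T @ X` is actually invertible, which is not the case here.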
2018/03/19
298
956
<issue_start>username_0: The click event is not triggering either for the span or the when I move the id to the tag. ```js $("body").on("click", "#btnClear", function() { alert("Clicked"); }); ``` ```html ```<issue_comment>username_1: Try this. ```js $("body").delegate("#btnClear", "click", function () { alert("Clicked"); }); ``` ```html ``` Upvotes: 0 <issue_comment>username_2: I think your code is working. There is no issue except that you have forgotten to load `jquery`. ```js $("body").on("click", "#btnClear", function() { alert("Clicked"); }); ``` ```html ICON HERE.. CLICK ``` I do not have an `icon` to show there, so I have placed text to click on instead. Hope it helps. Upvotes: 2 <issue_comment>username_3: Your span is not being displayed, that's the issue. Your JS is working fine. I gave a border to your span. ```js $("body").on("click", "#btnClear", function() { alert("Clicked"); }); ``` ```html ``` Upvotes: 0
2018/03/19
570
2,134
<issue_start>username_0: I have been trying to build keras and tensor flow packages on yocto's image that is poky but not able to do so. Until now I have tried looking for their respective recipe but couldn't find. What's the way to add keras and tensor flow packages to poky?<issue_comment>username_1: From the layer index, it appears that there isn't an existing recipe to build either [tensorflow](https://layers.openembedded.org/layerindex/branch/master/recipes/?q=tensorflow) or [keras](https://layers.openembedded.org/layerindex/branch/master/recipes/?q=keras) . You'll have to create the recipe yourself or open a [bugzilla](https://bugzilla.yoctoproject.org/buglist.cgi?quicksearch=tensorflow&list_id=606307) request to have it added and hope that someone has the time to do that. Writing a new recipe for a large package such as tensorflow is quite a bit of work so I can't tell you exactly how to do it here but I can give you pointers to the relevant YP documentation and community. To write the recipe, there are [guidelines](https://www.yoctoproject.org/docs/current/dev-manual/dev-manual.html#new-recipe-writing-a-new-recipe) in the Yocto Documentation. You should be sure that you have built an image such as core-image-minimal and added a [simple recipe](http://www.variwiki.com/index.php?title=Yocto_Hello_World) as well as examined similar recipes in the oe-core and meta-openembedded. If you get stuck ask for help using one of the forums listed in the [Community section](https://www.openembedded.org/wiki/Main_Page) of the OpenEmbedded wiki. Upvotes: 2 <issue_comment>username_2: Found [this](https://github.com/renesas-rz/meta-renesas-ai) layer on Github that might be a good starting point. It seems to have all the recipes to include TensorFlow in your Yocto build. I would start by adding the `meta-tensorflow` to my `bblayers.conf` and adding `tensorflow` to `IMAGE_INSTALL_append` then run it, see what errors come out an try to take it from there. 
Not sure about Keras, but what I understand from the [docs](https://www.tensorflow.org/guide/keras) is that it should be part of TensorFlow. Upvotes: 1
2018/03/19
1,932
5,462
<issue_start>username_0: I am an ABAP programmer learning the tensorflow object detection API, just following the tutorial and using the Racoon dataset from <NAME> (<https://github.com/datitran/raccoon_dataset>). The training can be performed on my own PC (python 3.6.3 and tensorflow 1.5.0), but slowly. So I put it on the google cloud platform. The job keeps failing. The training input looks like this. ``` "scaleTier": "CUSTOM", "masterType": "standard_gpu", "workerType": "standard_gpu", "parameterServerType": "standard", "workerCount": "9", "parameterServerCount": "3", "packageUris": [ "gs://racoon/train/packages/363569b954c446566b767aabfeb047adb0ed2f25f83248417e2667aac70d0790/object_detection-0.1.tar.gz", "gs://racoon/train/packages/363569b954c446566b767aabfeb047adb0ed2f25f83248417e2667aac70d0790/slim-0.1.tar.gz" ], "pythonModule": "object_detection.train", "args": [ "--train_dir=gs://racoon/train", "--pipeline_config_path=gs://racoon/data/ssd_mobilenet_v1_pets.config" ], "region": "us-central1", "runtimeVersion": "1.5", "jobDir": "gs://racoon/train", "pythonVersion": "3.5" ``` The training was executed for almost 100 steps, but failed with an error; the job log looks like this. ``` The replica worker 1 exited with a non-zero status of 1. Termination reason: Error.
Traceback (most recent call last): File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.5/runpy.py", line 85, in _run_code exec(code, run_globals) File "/root/.local/lib/python3.5/site-packages/object_detection/train.py", line 167, in tf.app.run() File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 124, in run \_sys.exit(main(argv)) File "/root/.local/lib/python3.5/site-packages/object\_detection/train.py", line 163, in main worker\_job\_name, is\_chief, FLAGS.train\_dir) File "/root/.local/lib/python3.5/site-packages/object\_detection/trainer.py", line 360, in train saver=saver) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/slim/python/slim/learning.py", line 758, in train sys.maxint)) AttributeError: module 'sys' has no attribute 'maxint' The replica worker 2 exited with a non-zero status of 1. Termination reason: Error. Traceback (most recent call last): File "/usr/lib/python3.5/runpy.py", line 184, in \_run\_module\_as\_main "\_\_main\_\_", mod\_spec) File "/usr/lib/python3.5/runpy.py", line 85, in \_run\_code exec(code, run\_globals) File "/root/.local/lib/python3.5/site-packages/object\_detection/train.py", line 167, in tf.app.run() File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 124, in run \_sys.exit(main(argv)) File "/root/.local/lib/python3.5/site-packages/object\_detection/train.py", line 163, in main worker\_job\_name, is\_chief, FLAGS.train\_dir) File "/root/.local/lib/python3.5/site-packages/object\_detection/trainer.py", line 360, in train saver=saver) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/slim/python/slim/learning.py", line 758, in train sys.maxint)) AttributeError: module 'sys' has no attribute 'maxint' The replica worker 4 exited with a non-zero status of 1. Termination reason: Error. 
Traceback (most recent call last): File "/usr/lib/python3.5/runpy.py", line 184, in \_run\_module\_as\_main "\_\_main\_\_", mod\_spec) File "/usr/lib/python3.5/runpy.py", line 85, in \_run\_code exec(code, run\_globals) File "/root/.local/lib/python3.5/site-packages/object\_detection/train.py", line 167, in tf.app.run() File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 124, in run \_sys.exit(main(argv)) File "/root/.local/lib/python3.5/site-packages/object\_detection/train.py", line 163, in main worker\_job\_name, is\_chief, FLAGS.train\_dir) File "/root/.local/lib/python3.5/site-packages/object\_detection/trainer.py", line 360, in train saver=saver) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/slim/python/slim/learning.py", line 758, in train sys.maxint)) AttributeError: module 'sys' has no attribute 'maxint' To find out more about why your job exited please check the logs: https://console.cloud.google.com/logs/viewer?project=1006195729918&resource=ml\_job%2Fjob\_id%2Fracoon\_object\_detection\_9&advancedFilter=resource.type%3D%22ml\_job%22%0Aresource.labels.job\_id%3D%22racoon\_object\_detection\_9%22 ``` In the local tensorflow install, the learning.py do have the sys.maxint, and the IDE shows syntax error. Does anyone face the same issue and have the solution? Please share with us. Thank you very much.<issue_comment>username_1: In [python 3.0](https://docs.python.org/3.1/whatsnew/3.0.html#integers) `sys.maxint` is removed, so replace it with `sys.maxsize`: > > The sys.maxint constant was removed, since there is no longer a limit > to the value of integers. However, sys.maxsize can be used as an > integer larger than any practical list or string index. It conforms to > the implementation’s “natural” integer size and is typically the same > as sys.maxint in previous releases on the same platform (assuming the > same build options). > > > But this doesn't make sense to me that it works on your local machine. 
Upvotes: 1 <issue_comment>username_2: TensorFlow object detection API only supports [TensorFlow 1.2](https://github.com/tensorflow/models/issues/3071) for now, so you need to change the runtime version to 1.2. Upvotes: 0
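As a footnote to the `sys.maxint` point above, the substitution is easy to verify in any Python 3 interpreter:

```python
import sys

# sys.maxint was removed in Python 3; sys.maxsize is the closest analogue
# (the largest value a Py_ssize_t can hold -- typically 2**63 - 1 on
# 64-bit builds, but the exact value is platform-dependent).
print(hasattr(sys, "maxint"))   # False on Python 3
print(sys.maxsize >= 2**31 - 1)
```

This is why the crashing line in slim's `learning.py` works once `sys.maxint` is replaced by `sys.maxsize` (or, as the second answer notes, once a TensorFlow runtime that still ships a Python-2-compatible slim is used).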
2018/03/19
1,600
5,248
<issue_start>username_0: I'm using Hibernate to access and record data to database. Everything was fine until I used HQL to join mapped table and query some objects from database. As a result, I got a `List` contains data. But when I got the Object array, an exception appeared > > Exception in thread "AWT-EventQueue-0" java.lang.ClassCastException: com.aperture.demo.entities.Order cannot be cast to [Ljava.lang.Object; > > at com.aperture.demo.controller.MainAppController$OrderTableMouseListener.mouseClicked(MainAppController.java:113) > at java.awt.AWTEventMulticaster.mouseClicked(AWTEventMulticaster.java:270) > at java.awt.Component.processMouseEvent(Component.java:6536) > at javax.swing.JComponent.processMouseEvent(JComponent.java:3324) > at java.awt.Component.processEvent(Component.java:6298) > at java.awt.Container.processEvent(Container.java:2237) > at java.awt.Component.dispatchEventImpl(Component.java:4889) > at java.awt.Container.dispatchEventImpl(Container.java:2295) > at java.awt.Component.dispatchEvent(Component.java:4711) > at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4889) > at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4535) > at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4467) > at java.awt.Container.dispatchEventImpl(Container.java:2281) > at java.awt.Window.dispatchEventImpl(Window.java:2746) > at java.awt.Component.dispatchEvent(Component.java:4711) > at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:760) > at java.awt.EventQueue.access$500(EventQueue.java:97) > at java.awt.EventQueue$3.run(EventQueue.java:709) > at java.awt.EventQueue$3.run(EventQueue.java:703) > at java.security.AccessController.doPrivileged(Native Method) > at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:80) > at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:90) > at 
java.awt.EventQueue$4.run(EventQueue.java:733) > at java.awt.EventQueue$4.run(EventQueue.java:731) > at java.security.AccessController.doPrivileged(Native Method) > at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:80) > at java.awt.EventQueue.dispatchEvent(EventQueue.java:730) > at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:205) > at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116) > at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105) > at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101) > at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93) > at java.awt.EventDispatchThread.run(EventDispatchThread.java:82) > > > I've tried any ways I can, like, make the List Iterable, cast the Object[] to Order[] (my object), but the results are the same. I use Debugger of the IDE, but it looked fine, like this [View post on imgur.com](https://imgur.com/sRleYM5) (Stack Overflow doesn't let me embeded image here, sorry!) I don't understand what does the "[L" mean, and why did it keep making my project crash. 
My Object ``` @Entity @Table(name = "orders") public class Order extends MyObject implements Serializable { //something } ``` The method that get the object from DB ``` public List listJoinedTable(int id) { Transaction tx = null; List result = null; try (Session session = sessionFactory.openSession()) { tx = session.beginTransaction(); //language=HQL String hql = "SELECT o FROM Order o INNER JOIN FETCH o.productList WHERE o.orderId = " + id; result = session.createQuery(hql).list(); tx.commit(); } catch (HibernateException e) { if (tx != null) tx.rollback(); e.printStackTrace(); } return result; } ``` Code that fires bug ``` if (mouseEvent.getButton() == MouseEvent.BUTTON1) { final JTable target = (JTable) mouseEvent.getSource(); final int row = target.getSelectedRow(); final int id = (int) mainAppFrame.getOrderList() .getModel() .getValueAt(0, 0); List recordList = ((AdvancedLoader) orderLoader).listJoinedTable(id); for (int i = 0; i < recordList.size(); i++) { Order[] myObjects = (Order[]) recordList.get(i); //Bug here } } ``` Please help me, thanks a lot!
2018/03/19
1,106
3,823
<issue_start>username_0: I want to get a value from a multidimensional array using `PHP`. I pass the key to the function and if the key contains the value (i.e. without any array value), it will return that. But if that key contains an array value, it will return the whole sub array. I am adding a sample array for that. ``` php // prepare a list of array for category, subcategory etc... $category = array( 'Account' => array( 'Show Balance' => array( 'Recharge' => 2300, 'Success' => 12000, 'Failure' => 25000, ), 'Balance History' => 'your balance is very low for last 2 years', 'Mini Statement' => 'This is your mini statement. You can have a look of your transaction details', 'Last Transaction' => 25000 ), 'Deposit' => array( 'Deposit Limit' => 40000, 'Make Deposit' => 'Please go to the nearest branch ans deposit the money.', 'Last Deposit' => 12000 ), 'FAQ' => array( 'How To Open An Account' => 'Go to your nearest branch fill up a form, submit it to the branch manger with required supporting documents.', 'How To Withdraw Money' => 'Go to any ATM center swipe the card enter pin and get your money.', 'How To Add Money' => 'You need to go to your nearest branch and deposit money over there.' ), 'Loan' => array( 'Home Loan' => 'This is home loan related answer', 'Personal Loan' => 'This is personal loan related answer', 'Car Loan' => 'This is car loan related answer', 'Bike Loan' => 'This is bike loan related answer' ) , 'Test', ); ``` Now if I pass my array $category and 'Recharge' as a parameter to any `PHP` function it should return me 2300 as a result. Now if I pass my array $category and 'Show Balance' as a parameter to any `PHP` function it should return me ``` array( 'Recharge' => 2300, 'Success' => 12000, 'Failure' => 25000, ) ``` as a result.
Searched a lot on google but couldn't get an answer.<issue_comment>username_1: To get Show Balance try this: ``` print_r($category['Account']['Show Balance']); ``` To get Recharge try this: ``` print_r($category['Account']['Show Balance']['Recharge']); ``` I just tested the code to make sure it works. It works perfectly. --Edit-- ``` foreach ($category as $cat) { print_r($cat['Account']['Recharge']); print_r($category['Account']['Show Balance']); } ``` Upvotes: 1 <issue_comment>username_2: Try this function, recursive way: ``` function get_key_val($arrs, $k) { foreach( $arrs as $key=>$val ) { if( $key === $k ) { return $val; } else { if( is_array( $val ) ) { $ret = get_key_val( $val, $k ); if( $ret !== NULL) { return $ret; } } } } return NULL; } echo get_key_val( $category, "Recharge" ); // outputs 2300 ``` This goes through every element of the passed array, using **foreach** and **recursion**, until the index is found, and returns the index's value (array or not). Upvotes: 2 <issue_comment>username_3: Write a recursive function for this. Do a `foreach` and compare each array key with the passed key; if matched, return the value. If not matched, check whether the value is an array with `is_array()`; if yes, call the function again, else continue. ``` function getData($categoryArray, $key){ foreach($categoryArray as $k => $value){ if($k==$key) return $value; if(is_array($value)){ $find = getData($value, $key); if($find){ return $find; } } } return null; } $result1 = getData($category, 'Show Balance'); var_dump($result1); $result = getData($category, 'Recharge'); var_dump($result); ``` [Demo](https://3v4l.org/kEOZI) Upvotes: 3 [selected_answer]
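The recursion in the selected answer is language-agnostic. As an illustration, here is the same depth-first lookup sketched in Python (a hypothetical translation of the PHP `getData()`, tested against a truncated copy of the sample data):

```python
def get_key_val(data, key):
    """Depth-first search for `key` in a nested dict; return its value or None."""
    for k, v in data.items():
        if k == key:
            return v                      # key found: scalar or whole sub-dict
        if isinstance(v, dict):
            found = get_key_val(v, key)   # recurse into nested structures
            if found is not None:
                return found
    return None

category = {
    "Account": {
        "Show Balance": {"Recharge": 2300, "Success": 12000, "Failure": 25000},
        "Last Transaction": 25000,
    },
}

print(get_key_val(category, "Recharge"))      # 2300
print(get_key_val(category, "Show Balance"))  # the whole nested dict
```

Like the PHP version, this treats a stored falsy value (`0`, `""`) the same as "found", but a stored `None`/`null` is indistinguishable from "not found" — a limitation of both sketches.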
2018/03/19
1,938
5,478
<issue_start>username_0: Trying to understand memory allocation in C. Facing an issue while trying to create two arrays using pointers to integers. Kindly have a look at the below code: ``` #include <stdio.h> #include <stdlib.h> int main() { int \*a; int \*b; for (int i = 0; i<4;i++) { printf("Enter value \n"); a[i]=(int \*)malloc(sizeof(int)); b[i]=(int \*)malloc(sizeof(int)); scanf("%d",&a[i]); scanf("%d",&b[i]); } for (int i =0;i<4;i++) { printf("%d = %x\n ",a[i],&a[i]); } for (int i =0;i<4;i++) { printf("%d = %x\n ",b[i],&b[i]); } return 0; } ``` I am working with C11 on CLion. Facing the below error at runtime. Can someone please explain what is wrong with this code? ``` Enter value Process finished with exit code 11 ``` [![b is being shown as NULL during debugging](https://i.stack.imgur.com/FnHJ2.png)](https://i.stack.imgur.com/FnHJ2.png) "b is being shown NULL during debugging" UPDATE: Tried on another IDE, where "a" itself is not being allocated any memory. It directly gives me a segmentation fault. UPDATE 2: Changing: ``` int *a; int *b; ``` to ``` int *a = NULL; int *b = NULL; ``` at least changes how this code behaves. It gives me a segmentation fault as soon as I try to allocate memory to a[i] (Which is wrong, now I get).<issue_comment>username_1: While the code does allocate memory for four `int` values, it does not work as you expect. If we start from the beginning: ``` int *a; ``` This defines (and declares) a variable `a` which is a pointer to an `int`. You do not initialize it, which means its value is *indeterminate* (and will seem totally random). Dereferencing this (with e.g. `a[i]`) leads to [*undefined behavior*](https://en.wikipedia.org/wiki/Undefined_behavior). Then ``` for (int i = 0; i<4;i++) { a[i]=(int \*)malloc(sizeof(int)); ... } ``` If you properly initialized `a` it would already point to memory for four `int` values. You could see `a` as an array, and you don't need to allocate each member of the array separately.
This is indicated by the types: `a[i]` is of type `int`, not `int *`. The "best" solution is to not use pointers or dynamic allocation at all, and instead use plain arrays: ``` int a[4]; for (int i = 0; i<4;i++) { printf("Enter value \n"); scanf("%d",&a[i]); // No allocation needed, a[i] already exists } ``` If you *must* use pointers and dynamic allocation, then allocate enough memory for four `int` elements, and make `a` point to the first byte: ``` int *a = malloc(sizeof(int) * 4); for (int i = 0; i<4;i++) { printf("Enter value \n"); scanf("%d",&a[i]); // No allocation needed, a[i] already exists } ``` There are a few other problems as well, but start with these. Upvotes: 2 <issue_comment>username_2: You need to add these lines, because you need to create memory so that you can assign values later: ``` int **a = (int **)malloc(sizeof(int *) * 4); int **b = (int **)malloc(sizeof(int *) * 4); ``` You are assigning pointer values to elements of `a`. Therefore `a` should be a pointer to pointer. Complete code: ``` #include <stdio.h> #include <stdlib.h> int main () { int \*\*a = (int \*\*)malloc(sizeof(int \*) \* 4); int \*\*b = (int \*\*)malloc(sizeof(int \*) \* 4); for (int i = 0; i < 4; i++) { printf ("Enter value \n"); a[i] = (int \*) malloc (sizeof (int)); b[i] = (int \*) malloc (sizeof (int)); scanf ("%d", &a[i]); scanf ("%d", &b[i]); } for (int i = 0; i < 4; i++) { printf ("%d = %x\n ", a[i], &a[i]); } for (int i = 0; i < 4; i++) { printf ("%d = %x\n ", b[i], &b[i]); } return 0; } ``` Output: ``` Enter value 4 5 Enter value 1 2 Enter value 6 7 Enter value 8 9 4 = 165d010 1 = 165d018 6 = 165d020 8 = 165d028 5 = 165d040 2 = 165d048 7 = 165d050 9 = 165d058 ``` Upvotes: 0 <issue_comment>username_3: There is some confusion in your code: * you do not allocate the arrays for `a` and `b` to point to. `a` and `b` are uninitialized, dereferencing them has undefined behavior. `a` shows as `NULL` in the debugger, but the C Standard does not guarantee that.
`a` could have any value, including trap representations that would cause undefined behavior just by reading it.
* you store a pointer to `int` into `a[i]`: the compiler should issue a diagnostic about the type mismatch. It is unfortunate that this is not a fatal error IMHO, but ignoring compiler warnings is always a bad idea.
* the return value of `malloc()` is cast as `(int *)`. This is necessary in C++, but considered bad style in C.
* the third argument in `printf("%d = %x\n", a[i], &a[i]);` is inconsistent with the conversion specifier: if you want to print the value in hex, use `printf("%d = %x\n", a[i], a[i]);`; if you want to print the address, use the `%p` conversion specifier: `printf("%d at %p\n", a[i], (void*)&a[i]);`

To play with `malloc()`, you should just allocate the `int` arrays, check for allocation success and free the memory after use:

```
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *a;
    int *b;

    a = malloc(sizeof(*a) * 4);
    b = malloc(sizeof(*b) * 4);
    if (a == NULL || b == NULL) {
        fprintf(stderr, "cannot allocate memory\n");
        exit(1);
    }
    for (int i = 0; i < 4; i++) {
        printf("Enter 2 values\n");
        if (scanf("%d%d", &a[i], &b[i]) != 2) {
            fprintf(stderr, "invalid input\n");
            exit(1);
        }
    }
    for (int i = 0; i < 4; i++) {
        printf(" %d = %x\n", a[i], a[i]);
    }
    for (int i = 0; i < 4; i++) {
        printf(" %d = %x\n", b[i], b[i]);
    }
    free(a);
    free(b);
    return 0;
}
```

Upvotes: 1
2018/03/19
1,074
3,989
<issue_start>username_0: I have a .NET application targeting both .NET Framework 4.0 and .NET Core 2.0. I have to call a PKCS#11 driver using the Pkcs11Interop library, and due to a driver issue I am getting an **AccessViolation Exception**. In .NET Framework 4.0 I was able to handle it with the `[HandleProcessCorruptedStateExceptions]` attribute on the method, but this will not work in .NET Core 2.0. How can I handle it in .NET Core 2.0?

As per the comment, I have added the environment variable

[![Environment variable](https://i.stack.imgur.com/qz3NR.png)](https://i.stack.imgur.com/qz3NR.png)

but I am still unable to catch the exception.

[![enter image description here](https://i.stack.imgur.com/n7LM2.png)](https://i.stack.imgur.com/n7LM2.png)<issue_comment>username_1: Please note the following:

> You shouldn't. An access violation is a serious problem: it is an unexpected attempt to write to (or read from) an invalid memory address. As John already clarified, the unmanaged DLL might already have corrupted the process memory before the access violation has been raised. This can have unpredicted effects on any part of the current process.
>
> The safest thing to do is to possibly inform the user and then immediately exit.
>
> Some more details: An access violation is an OS exception (a so-called SEH or *structured exception handling* exception). This is a different kind of exception than the managed CLR exceptions from `System.Exception`. You will rarely see SEH exceptions in purely managed code, but if one occurs, e.g. in unmanaged code, the CLR will deliver it to managed code where you are also able to catch it¹.
>
> However, catching SEH exceptions is mostly not a good idea.
> Further details are explained in the article [***Handling Corrupted State Exceptions***](http://msdn.microsoft.com/en-us/magazine/dd419661.aspx) in MSDN Magazine, from which the following text is taken:
>
> > The CLR has always delivered SEH exceptions to managed code using the same mechanisms as exceptions raised by the program itself. This isn't a problem as long as code doesn't attempt to handle exceptional conditions that it cannot reasonably handle. Most programs cannot safely continue execution after an access violation. Unfortunately, the CLR's exception handling model has always encouraged users to catch these serious errors by allowing programs to catch any exception at the top of the System.Exception hierarchy. But this is rarely the right thing to do.

¹ This was true until .NET 3.5. In .NET 4 the behavior has been changed. If you still want to be able to catch such kinds of exceptions, you would have to add `legacyCorruptedStateExceptionsPolicy=true` to the app.config. Further details are in the article linked above.

That said, there is a nice question and answer [here](https://stackoverflow.com/questions/48682489/gracefully-handling-corrupted-state-exceptions-in-net-core) regarding the matter which might suit your case as well. There is a difference in handling corrupted state exceptions in .NET Core. Please refer to this [article](https://techblog.dorogin.com/handling-errors-in-aspnet-core-middleware-e39872496d51) concerning error handling in .NET Core, which uses a middleware (which is what I prefer, while using extension methods to suit your needs).

Upvotes: 2 <issue_comment>username_2: If a CSE is possible, you will hard crash. Dark times ahead. In chronological order...
dotnet/coreclr [Issue #9045](https://github.com/dotnet/coreclr/issues/9045): Strip corrupted state exceptions handling

dotnet/coreclr [PR #10957](https://github.com/dotnet/coreclr/pull/10957): Do not honor attribute HandleProcessCorruptedStateExceptions

dotnet/coreclr [Issue #19192](https://github.com/dotnet/coreclr/issues/19192): Unable to catch in managed code any user exception thrown from native Linux code

SOL currently. Have to create an unmanaged wrapper of sorts to catch external CSEs.

Upvotes: 1
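For reference, the `legacyCorruptedStateExceptionsPolicy` switch mentioned above is a runtime setting in the application's `app.config`; it applies to .NET Framework only and is ignored by .NET Core:

```xml
<configuration>
  <runtime>
    <!-- .NET Framework only: lets catch blocks see corrupted state
         exceptions such as AccessViolationException -->
    <legacyCorruptedStateExceptionsPolicy enabled="true" />
  </runtime>
</configuration>
```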
2018/03/19
563
2,212
<issue_start>username_0: I need to allow users to log in through the corp network to access an internal web application. I've followed all the steps given in the [official documentation](https://learn.microsoft.com/en-us/azure/active-directory/develop/guidedsetups/active-directory-aspnetwebapp) and it works fine. However, a strange error that I am getting while logging in is that the authentication page goes into a redirect loop every other day. As of now the immediate fix for me is to change the `Application/Client ID` for the application in my `web.config` file.

```
```

So as of now, I've got 2 different applications created in the [Microsoft Identity Platform](https://apps.dev.microsoft.com/portal/) and I reuse the same `App Id` (switching them every time one stops working), and as soon as I change the `App Id`, the login starts working. Not sure if I am missing something, but I haven't found anything related to this exact problem other than a few like <https://github.com/aspnet/Security/issues/219>, which does not work for me. And to my understanding and from suggestions over the internet, if this was a permission-related issue it should never allow login, but it does.<issue_comment>username_1: It sounds strange that your corp login is in a loop. Is it possible that it goes to your app but so 'fast' you don't notice it? I'm saying this because I have a web app and had a similar loop, and I found that the process was:

1. your app wants to login,
2. go to the corp login and do the login process,
3. To the app with the token,
4. Again to the corp login (still not fully sure why)
5. Back to the app with the token and then you are logged in

But if you check your login too soon, at step 3 it won't know it is logged in yet, so it goes back to step 1; hence the loop.

If your login sequence is auto-triggered on app startup, it could be the same as what I got.

Greetings

Glenn

Upvotes: 3 <issue_comment>username_2: Turns out that it was an issue with the configuration of the AD.
I went to my `Azure App Service > Settings > Authentication/Authorization` and created a new AD App, and used the `App ID` of this `app` in my web application and it is now working fine. Upvotes: 4 [selected_answer]
2018/03/19
509
1,864
<issue_start>username_0: I am new to HTML and JS. Im trying to inject a button into an existing page with a Chrome extension. I want to submit a search form when the button is clicked but at the moment it submits the form on page load and then repeatedly afterwards. What did I do wrong? This is my manifest.json

```
"browser_action": {
    "default_icon": "icon.png",
    "content_scripts" : "script.js"
  },
  "content_scripts": [
    {
      "matches": ["https://test.com/*"],
      "js": ["script.js"],
      "run_at": "document_end"
    }
```

and this is my script.js

```
var button = document.createElement("button");
var buttonName = document.createTextNode("Search By Tag");
button.appendChild(buttonName);
document.getElementById("nav-section").appendChild(button).onclick=window.location.href='https://test/search?....c';
```
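The core problem in `script.js` is that `onclick` is assigned the *result* of the navigation expression, which runs immediately at page load. A minimal sketch of the intended wiring (the function name and URL here are illustrative, not from the extension):

```javascript
// Create the button and attach a handler *function*; the navigation
// only runs when the handler is invoked by a click.
function attachSearchButton(doc, url) {
  const button = doc.createElement("button");
  button.appendChild(doc.createTextNode("Search By Tag"));
  button.addEventListener("click", function () {
    // evaluated on click, not at page load
    doc.defaultView.location.href = url;
  });
  doc.getElementById("nav-section").appendChild(button);
  return button;
}
```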
2018/03/19
1,531
5,582
<issue_start>username_0: I looked at tons of questions/answers on the "no var" topic but I'm still confused

> "no var" will look up the scope chain until it finds the variable or hits the global scope (at which point it will create it)

I get this, but what does "no var" do if it does find something? e.g. if you want to change an existing variable or binding, do you use var/let again or is it OK if you don't? Is there a difference?<issue_comment>username_1: If no `var`, `let` or `const` statement is used, like:

```
x = 100;
```

then the variable gets assigned to the global scope, or reassigned where it is found in the scope, which very often is not the desired behavior, since it leads to cluttering of the scope, or variable overriding, and so to strange bugs.

To reassign a variable wherever it is found in the scope, just use the same method as above:

```
x = 101;
```

If you have something like this:

```
//global scope
var z = 40;

(function () {
  //scope »level 1«
  var x = 100;

  //implicitly declared in global scope
  y = 300;
  z += 1;
  console.log(z); //41

  (function () {
    //scope »level 2«
    x = 101;
    z += 1;
  })();

  console.log(x) // 101;
  console.log(y) // 300;
  console.log(z) // 42;
})();

console.log(x) // undefined;
console.log(y) // 300;
console.log(z) // 42;
```

A basic example of using the scope chain in a useful way:

```
//create a »working scope«
//to prevent cluttering the global
(function () {

  var getID = (function () {
    //hide the id variable in child scope
    //to prevent it from modification
    var id = 0;

    return function () {
      //modify and return values
      //from this scope
      id += 1;
      return id;
    }
  })();

  //that doesn't make much sense,
  //but is also an example of using a
  //working scope
  var usedIds = Array.prototype.slice
    .call(document.querySelectorAll('.a-class'))
    .map(function (node) {
      var id = 'my-id-' + getID();
      node.id = id;
      return id;
    });

  // export/expose variables to the global scope explicitly
  window.globallyNeeded = usedIds;
})();
```

Upvotes: 5
[selected_answer]<issue_comment>username_2: You only use `var` or `let` if you initialize a variable in a scope of your choice. But keep in mind that the `let` keyword for variable initialization only works in ECMAScript 2015 and later. The same goes for the `const` keyword. The difference between `let` and `const` is that the first one is variable (its value can be changed) and the second one is a constant (its value can't be changed; it's a fixed value). Also, the `let` keyword has a stricter scope definition. That means that `let` is block-scope specific. Only inner blocks can read the outer `let` variable. But if you define a `let` variable inside e.g. a while block, it does not exist outside the loop.

Upvotes: 0 <issue_comment>username_3: The short answer is no, not in modern JavaScript.

A feature of the rough-and-ready design of JavaScript in the past is that variables were automatically created, whether you meant to or not. This means that the following code:

```
apple=3;
Apple=apple+4;
alert(apple);
```

would lead to *two* variables being created (JavaScript is, of course, case sensitive), and you would have a logical error in your code.

In modern code, you should begin with the `'use strict'` directive, which would disallow undeclared variables:

```
'use strict';
var apple=3;
Apple=apple+4; // error: undeclared Apple
alert(apple);
```

`'use strict'` helps to close a gap which leads to sloppy and hard-to-manage code. It would raise an error when trying to assign to an undeclared variable.

The `var` keyword serves *two* roles. First, it declares the variable, thus informing the JavaScript interpreter that it's intentional. Second, it also sets the **scope** of the variable — the scope will be that of the containing environment, generally a function, where the variable was declared.
Even more modern JavaScript has added `const`, which locks the value of what would otherwise be a variable (a read-only variable), and `let`, which gives a variable an even more limited scope. You should normally use the `'use strict'` directive, which will enforce a good habit. In any case, you should always declare your variables/constants as part of your careful code design. In my opinion, anybody who finds declaring variables too arduous is off to a bad start, and probably shouldn't be writing JavaScript.

Upvotes: 1 <issue_comment>username_4: I've just faced this situation, which led me to this post. I think it's a good example:

```
let requestViaCEP = function (cep) {
  callbackRequestViaCEP = function (resultadoCEP) {
    responseViaCEP = resultadoCEP;
  }
  return new Promise ((resolve, reject) => {
    $.getScript("https://viacep.com.br/ws/" + cep + "/json/?callback=callbackRequestViaCEP", function(resultadoCEP){
      // console.dir(CEP.responseViaCEP)
      resolve(resultadoCEP)
    })
  }).then((value) => {
    return value
  }, (error) => {
    return new Error('CEP não encontrado.')
  })
}
```

The 'callbackRequestViaCEP' function is only accessible when I declare it without using let, var or const. Now I know that this happens because once you don't use them, the variable goes to the global scope. I'm not sure if this is the best strategy; if not, I hope you guys let me know, but for now it's the only way I found to make it work. By the way, this is the first time I've seen this happen; I found it pretty cool :)
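A compact, runnable sketch of the reassignment rule discussed in this thread (the variable names are illustrative): an assignment without `var`/`let`/`const` walks the scope chain and reassigns the first binding it finds, so you do not redeclare; and under `'use strict'`, an assignment that reaches the top of the chain without finding a binding throws instead of creating a global.

```javascript
'use strict';

let counter = 1;

function increment() {
  // no `let` here: this walks the scope chain, finds the outer
  // `counter`, and reassigns it rather than creating a new binding
  counter = counter + 1;
}

increment();
console.log(counter); // 2

function broken() {
  try {
    undeclared = 5; // no binding anywhere: ReferenceError in strict mode
    return "assigned";
  } catch (e) {
    return e instanceof ReferenceError ? "ReferenceError" : "other";
  }
}
console.log(broken()); // "ReferenceError"
```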
2018/03/19
919
2,732
<issue_start>username_0: I built a docker container with Alpine, s6 and Samba. Everything looks fine, but when it starts smbd, it crashes right before coming up, without anything in the logfiles.

```
added interface eth0 ip=172.17.0.6 bcast=172.17.255.255 netmask=255.255.0.0
loaded services
Netbios name list:-
my_netbios_names[0]="ADD372A5C9D7"
INFO: Profiling support unavailable in this build.
Standard input is not a socket, assuming -D option
Becoming a daemon.
exit_daemon: STATUS=daemon failed to start: Failed to create session, error code 1
```

s6 run service:

```
#!/usr/bin/execlineb -P
smbd --foreground --log-stdout
```

Dockerfile:

```
FROM alpine:edge

# env variables
ENV S6_VERSION v1.21.2.1

# install s6-overlay
ADD https://github.com/just-containers/s6-overlay/releases/download/${S6_VERSION}/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /

RUN apk add --no-cache \
    bash shadow \
    samba-common-tools \
    samba-client \
    samba-server \
    && rm -rf /var/cache/apk/*

# add local files
COPY root/ /

EXPOSE 445/tcp

CMD ["/init"]
```

<issue_comment>username_1: As you did not share your configuration, I will recommend building your own docker image from this repository. I tried it locally and it's working fine.

Here is the Dockerfile:

```
FROM alpine:latest

MAINTAINER <NAME>
LABEL Description="Simple and lightweight Samba docker container, based on Alpine Linux."
      Version="0.1"

# update the base system
RUN apk update && apk upgrade

# install samba and supervisord and clear the cache afterwards
RUN apk add samba samba-common-tools supervisor && rm -rf /var/cache/apk/*

# create a dir for the config and the share
RUN mkdir /config /shared

# copy config files from project folder to get a default config going for samba and supervisord
COPY *.conf /config/

# add a non-root user and group called "rio" with no password, no home dir, no shell, and gid/uid set to 1000
RUN addgroup -g 1000 rio && adduser -D -H -G rio -s /bin/false -u 1000 rio

# create a samba user matching our user from above with a very simple password ("<PASSWORD>")
RUN echo -e "<PASSWORD>" | smbpasswd -a -s -c /config/smb.conf rio

# volume mappings
VOLUME /config /shared

# exposes samba's default ports (137, 138 for nmbd and 139, 445 for smbd)
EXPOSE 137/udp 138/udp 139 445

ENTRYPOINT ["supervisord", "-c", "/config/supervisord.conf"]
```

For samba.conf and other configuration you can check [samba-alpine-docker](https://github.com/pwntr/samba-alpine-docker)

[![enter image description here](https://i.stack.imgur.com/NnV0g.png)](https://i.stack.imgur.com/NnV0g.png)

Upvotes: 1 <issue_comment>username_2: Add the `--no-process-group` flag to smbd.

Upvotes: 4
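Applying the `--no-process-group` suggestion to the s6 run script from the question would look like this (an untested sketch; the flag tells smbd not to create a new process group, which is what fails with "Failed to create session" under s6 supervision):

```
#!/usr/bin/execlineb -P
smbd --foreground --log-stdout --no-process-group
```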
2018/03/19
286
1,305
<issue_start>username_0: <https://microsoft.github.io/PowerBI-JavaScript/demo/v2-demo/index.html#>

Hi, I was wondering if it's possible to dynamically add/remove tiles from the dashboard via Power BI Embedded. Imagine that the user wants to add their own tile and arrange it themselves in the dashboard. Also, is the tile a snapshot of the data, or, if the datasource updates, will the data in the tiles get updated as well?

If this isn't currently supported, do you know in what time frame it would be supported, or are there plans?

Thanks,
Derek<issue_comment>username_1: Power BI Embedded does not currently support the **Pin to Dashboard** operation.

Upvotes: 3 [selected_answer]<issue_comment>username_2: I guess this would have to be dealt with through some visibility property in an embedDashboardConfig structure definition. It is possible to show|hide specific visuals in a report page by modifying the report embedReportConfig. You may take a look at the Custom layout demo, which is available online under the "live showcases" tab on the Power BI JavaScript Playground v2 demo. The demo basically reshapes the whole layout of reports dynamically through VisualContainer section visibility attributes. Unfortunately those features only apply to report-scope scenarios and are not yet available for Dashboards.

Upvotes: 0
2018/03/19
836
2,856
<issue_start>username_0: I am trying to create a docker image for TensorFlow Serving like [here](https://github.com/llSourcell/How-to-Deploy-a-Tensorflow-Model-in-Production/blob/master/demo.ipynb). When I try to build the docker image with all the required dependencies (pip dependencies, bazel, grpc):

[![enter image description here](https://i.stack.imgur.com/1XNDp.jpg)](https://i.stack.imgur.com/1XNDp.jpg)

What am I doing wrong here? I believe it works for everyone except me. I am using Docker Toolbox on Windows 7 and this is my first time using docker. I don't know what this error says.

edit: after removing the space

[![enter image description here](https://i.stack.imgur.com/VbrC2.jpg)](https://i.stack.imgur.com/VbrC2.jpg)

Docker version

[![enter image description here](https://i.stack.imgur.com/UE52Y.jpg)](https://i.stack.imgur.com/UE52Y.jpg)<issue_comment>username_1: There is a typo in your `docker build` command: there is a space after the word `Dockerfile`. The correct command is:

```
docker build --pull -t $USER/tensorflow-serving-devel -f tensorflow_serving/tools/docker/Dockerfile.devel .
```

EDIT: I see where your problem is. You use Windows, so `$USER` does not resolve to the username. Please change it to something else, like:

```
docker build --pull -t user/tensorflow-serving-devel -f tensorflow_serving/tools/docker/Dockerfile.devel .
```

And then use it with the `docker run` command:

```
docker run --name=tensorflow_container -it user/tensorflow-serving-devel
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: The problem is that `$USER` is expanding to an empty string, since there is no environment variable `USER`. To solve the issue just replace `$USER` with your Dockerhub username or any username. You can also just change `$USER/tensorflow-serving-devel` to `tensorflow-serving-devel`. It really doesn't matter, since this is only the name of the resulting image.
Upvotes: 3 <issue_comment>username_3: In my case, with the same error, the problem was a combination of the "**-**" and "**_**" symbols placed together in the image tag. So an image tag like `MMT-6352_-_fix` is invalid, but an image tag like `MMT-6352_fix` or `MMT-6352-fix` is valid.

Upvotes: 1 <issue_comment>username_4: In my case, I am creating an environment variable with the hash of the most recent git commit, and this hash value will be the tag of the docker image I'm going to build. So my file (say `deploy.sh`) looked like this:

```
GIT_SHA = $(git rev-parse HEAD)
docker build -t user/myimage:$GIT_SHA
```

Then, I got an error saying

```
deploy.sh: line 2: GIT_SHA: command not found
invalid argument "user/myimage:" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
```

I fixed it by removing the spaces before and after `=` as follows:

```
GIT_SHA=$(git rev-parse HEAD)
docker build -t user/myimage:$GIT_SHA
```

Upvotes: 0
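The spaces-around-`=` pitfall from the last answer can be checked in isolation; the tag value below is a stand-in for the real commit hash:

```shell
# In POSIX shells, `NAME=value` (no spaces) is an assignment;
# `NAME = value` would instead try to run a command called NAME.
GIT_SHA=abc123
IMAGE_TAG="user/myimage:$GIT_SHA"
echo "$IMAGE_TAG"
```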
2018/03/19
1,013
2,287
<issue_start>username_0: I have a list `l` in the following form. I need to randomly generate `k` (six in this example) lists from this list, so that only one element is selected from each sublist at a time.

```
l = [1,2,3,[11,22,33,44], 4,5,6, [22,33,44], 5, [99,88]]

Result:
1,2,3, 22, 4,5,6, 22 ,5, 88
1,2,3, 33, 4,5,6, 44 ,5, 88
1,2,3, 44, 4,5,6, 22 ,5, 99
1,2,3, 22, 4,5,6, 33 ,5, 99
1,2,3, 33, 4,5,6, 33 ,5, 99
1,2,3, 33, 4,5,6, 44 ,5, 88
```

I can write a for loop and pick a random element whenever I encounter a list, but I am looking for a more elegant, Pythonic way to do this.

```
l = [1,2,3,[11,22,33,44], 4,5,6, [22,33,44], 5, [99,88]]

k = 0
for k in range(6):
    new_l = []
    for i in range(len(l)):
        if isinstance(l[i], list):
            new_l.append(np.random.choice(l[i]))
        else:
            new_l.append(l[i])
    print(new_l)
    print("\n")
```

<issue_comment>username_1: The key function to use is `choice` from the `random` module, which randomly selects a value from any sequence with a known size. All such objects have a `__getitem__` method as well as a `__len__` method (both of which are needed to apply the `choice` function), so the builtin function `hasattr` can be used to check whether `choice` can be applied or not. The solution becomes straightforward:

```
from random import choice

l = [1,2,3,[11,22,33,44], 4,5,6, [22,33,44], 5, [99,88]]

for n in range(6):
    print([choice(item) if hasattr(item,'__getitem__') else item for item in l])
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: You can repeat this procedure 6 times. It randomly reorders your inner lists and takes the first value.
```
import numpy as np

l = [1,2,3,[11,22,33,44], 4,5,6, [22,33,44], 5, [99,88]]

for i in range(6):
    print([np.random.permutation(i)[0] if type(i) == list else np.random.permutation([i])[0] for i in l])
```

Upvotes: 0 <issue_comment>username_3: You can try this one too:

```
import random

l = [1,2,3,[11,22,33,44], 4,5,6, [22,33,44], 5, [99,88]]

res = [random.choice(l[i]) if type(l[i]) is list else l[i] for i in range(len(l))]
print(res)
```

Possible outputs:

```
[1, 2, 3, 44, 4, 5, 6, 22, 5, 99]
[1, 2, 3, 11, 4, 5, 6, 22, 5, 99]
[1, 2, 3, 22, 4, 5, 6, 33, 5, 88]
```

Upvotes: 0
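For completeness, the `random.choice` approach from the accepted answer can be wrapped in a function mirroring the question's `createQuery`-style loop (a sketch; the function name is illustrative):

```python
import random

def create_query(values, k):
    """Build k lists, picking one random element from each sublist."""
    return [
        [random.choice(v) if isinstance(v, list) else v for v in values]
        for _ in range(k)
    ]

l = [1, 2, 3, [11, 22, 33, 44], 4, 5, 6, [22, 33, 44], 5, [99, 88]]
for row in create_query(l, 6):
    print(row)
```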
2018/03/19
783
3,718
<issue_start>username_0: To get data from Firestore, I queried a document and put its data into a variable. Now I need to use that variable in another part of the code, but when I use it there it does not have any data. How can I resolve this error?

```
var d1;
var getdata = respond.get()
  .then(doc => {
    if (!doc.exists) {
      console.log('No such document');
    } else {
      console.log('Document data:', doc.data());
      d1 = doc.data(); // In d1 I am not getting the data of that document
    }
  }).catch(err => {
    console.log('Error getting document', err);
  });
```

Here, in the for loop, I am using the d1 variable, but this for loop is not executing:

```
for (var k in d1) {
  var p = d1[k].PhoneNumber;
  let rph = respond.where(receiverph, "==", p)
    .set({
      Status: status
    });
  let payload = {
    notification: {
      title: "Message",
      body: msg,
      sound: "default",
    }
  };
  console.log(payload);
  return admin.messaging().sendToDevice(token, payload).then((response) => {
    console.log(token);
    console.log("Successfully sen notification");
  }).catch(function (error) {
    console.warn("Error sending notification", error);
  });
}
});
```

The data in d1 is:

```
{
  Invite2: { PhoneNumber: 917893659558, Amount: 33 },
  Invite1: { PhoneNumber: 917799266509, Amount: 33 },
  Invite3: { Amount: 33, PhoneNumber: 918639146409 }
}
```

<issue_comment>username_1: When you get a document with `.get`, the document has to be fetched from the database. Therefore this operation is asynchronous and you must wait until the document is received before you can iterate over its data.

In short, it should look something like the following:

```
some_doc_ref.get().then(doc => {
  if (doc.exists) {
    var d1 = doc.data();
    for (var k in d1) {
      //...
    }
  }
});
```

Hope that helps.
Upvotes: 0 <issue_comment>username_2: Use Promise.all

```
let promise = [];
let all = [];
for (var k in d1) {
  var p = d1[k].PhoneNumber;
  let rph = respond.where(receiverph, "==", p)
    .set({
      Status: status
    });
  let payload = {
    notification: {
      title: "Message",
      body: msg,
      sound: "default",
    }
  };
  console.log(payload);
  rph = rph.then(() => admin.messaging().sendToDevice(token, payload))
    .then((response) => {
      console.log(token);
      console.log("Successfully sent notification");
    }).catch(function (error) {
      console.warn("Error sending notification", error);
    });
  promise.push(rph);
}

Promise.all(promise).then((data) => {
  data.forEach(query => {
    query.forEach(res => {
      all.push(res.data());
    });
  });
});
```

Upvotes: 1
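The ordering constraint from the first answer can also be written with `async`/`await`; this is a sketch against a hypothetical document reference (`docRef`), not the exact API calls from the question:

```javascript
// Wait for the snapshot before iterating, so the data is
// populated by the time the loop over it runs.
async function collectPhoneNumbers(docRef) {
  const doc = await docRef.get();
  if (!doc.exists) {
    return [];
  }
  const d1 = doc.data();
  return Object.keys(d1).map((k) => d1[k].PhoneNumber);
}
```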
2018/03/19
427
1,690
<issue_start>username_0: I am working on Jenkins Pipeline Script and I have checked-in my jenkinsfile in Git repository and I need to clone to local work space. But by default its cloning to master (Unix) work space but I need it in slave (Windows) work space.

Is there any plugins to change the default **Pipeline Script from SCM** work space location to slave?
2018/03/19
1,013
3,406
<issue_start>username_0: I want to create a single object from an array of objects. Please refer to the code provided. Here's the input array:

```
let queryArr = [
  {
    query: {
      filter: {
        term: {
          search: 'complete',
        }
      }
    }
  },
  {
    query: {
      notFilter: {
        term: {
          search: 'failed',
        }
      }
    }
  },
  {
    query: {
      bool: {
        term: {
          search: 'complete',
        }
      }
    }
  }
]
```

The expected output:

```
let oneQuery = {query: {
  bool: { ... },
  filter: { ... },
  notFilter: { ... } // data from respective array object key
}};
```

The function I wrote:

```
function createQuery(arr){
  for(let i = 0; i < arr.length; i++){
    if(Object.keys(arr[i].query === 'bool')){
      oneQuery.query.bool = arr[i].query.bool;
    }
    if(Object.keys(arr[i].query === 'filter')){
      oneQuery.query.filter = arr[i].query.filter;
    }
    if(Object.keys(arr[i].query === 'notFilter')){
      oneQuery.query.notFilter = arr[i].query.notFilter;
    }
  }
  return oneQuery;
}

createQuery(queryArr);
```

The output I'm getting:

```
query: {
  bool: { ... },
  filter: undefined,
  notFilter: undefined
}
```

I don't get what I'm doing wrong here.
A solution using reduce or map would be preferred.<issue_comment>username_1: Use [`Array.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) to get an array with the contents of each `query` property, then [spread](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) into [`Object.assign()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) to combine them into a single object:

```js
const queryArr = [{"query":{"filter":{"term":{"search":"complete"}}}},{"query":{"notFilter":{"term":{"search":"failed"}}}},{"query":{"bool":{"term":{"search":"complete"}}}}];

const createQuery = (arr) => ({
  query: Object.assign({}, ...queryArr.map(({ query }) => query))
});

console.log(createQuery(queryArr));
```

To fix your code, initialize the query item, and get the 1st key from each item in the array with `Object.keys(arr[i].query)[0]`:

```js
const queryArr = [{"query":{"filter":{"term":{"search":"complete"}}}},{"query":{"notFilter":{"term":{"search":"failed"}}}},{"query":{"bool":{"term":{"search":"complete"}}}}]

function createQuery(arr){
  const oneQuery = { query: {} };
  for(let i = 0; i < arr.length; i++){
    if(Object.keys(arr[i].query)[0] === 'bool'){
      oneQuery.query.bool = arr[i].query.bool;
    }
    if(Object.keys(arr[i].query)[0] === 'filter'){
      oneQuery.query.filter = arr[i].query.filter;
    }
    if(Object.keys(arr[i].query)[0] === 'notFilter'){
      oneQuery.query.notFilter = arr[i].query.notFilter;
    }
  }
  return oneQuery;
}

console.log(createQuery(queryArr));
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: Your problem seems to be this line:

```
Object.keys(arr[i].query === 'filter')
```

This evaluates to `Object.keys(true)` or `Object.keys(false)`.

Use `reduce`:

```
queryArr.reduce( (acc, c) => (
  acc[ Object.keys(c.query)[0] ] = Object.values(c.query)[0], //set the first key and value on the accumulator
  acc ), //return the accumulator
{}); //initialize accumulator to {}
```

Upvotes: 1
2018/03/19
809
3,387
<issue_start>username_0: We are using the latest spring-social-facebook-2.0.3 jar in a production environment. In April 2018 Graph API v2.5 is going to shut down, but the latest spring-social-facebook-2.0.3 jar is still using this deprecated Graph API internally.

Does anyone have any knowledge: **is the Spring team going to release a new version of spring-social-facebook before next month (i.e. April 2018)?**<issue_comment>username_1: Solution for those who want to change the API version used in the 2.0.3 release, when the Facebook API Upgrade Tool says the change does not affect their applications:

```
public class FacebookCustomApiVersionConnectionFactory extends OAuth2ConnectionFactory<Facebook> {

    public FacebookCustomApiVersionConnectionFactory(String apiVersion, String appId, String appSecret) {
        super("facebook", new FacebookCustomApiVersionServiceProvider(apiVersion, appId, appSecret, null), new FacebookAdapter());
    }
}

/**
 * Facebook ServiceProvider implementation that allows changing the Facebook API version.
 */
public class FacebookCustomApiVersionServiceProvider extends AbstractOAuth2ServiceProvider<Facebook> {

    private final String appNamespace;
    private final String apiVersion;

    /**
     * Creates a FacebookServiceProvider for the given API version, application ID, secret, and namespace.
     *
     * @param apiVersion Facebook API version
     * @param appId The application's App ID as assigned by Facebook
     * @param appSecret The application's App Secret as assigned by Facebook
     * @param appNamespace The application's App Namespace as configured with Facebook. Enables use of Open Graph operations.
\*/ public FacebookCustomApiVersionServiceProvider(String apiVersion, String appId, String appSecret, String appNamespace) { super(getOAuth2Template(apiVersion, "https://graph.facebook.com/v" + apiVersion + "/", appId, appSecret)); this.apiVersion = apiVersion; this.appNamespace = appNamespace; } private static OAuth2Template getOAuth2Template(String apiVersion, String graphApiUrl, String appId, String appSecret) { OAuth2Template oAuth2Template = new OAuth2Template(appId, appSecret, "https://www.facebook.com/v" + apiVersion + "/dialog/oauth", graphApiUrl + "oauth/access\_token"); oAuth2Template.setUseParametersForClientAuthentication(true); return oAuth2Template; } public Facebook getApi(String accessToken) { FacebookTemplate facebook = new FacebookTemplate(accessToken, appNamespace); facebook.setApiVersion(apiVersion); return facebook; } } ``` **Spring social configuration** ``` @Configuration @EnableSocial public class SocialConfiguration implements SocialConfigurer { @Override public void addConnectionFactories(ConnectionFactoryConfigurer cfConfig, Environment env) { cfConfig.addConnectionFactory(new FacebookCustomApiVersionConnectionFactory("2.7", "appId","appSecret"); } ... } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: We can simply change api version by following way ``` FacebookTemplate facebookTemplate=new FacebookTemplate(accessToken); facebookTemplate.setApiVersion("3.2"); System.out.println("graph url"+facebookTemplate.getBaseGraphApiUrl()); ``` Upvotes: 0 <issue_comment>username_3: That project is obselete. They announced the end of life in 2018 to be effective in 2019: <https://spring.io/blog/2018/07/03/spring-social-end-of-life-announcement> They recommand to simply use Spring Security instead. Upvotes: 0
2018/03/19
408
1,678
<issue_start>username_0: I have implemented an on-scroll loading which fetches some chunk of data every time the scroll reaches the end of the viewing area. After some point in time, when there would be no more new data to be shown, how should I convey this to the end user from a UX point of view?

I was thinking of a few options, such as displaying a tooltip which automatically vanishes after a few seconds. Another option would be something similar to [rubber banding scrolling](https://www.cultofmac.com/489256/bas-ording-rubber-band-effect-iphone/) from Apple.

Any other approach that can be used here?<issue_comment>username_1: I don't like dead ends in my applications. If the user hits the bottom of your list and is still searching, he probably has the wrong search terms. I'd place a box along the lines of "Haven't found what you're looking for? Try a different search term" and link that to the search box.

Even if it's not a search, once the user hits the bottom without successfully finding what they were looking for, provide them with an alternative.

Hope this helps you.

Upvotes: 1 <issue_comment>username_2: Without knowledge of what the use case is (i.e. has the user performed a search or is just scrolling a list from elsewhere), in general, two good options:

* Follow Slack's "You are up to date! + icon" little image on the last elastic scroll at the bottom. Or, for example, "That's all we've got just yet! Check your email for more or Search for [term] instead".
* Use a progress-bar type of indicator like when you read an article on Medium --> as people scroll down, they'll have a live indicator of getting to the bottom of the list.

Upvotes: 3 [selected_answer]
2018/03/19
1,334
5,274
<issue_start>username_0: The volume control in HTML5 videos on my website is not appearing, see screenshot:

[![enter image description here](https://i.stack.imgur.com/qAl7D.png)](https://i.stack.imgur.com/qAl7D.png)

The video plays when started, but without any sound. The videos also play fine (with sound) in VLC and Windows Media Player. I have tested in Chrome (65.0.3325.162), Firefox (59.0.1), and Android (on a Samsung tablet). The volume of my system is fine with other applications, and YouTube videos.

Here is the (minimal) code (adding additional attributes like height and poster etc. makes no difference to the problem):

```
```

Am I missing something obvious?

EDIT: When I tested with a sample video on `http://techslides.com/demos/sample-videos/small.mp4` the controls appeared. It seems to have something to do with the encoded mp4 video itself. I have now removed the video urls. I re-encoded the videos using VLC, and they are now working correctly.<issue_comment>username_1: It seems that you are using a muted video. Because of that, the volume control is not showing. Check this out:

```
```

Upvotes: -1 <issue_comment>username_2: Why are these HTML5 video problems cropping up now after 5+ years?
------------------------------------------------------------------

TLDR: Your code routes around video content farms and their ad-click revenue by short-circuiting MP4 content and eyeballs per second; this is retaliation. It's par for the course.

Browser developers have busted your HTML5 browser embed code, either on purpose or by accident, around the codecs needed to decode them. They own the source code of the browser that interprets and decodes your HTML5 MP4 file for presentation in the browser content area. Chrome developers corner the market on MP4 videos and had their arms twisted by the powers that be.
So the browser sees that the codec required to decode your MP4 is likely from an unauthorized area, and thus here we are, scratching our heads as to why Chrome isn't showing a volume button.

My requirement is that HTML5 video be fixed on the server side; I can't require users to fiddle around with their Chrome flags or install a plugin that corrects the bug. It has to just work by default on the latest Chrome, Safari, Firefox, then IE, preferably in that order.

Screenshot of the case of the missing HTML5 video volume button:
----------------------------------------------------------------

The video plays, but at zero volume. No volume button is ever presented, either during initial load or during or after playback. The mp4 download and go-full-screen buttons are presented and work correctly during playback.

[![So very late](https://i.stack.imgur.com/N6RQL.png)](https://i.stack.imgur.com/N6RQL.png)

And yes, the Chrome flags for the new media player are disabled:

[![enter image description here](https://i.stack.imgur.com/2kIyU.png)](https://i.stack.imgur.com/2kIyU.png)

What it looked like before, what I expect to see:
-------------------------------------------------

[![enter image description here](https://i.stack.imgur.com/eMN2E.png)](https://i.stack.imgur.com/eMN2E.png)

The stripped down code I'm using:
---------------------------------

This code evolved from the likes of: <http://camendesign.com/code/video_for_everybody>

```
**No video playback capabilities detected.**
Why not try to download the file instead?
[MPEG4](__VIDEO__.mp4)
[Ogg Theora](__VIDEO__.ogv)
```

The above code is the code that used to work, but got broken.

Final solution that worked for me: Manual clean of the 3rd party taint from my MP4 videos.
------------------------------------------------------------------------------------------

There are many options to clean and re-encode an MP4 video, some free, others non-free.

One way is to open the MP4 file with VLC or another video player or piece of software that has open/save/re-encode/convert tools in it, and save it out to a different video encoding format.

I was able to cook up a handy dandy script in Java to iterate over every MP4 file, crack open the MP4 file, clean out the hobo taint if it exists, then save and redeploy the mp4 file, and now all is well. Then do this on a schedule.

**Other solutions considered, but rejected:**

1. Eliminate the bugged HTML5 `video` embed tag from your tool set. Display an image with an html5 `![]()` tag, overlay a play button so as to indicate this is a video; when the user clicks, either open a new tab where the raw MP4 video plays in the browser (the volume button is shown correctly), or worst case the user downloads the MP4 video to disk and they can open it up from disk with their video player.
2. Use a different browser or an open source browser that knows how to do the right thing.
3. Try toggling on the 'new media controls' chrome://flags; maybe at some point in the future the Chrome devs will push a fix and it won't freak out on the evidence that the mp4 smells of digital rights violations.
4. Yield the vanguard and eyeball click revenue to the big player content providers; just use an `whatever` tag to redirect users to the websites who are able to show video correctly. The game is afoot, make your time.

Upvotes: 3 [selected_answer]
2018/03/19
373
1,555
<issue_start>username_0: I have one doubt regarding using a library from **GitHub** in **Android Studio**; if anyone can help me solve my doubt, I would be thankful. My question is: if we want to use a **library from GitHub**, we have two options:

1. either we can use dependencies to import the library into the project
2. or we can download the library from GitHub and use it as a module in our project

From the above options, which one would be the good way to use a library? (from all perspectives)<issue_comment>username_1: **Dependency**

> Because whenever a new version of a library arrives you don't have to continuously check and look for it; let your build tool take care of that. It can be cumbersome to regularly download and manage different versions of libraries. That's where build tools like Gradle come in and inform you about an update and download it for you.

Upvotes: 3 [selected_answer]<issue_comment>username_2: You should always use a library from GitHub via dependencies. Why?

* You probably won't have time to check and fix the bugs of the library in the future.
* You need more time to learn about the library's nuts and bolts to maintain and update the library in the future.
* You probably don't have enough expertise in the domain of the library.
* You need to catch and recreate the bugs you find in your system, so you need to keep an exact version of each library in your project.
* You can update and change your dependencies easily without fear of introducing new bugs.
* It makes your project clean.

Upvotes: 0
2018/03/19
760
2,846
<issue_start>username_0: I have my system connected to some server and I am reading data from the server, but I want to read data continuously from the server. Here is my code:

```
TcpClient client = new TcpClient("169.254.74.65", 7998);
NetworkStream stream = client.GetStream();
Byte[] data = new Byte[1024];
String responseData = String.Empty;
Int32 bytes = stream.Read(data, 0, data.Length);
responseData = System.Text.Encoding.ASCII.GetString(data, 0, bytes);
Console.WriteLine("Received: {0}", responseData);
stream.Close();
client.Close();
```

Can someone tell me where to place the while loop to be able to listen continuously?<issue_comment>username_1: To receive data continuously you actually need to put in some loop, for example:

```
private void StartProcessing(Socket serverSocket)
{
    var clientSocket = serverSocket.Accept();
    StartReceiving(clientSocket);
}

private void StartReceiving(Socket clientSocket)
{
    const int maxBufferSize = 1024;
    try
    {
        while (true)
        {
            var buffer = new byte[maxBufferSize];
            var bytesRead = clientSocket.Receive(buffer);
            if (ClientIsConnected(clientSocket))
            {
                var actualData = new byte[bytesRead];
                Array.Copy(buffer, actualData, bytesRead);
                OnDataReceived(actualData);
            }
            else
            {
                OnDisconnected(clientSocket);
            }
        }
    }
    catch (SocketException ex)
    {
        Console.WriteLine(ex.Message);
    }
}

private void OnDisconnected(Socket issuedSocket)
{
    if (issuedSocket != null)
    {
        issuedSocket.Shutdown(SocketShutdown.Both);
        issuedSocket.Close();
        StartProcessing(listener); // 'listener' is assumed to be the enclosing class's server socket
    }
}

private void OnDataReceived(byte[] data)
{
    //do cool things
}

private static bool ClientIsConnected(Socket socket)
{
    return !(socket.Poll(1000, SelectMode.SelectRead) && socket.Available == 0);
}
```

Upvotes: 0 <issue_comment>username_2: Just added a loop without changing your code:

```
TcpClient client = new TcpClient("169.254.74.65", 7998);
NetworkStream stream = client.GetStream();
Byte[] data = new Byte[1024];
String responseData = String.Empty;
Int32 bytes;
while (true)
{
    bytes = stream.Read(data, 0, data.Length);
    if (bytes > 0)
    {
        responseData = System.Text.Encoding.ASCII.GetString(data, 0, bytes);
        Console.WriteLine("Received: {0}", responseData);
    }
}
stream.Close();
client.Close();
```

This way it will request data from the server in the main thread infinitely.

Additional improvements might be:

* change the loop condition to indicate when you want to stop reading;
* add a sleep when no data is available to avoid wasting processor time;
* add error handling;
* rewrite your code using asynchronous methods.

Upvotes: 2
2018/03/19
276
1,055
<issue_start>username_0: I'm using Selenium WebDriver. How can I check if a page is opened or not after clicking a specific button? Maybe someone can recommend useful resources where I can read about it. Thanks<issue_comment>username_1: You can check with some content of the page.

```
public boolean checkIfPageArrived(String... testText) throws Throwable {
    boolean found = false;
    for (String text : testText) {
        // isEmpty() is true when no matching element exists,
        // so negate it to report that the text was found
        found = !$$(By.xpath("//*[contains(text(),'" + text + "')]")).isEmpty();
        if (found) {
            break;
        }
    }
    return found;
}
```

Upvotes: 1 <issue_comment>username_2: You can check with the title of the page. If you get the title of the page, it means the page is opened.

```
String expectedTitle = "Stack Overflow";
String url = "https://stackoverflow.com";

WebDriver driver = new FirefoxDriver();
driver.get(url);

if (driver.getTitle() != null && driver.getTitle().contains(expectedTitle)) {
    System.out.println("Web page is opened");
} else {
    System.out.println("Web page could not open.");
}
```

Upvotes: 3
2018/03/19
854
3,149
<issue_start>username_0: I have an app that works with an SQLite database. I packed it as a bundle and I can see the services on ServiceMix. When I am sending a request to a Post or Get service I am receiving this error:

java.lang.ClassNotFoundException: org.sqlite.JDBC not found

I installed the SQLite JDBC driver on ServiceMix but I still get the error. This is my POM:

```
 4.0.0 asd name 0.0.1-SNAPSHOT bundle Name Bundle Desc org.xerial sqlite-jdbc 3.15.1 compile org.apache.cxf cxf-core 3.1.5 org.apache.cxf cxf-rt-transports-http 3.1.5 org.apache.felix maven-bundle-plugin 3.3.0 true ${project.artifactId} ${project.description} javax.jws, javax.wsdl, javax.xml.namespace, org.apache.cxf.helpers, org.osgi.service.blueprint, org.xerial.sqlite-jdbc, * my.services.package, org.xerial.sqlite-jdbc 
```

I have tried to put this org.xerial.sqlite-jdbc only as an Export package and only as an Import package but did not succeed. This is the Java code for the SQLite connection:

```
private void getConnection() throws ClassNotFoundException, SQLException {
    Class.forName("org.sqlite.JDBC");
    con = DriverManager.getConnection("jdbc:sqlite:SQLiteTest1.db");
    initialise();
}
```

The app works locally but not on ServiceMix.<issue_comment>username_1: Your Java code is not suitable for OSGi. By default in OSGi each class is loaded by the classloader of the bundle where it is located. So your own class is loaded by the classloader of your bundle. As you have an Import-Package statement for org.sqlite, your code can access the sqlite driver classes.

The problem is that DriverManager loads the classes itself. DriverManager is provided by the system bundle (the Felix framework bundle). This bundle of course has no Import-Package for sqlite, so it can not load this class.

There is a simple workaround though. DriverManager allows you to set a thread context classloader. You can set this classloader to the classloader of your own bundle. This way DriverManager can see the sqlite classes.
This is only a workaround though. In OSGi the best way to avoid problems is to simply not load any classes directly. In the case of JDBC this can be done by using DataSource classes instead of the DriverManager. See [this post](https://stackoverflow.com/questions/41230234/using-datasource-to-connect-to-sqlite-with-xerial-sqlite-jdbc-driver).

Another option is to use pax-jdbc. It allows you to create DataSource services from config. This way you can make your bundle independent of the actual DB driver and still avoid manual class loading. [See this example](https://github.com/cschneider/Karaf-Tutorial/tree/master/db/examplejdbc).

Upvotes: 2 <issue_comment>username_2: You can try it like this:

```
private void getConnection() throws ClassNotFoundException, SQLException {
    SQLiteDataSource ds = new SQLiteDataSource();
    ds.setUrl("jdbc:sqlite:SQLiteTest1.db");
    try {
        con = ds.getConnection();
        System.out.println("Connected.");
    } catch (Exception e) {
        e.printStackTrace();
    }
    initialise();
}
```

According to @username_1 this can be done by using a DataSource.

Upvotes: 2 [selected_answer]
2018/03/19
372
1,145
<issue_start>username_0: I just want to check if the image in an anchor is equal to a specific url. Here is the html I have:

```
![](http://v9contest.geojidesign.com/wp-content/uploads/contest_entries/thumbnail-^739D24706DC178C938623C638DDBFF0BC04ADF35338114AF68^pimgpsh_thumbnail_win_distr.jpg)
```

So when the page loads, I want to check if the image link in the anchor is equal to <http://google.com>. Here is what I tried:

```
if(jQuery('a').has('img.img-fluid').attr('src')=='http://google.com'){
    alert('ok');
}
```<issue_comment>username_1: You may use the direct descendant selector, `>`, instead. Following is a working example:

```js
if (jQuery('a > img.img-fluid').attr('src') == 'http://google.com') {
  alert('ok');
}
```

```html
![](http://google.com)
```

Keep in mind that the selector may return more than one element if there are many such elements. In that case, you will need to loop through and check.

Upvotes: 1 <issue_comment>username_2: Thanks, it is resolved. I used

```
jQuery('a > img.img-fluid').each(function(){
    if(jQuery(this).attr('src')=='http://google.com'){
        alert('ok');
    }
});
```

Upvotes: 0
2018/03/19
477
1,707
<issue_start>username_0: I am doing a test task for self-teaching. My stack is Spring Boot / H2 database / Hibernate.

I have something like a REST-ful service (actually it is not; now I am trying to fix it). I've been told that I have a lot of bad code decisions and mistakes, so I've decided to fix them.

The initial state of the working project is here - <https://github.com/iliapastushenko/testtaskREST>

I've started to refactor it, and the first thing that I've done is get rid of jackson-datatype-jsr310 because it is actually a redundant thing for me. I've deleted it from the POM and the ClientappApplication class and edited my Application class field "dateCreated":

```
@DateTimeFormat(pattern = "dd-MM-yyyy hh:mm:ss")
@Type(type="timestamp")
private LocalDateTime dateCreated;
```

So, when I am trying to get one application of the needed client via the frontend I get this type of Exception:

```
java.lang.IllegalArgumentException: Can not set java.time.LocalDateTime field root.Model.Application.dateCreated to java.sql.Timestamp
```

Could you please give me a hint - what is wrong?
2018/03/19
452
1,481
<issue_start>username_0: I have an arraylist (size unknown) of Strings (string lengths are not the same). I need to print the combinations of all the characters in the Strings (specific condition), with no repetition. It is more or less like the combination of elements in sets as in mathematics.

Conditions:

1. Output String length --> size of the given arraylist (in the below example, since the ArrayList size is 3, output strings should be of length 3).
2. The i-th character of each new String formed must come from the i-th string in the ArrayList.

Here is an example:

The ArrayList is: `["abc", "de", "fg"]` (number of output strings: 3 (size of 1st string) \* 2 (size of 2nd string) \* 2 (size of 3rd string) = 12)

Output should be:

```
["adf", "adg", "aef", "aeg", "bdf", "bdg", "bef", "beg", "cdf", "cdg", "cef", "ceg"]
```
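The combinations described above form a cartesian product over the characters of each string. A minimal sketch built with `reduce` (the helper name `combine` is my own, not from the thread):

```javascript
// Start from the empty prefix and extend every prefix with each
// character of the next string; after the last string, every result
// has one character per input string, in order.
const combine = (strings) =>
  strings.reduce(
    (prefixes, str) => prefixes.flatMap((p) => [...str].map((ch) => p + ch)),
    ['']
  );

console.log(combine(['abc', 'de', 'fg']));
// 3 * 2 * 2 = 12 strings: ["adf","adg","aef","aeg","bdf","bdg","bef","beg","cdf","cdg","cef","ceg"]
```

The result count is the product of the string lengths, matching the 12 expected outputs in the example.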
2018/03/19
902
3,196
<issue_start>username_0: I have been reading up on this on the actual site for ngx-toastr, [ngx-toastr](https://www.npmjs.com/package/ngx-toastr), and other posts on Stack Overflow, but cannot find a clear solution for my use case.

**I am trying to change the position of the `toastr` for specific use cases. Example: when it is an error, show the `toastr` on the top.**

I have a very vanilla setup. In my `app.module.ts` I have the following:

```
import { ToastrModule } from 'ngx-toastr';
```

In the imports of `app.module.ts` I have:

```
imports: [
  BrowserModule,
  ToastrModule.forRoot({
    timeOut: 3500,
    positionClass: 'toast-bottom-center',
    preventDuplicates: true,
  }),
```

In my components I declare the `toastr` in my `constructor`:

```
constructor(private toastr: ToastrService) {}
```

And I use the `toastr` as follows:

```
this.toastr.error('There was an error loading the Asset List!', 'Asset Register');
```

**As per my setup, all toasts show in `'toast-bottom-center'`. How can I modify this call to show the toast on the top?**

```
this.toastr.error('There was an error loading the Asset List!', 'Asset Register');
```<issue_comment>username_1: Make a service for that. Start by creating an enum:

```
export enum ToasterPosition {
  topRight = 'toast-top-right',
  topLeft = 'toast-top-left',
  bottomRight = 'toast-bottom-right',
  bottomLeft = 'toast-bottom-left',
  // Other positions you would like
}
```

Now create your service:

```
export class ToasterService {
  constructor(private toastr: ToastrService) {}

  public error(title: string, message: string, positionClass: ToasterPosition) {
    this.toastr.error(message, title, { positionClass });
  }
}
```

This way, you can't miss the positioning, since you have to provide an enum.

Upvotes: 4 [selected_answer]<issue_comment>username_2: The 3rd parameter of the *error* method is used to provide the position of the toastr message (among other things).
```
this.toastrService.error('There was an error loading the Asset List!', 'Asset Register');

this.toastrService.warning('Some warning message', 'some title', { positionClass: 'toast-bottom-right' });
```

Upvotes: 4 <issue_comment>username_3: Add this in style.css:

```css
.toast-top-center {
  bottom: 0;
  margin: 0 auto;
  right: 0;
  left: 0;
  width: 100%;
}
```

Insert this in your toast function:

```js
show(config: { type: string, title: string, subTitle?: string }): void {
  switch (config.type) {
    case 'Success':
      this._toastr.success(config.subTitle ? config.subTitle : '', config.title, { positionClass: 'toast-top-center' });
      break;
    case 'Error':
      this._toastr.error(config.subTitle ? config.subTitle : '', config.title, { positionClass: 'toast-top-center' });
      break;
    case 'Warning':
      this._toastr.warning(config.subTitle ? config.subTitle : '', config.title, { positionClass: 'toast-top-center' });
      break;
    case 'Info':
      this._toastr.info(config.subTitle ? config.subTitle : '', config.title, { positionClass: 'toast-top-center' });
      break;
    default:
      break;
  }
}
```

Upvotes: 2
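The per-call override in the answers boils down to plain option merging: module-level defaults plus an optional per-call options object. A framework-free sketch of that merge (the names `defaults` and `toastOptions` are illustrative, not ngx-toastr API):

```javascript
// Defaults mirror the ToastrModule.forRoot() config from the question.
const defaults = { positionClass: 'toast-bottom-center' };

// Later keys win in Object.assign, so a per-call positionClass
// overrides the default without mutating `defaults`.
function toastOptions(overrides = {}) {
  return Object.assign({}, defaults, overrides);
}

console.log(toastOptions().positionClass);
// 'toast-bottom-center'
console.log(toastOptions({ positionClass: 'toast-top-center' }).positionClass);
// 'toast-top-center'
```

This is the same precedence the library's third argument applies: per-call options take priority over the global configuration.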
2018/03/19
3,148
11,612
<issue_start>username_0: I want output from two tables: one table for **Product_detail** and a second table for **Product_image**.

In the **product_image** table all images are stored separately for a particular product_id.

I want output from both tables: for a particular product_id, all details from the Product_detail table and all images from the Product_image table.

My Code:

```
<?php
error_reporting(0);
$response = array();
$response1 = array();
require_once __DIR__ . '/db_Connect.php';
// check for post data
if (isset($_GET["pro_id"])) {
    $pid = $_GET['pro_id'];
    // get a product from products table
    //$result = mysql_query("SELECT * FROM product_list WHERE pro_id = '".$pro_id."'");
    $q = "SELECT product_list.pro_id,product_list.product_name,product_list.product_desc,product_images.image FROM product_images INNER JOIN product_list ON product_list.pro_id = product_images.pro_id WHERE product_list.pro_id = '$pid'";
    $res = mysql_query($q);
    if (!empty($res)) {
        // user node
        $response["product"] = array();
        $result = mysql_fetch_assoc($res);
        //var_dump($result);
        $count = count($result['image']);
        $count++;
        var_dump($count);
        $product = array();
        $product['pro_id'] = $result['pro_id'];
        //$product['cat_id'] = $result['cat_id'];
        $product['product_name'] = $result['product_name'];
        $product['product_desc'] = $result['product_desc'];
        //$product['image'] = "http://friendzfashionz.com/pandora/admin/".$result['image'];
        $clr = array();
        for ($i = 0; $i < $count; $i++) {
            $clr[$i] = "http://friendzfashionz.com/pandora/admin/".$result['image'];
            //var_dump($clr[$i]);
            array_push($response1["images"], $clr[$i]);
        }
        $product['image'] = $clr;
        array_push($response["product"], $product);
        $response["success"] = 1;
        echo json_encode($response);
    } else {
        // no product found
        $response["success"] = 0;
        $response["message"] = "No user found";
        // echo no users JSON
        echo json_encode($response);
    }
} else {
    // required field is missing
    $response["success"] = 0;
    $response["message"] = "Required field(s) is missing";
    // echoing JSON response
    echo json_encode($response);
}
?>
```

Output of this code is:

```
int(2) {"product":[{"pro_id":"13","product_name":"jeans","product_desc":"Monkey wash ","image":["http:\/\/friendzfashionz.com\/pandora\/admin\/Sub_uploads\/download (1).jpg","http:\/\/friendzfashionz.com\/pandora\/admin\/Sub_uploads\/download (1).jpg"]}],"success":1}
```

I have two different images for this pro_id in the product_image table. I want the product details once and all product images of that pro_id. But the problem is it gives me the first image two times.

Please help to solve this problem...

Product_detail table:

![product_detail](https://i.stack.imgur.com/Jx2e4.png)

product_image table:

![product_image](https://i.stack.imgur.com/WgjTr.png)<issue_comment>username_1: The problem is that you are getting two rows returned but then only make one call to

```
$result = mysql_fetch_assoc($res);
```

so you only process the first row.

Instead, use `GROUP BY` on the product_list values and `GROUP_CONCAT()` on the images. This will return a single row for the product with a comma-separated list of images. You can then get an array of the images separately with `explode()`. e.g.

```
SELECT pl.pro_id, pl.product_name, pl.product_desc,
       GROUP_CONCAT(pi.image) AS 'images'
FROM product_images pi
INNER JOIN product_list pl ON (pl.pro_id = pi.pro_id)
WHERE pl.pro_id = ?
GROUP BY pl.pro_id, pl.product_name, pl.product_desc;
```

Also, you are still using `mysql_query()`, which was deprecated in PHP 5.5 and removed in PHP 7. You SHOULD be using parameterized queries with either PDO or mysqli, or your app will break when you upgrade PHP and you leave yourself wide open to SQL injection in the meantime.

<https://secure.php.net/manual/en/class.pdo.php>

<https://secure.php.net/manual/en/class.mysqli.php>

Upvotes: 2 [selected_answer]<issue_comment>username_2: The problem with your code resides in the use of the `mysql_fetch_assoc` function. By directly calling it, e.g.
without including it in a *while* loop (like at the end of [Example #1](https://secure.php.net/manual/en/function.mysql-fetch-assoc.php#refsect1-function.mysql-fetch-assoc-examples)), you are fetching only one record from the database. Therefore, the `count($result['image'])` statement will return the value `1`. Though you are expecting multiple records: one for each image in the *product\_images* table. Note that you are using the *mysql* extension. Though it has been removed as of PHP 7.0.0! Use *mysqli* or *PDO* instead. I adapted your code to use *mysqli* - with some changes regarding building the `$response` array, too. You can read [this article](https://phpdelusions.net/mysqli) about the use of mysqli [prepared statements](https://secure.php.net/manual/en/mysqli.quickstart.prepared-statements.php), which are used to avoid [SQL injection](https://en.wikipedia.org/wiki/SQL_injection). At last, for proper error and exception handling you should read [this article](https://phpdelusions.net/articles/error_reporting). Whereas [this article](https://phpdelusions.net/mysqli/error_reporting) is focused on *mysqli*. --- ### index.php ``` php require __DIR__ . '/db_Connect.php'; // Array to hold the final response. $response = array(); // Validate the product id. if (!isset($_GET['pro_id']) || empty($_GET['pro_id']) || !is_numeric($_GET['pro_id'])) { $response['success'] = 0; $response['message'] = 'No product id provided.'; } else { // Read the product id. $productId = $_GET['pro_id']; /* * The SQL statement to be prepared. Notice the so-called markers, * e.g. the "?" signs. They will be replaced later with the * corresponding values when using mysqli_stmt::bind_param. * * @link http://php.net/manual/en/mysqli.prepare.php */ $sql = 'SELECT pl.pro_id, pl.product_name, pl.product_desc, pi.image FROM product_images AS pi INNER JOIN product_list AS pl ON pi.pro_id = pl.pro_id WHERE pi.pro_id = ?'; /* * Prepare the SQL statement for execution - ONLY ONCE. 
* * @link http://php.net/manual/en/mysqli.prepare.php */ $statement = $connection-prepare($sql); /* * Bind variables for the parameter markers (?) in the * SQL statement that was passed to prepare(). The first * argument of bind_param() is a string that contains one * or more characters which specify the types for the * corresponding bind variables. * * @link http://php.net/manual/en/mysqli-stmt.bind-param.php */ $statement->bind_param('i', $productId); /* * Execute the prepared SQL statement. * When executed any parameter markers which exist will * automatically be replaced with the appropriate data. * * @link http://php.net/manual/en/mysqli-stmt.execute.php */ $statement->execute(); /* * Get the result set from the prepared statement. * * NOTA BENE: * Available only with mysqlnd ("MySQL Native Driver")! If this * is not installed, then uncomment "extension=php_mysqli_mysqlnd.dll" in * PHP config file (php.ini) and restart web server (I assume Apache) and * mysql service. Or use the following functions instead: * mysqli_stmt::store_result + mysqli_stmt::bind_result + mysqli_stmt::fetch. * * @link http://php.net/manual/en/mysqli-stmt.get-result.php * @link https://stackoverflow.com/questions/8321096/call-to-undefined-method-mysqli-stmtget-result */ $result = $statement->get_result(); /* * Fetch data and save it into an array. * * @link http://php.net/manual/en/mysqli-result.fetch-all.php */ $productRecords = $result->fetch_all(MYSQLI_ASSOC); /* * Free the memory associated with the result. You should * always free your result when it is not needed anymore. * * @link http://php.net/manual/en/mysqli-result.free.php */ $result->close(); /* * Close the prepared statement. It also deallocates the statement handle. * If the statement has pending or unread results, it cancels them * so that the next query can be executed. * * @link http://php.net/manual/en/mysqli-stmt.close.php */ $statement->close(); /* * Close the previously opened database connection. 
* * @link http://php.net/manual/en/mysqli.close.php */ $connection->close(); if (!$productRecords) { // No product records found. $response['success'] = 0; $response['message'] = 'No product data found.'; } else { // Array to hold the final product data. $product = array(); foreach ($productRecords as $productRecord) { $productId = $productRecord['pro_id']; $productName = $productRecord['product_name']; $productDescription = $productRecord['product_desc']; $productImage = $productRecord['image']; if (!$product) { // Array is empty $product[0] = array( 'pro_id' => $productId, 'product_name' => $productName, 'product_desc' => $productDescription, ); } $product[0]['image'][] = 'http://friendzfashionz.com/pandora/admin/' . $productImage; } $response['success'] = 1; $response['product'] = $product; } } echo json_encode($response); ``` ### db\_Connect.php ``` php // Db configs. define('HOST', 'localhost'); define('PORT', 3306); define('DATABASE', 'tests'); define('USERNAME', 'root'); define('PASSWORD', '<PASSWORD>'); /* * Error reporting. * * Also, define an error handler, an exception handler and, eventually, * a shutdown handler function to handle the raised errors and exceptions. * * @link https://phpdelusions.net/articles/error_reporting Error reporting basics * @link http://php.net/manual/en/function.error-reporting.php * @link http://php.net/manual/en/function.set-error-handler.php * @link http://php.net/manual/en/function.set-exception-handler.php * @link http://php.net/manual/en/function.register-shutdown-function.php */ error_reporting(E_ALL); ini_set('display_errors', 1); /* SET IT TO 0 ON A LIVE SERVER! */ /* * Enable internal report functions. This enables the exception handling, * e.g. mysqli will not throw PHP warnings anymore, but mysqli exceptions * (mysqli_sql_exception). * * MYSQLI_REPORT_ERROR: Report errors from mysqli function calls. * MYSQLI_REPORT_STRICT: Throw a mysqli_sql_exception for errors instead of warnings. 
* * @link http://php.net/manual/en/class.mysqli-driver.php * @link http://php.net/manual/en/mysqli-driver.report-mode.php * @link http://php.net/manual/en/mysqli.constants.php */ $mysqliDriver = new mysqli_driver(); $mysqliDriver->report_mode = (MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // Create a new db connection. $connection = new mysqli(HOST, USERNAME, PASSWORD, DATABASE, PORT); ``` ### JSON encoded result As yours. Upvotes: 0
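As an aside, the injection safety that prepared statements buy is easy to demonstrate outside PHP. Below is a rough sketch in Python with sqlite3 standing in for mysqli (the table and values are made up for illustration): a bound marker receives a value, never SQL text, so a hostile "id" stays inert.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product_images (pro_id INTEGER, image TEXT)")
conn.executemany("INSERT INTO product_images VALUES (?, ?)",
                 [(1, "a.jpg"), (1, "b.jpg"), (2, "c.jpg")])

# The marker (?) is filled with a bound value, never spliced into the SQL
# text, so an injection attempt arrives as a harmless literal.
hostile = "1 OR 1=1"
rows = conn.execute("SELECT image FROM product_images WHERE pro_id = ?",
                    (hostile,)).fetchall()
print(rows)  # []

images = [r[0] for r in conn.execute(
    "SELECT image FROM product_images WHERE pro_id = ? ORDER BY image", (1,))]
print(images)  # ['a.jpg', 'b.jpg']
```

The same property holds for mysqli's `prepare`/`bind_param` pair used in the answer above.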
2018/03/19
279
950
<issue_start>username_0: so i have this table: <http://prntscr.com/it53pm> (in this link). what im trying to do is select schedule where schedule date = current date.<issue_comment>username_1: you can use `NOW()` in your where clause ``` for example SELECT * WHERE schedule like NOW() ``` * NOW() will be "YYYY-MM-DD HH-MM-SS" at curent date and time Upvotes: 0 <issue_comment>username_2: SELECT aid,schedule FROM yourtable WHERE TRUNC(schedule)=TRUNC(SYSDATE) This works in Oracle SQL Upvotes: 0 <issue_comment>username_3: Standard SQL: ``` select * from tab where cast(schedule as date) = current_date ``` Longer syntax but allows indexed access: ``` select * from tab where schedule >= cast(current_date as timestamp) and schedule < cast(current_date + interval '1' day as timestamp) ``` Upvotes: 3 [selected_answer]<issue_comment>username_4: Try this ``` SELECT aid FROM table WHERE schedule >= TIMESTAMP(CURDATE()) ``` Upvotes: 0
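The accepted answer's idea, truncating the timestamp to its date part before comparing, can be sketched quickly in Python with sqlite3 (illustrative only; the table name and rows are invented, and sqlite spells the truncation `date()` rather than `cast(... as date)`):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sched (aid INTEGER, schedule TEXT)")
today = date.today().isoformat()
conn.executemany("INSERT INTO sched VALUES (?, ?)",
                 [(1, today + " 09:30:00"), (2, "2000-01-01 09:30:00")])

# Keep only the date part of the timestamp, then compare it to today's date.
rows = conn.execute("SELECT aid FROM sched WHERE date(schedule) = ?",
                    (today,)).fetchall()
print(rows)  # [(1,)]
```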
2018/03/19
3,227
12,929
<issue_start>username_0: I'm building an app like WhatsApp using 3 tabs, and every tab has a `Fragment`. In my first `Fragment` I have a `RecyclerView`. When I set the SectionsPagerAdapter to start with the RecyclerView Fragment like this: ``` inner class SectionsPagerAdapter(fm: FragmentManager) : FragmentPagerAdapter(fm) { override fun getItem(position: Int): Fragment { // getItem is called to instantiate the fragment for the given page. // Return a PlaceholderFragment (defined as a static inner class below). //return PlaceholderFragment.newInstance(position + 1) when(position){ 0 -> return Fragmenttask.newInstance() 1 -> return FragmentChat.newInstance() 2 -> return FragmentMaps.newInstance() else -> { return null!! } } } override fun getCount(): Int { // Show 3 total pages. return 3 } } ``` It gives me this error: > > E/AndroidRuntime: FATAL EXCEPTION: main > Process: com.deraah.mohamed.deraahpro, PID: 21993 > java.lang.IllegalStateException: The specified child already has a > parent. You must call removeView() on the child's parent first. 
> at android.view.ViewGroup.addViewInner(ViewGroup.java:4937) > at android.view.ViewGroup.addView(ViewGroup.java:4768) > at android.support.v4.view.ViewPager.addView(ViewPager.java:1477) > at android.view.ViewGroup.addView(ViewGroup.java:4708) > at android.view.ViewGroup.addView(ViewGroup.java:4681) > at > android.support.v4.app.FragmentManagerImpl.moveToState(FragmentManager.java:1425) > at > android.support.v4.app.FragmentManagerImpl.moveFragmentToExpectedState(FragmentManager.java:1740) > at > android.support.v4.app.BackStackRecord.executeOps(BackStackRecord.java:794) > at > android.support.v4.app.FragmentManagerImpl.executeOps(FragmentManager.java:2580) > at > android.support.v4.app.FragmentManagerImpl.executeOpsTogether(FragmentManager.java:2367) > at > android.support.v4.app.FragmentManagerImpl.removeRedundantOperationsAndExecute(FragmentManager.java:2322) > at > android.support.v4.app.FragmentManagerImpl.execSingleAction(FragmentManager.java:2199) > at > android.support.v4.app.BackStackRecord.commitNowAllowingStateLoss(BackStackRecord.java:651) > at > android.support.v4.app.FragmentPagerAdapter.finishUpdate(FragmentPagerAdapter.java:145) > at android.support.v4.view.ViewPager.populate(ViewPager.java:1236) > at > android.support.v4.view.ViewPager.setCurrentItemInternal(ViewPager.java:662) > at > android.support.v4.view.ViewPager.setCurrentItemInternal(ViewPager.java:624) > at > android.support.v4.view.ViewPager.setCurrentItem(ViewPager.java:605) > at > android.support.design.widget.TabLayout$ViewPagerOnTabSelectedListener.onTabSelected(TabLayout.java:2170) > at > android.support.design.widget.TabLayout.dispatchTabSelected(TabLayout.java:1165) > at > android.support.design.widget.TabLayout.selectTab(TabLayout.java:1158) > at > android.support.design.widget.TabLayout.selectTab(TabLayout.java:1128) > at > android.support.design.widget.TabLayout$Tab.select(TabLayout.java:1427) > at > 
android.support.design.widget.TabLayout$TabView.performClick(TabLayout.java:1537) > at android.view.View$PerformClick.run(View.java:24770) > at android.os.Handler.handleCallback(Handler.java:790) > at android.os.Handler.dispatchMessage(Handler.java:99) > at android.os.Looper.loop(Looper.java:164) > at android.app.ActivityThread.main(ActivityThread.java:6494) > at java.lang.reflect.Method.invoke(Native Method) > at > com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438) > at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807) > > > **UPDATE** My fragments: FragmentChat: ``` /** * A simple [Fragment] subclass. * Activities that contain this fragment must implement the * [FragmentChat.OnFragmentInteractionListener] interface * to handle interaction events. * Use the [FragmentChat.newInstance] factory method to * create an instance of this fragment. */ class FragmentChat : Fragment() { private var mListener: OnFragmentInteractionListener? = null override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) } override fun onCreateView(inflater: LayoutInflater?, container: ViewGroup?, savedInstanceState: Bundle?): View? { // Inflate the layout for this fragment return inflater!!.inflate(R.layout.fragment_fragment_chat, container, false) } // TODO: Rename method, update argument and hook method into UI event fun onButtonPressed(uri: Uri) { if (mListener != null) { mListener!!.onFragmentInteraction(uri) } } override fun onAttach(context: Context?) { super.onAttach(context) } override fun onDetach() { super.onDetach() mListener = null } /** * This interface must be implemented by activities that contain this * fragment to allow an interaction in this fragment to be communicated * to the activity and potentially other fragments contained in that * activity. 
* * * See the Android Training lesson [Communicating with Other Fragments](http://developer.android.com/training/basics/fragments/communicating.html) for more information. */ interface OnFragmentInteractionListener { // TODO: Update argument type and name fun onFragmentInteraction(uri: Uri) } companion object { // TODO: Rename and change types and number of parameters fun newInstance(): FragmentChat { val fragment = FragmentChat() val args = Bundle() fragment.arguments = args return fragment } } }// Required empty public constructor ``` FragmentMaps: ``` /** * A simple [Fragment] subclass. * Activities that contain this fragment must implement the * [FragmentMaps.OnFragmentInteractionListener] interface * to handle interaction events. * Use the [FragmentMaps.newInstance] factory method to * create an instance of this fragment. */ class FragmentMaps : Fragment() { private var mListener: OnFragmentInteractionListener? = null override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) } override fun onCreateView(inflater: LayoutInflater?, container: ViewGroup?, savedInstanceState: Bundle?): View? { // Inflate the layout for this fragment return inflater!!.inflate(R.layout.fragment_fragment_maps, container, false) } // TODO: Rename method, update argument and hook method into UI event fun onButtonPressed(uri: Uri) { if (mListener != null) { mListener!!.onFragmentInteraction(uri) } } override fun onDetach() { super.onDetach() mListener = null } interface OnFragmentInteractionListener { // TODO: Update argument type and name fun onFragmentInteraction(uri: Uri) } companion object { // TODO: Rename and change types and number of parameters fun newInstance(): FragmentMaps { val fragment = FragmentMaps() val args = Bundle() fragment.arguments = args return fragment } } }// Required empty public constructor ``` Fragmenttask : ``` class Fragmenttask : Fragment() { private var mListener: OnFragmentInteractionListener? 
= null override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) } override fun onCreateView(inflater: LayoutInflater?, container: ViewGroup?, savedInstanceState: Bundle?): View? { val CustumViewtask = inflater!!.inflate(R.layout.fragment_fragmenttask, container, false) val taskMainRV = CustumViewtask.findViewById(R.id.recyclerView_main) as RecyclerView //taskMainRV.setBackgroundColor(Color.BLUE) taskMainRV.layoutManager = LinearLayoutManager(context) //recyclerView_main.adapter = MainAdapter() fetchJson(taskMainRV) return taskMainRV } fun fetchJson(RSV: RecyclerView) { //SharedPreferences val MY_APP_INFO: String = "UserInfo" val prefs = activity.getSharedPreferences(MY_APP_INFO, AppCompatActivity.MODE_PRIVATE) val LoggedUserId = prefs.getString("UserId", null) println("your code is : $LoggedUserId") println("Attempting to Fetch JSON") val url = "http://Restful.com/get_tasks.php" val client = OkHttpClient() val formBody = FormBody.Builder().add("UserId", LoggedUserId).build() val request = Request.Builder().url(url) .post(formBody) .build() client.newCall(request).enqueue(object: Callback { override fun onResponse(call: Call?, response: Response?) { val body = response?.body()?.string() println("mohamed : $body") val gson = GsonBuilder().create() val tasksfeed = gson.fromJson(body, M_tasksFeed::class.java) activity.runOnUiThread { RSV.adapter = MainAdaptertasks(tasksfeed) } } override fun onFailure(call: Call?, e: IOException?) { println("Failed to execute request") } }) } // TODO: Rename method, update argument and hook method into UI event fun onButtonPressed(uri: Uri) { if (mListener != null) { mListener!!.onFragmentInteraction(uri) } } override fun onDetach() { super.onDetach() mListener = null } /** * This interface must be implemented by activities that contain this * fragment to allow an interaction in this fragment to be communicated * to the activity and potentially other fragments contained in that * activity. 
* * * See the Android Training lesson [Communicating with Other Fragments](http://developer.android.com/training/basics/fragments/communicating.html) for more information. */ interface OnFragmentInteractionListener { // TODO: Update argument type and name fun onFragmentInteraction(uri: Uri) } companion object { fun newInstance(): Fragmenttask { val fragment = Fragmenttask() val args = Bundle() fragment.arguments = args return fragment } } }// Required empty public constructor ```<issue_comment>username_1: Problem coming from ``` else -> { return null!! } ``` Try this way ``` override fun getItem(position: Int): android.support.v4.app.Fragment { // getItem is called to instantiate the fragment for the given page. // Return a PlaceholderFragment (defined as a static inner class below). //return PlaceholderFragment.newInstance(position + 1) var fragment: android.support.v4.app.Fragment? = null when (position) { 0 -> fragment = Fragmenttask.newInstance() 1 -> fragment = FragmentChat.newInstance() 2 -> fragment = FragmentMaps.newInstance() } return fragment!! } override fun getCount(): Int { return 3 //no of cases/fragments } } ``` **`FYI`** You must call [**`removeView()`**](https://stackoverflow.com/questions/28071349/the-specified-child-already-has-a-parent-you-must-call-removeview-on-the-chil) . Each Fragment section ``` View viewOBJ = inflater.inflate(R.layout.layout_child, parent_layout, false); return viewOBJ ; ``` Upvotes: 0 <issue_comment>username_2: Are you sure while inflating fragment's view you have set the attachRootView to false? ``` inflater.inflate(R.layout.fragment, container, false) ``` Upvotes: 0 <issue_comment>username_3: You are returning wrong `view` check in your **`Fragmenttask`** `Fragment` **Use this** ``` return CustumViewtask ``` **Instead of this** ``` return taskMainRV ``` Change your code like this ``` override fun onCreateView(inflater: LayoutInflater?, container: ViewGroup?, savedInstanceState: Bundle?): View? 
{ val CustumViewtask = inflater!!.inflate(R.layout.fragment_fragmenttask, container, false) val taskMainRV = CustumViewtask.findViewById(R.id.recyclerView_main) as RecyclerView //taskMainRV.setBackgroundColor(Color.BLUE) taskMainRV.layoutManager = LinearLayoutManager(context) //recyclerView_main.adapter = MainAdapter() fetchJson(taskMainRV) return CustumViewtask } ``` Upvotes: 2
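For readers wondering why the exception fires at all: Android enforces a single-parent rule on views. A toy Python model of that rule (not real Android API, just a sketch) reproduces the failure mode of returning `taskMainRV`, a view that is already attached to the inflated layout, instead of the layout root `CustumViewtask`:

```python
class View:
    """Toy stand-in for Android's View/ViewGroup single-parent rule."""
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

    def add_view(self, child):
        # ViewGroup.addViewInner raises exactly this complaint when the
        # child is already attached to some other parent.
        if child.parent is not None:
            raise RuntimeError("The specified child already has a parent. "
                               "You must call removeView() on the child's parent first.")
        child.parent = self
        self.children.append(child)

layout = View("fragment_fragmenttask")   # what inflate() returns
recycler = View("recyclerView_main")
layout.add_view(recycler)                # the inflated layout already owns it

pager = View("ViewPager")
pager.add_view(layout)                   # returning the layout root works fine
try:
    pager.add_view(recycler)             # returning taskMainRV reattaches a parented view
    crashed = False
except RuntimeError:
    crashed = True
print(crashed)  # True
```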
2018/03/19
513
1,676
<issue_start>username_0: > > Uncaught Syntax Error: Unexpected end of JSON input > > > [![enter image description here](https://i.stack.imgur.com/f225B.png%20javascript)](https://i.stack.imgur.com/f225B.png%20javascript) Help, why am I getting this error? ``` $('.view-profile').on('click', function(e) { e.preventDefault(); var id = $(this).data('id'); var str = $(this).data('citizens'); var citizensArray = JSON.parse(str); alert(citizensArray[0].id); }); ``` html & php ``` Profile ```<issue_comment>username_1: Wrap the `data-citizens` in single quotes, i.e. `data-citizens='<?php echo json_encode($citizens);?>'`, as the existence of `"` in the JSON string will abruptly terminate the attribute value. And you don't need to use `JSON.parse()` with [`.data()`](https://api.jquery.com/data/); if the data is in valid JSON format, the method will return a JavaScript object. > > When the data attribute is an object (starts with '{') or array (starts with '[') then `jQuery.parseJSON` is used to parse the string; it must follow valid JSON syntax including quoted property names. If the value isn't parseable as a JavaScript value, it is left as a string. > > > Using `JSON.parse()` on the already-parsed value results in the above error. So just use ``` var citizensArray = str; ``` ```js $('.view-profile').on('click', function(e) { e.preventDefault(); var str = $(this).data('citizens'); console.log(str); }); ``` ```html Profile ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Before you add JSON into an HTML attribute, make sure you encode it. ``` data-citizens="<?php echo htmlspecialchars(json_encode($citizens));?>" ``` Upvotes: 1
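To see why the attribute needs escaping or careful quoting at all, here is a small language-neutral sketch in Python (the citizen data is invented): raw JSON dropped into a double-quoted attribute is cut at the first double quote, while an htmlspecialchars-style escape survives the attribute and round-trips.

```python
import html
import json

citizens = [{"id": 7, "name": "Ann"}]
payload = json.dumps(citizens)           # contains double quotes

# Dropped straight into data-citizens="...", the first " ends the attribute,
# leaving truncated JSON, hence "Unexpected end of JSON input".
truncated = payload.split('"', 1)[0]
print(truncated)                         # [{

# htmlspecialchars-style escaping keeps the full payload in the attribute.
attr = html.escape(payload)              # " becomes &quot;
assert '"' not in attr
restored = json.loads(html.unescape(attr))
print(restored == citizens)              # True
```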
2018/03/19
580
2,083
<issue_start>username_0: In my application 2 linear layout are there. those layouts contain some text view ...etc data is came from the server. i have a requirement that when the data of Linear layout 1 is empty it goes hide and linear layout 2 is placed instead of linear layout 1 . ``` l1 = (LinearLayout) findViewById(R.id.lnrlgn); l2 = (LinearLayout) findViewById(R.id.lnrlgn1); if(l1.isEmpty){ l1.setVisibility(view.InVISIBLE); TranslateAnimation animate = new TranslateAnimation( 0, // fromXDelta 0, // toXDelta l1.getHeight(), // fromYDelta 0); // toYDelta animate.setDuration(5500); animate.setFillAfter(true); l1.startAnimation(animate); } ``` but it is not working [![enter image description here](https://i.stack.imgur.com/G9aHP.png)](https://i.stack.imgur.com/G9aHP.png)
2018/03/19
1,202
3,523
<issue_start>username_0: I'm doing a study of a text and I would like to obtain through a stylesheet XSLT a certain function (specifically, ) only if it is preceded by a full stop (). How can I put this condition on the XSLT stylesheet? I have tried with `previously sibling::`, but it does not return anything. Here are the stylesheet XSLT and the XML that I created: ``` xml version="1.0" encoding="UTF-8"? ### **COORDINADAS ADVERSATIVA** | Coordinadas adversativas | Libro | Capítulo | Folio | Columna | Línea comienzo | Línea final | | --- | --- | --- | --- | --- | --- | --- | | --- | --- | --- | --- | --- | --- | --- | ``` XML ``` xml version="1.0" encoding="UTF-8"? xml-model href="http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"? xml-model href="http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"? xml-stylesheet type="text/xsl" href="../18-03-2018%20TEI/XSLT/2_Puntuacion_de_la_oracion/5-coordinada-adversativa.xsl"? Title Publication Information Information about the source De abel et de su hermano que nascio con el. ANdados treynta annos & seys dias de quando el mundo fue criado. assi como dize mahestre luchas obispo de thuy & otros que acuerdan con el. fizieron adam & eua otro fijo. ¶ Et a este segundo fijo dixieron abel. Et abel segund los esponimientos dela biblia quiere dezir en el nuestro lenguage castellano. tanto como lloro o cosa que non es duradera. ¶ Onde dize otrossi gregorio enla glosa del genesis sobre este logar. que abel tanto quiere dezir como baho. por que assi como el baho se ua ayna & se esparze por ell ayre & non paresçe. assi fallescio ayna abel. & fues commo oyre des & non parescio mas. ¶ Et con este abel naçio otra hermana de un parto commo cayn et la suya. Et esta ouo nombre delbora que mues tra tanto como seguydora por que siguio asu hermano abel. 
Ca asi commo dizen unos uis co poco como el. Pero otros cuentan que uisco mucho. & que tanto commo seguidora quiere dezir delbora. Non por que de muerte ella siguiesse assu hermano. mas por quel siguio en iusticia & en bondat amando a dios. Ca departen que tanto uisco que alcanço asu hermano seth. que nasçio muchos annos despues que delbora. & que ella caso con el. Et desto diremos adelante. ```<issue_comment>username_1: Instead of ``` tei:phr[@function='adversative' and preceding-sibling::node()[1][self::tei:pc]] ``` try ``` tei:phr[@function='adversative' and preceding-sibling::tei:pc] ``` This gives the following output ``` Non por que de muerte ella siguiesse assu hermano . mas por quel siguio en iusticia & en bondat amando a dios ``` The following link [Concept XML XLST preceding-sibling and ancestor](https://stackoverflow.com/questions/12347412/concept-xml-xlst-preceding-sibling-and-ancestor) provides a good explanation on the `preceding-sibling` axis. Upvotes: 1 <issue_comment>username_2: The reason your current expression does not work is because there is a whitespace node between the `pc` node and the `phr`, which is what is being picked up by your expression. One way to solve this is to tell XSLT to ignore whitespace nodes under all elements, by using `xsl:strip-space` ``` ``` Alternatively, change your expression to only check elements, rather than nodes ``` ``` Upvotes: 1 [selected_answer]
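The whitespace-node explanation in the accepted answer can be made concrete with a quick sketch (Python's ElementTree here, purely for illustration; it stores the text between two sibling elements in the first element's `.tail`):

```python
import xml.etree.ElementTree as ET

# Indented TEI-like markup: the newline and spaces after <pc> form a
# whitespace-only text node sitting between <pc> and <phr>.
snippet = "<s><pc>.</pc>\n   <phr function=\"adversative\">mas</phr></s>"
s = ET.fromstring(snippet)
pc, phr = list(s)

# ElementTree keeps that intervening text node in pc.tail.
print(repr(pc.tail))          # '\n   '
assert pc.tail.strip() == ""  # whitespace only, yet still a node
```

That whitespace node is what `preceding-sibling::node()[1]` picked up instead of the `tei:pc` element, which is why either `xsl:strip-space` or an element-only test fixes the match.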
2018/03/19
671
2,278
<issue_start>username_0: Is there a way to store strings in a variable (javascript), like in php? ``` `$string = "Counting: "; for($x = 1; $x <= 10; $x++){ $string .= $x; } $string .= ":Counting END"; echo $string;` ``` I want to loop some JS object array between the string as I did above in php.<issue_comment>username_1: You can concatenate string in js by using `+` ```js var string = "Counting: "; for (var $x = 1; $x <= 10; $x++) { string += $x; } string += ":Counting END"; console.log(string); ``` Upvotes: 2 <issue_comment>username_2: In JavaScript ``` var string = "Counting: "; var x; for(x = 1; x <= 10; x++){ string += x; } string += ":Counting END"; document.write(string); ``` Upvotes: 2 <issue_comment>username_3: In answer to the question posed by the OP, yes, one may store strings in a JavaScript variable as one would in PHP. Translating such code from PHP to JavaScript usually requires little more than removing the "$" from a PHP variable and using the keyword `var` or possibly `let` to declare a JavaScript variable. However, one should note that JavaScript overloads the "+" operator, permitting its use for addition as well as concatenation depending on the context. PHP, in contrast, has a distinct operator for concatenation, namely the dot and for addition it reserves the plus sign. So, the PHP concatenation-assignment operator allows for code like the following: ``` php $res = 1; var_dump($res); $res .= 2; var_dump($res); </code ``` See [live code](https://3v4l.org/urh18#output) The concatenation-assignment operator creates a string context which allows for the variable's integer value to be coerced into the string "1". The operator appends the string "2", and assigns the final result to the variable which now contains a string value of "12". 
If you were to literally translate this code into JavaScript, you will get very different results, as follows: ```js var res = 1; console.log(res); res += 2; console.log(res); ``` In JavaScript, the "+=" always performs an addition-assignment when numbers are involved. To get the same results in JavaScript as in this PHP example, one may code as follows: ```js var res = 1; var str = res.toString(); str += "2"; console.log(str); ``` Upvotes: 0
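The operator difference discussed above can also be illustrated in a third language. Python, like JavaScript, reuses `+`/`+=` for both jobs, with the operand types deciding between addition and concatenation:

```python
# Numeric context: += adds, as JavaScript's overloaded + would.
res = 1
res += 2
print(res)        # 3

# String context: += concatenates, the effect PHP spells as .=
s = "Counting: "
for x in range(1, 11):
    s += str(x)
s += ":Counting END"
print(s)          # Counting: 12345678910:Counting END
```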
2018/03/19
931
3,092
<issue_start>username_0: If I have a list lets say `[1,2,3,4]`, how can I create a tuple of two lists so that the first list contains the odd elements and the second one - the even elements. How can I do this with a tail-recursion? *For example:* ``` Input : [1,2,3,4] Output : ([1,3],[2,4]) with tail recursion and not ranges. [|x<-...] ``` So far I have tried something like: ``` sumt::[Int]->([Int],[Int]) sumt []=([],[]) sumt (x:xs) | x `mod` 2==0 = x: (fst $ tupl xs) | otherwise = x: (snd $ tupl xs) where tupl []=([],[]) tupl (y:ys)=y:(tupl ys) //how can I put the condition here ? I need it //to be aware of both guard cases at each iteration ``` I basically need two local lists that are formed by each guard case and at the end they are placed in a tuple.<issue_comment>username_1: Tail recursive functions are ones in which the final result of a function is a call back to the same function. In Haskell, this means that the right-hand side of an equation must be a call to the function. So, for example, f (x:xs) n = f xs (n+1) is tail recursive, while f' (x:xs) = 1 + f' xs is not - because while there is a recursive call, it is not the result of the function. Instead, the evaluation of (+) is the result. In the case of your code, this means you need something like this: ``` sumt_tr :: [Int] -> ([Int],[Int]) sumt_tr xs = go xs ([],[]) where go [] (os,es) = (reverse os,reverse es) go (x:xs) (os,es) | x `mod` 2 == 0 = go xs (os,x:es) | otherwise = go xs (x:os,es) ``` Here, the function `go` (local to sumt\_tr) is tail recursive, because each equation for `go` directly calls `go` again. Notice that, in order to write `go` as tail recursive, I needed to accumulate the result by passing it as a second argument, to be returned when the end of the list is reached. 
Upvotes: 2 <issue_comment>username_2: The most straightforward way to do it with explicit recursion is to use a tail-recursive helper function with two accumulators for the result lists: ``` sumt :: [Int] -> ([Int], [Int]) sumt = go [] [] where -- Each recursive call is directly to ‘go’, -- so it’s tail-recursive. go os es (x:xs) | x `mod` 2 == 0 = go os (x:es) xs | otherwise = go (x:os) es xs -- In the base case, it returns a tuple with -- the accumulated results in the proper order. go os es [] = (reverse os, reverse es) ``` A much simpler way to do this is with the `partition` function from `Data.List`: ``` sumt :: (Integral a) => [a] -> ([a], [a]) sumt = partition odd ``` If you look at the definition of `partition`, it’s not implemented with explicit recursion, but with `foldr`. Here it is with `odd` inlined: ``` sumt = foldr select ([], []) where select x ~(os, es) | odd x = (x:os, es) | otherwise = (os, x:es) ``` This has the advantage of being *streaming*: it doesn’t include the O(n) step of reversing the accumulated lists at the end, it just builds the results incrementally. Upvotes: 4 [selected_answer]
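The accumulator pattern in the tail-recursive `go` above translates almost mechanically to other languages. A Python sketch (illustrative only; Python does not eliminate tail calls, so there a plain loop would do just as well):

```python
def sumt(xs):
    """Split xs into (odds, evens), accumulator style, like the Haskell `go`."""
    def go(odds, evens, rest):
        if not rest:
            # reverse because items were consed onto the front
            return (odds[::-1], evens[::-1])
        x, *more = rest
        if x % 2 == 0:
            return go(odds, [x] + evens, more)
        return go([x] + odds, evens, more)
    return go([], [], xs)

print(sumt([1, 2, 3, 4]))  # ([1, 3], [2, 4])
```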
2018/03/19
814
2,734
<issue_start>username_0: I have two variables in my dataset and I want to combine `college_grad` and `sex` to create `sex_grad`. The levels should be as follows: * Male Graduate * Female Graduate * Male Non-graduate * Female Non-graduate `collegegrad` has two levels: * Yes * No `sex` has two levels: * Male * Female How should I approach the required combination to support 4 levels? I know how to use `mutate` along with `ifelse`, but that creates only two variables.
2018/03/19
696
2,567
<issue_start>username_0: I need to write a `TextInput` value into the browser console, but when I try to do this I get an error as on the below screenshot or it gets erased automatically when I type in the username into `TextInput`. *Error:* [![enter image description here](https://i.stack.imgur.com/pMts2.png)](https://i.stack.imgur.com/pMts2.png). *My code:* ``` import React, { Component } from 'react'; import {TextInput , Button , View } from 'react-native'; export default class Profile extends Component { constructor(){ super() this.state = { text:'' } } handleChangeText = (typedText) => { this.setState({text:typedText}); console.log(this.state.text); } render() { return ( ); } } ``` What am I missing?<issue_comment>username_1: Try changing to the TextInput code to this (you're missing **onChangeText**): ``` ``` Btw, setState wouldn't change the value immediately. So putting the console.log right after this.setState() probably wouldn't show the correct value. Upvotes: 2 <issue_comment>username_2: Your TextInput handels the changes of your Text. ``` handleChangeText(text) { this.setState(text); // this is async console.log(this.state.text) // could be the old state } this.handleChangeText(text)} /> ``` You can find more about react-native's TextInputs [here](https://facebook.github.io/react-native/docs/textinput.html) Upvotes: 2 <issue_comment>username_3: Your `handleChangeText` should be associated with , not with . You need to pass `change handler` in `onChangeText` attribute. Please try as follows. ``` ``` Hope this will help. Upvotes: 2 <issue_comment>username_4: Your `handleChangeText` function should be used in `TextInput` [`onChangeText`](https://facebook.github.io/react-native/docs/textinput.html#onchangetext) prop so that you can update the state with the correct value. 
Change your code like below: ``` import React, { Component } from 'react'; import {TextInput , Button , View } from 'react-native'; export default class Profile extends Component { constructor(){ super() this.state = { text:'' } } handleChangeText = (typedText) => { this.setState({text:typedText}, () => { console.log(this.state.text); }); } handleSubmit = (event) => { // do something after submit } render() { return ( ); } } ``` **PS:** `this.setState` is an async function, so you need to use a callback to read the state value right after setting it, as shown in the code above. Upvotes: 3 [selected_answer]
2018/03/19
350
1,082
<issue_start>username_0: This is my query: ``` SELECT store_id as `initial`, CONCAT(',', store_id, ',') AS store_id FROM `mytable` AS `main_table` WHERE (`store_id` LIKE '%,13,%'); ``` These are the results without the `where` conditions: [![enter image description here](https://i.stack.imgur.com/mt7rW.png)](https://i.stack.imgur.com/mt7rW.png) When I executed my query, I got no results. Same for `WHERE (`store\_id` LIKE '%,3,%');` Is it some kind of rule or exception for the `,` and the `like` operator? Thank you :)<issue_comment>username_1: The problem is that the field `store_id` in the WHERE clause has its initial value, not the value you calculate with `CONCAT(',', store_id, ',') AS store_id`. If you want to filter on the calculated value, you can use the `HAVING` keyword. Upvotes: 0 <issue_comment>username_2: Using FIND\_IN\_SET fixed my problem. More info: <https://www.w3resource.com/mysql/string-functions/mysql-find_in_set-function.php>. Credit goes to @Jens ``` SELECT store_id as `store_id` FROM `ffm_nonpayment_actiontemplate` AS `main_table` WHERE FIND_IN_SET('13', store_id); ``` Upvotes: 2
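For readers puzzled by the original failure: the `WHERE` clause tests the raw column value (the `CONCAT` alias only affects the `SELECT` list), so the wrapped-comma pattern finds nothing when the id sits at an edge of the list. A small Python model of `FIND_IN_SET` (the sample value is invented) shows both points:

```python
def find_in_set(needle, csv):
    """Rough model of MySQL's FIND_IN_SET: 1-based position in a comma list, 0 if absent."""
    items = csv.split(",")
    return items.index(needle) + 1 if needle in items else 0

store_id = "13,25"                 # raw column value: no leading/trailing commas

# WHERE sees this raw value, so LIKE '%,13,%' needs a comma on BOTH sides
# of 13 and finds none here: the query returns no rows.
print(",13," in store_id)          # False

# FIND_IN_SET does exact membership against the comma list instead:
print(find_in_set("13", store_id)) # 1
print(find_in_set("3", store_id))  # 0  (no accidental match inside "13")
print(find_in_set("25", store_id)) # 2
```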
2018/03/19
545
1,957
<issue_start>username_0: HTML ---- ``` Test1 Test2 Test3 Test4 ``` jQuery ------ ``` $("form").submit(function( event ) { event.preventDefault(); $("form").attr("action", $(this).children(':selected').attr("data-action")); $(this).submit(); }); ``` Problem ------- Form action does not update and form does not submit?<issue_comment>username_1: create a separate button like this: HTML: ``` Submit! ``` JS/jQuery: ``` $(document).ready(function() { $('#submit-btn').on('click', function() { var action = $('form select option:selected').data('action'); $.ajax({ url: action, //rest of ajax code }) }) }); ``` instead of running the event on form submit, we execute the action via the button, which gets the action based on the selected option data-action value. Upvotes: 0 <issue_comment>username_2: Use a button instead of submit button and submit the form inside the click event handler of the button. Inside the click handler, before submitting the form, change the action of the form. I hope you can change the input type="submit" to type="button". Upvotes: -1 <issue_comment>username_3: I'll assume that you have a form and a submit button *(since it is missing in your codes)*, what your planning can also be done this way: Since you're using the attribute `data-`, you can get its value by using `data(nameOfData)` **Using submit button** ```js $("form").submit(function(e){ var newMethod = $(this).find(':selected').data('action'); $(this).attr('action', newMethod); }); ``` ```html Test1 Test2 Test3 Test4 Submit ``` **Using a simple button** ```js $("#btnSubmit").click(function(){ var newMethod = $("#selectElement :selected").data('action'); $(this).closest('form').attr('action', newMethod); $(this).closest('form').submit(); }); ``` ```html Test1 Test2 Test3 Test4 Submit ``` Upvotes: 1 [selected_answer]
2018/03/19
617
2,249
<issue_start>username_0: In my current project, I have an Oracle Database 11g with Java 1.6 installed on the database. My task is to connect over HTTPS to a web service provider from a database procedure (PL/SQL). For this task I have used a Java stored procedure (rather than the HTTP\_UTIL PL/SQL package) because I also need to sign XML with a certificate before sending. The whole process worked well until now (picture A). Now the web service provider has disabled TLS1.0 and only TLS1.1 and TLS1.2 are supported. This causes me problems because Java 1.6 does not support TLS1.1 and TLS1.2 and it's impossible to upgrade Java on the database side. The idea is to write some kind of web service proxy (picture B): [![enter image description here](https://i.stack.imgur.com/tHwug.jpg)](https://i.stack.imgur.com/tHwug.jpg) My idea is to make some kind of web service proxy (web-service to web-service communication over SSL) but I don't know if this is the right approach to take. Another question is what is the best (simple) way to do that? For web service deployment I have an Oracle WebLogic or Tomcat container. Thank you for any info. I can't get any support from Oracle about this scenario (consuming a web service from Oracle 11g over TLS1.1/TLS1.2).<issue_comment>username_1: You could use [Bouncy Castle](https://www.bouncycastle.org/java.html) as a JCE Provider, if it is possible to load additional libs into the JVM. Then you would have to use Bouncy Castle in your SSL Connection as described [here](https://stackoverflow.com/a/33375677/3161062). Otherwise you could upgrade [your database](https://docs.oracle.com/cd/E11882_01/network.112/e40393/asossl.htm#ASOAG9665) if you can somehow sign your XML in PL/SQL. Else you can also go for the other alternative you mentioned; as long as it is in a secured environment it should not be a problem. Maybe this can also simplify your setup since you do not have to sign your XML in the database but can rather do it in the WebLogic/Tomcat container. 
Upvotes: 2 [selected_answer]<issue_comment>username_2: Solved with: ``` SSLContext sslcontext = SSLContext.getInstance("TLS",new BouncyCastleJsseProvider()); ``` Now I have to upload the libraries to the Oracle database with loadjava. Upvotes: 0
2018/03/19
1,184
4,628
<issue_start>username_0: I have a REST API interface which only gets me the first level of some information. So for example I want to collect groups. Every Group can have subgroups. So for example "Group 1" has the Subgroups "Group A" and "Group B". "Group A" has the Subgroup "GroupX". And so on. But the API only gives me the first level of Groups for a group name. So I pass "Group 1" to the API and it returns "Group A" and "Group B". To get the subgroups of Group A, I need to call the API again. But I don't know how many iterations of this it will take. So I thought about using recursion but I haven't come far. So far my code: ``` getGroupChildren(group:string){ return this.restService.getGroupChildren(group)} getGroups():Promise{ let collection:string[] = []; return this.getGroupChildren("Group A").then((result)=> { if(result.data.length !==0){ return this.getGroupChildren(result.data[0].groupName); } }); } ``` Now this will only return me the first subgroups of the first element. How can I make it always find every subgroup, no matter how many there are? Maybe it is good to use Observables? Here is an example structure of one API call: ``` { "groupName" : "Group_1", "children" : ["Group_A", "Group_B"]} ```<issue_comment>username_1: You can achieve what you want with the `flatMap` operator of `Observable` ``` getGroups(group: string) { return this.http.get(`/group/{group}`).flatMap(response => { if (response.children.length === 0) { // you hit a leaf, stop recursion here return Observable.of(response); } else { // there are more levels to go deeper return this.getGroups(response.children[0].groupName); } }); } ``` **Edit** Using Promise Let's say you use a `GroupService` which returns the data instead of `HttpClient`. You can convert a `Promise` to an `Observable` with the `fromPromise` operator.
``` getGroups(group: string) { return Observable.fromPromise(this.groupService.get(group)).flatMap(response => { if (response.children.length === 0) { // you hit a leaf, stop recursion here return Observable.of(response); } else { // there are more levels to go deeper return this.getGroups(response.children[0].groupName); } }); } ``` **Edit 2** Using this service Let's take a look at your example. You have the following json ``` { "groupName": "Group_1", "children" : ["Group_A", "Group_B"] } ``` In your component file, you call the service as follows ``` ... this.recursiveGroupService.getGroups("Group_1") .subscribe(response => { // at this point response will be `Group_A` }) ``` **Edit 3** Getting the whole object This time we'll use `forkJoin` and call `getGroups` for all of the children and collect the results in a `children` array. **Note:** I haven't tested this code myself. It may contain some errors. If it does, let me know. ``` import { forkJoin, of } from 'rxjs'; import { map } from 'rxjs/operators'; getGroups(group: string) { let retVal; return Observable.fromPromise(this.groupService.get(group)).flatMap(response => { retVal = { groupName: response.groupName }; if (response.children.length === 0) { // you hit a leaf, stop recursion here return of(retVal); } else { // there are more levels to go deeper // this will create a list of observables, one for each child const children$ = response.children.map( child => this.getGroups(child)); // forkJoin will execute these observables in parallel return forkJoin(children$).pipe( map(results => { // results is an array containing the children's data retVal.children = results; return retVal; }) ); } }); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You could use `Promise.all` to recursively resolve deeper children, and then take the result (an array) to create an object to resolve the promise with: ``` getGroups(groupName = "Group A") { return this.getGroupChildren(groupName).then((result) => 
Promise.all(result.data.map( ({groupName}) => this.getGroups(groupName) )) ).then(children => ({ groupName, children })); } ``` So the promised value could be something like: ``` [{ groupName: "Group A", children: [{ groupName: "Group A1", children: [] }, { groupName: "Group A2", children: [] }] }] ``` Upvotes: 1
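Stripped of the async plumbing, the recursion in the accepted answer has a simple shape. A synchronous Python sketch (the group names and the dictionary-backed fake API are hypothetical stand-ins for the real service): each call fetches one level of children and recurses into every child, collecting the results into a `children` list, which is what the `forkJoin` version does in parallel.

```python
# Hypothetical stand-in for the one-level API in the question:
# each lookup returns only the direct children of a group.
FAKE_API = {
    "Group_1": ["Group_A", "Group_B"],
    "Group_A": ["Group_X"],
    "Group_B": [],
    "Group_X": [],
}

def get_groups(name, fetch=FAKE_API.get):
    """Resolve every level by recursing into each child, the way the
    forkJoin version collects all child results into `children`."""
    children = fetch(name) or []
    return {"groupName": name,
            "children": [get_groups(child, fetch) for child in children]}

tree = get_groups("Group_1")
print(tree["children"][0]["children"][0]["groupName"])  # Group_X
```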
2018/03/19
1,151
3,044
<issue_start>username_0: Folks, here is the xml: ``` declare @xml xml = Cast(' Bloomberg IS.Clipboard e7aa2033-0a53-4390-a09b-504673ea54bb New 2017-11-23T13:35:49.171696+03:00 603bc973-39d0-e711-9417-984be16869ec 2017-11-22T00:00:00 bloomberg\_bond Bloomberg XS0114288789 Equity CRNCY USD PAR\_AMT .475000000 CH0385518086 Equity CRNCY CHF PAR\_AMT 5000.000000000 ' as xml) ``` Here is a small select over it to grab the identifier column only: ``` select c.value('@identifier', 'varchar(50)') as identifier from @xml.nodes('asset_market_data_response/body/asset_list/asset/common') t(c) ``` But it gives no output (col name 'identifier' & no rows). What I need is: [![enter image description here](https://i.stack.imgur.com/MkWll.png)](https://i.stack.imgur.com/MkWll.png) What's wrong with my select? It's so simple, I cannot see what goes wrong. Maybe there is some problem with the xml itself?<issue_comment>username_1: Found the problem in the xml itself. Removed nodes ending with " />" + the xml schema references: ``` set @chr = Replace( Replace( Replace( Replace( Replace( Replace( Replace(@chr, ' xmlns="http://schemas.bcs.ru/marketing_data_service/in/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"', ''), ' xmlns="http://schemas.bcs.ru/is/clipboard/"', ''), '', ''), '', ''), '', ''), '', ''), '', '') ``` After this operation the resulting xml to parse is: ``` Bloomberg IS.Clipboard e7aa2033-0a53-4390-a09b-504673ea54bb New 2017-11-23T13:35:49.171696+03:00 603bc973-39d0-e711-9417-984be16869ec 2017-11-22T00:00:00 bloomberg\_bond Bloomberg XS0114288789 Equity CRNCY USD PAR\_AMT .475000000 CH0385518086 Equity CRNCY CHF PAR\_AMT 5000.000000000 ``` And the operable code to give the desired result is: ``` select Tab_ass.Col_ass.value('identifier[1]', 'varchar(50)') as identifier from @xml.nodes('asset_market_data_response/body/asset_list') as Tab(Col) cross apply Tab.Col.nodes('asset/common') Tab_ass(Col_ass) ``` This works too: ``` select t.c.value('identifier[1]', 
'varchar(50)') as identifier from @xml.nodes('asset_market_data_response/body/asset_list/asset/common') t(c) ``` And this too: ``` select t.c.query('./identifier').value('.', 'varchar(50)') as identifier from @xml.nodes('asset_market_data_response/body/asset_list/asset/common') t(c) ``` Upvotes: -1 <issue_comment>username_2: No, you should not use string methods to *repair* your XML; the XML itself is perfectly okay! You just have to declare the namespace involved: ``` WITH XMLNAMESPACES(DEFAULT 'http://schemas.bcs.ru/marketing_data_service/in/') select c.value('(identifier/text())[1]', 'varchar(50)') as identifier from @xml.nodes('asset_market_data_response/body/asset_list/asset/common') t(c); ``` Upvotes: 3 [selected_answer]
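The same pitfall exists in other XML APIs, so the accepted answer's point is easy to demonstrate outside SQL Server. A Python sketch using the standard library's ElementTree (with a trimmed-down stand-in for the document): a query that ignores the default namespace silently finds nothing, just like the unprefixed XPath did, until the namespace is declared.

```python
import xml.etree.ElementTree as ET

# A trimmed-down stand-in for the document in the question (same default namespace).
doc = """<asset_market_data_response xmlns="http://schemas.bcs.ru/marketing_data_service/in/">
  <body><asset_list><asset><common>
    <identifier>XS0114288789</identifier>
  </common></asset></asset_list></body>
</asset_market_data_response>"""

root = ET.fromstring(doc)

# Ignoring the default namespace finds nothing, just like the unprefixed XPath:
missing = root.find("body/asset_list/asset/common/identifier")

# Declaring it (the analogue of WITH XMLNAMESPACES) makes the query work:
ns = {"d": "http://schemas.bcs.ru/marketing_data_service/in/"}
ident = root.find("d:body/d:asset_list/d:asset/d:common/d:identifier", ns)
print(missing, ident.text)  # None XS0114288789
```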
2018/03/19
471
1,709
<issue_start>username_0: I want to make an event that runs every time the mouse clicks anywhere on the form. Currently I have it set like this: ``` this.MouseClick += new System.Windows.Forms.MouseEventHandler(this.Form_MouseClick); ``` But this only works when not clicking on any other element like a panel. Is there any way I can override this?<issue_comment>username_1: You can listen to `WndProc`; override the method in your form class: ``` protected override void WndProc(ref Message m) { //0x210 is WM_PARENTNOTIFY if (m.Msg == 0x210 && m.WParam.ToInt32() == 513) //513 is WM_LBUTTONCLICK { Console.WriteLine(m); //You have a mouse click (left) on the underlying user control } base.WndProc(ref m); } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You need to dynamically traverse through all the controls in the form and add the MouseClick event handler. Please check this answer: [Handling a Click for all controls on a Form](https://stackoverflow.com/questions/247946/handling-a-click-for-all-controls-on-a-form) The code below adds a MouseClick event handler to the first level of controls: ``` foreach (Control c in this.Controls) { c.MouseClick += new MouseEventHandler( delegate(object sender, MouseEventArgs e) { // handle the click here }); } ``` But if your controls have child controls then you will have to recursively add the event handler: ``` void initControlsRecursive(ControlCollection coll) { foreach (Control c in coll) { c.MouseClick += (sender, e) => {/* handle the click here */}); initControlsRecursive(c.Controls); } } /* ... */ initControlsRecursive(Form.Controls); ``` Upvotes: -1
2018/03/19
733
2,251
<issue_start>username_0: Is it possible to do a polynomial regression line on a scatter() in matplotlib? This is my graph: <https://i.stack.imgur.com/3ra9x.jpg> ``` alg_n = [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4...] orig_hc_runtime = [0.01, 0.02, 0.03, 0.04, 0.04, 0.04, 0.05, 0.09...] plt.scatter(alg_n, orig_hc_runtime, label="Orig HC", color="b", s=4) plt.scatter(alg_n, mod_hc_runtime, label="Mod HC", color="c", s=4) ... x_values = [x for x in range(5, n_init+2, 2)] y_values = [y for y in range(0, 10, 2)] plt.xlabel("Number of Queens") plt.ylabel("Time (sec)") plt.title("Algorithm Performance: Time") plt.xticks(x_values) plt.yticks(y_values) plt.grid(linewidth="1", color="white") plt.legend() plt.show() ``` Is it possible to have regression lines for each data set? If so, can you please explain how I can do it.
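For reference, the idiomatic way to overlay a polynomial regression line on a scatter plot is `numpy.polyfit` to get the coefficients and `numpy.poly1d` to evaluate them, plotted with `plt.plot` over each data set. To show what that fit actually computes, here is a pure-Python least-squares sketch (an illustration with made-up data, not the question's arrays): build the normal equations for the polynomial and solve them by Gaussian elimination.

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations (what
    numpy.polyfit does, in pure Python for illustration).
    Returns coefficients from the constant term upward."""
    n = degree + 1
    # Normal equations A c = b for the Vandermonde system.
    a = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        s = sum(a[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (b[i] - s) / a[i][i]
    return coeffs

# Exact quadratic data y = 1 + 2x + 3x^2 should be recovered:
coeffs = polyfit([0, 1, 2, 3, 4], [1, 6, 17, 34, 57], 2)
```

With numpy available, the equivalent is `coeffs = np.polyfit(alg_n, orig_hc_runtime, 2)` followed by `plt.plot(xs, np.poly1d(coeffs)(xs))` for each series (note that `np.polyfit` returns coefficients highest degree first, the reverse of this sketch).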
2018/03/19
871
3,379
<issue_start>username_0: I have a VBA macro (Word 2010) script to highlight all the text in italics. But when executed on a large file, say a document with more than 10 pages, Word crashes. I have used the below code for this purpose. ``` Sub Italics_Highlight() ' ' test_italics_highlight_ Macro ' ' Application.ScreenUpdating = False Dim myString As Word.Range Set myString = ActiveDocument.Content With myString.Find .ClearFormatting .Text = "" .Font.Italic = True While .Execute myString.HighlightColorIndex = wdTurquoise myString.Collapse wdCollapseEnd Wend End With MsgBox "Thank you!" End Sub ``` Could you please help me to overcome this? Thanks for your help in advance.<issue_comment>username_1: Your error description looks like your code is running forever and doesn't finish. 1. You might want to add a `DoEvents` inside your `While` loop to keep Word responsive while running the code. ``` With myString.Find .ClearFormatting .Text = "" .Font.Italic = True While .Execute DoEvents 'keeps Word responsive myString.HighlightColorIndex = wdTurquoise myString.Collapse wdCollapseEnd Wend End With ``` 2. I'm not sure if your code will ever stop. The loop might not stop at the end of the document but start again from the beginning, and therefore always find something italic again and again, looping forever. So you might need to set `.Wrap = wdFindStop` to stop at the end of the document. See [Find.Wrap Property (Word)](https://msdn.microsoft.com/en-us/vba/word-vba/articles/find-wrap-property-word). ``` With myString.Find .ClearFormatting .Text = "" .Font.Italic = True .Wrap = wdFindStop 'stop at the end of the document While .Execute DoEvents 'keeps Word responsive myString.HighlightColorIndex = wdTurquoise myString.Collapse wdCollapseEnd Wend End With ``` Upvotes: 2 <issue_comment>username_2: You don't need to stop at each "found" and apply highlighting. 
You can do it as part of a Find/Replace: ``` Sub testInfiniteLoop() Dim myString As word.Range Set myString = ActiveDocument.content Options.DefaultHighlightColorIndex = wdTurquoise With myString.Find .ClearFormatting .Text = "" .Font.Italic = True .Replacement.Text = "" .Replacement.Highlight = wdTurquoise .wrap = wdFindStop 'stop at the end of the document .Execute Replace:=wdReplaceAll End With End Sub ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: The following code not only highlights but also restores whatever highlight settings were previously in force: ``` Sub Italics_Highlight() Application.ScreenUpdating = False Dim i As Long: i = Options.DefaultHighlightColorIndex Options.DefaultHighlightColorIndex = wdTurquoise With ActiveDocument.Content.Find .ClearFormatting .Replacement.ClearFormatting .Text = "" .Replacement.Text = "^&" .Replacement.Highlight = True .Format = True .Font.Italic = True .Wrap = wdFindContinue .Execute Replace:=wdReplaceAll End With Options.DefaultHighlightColorIndex = i Application.ScreenUpdating = True MsgBox "Done!" End Sub ``` As you can see, you also don't need: ``` Dim myString As Word.Range Set myString = ActiveDocument.Content ``` Upvotes: 0
2018/03/19
169
691
<issue_start>username_0: I am trying to figure out if the Oracle JDBC driver supports SOCKS proxy or not. I am not finding any documentation related to this. Please let me know if you are aware.<issue_comment>username_1: No, the Oracle JDBC driver doesn't support SOCKS5 proxy. In the soon-to-be-released 18.1 version of the thin driver there will be support for HTTPS proxy and websocket. Upvotes: 1 [selected_answer]<issue_comment>username_2: Yes, but only when disabling Java NIO for JDBC by passing `-Doracle.jdbc.javaNetNio=false` to the JVM or setting the property programmatically. You can then use the standard `socksProxyHost` and `socksProxyPort` properties to configure the proxy. Upvotes: 1
2018/03/19
367
988
<issue_start>username_0: I have a list of numbers: ``` Numbers = [1, 34, -45] ``` I want to create a function that returns the width of the number with the largest number of characters: For example: ``` Max_width([1, 34, -45]) ``` Output: ``` 3 ```<issue_comment>username_1: Use this: ``` def max_width(a): return max(len(str(x)) for x in a) print(max_width([1, 34, -45])) # 3 ``` I don't think any explanation is needed here, but, here you go: 1. `str(x)` converts the `int` to a string. (i.e. `str(-45) = '-45'`) 2. `len(x)` returns the length of the string (or, width, as you call it) 3. `x for x in a` simply iterates over the list `a` 4. `max()` returns the max value from the list Upvotes: 2 <issue_comment>username_2: You can try this one too, using the `map` function: ``` def Max_width(numbers): return len(max(map(str,numbers), key=len)) Numbers = [1, 34, -45] print("Max length: ", Max_width(Numbers)) ``` Output: ``` Max length: 3 ``` Upvotes: 1
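One edge case worth noting: both versions above raise a `ValueError` on an empty list. A small variant (an addition for completeness, not from the answers) using `max`'s `default` parameter avoids that:

```python
def max_width(numbers):
    """Width of the widest number when rendered as a string;
    returns 0 for an empty list instead of raising ValueError."""
    return max((len(str(n)) for n in numbers), default=0)

print(max_width([1, 34, -45]))  # 3
print(max_width([]))            # 0
```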
2018/03/19
268
954
<issue_start>username_0: Whenever I move the cursor left or right using h or l, the fold under the cursor opens automatically. By the way, moving top or down does not have this problem. Is there any way to prevent automatically opening fold when moving horizontally?<issue_comment>username_1: You can create the auto command: ``` autocmd! CursorMoved * if foldclosed('.') != -1 | \ nnoremap h | \ nnoremap l | \ else | \ silent! unmap h| \ silent! unmap l| \ endif ``` Here `foldclosed('.')` returns `-1` if current line is unfolded. Instead of using this auto command just avoid pressing `h` or `l` over folds. Upvotes: 2 <issue_comment>username_2: The default value of [`:help 'foldopen'`](https://vimhelp.appspot.com/options.txt.html#'foldopen') has `hor` in it; this causes horizontal movements (like `l`, `h`) to open folds. To disable this, simply add this to your `~/.vimrc`: ``` set foldopen-=hor ``` Upvotes: 1
2018/03/19
565
2,203
<issue_start>username_0: I have a list in C#. List where the User object has a few parameters: username, age, something like that. In the list there are duplicate (only twice) entities according to the username. Even though the usernames are the same, the other attributes are not. How can I merge those elements and remove the duplicated elements in that list? P.S: Even though there are duplicate entities according to the username, the attributes are empty in one element while the other element has the values for those attributes.<issue_comment>username_1: You can use an IEqualityComparer ``` internal class UserEqualChecker : IEqualityComparer { public bool Equals(User x, User y) { //Code for what makes them equal //for instance return x.UserName.Equals(y.UserName, System.StringComparison.OrdinalIgnoreCase); } //..... } ``` And then... ``` var list = new List(); //put the data into the list... list.Distinct(new UserEqualChecker()); ``` This way, you have a reusable comparer Upvotes: 1 <issue_comment>username_2: ``` var duplicates = Users .GroupBy(u => u.UserName) .Where(g => g.Count() > 1) .ToList(); ``` Each member is now an IEnumerable with the same UserName ``` foreach(var duplicate in duplicates) { // write some logic to combine >= 2 Users // and remove all but 1 from original Users // a rough idea: var main = duplicate.First(); foreach(var user in duplicate.Skip(1)) { // merge user with main .... toDeleteList.Add(user); } } ``` Upvotes: 2 <issue_comment>username_3: You can group your list using linq and then create a new object with your merged data: ``` var merged = from item in mylist group item by item.UserName into grp select new YourClass { Username = grp.Key, Property1 = grp.Where(g => g.Property1 != null).Select(g => g.Property1).FirstOrDefault(), Property2 = ... } ``` This assumes usernames are case-sensitive; you can change the grouping by using UserName.ToUpper() or something similar... As @HenkHolterman said, you have to define how to select values for your properties. 
The rule I wrote for Property1 is only an example... Upvotes: 0
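The "write some logic to combine" step in the second answer is language-neutral, so here it is sketched in Python (the field names are hypothetical): group by username and, for each attribute, keep the first non-empty value seen. Note the sketch treats `0`, `None`, and `""` all as empty, which may or may not be what you want.

```python
def merge_users(users):
    """Merge duplicate users by name: for each attribute,
    keep the first non-empty value seen."""
    merged = {}
    for user in users:
        # First occurrence becomes the merge target; later ones fill gaps.
        target = merged.setdefault(user["name"], dict(user))
        for key, value in user.items():
            if not target.get(key):
                target[key] = value
    return list(merged.values())

users = [
    {"name": "bob", "age": 30, "email": ""},
    {"name": "bob", "age": 0, "email": "bob@example.com"},
    {"name": "eve", "age": 25, "email": "eve@example.com"},
]
result = merge_users(users)
print(result[0])  # {'name': 'bob', 'age': 30, 'email': 'bob@example.com'}
```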
2018/03/19
1,437
4,319
<issue_start>username_0: I followed the examples that I found but for some reason clicking on the above div won't trigger a click on the below input. Can someone tell me where I'm going wrong? ```js $(document).on('click', '#uploader', function(event) { $("#url").click(); }); ``` ```css #uploader { width: 480px; height: 100px; line-height: 100px; border: 2px dashed #443d66; cursor: pointer; color: #777; font-family: 'Arial'; font-weight: bold; text-align: center; margin-bottom:10px; } ``` ```html Click ```<issue_comment>username_1: Instead of this ``` $(document).on('click', '#uploader', function(event) { $("#url").click(); }); ``` Use it like this ``` $(document).on('click', '#uploader', function(event) { $("#url").trigger('click'); }); ``` Upvotes: 1 <issue_comment>username_2: I assume that on click of the div ***you want to show the file upload window.*** For that, convert `type="text"` to `type="file"` and it will work fine. ***Working snippet:-*** ```js $(document).on('click', '#uploader', function(event) { $("#url").click(); }); ``` ```css #uploader { width: 480px; height: 250px; line-height: 250px; border: 2px dashed #443d66; cursor: pointer; color: #777; font-family: 'Arial'; font-weight: bold; text-align: center; } ``` ```html Click ``` ***Note:-*** Your code is working fine, but there is no click handler written to catch that event. 
So nothing happens. You can verify this by adding a click handler to your code like below:- ```js $(document).on('click', '#uploader', function(event) { $("#url").focus(); // you can apply .click() too }); /* for click, add the event handler like below $('#url').click(function(){ $(this).val('Hey click worked!').focus(); }); */ ``` ```css #uploader { width: 480px; height: 250px; line-height: 250px; border: 2px dashed #443d66; cursor: pointer; color: #777; font-family: 'Arial'; font-weight: bold; text-align: center; } ``` ```html Click ``` Upvotes: 2 <issue_comment>username_3: If you need the click to focus on the `#url` element, use [`.focus()`](https://api.jquery.com/focus/) instead of [`.click()`](https://api.jquery.com/click/): ```js $(document).on('click', '#uploader', function(event) { $("#url").focus(); }); ``` ```css #uploader { width: 480px; height: 250px; line-height: 250px; border: 2px dashed #443d66; cursor: pointer; color: #777; font-family: 'Arial'; font-weight: bold; text-align: center; } ``` ```html Click ``` Upvotes: 3 [selected_answer]<issue_comment>username_4: Are you missing a click event for the *#url* element? When triggering a click event for an element using *.click()* you have to define an actual click event for it. ``` $(document).on('click', '#uploader', function(event) { $("#url").click(); }); $("#url").click(function(){ alert('asd'); }); ``` This could be the only reason, unless the scripts aren't loaded properly, or you are getting some errors while running your JS. See: [JSFiddle](https://jsfiddle.net/h1frdhma/) In case you want the click event to focus on your input element, you'd use *.focus()*, as described in a previous answer written by Michael Doye. 
See: [JSFiddle](https://jsfiddle.net/44ptdre6/) Upvotes: 1 <issue_comment>username_5: ```js $(document).on('click', '#uploader', function(event) { $("#url").click(); }); ``` ```css #uploader { width: 480px; height: 250px; line-height: 250px; border: 2px dashed #443d66; cursor: pointer; color: #777; font-family: 'Arial'; font-weight: bold; text-align: center; } ``` ```html Click ``` I guess you want to open the native file browser window on click. Check the snippet; there is a click handler attached to the input too. You can use this for further processing. Upvotes: 2 <issue_comment>username_6: > > **Click & Focus, just modify a bit of code** > > 
2018/03/19
590
2,206
<issue_start>username_0: I recently installed **Node** on my MacBook running `High Sierra 10.13` using the `.pkg` file supplied on their website. After a few hours of experimenting with installing other packages and writing scripts, I decided I would like to uninstall both Node **and** NPM to get a fresh start. I tried all of the top answers from [this](https://stackoverflow.com/questions/11177954/how-do-i-completely-uninstall-node-js-and-reinstall-from-beginning-mac-os-x) thread, but to my dismay, after having followed all instructions, and repeated all steps many times, terminal would still recognise the Node and NPM versions. I ended up running a `bash` script through terminal which I found on the same thread, which ended up doing nothing but downgrading my current version of Node to `0.10.8` - making it a lot more difficult to delete in the long run. I've re-done all of the steps from the aforementioned thread to no avail, with terminal still stating that it has version `0.10.8` (and **NPM** just completely not working at all). I'm currently at a loss as to what to do, so hopefully someone on here can help me with my problem. Cheers.<issue_comment>username_1: Try running the following command ``` brew uninstall node ``` After the above command, you need to scan manually for node\_modules if it exists. Try the following. ``` grep -irl "node_modules/node" sudo rm -rf result_from_above_command rm -rf ~/.npm ``` I hope this will remove all of Node and its components. I have done this once before. Thank you. Upvotes: 0 <issue_comment>username_2: After searching through `Google` and `StackOverflow` for hours, I finally came up with a solution to the problem on my own. Running the `type` command within **terminal** against `node`, I got this returned: ``` :~ myusername$ type node node is /Users/myusername/.nvm/v0.10.48/bin/node ``` Subsequently, after deleting that folder, Node appears to be completely removed from my system. 
--- I have since made sure that I have deleted all `node` and `node_module` folders that I could find within `/usr/` to make sure - and I would suggest that anyone attempting this also do the same. Upvotes: 4 [selected_answer]
2018/03/19
384
1,505
<issue_start>username_0: I know that it is possible using `@SqlResultSetMapping`, but I want to select not the whole entity from the database but only some fields, and then map them to my entity using one of the constructors which accepts those fields. Is it possible to map the result with `@EntityResult` for only a few `@FieldResult`s? I was trying to do that and every time I got an error which said that there is no mapping specified for some fields which exist in that entity.