2018/03/22
890
2,439
<issue_start>username_0: I am trying to integrate over a multivariate distribution in python. To test it, I built this toy example with a bivariate normal distribution. I use `nquad()` in order to extend it to more than two variables later on. Here is the code: ``` import numpy as np from scipy import integrate from scipy.stats import multivariate_normal def integrand(x0, x1, mean, cov): return multivariate_normal.pdf([x0, x1], mean=mean, cov=cov) mean = np.array([100, 100]) cov = np.array([[20, 0], [0, 20]]) res, err = integrate.nquad(integrand, [[-np.inf, np.inf], [-np.inf, np.inf]], args=(mean, cov)) print(res) ``` The result I get is `9.559199162933625e-10`. Obviously, this is incorrect. It should be (close to) 1. What is the problem here?<issue_comment>username_1: scipy's nquad does numerical integration only on bounded rectangular domains. The fact that your integral converges at all is due to the `exp(-r^2)`-type weight of the PDF (see [here](https://docs.scipy.org/doc/scipy-1.0.0/reference/generated/scipy.stats.multivariate_normal.html) for its explicit form). Hence, you need [Hermite quadrature](https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature) in 2D. [Some](https://www.ams.org/journals/mcom/1969-23-108/S0025-5718-1969-0258281-4/home.html) [articles](https://www.ams.org/journals/mcom/1963-17-082/S0025-5718-1963-0161473-0/home.html) exist on this topic, and [quadpy](https://github.com/nschloe/quadpy#2d-space-with-weight-function-exp-r2) (a project of mine) implements those. You'll first need to bring your integral into a form that contains the exact weight `exp(-r**2)` where `r**2` is `x[0]**2 + x[1]**2`. Then you cut this weight and feed it into quadpy's e2r2 quadrature: ``` import numpy import quadpy def integrand(x): return 1 / numpy.pi * numpy.ones(x.shape[1:]) val = quadpy.e2r2.integrate( integrand, quadpy.e2r2.RabinowitzRichter(3) ) print(val) ``` ``` 1.0000000000000004 ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: A bit off-topic, but you should use the following routine instead (it is quite fast): ``` from scipy.stats.mvn import mvnun import numpy as np mean = np.array([100, 100]) cov = np.array([[20, 0], [0, 20]]) mvnun(np.array([-np.inf, -np.inf]), np.array([np.inf, np.inf]), mean, cov) ``` Or use `multivariate_normal.cdf` and do the substractions. Upvotes: 2
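A workaround that stays inside SciPy, for reference: the infinite-bounds call fails because the adaptive quadrature never resolves the narrow peak around (100, 100), but integrating over a finite box a few standard deviations wide recovers the expected value. This is only a sketch and assumes the same mean and covariance as in the question.

```python
import numpy as np
from scipy import integrate
from scipy.stats import multivariate_normal

mean = np.array([100.0, 100.0])
cov = np.array([[20.0, 0.0], [0.0, 20.0]])

def integrand(x0, x1):
    return multivariate_normal.pdf([x0, x1], mean=mean, cov=cov)

# mean +/- 10 standard deviations contains essentially all of the probability mass
sigma = np.sqrt(20.0)
lo, hi = 100.0 - 10.0 * sigma, 100.0 + 10.0 * sigma

res, err = integrate.nquad(integrand, [[lo, hi], [lo, hi]])
print(res)  # ~1.0
```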
2018/03/22
353
1,199
<issue_start>username_0: I have below columns in book\_history table. Every new book taken will have entry in this table with a new auto increment id, current date, student id and book id. ``` Id, RecordCreatdOn, StudentId, BookId ``` I want to get students (studentId) who have last taken any book before '2017-12-31'. Can anyone help me with the query to fetch the same ?<issue_comment>username_1: That is straight forward: ``` Select Distinct StudentID from book_history where RecordCreatedOn < @yourDate ``` Upvotes: 0 <issue_comment>username_2: You need to group by the student to get every students last book with `max(RecordCreatdOn)` ``` select studentId from book_history group by studentId having max(RecordCreatdOn) < '2017-12-31' ``` Upvotes: 2 <issue_comment>username_3: If you google 'mysql date comparisons', you'll get a LOT of Stackoverflow examples. Look at answers such as: [mysql date comparison with date\_format](https://stackoverflow.com/questions/13507642/mysql-date-comparison-with-date-format) That can give you at least somewhere to start. Try this query: ``` Select StudentId from book_history WHERE DATE(RecordCreatdOn) <= '2017-12-31' ``` Upvotes: 0
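To make the difference between the two suggested queries concrete, here is a hedged side-by-side using the column names from the question (the cutoff date is just the example value): the `GROUP BY`/`HAVING MAX(...)` form answers "last taken any book before the date", while a plain `WHERE` filter also returns students who borrowed again after the cutoff.

```sql
-- Students whose most recent checkout is before the cutoff
SELECT StudentId
FROM book_history
GROUP BY StudentId
HAVING MAX(RecordCreatdOn) < '2017-12-31';

-- By contrast, this returns every student with at least one checkout before
-- the cutoff, even if they borrowed another book afterwards
SELECT DISTINCT StudentId
FROM book_history
WHERE RecordCreatdOn < '2017-12-31';
```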
2018/03/22
849
3,099
<issue_start>username_0: I am pretty new to AWS Dynamodb. I am using python's boto3 to fetch all items of a particular attribute (say, Attribute name is 'Name') from the dynamodb table. Although there are other attributes too in the table like 'Email', 'Profession'. I need to fetch or get all items only of the attribute 'Name'. My Name attribute consists of four items : Nick, John, Gary, Jules. How can I fetch this using boto3 ? I tried with client.query method of boto3 but I am not sure if it works.<issue_comment>username_1: If you have DynamoDB table 'Test' as follows: [![enter image description here](https://i.stack.imgur.com/GxUbV.png)](https://i.stack.imgur.com/GxUbV.png) To fetch all items with attribute 'Name', use the following code: ``` from __future__ import print_function # Python 2/3 compatibility import boto3 import json import decimal # Helper class to convert a DynamoDB item to JSON. class DecimalEncoder(json.JSONEncoder): def default(self, o): if isinstance(o, decimal.Decimal): if o % 1 > 0: return float(o) else: return int(o) return super(DecimalEncoder, self).default(o) # us-east-1 is the region name here dynamodb = boto3.resource('dynamodb', 'us-east-1') # Test is the table name here table = dynamodb.Table('Test') # Table scan response = table.scan() for i in response['Items']: # get all the table entries in json format json_str = json.dumps(i, cls=DecimalEncoder) #using json.loads will turn your data into a python dictionary resp_dict = json.loads(json_str) # Getting particular column entries # will return None if 'Name' doesn't exist print (resp_dict.get('Name')) ``` Sample output: [![enter image description here](https://i.stack.imgur.com/U7bQZ.png)](https://i.stack.imgur.com/U7bQZ.png) Upvotes: 2 <issue_comment>username_2: Let's assume `User` is table name from where you want to fetch only `NAME` attribute. First scan the table and iterate over it and get `NAME` attribute and store in a list. Here I store the values of `NAME` attribute in a list named `nameList` ``` import boto3 import json def getNames(): dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('User') response = table.scan() nameList = [] for i in response['Items']: nameList.append(i['NAME']) return nameList ``` Upvotes: 1 <issue_comment>username_3: Not sure if it is late to answer but you can use something like "ProjectionExpression" to get just the name attribute from Dynamodb : For example in your case you should use something like ``` tableparam = { 'ProjectionExpression':"Name" } reponse = tablename.scan(**tableparams) ``` It worked for me. Let me know if it helps you. Upvotes: 2 <issue_comment>username_4: you can use AttributesToGet when you use the 'scan' function. ``` import boto3 import json def getNames(): dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('User') response = table.scan(AttributesToGet=['name']) return response['Items'] ``` Upvotes: 3
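Two practical details are worth adding to the answers above, shown as a sketch that reuses the table name and region from the thread: `scan` returns at most 1 MB per call, so the loop follows `LastEvaluatedKey`, and since `Name` may collide with a DynamoDB reserved word it is aliased through `ExpressionAttributeNames`.

```python
import boto3

def get_names(table_name="Test", region="us-east-1"):
    """Return every value of the 'Name' attribute, paging through the full scan."""
    table = boto3.resource("dynamodb", region_name=region).Table(table_name)
    names = []
    kwargs = {
        "ProjectionExpression": "#n",                # fetch only this attribute
        "ExpressionAttributeNames": {"#n": "Name"},  # alias around reserved words
    }
    while True:
        response = table.scan(**kwargs)
        names.extend(item["Name"] for item in response["Items"] if "Name" in item)
        if "LastEvaluatedKey" not in response:       # no more pages
            break
        kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]
    return names
```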
2018/03/22
1,689
4,902
<issue_start>username_0: ``` let receipts_array =["Anna 0.4","Peter 0.25","Anna 0.5","Peter 0.5","Peter 0.33"]; // ``` how can I split this array so I can have for example the first index to be just Anna and second 0.4, because i need to sum the numbers and then see who won **output is just Peter**<issue_comment>username_1: Assuming the posted data sample, you can use the function `split` and function `reduce`. ```js let receipts_array = ["Anna 0.4", "Peter 0.25", "Anna 0.5", "Peter 0.5", "Peter 0.33"]; var result = receipts_array.reduce((a, c) => { var [name, number] = c.split(/\s+/); a[name.trim()] = (a[name.trim()] || 0) + +number.trim(); return a; }, {}); var winner = Object.keys(result).sort((a, b) => result[a] - result[b]).pop(); console.log(result); console.log(winner); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 0 <issue_comment>username_2: You could use an object for summing the values of the names and reduce the keys by checking the values. ``` array.forEach(s => { var [k, v] = s.split(' '); count[k] = (count[k] || 0) + +v; }); ``` It means take every item of `array` as `s`, split that string by space and use a [destructuring assignment](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment) of the array into two items with the name `k` and `v` as key and value. Then use `k` as key of the object `count`, get this value and if not given take zero as a default value. Then add the value by taking an [unary plus `+`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Arithmetic_Operators#Unary_plus) for converting the string to a number. Later assign the sum to the property `k` of `count`. ```js var array = ["Anna 0.4", "Peter 0.25", "Anna 0.5", "Peter 0.5", "Peter 0.33"], count = Object.create(null); array.forEach(s => { var [k, v] = s.split(' '); count[k] = (count[k] || 0) + +v; }); console.log(Object.keys(count).reduce((a, b) => count[a] > count[b] ? a : b)); console.log(count); ``` For expected same values, you could return an array with the winner names. ```js var array = ["Anna 0.4", "Peter 0.25", "Anna 0.5", "Peter 0.5", "Peter 0.33", "Foo 1.08"], count = Object.create(null), winner; array.forEach(s => { var [k, v] = s.split(' '); count[k] = (count[k] || 0) + +v; }); winner = Object .keys(count) .reduce((r, k) => { if (!r || count[k] > count[r[0]]) { return [k]; } if (count[k] === count[r[0]]) { r.push(k); } return r; }, undefined); console.log(winner); console.log(count); ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: You can map your array to another array with the itens splitted, like so: `const resulting_array = receipts_array.map(x => ({name: x.split(' ')[0], points: +x.split(' ')[1]}))` Then you can sort your resulting array and get the first place. `const first_place = resulting_array.sort((a, b) => a.points < b.points ? 1 : -1)[0]` this way you'll have the object with the person with the highest pontuation. `console.log(first_place.name)` I'd recommend you to store the info in an array of objects like the one gotten from the map function thoug. 
Upvotes: 0 <issue_comment>username_4: I used **[`Array.prototype.forEach`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach)** to traverse array then split all element by space using **[`split`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/split)** and then create one object that will hold value total score for different players and based on that score choose the winner. ```js var receipts_array =["Anna 0.4","Peter 0.25","Anna 0.5","Peter 0.5","Peter 0.33"]; var x=[],max=-10000,winner; receipts_array.forEach(function(e){ var y=e.split(" "); //split the value if(!x[y[0]]){ //check if object with key exist or not x[y[0]]=parseFloat(0); //if object not exist create one } x[y[0]]+=parseFloat(y[1]); // add score of player if(x[y[0]]>max){ //compare for max score max=x[y[0]]; winner=y[0]; } }); console.log(winner); ``` Upvotes: 0 <issue_comment>username_5: You can create a **dictionary** whose keys are the names. ```js let receipts_array = ["Anna 0.4", "Peter 0.25", "Anna 0.5", "Peter 0.5", "Peter 0.33"]; var result = receipts_array.reduce((obj, c) => { var [name, number] = c.split(/\s+/); obj[name] = (obj[name] || 0) + parseFloat(number); return obj; },{}); console.log(result); var winner = Object.keys(result).reduce(function(a, b){ return result[a] > result[b] ? a : b }); console.log('The winner is ' + winner) ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 0
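A more compact variant of the same approach, assuming the "Name value" format used above: accumulate the totals into one object with `reduce`, then pick the entry with the largest sum.

```js
const receipts = ["Anna 0.4", "Peter 0.25", "Anna 0.5", "Peter 0.5", "Peter 0.33"];

// { Anna: 0.9, Peter: 1.08 }
const totals = receipts.reduce((acc, entry) => {
  const [name, value] = entry.split(" ");
  acc[name] = (acc[name] || 0) + parseFloat(value);
  return acc;
}, {});

// keep the entry with the highest total; index 0 of the entry is the name
const [winner] = Object.entries(totals).reduce((best, cur) => (cur[1] > best[1] ? cur : best));

console.log(totals);
console.log(winner); // "Peter"
```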
2018/03/22
388
1,530
<issue_start>username_0: We are an ESP provider. We send messages for our clients in HTML format. This week one of our clients complained that zero values ( 0 ) started being stripped from the email content by Outlook 2016. This case is only relevant when 0 is located in ``` 0 | ``` or in ``` 0 | ``` In this case message source when opened in Outlook is: ``` | ``` or ``` | ``` Apparently, zeroes are being removed at the exchange server level since zeros are displayed neither on desktop nor on mobile devices for the same account. Sending encoded zero as html entity fixes the issue: ``` 0 | ``` In this case message source when opened in Outlook is: ``` 0 | ``` Could you please help me to identify what causes zero removals in html tables? Can we control it or this is a recent Microsoft bug? Client confirmed that this issue started happening on Monday 03-19-2018 and emails rendered fine in previous weeks. System administrators confirmed that they did not run any updates during the weekend. Please let me know if you encountered similar issue and if you found a solution to it.<issue_comment>username_1: Apparently it has nothing to do with exchange server. We researched email message headers and found that the client use Proofpoint software which filters created that issue. Upvotes: 0 <issue_comment>username_2: We had this same issue, we decided to separate the td tag in different rows, as several client use different Proofpoint configuration. ``` 0 | ``` Upvotes: -1
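For reference, a hypothetical before/after of the workaround described above (the original templates are not shown in the thread): the literal `0` inside a table cell was the character being stripped, while the numeric character reference renders identically and passes through the filter.

```html
<table>
  <tr>
    <td>0</td>     <!-- literal zero: arrived empty after filtering -->
    <td>&#48;</td> <!-- the same character as an HTML entity: displayed correctly -->
  </tr>
</table>
```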
2018/03/22
700
1,847
<issue_start>username_0: I'm creating a dataframe containing the number of incidents of a certain kind in each state in each year from 2000 to 2010 (pretend that they are gun incidents): ``` states <- c('Texas', 'Texas', 'Arizona', 'California', 'California') incidents <- c(1, 1, 2, 1, 4) years <- c(2000, 2008, 2004, 2002, 2007) DF <- data.frame(states, incidents, years) > DF states incidents years 1 Texas 1 2000 2 Texas 1 2008 3 Arizona 2 2004 4 California 1 2002 5 California 4 2007 ``` I want to insert rows to complete the dataset, e.g. zeros for Texas for 2001, 2002, 2003, ... 2007, and for 2009 and 2010. And likewise, zeros for Arizona for all years except 2004. Same thing for California. How can I do this?<issue_comment>username_1: You can use `tidyr::complete` to fill in missing years (`2010:2010`) and values with `0`. ``` library(tidyr) DFfilled <- DF %>% complete(states, years = 2000:2010, fill = list(incidents = 0)) %>% as.data.frame() ``` **PS:** If there are entries with year `2010` in your data (now it's only up to `2008`) you can use `full_seq(years, 1)` instead of `2000:2010`. Upvotes: 5 [selected_answer]<issue_comment>username_2: I would do it by creating an artifical `data.frame` and `merge` this data.frame with `DF`: ``` states <- c('Texas', 'Texas', 'Arizona', 'California', 'California') incidents <- c(1, 1, 2, 1, 4) years <- c(2000, 2008, 2004, 2002, 2007) DF <- data.frame(states, incidents, years) tmp <- data.frame(years=rep(seq(min(DF$years), max(DF$years)), each=length(unique(DF$states))), states=unique(DF$states) ) DF2 <- merge(DF, tmp, by=c('years','states'),all=T) DF2[is.na(DF2$incidents),]$incidents <- 0 ``` Upvotes: 0
2018/03/22
591
1,999
<issue_start>username_0: I am studying economics and we just started learning with R in RStudio and we got homework, I am really bad at this, I have almost all other tasks done but I have no idea b idea how to do this (sorry if it is pretty simple) We got some data and we should estimate this regression function (instead u there should be e): [Function](https://i.stack.imgur.com/HNs83.jpg) So far I have this: ``` tabulka = read.table("data.txt", header = TRUE, sep= "") regrese2 = lm(log(Output)~log(LPrice)+log(KPrice)+log(FPrice), data=tabulka) summary(regrese2) ``` Not sure if it is correct, if you see mistake please correct me :) But what I really need help is that we have to test hypothesis if `β1 = 1` and also if `β2 = β3 = 0`. Could someone tell me do i do this? Thanks for any help in advance<issue_comment>username_1: The tests of beta\_2 = 0 and beta\_3 = 0 will be given by `summary(regrese2)` (note that those tests are conditional on the other terms in the model). There are 3 ways (maybe more) of testing beta\_1 = 1, which one you should use would be determined by the teacher: 1. Run the summary function `summary(regrese2)` and use the effect size and standard error from the summary for the first regression term and plug those numbers into the book formula and do the test "by hand". This is often what the teacher expects students to do in more basic classes. 2. Subtract `log(LPrice)` from `log(Output)` then use that as the response variable and test to see if the coefficient on `log(LPrice)` is equal to 0. 3. Use the `offset` function: `log(LPrice) + offset(log(LPrice))` on the right side of the `~`, then again test the coefficient for `log(LPrice)` against 0. Upvotes: 0 <issue_comment>username_2: You can use `library(car)` And use `linearHypothesis` function for Joint hypotheses. Your code should look something like this: ``` linearHypothesis(regrese2, c(“LPrice = 0”)) linearHypothesis(regrese2, c(“KPrice = 0”, “FPrice = 0”)) ``` Upvotes: 1
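A sketch of how the `car` approach looks against the model actually fitted above: `lm()` keeps the `log()` wrapper in the coefficient names, so each hypothesis has to quote the name exactly as it appears in `coef(regrese2)`, and the first hypothesis is tested against 1 rather than 0.

```r
library(car)

coef(regrese2)  # check the exact coefficient names first

# H0: beta_1 = 1
linearHypothesis(regrese2, "log(LPrice) = 1")

# H0: beta_2 = beta_3 = 0 (joint F-test)
linearHypothesis(regrese2, c("log(KPrice) = 0", "log(FPrice) = 0"))
```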
2018/03/22
502
1,911
<issue_start>username_0: I'm working on a project with Web API 2 and I'm trying to find the best way to save the user configuration in memory. Each user has a particular configuration (timezone, location, language, its company information, and many more information), so I'm trying to implement a way to query this information at first login and save it to memory or something. So this information is frequently used by many operations and I don't want to slow down the application performance by querying all that info each time I need it. So, the first plan was to implement a Static clas with this information, but I don't know if it's the best approach. Can someone suggest the best way to implement this on a Web API 2?
2018/03/22
1,238
4,354
<issue_start>username_0: I have setup a dockerized cluster of Kafka Connect which is running in distributed mode. I am trying to setup a Kafka JDBC Source Connector to move data between Microsoft SQL Server and Kafka. Below is the output of the response of my `connector-plugins` api ``` [ { class: "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector", type: "sink", version: "4.0.0" }, { class: "io.confluent.connect.hdfs.HdfsSinkConnector", type: "sink", version: "4.0.0" }, { class: "io.confluent.connect.hdfs.tools.SchemaSourceConnector", type: "source", version: "1.0.0-cp1" }, { class: "io.confluent.connect.jdbc.JdbcSinkConnector", type: "sink", version: "4.0.0" }, { class: "io.confluent.connect.jdbc.JdbcSourceConnector", type: "source", version: "4.0.0" }, { class: "io.debezium.connector.mongodb.MongoDbConnector", type: "source", version: "0.7.4" }, { class: "io.debezium.connector.mysql.MySqlConnector", type: "source", version: "0.7.4" }, { class: "org.apache.kafka.connect.file.FileStreamSinkConnector", type: "sink", version: "1.0.0-cp1" }, { class: "org.apache.kafka.connect.file.FileStreamSourceConnector", type: "source", version: "1.0.0-cp1" } ] ``` I have already added the `JDBC Driver provided my Microsoft SQL Server` to my `plugins path` in my Kafka Connect Cluster. Below is the input to my `connectors` api, ``` curl -X POST \ http://kafka-connect-cluster.com/connectors \ -H 'Content-Type: application/json' \ -H 'Accept: application/json' \ -d '{ "name": "mssql-source-connector", "config": { "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "mode": "timestamp", "timestamp.column.name": "updateTimeStamp", "query": "select * from table_name", "tasks.max": "1", "table.types": "TABLE", "key.converter.schemas.enable": "false", "topic.prefix": "data_", "value.converter.schemas.enable": "false", "connection.url": "jdbc:sqlserver://:;databaseName=;", "connection.user": "", "connection.password": "", "value.converter": "org.apache.kafka.connect.json.JsonConverter", "key.converter": "org.apache.kafka.connect.json.JsonConverter", "poll.interval.ms": "5000", "table.poll.interval.ms": "120000" } }' ``` The error that i get while trying this query is as follows: ``` { "error_code": 400, "message": "Connector configuration is invalid and contains the following 2 error(s):\nInvalid value java.sql.SQLException: No suitable driver found for jdbc:sqlserver://:;databaseName=; for configuration Couldn't open connection to jdbc:sqlserver://:;databaseName=;\nInvalid value java.sql.SQLException: No suitable driver found for jdbc:sqlserver://:;databaseName= for configuration Couldn't open connection to jdbc:sqlserver://:;databaseName=\nYou can also find the above list of errors at the endpoint `/{connectorType}/config/validate`" } ``` Any help you can provide is highly appreciated. Thanks<issue_comment>username_1: Based on <https://learn.microsoft.com/en-us/sql/connect/jdbc/building-the-connection-url> the trailing `;` you have in the URL is not valid. Also try putting the JDBC driver in `share/java/kafka-connect-jdbc`, and/or adding it to the `CLASSPATH` environment variable. Upvotes: 1 <issue_comment>username_2: Credit to the answer goes to @rmoff for pointing me in the right direction. So the issue lied in two places. 1. This is more like an FYI, rather than an issue. I gave the docker image a custom `CONNECT_PLUGIN_PATH`. 
There is nothing wrong with doing that, but it's generally not a good idea, because you will have to copy all the base plugins that ship with the Confluent Platform; this can create a problem when you move to a new version, as you might have to go through the same process again. 2. This part is most important. The [SQLServer JDBC driver](https://www.microsoft.com/en-us/download/search.aspx?q=jdbc "SQL Server JDBC Driver") needs to be in the same folder as `kafka-connect-jdbc-.jar`, which in my case is `kafka-connect-jdbc-4.0.0.jar`. Once these two points were addressed, my SQLServer JDBC driver started working as expected. Upvotes: 4 [selected_answer]
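A hedged sketch of that second point; the paths, driver version, and container name below are assumptions and will differ between installations.

```bash
# Copy the Microsoft JDBC driver into the same directory as kafka-connect-jdbc-*.jar
cp mssql-jdbc-6.4.0.jre8.jar /usr/share/java/kafka-connect-jdbc/

# Confirm both jars now sit side by side
ls /usr/share/java/kafka-connect-jdbc/ | grep -i jdbc

# Restart the Kafka Connect worker so the new driver is picked up
docker restart kafka-connect
```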
2018/03/22
138
567
<issue_start>username_0: I have an action bar set up and it works great. I'd like to have it so that it is hidden until the user does something like pull down from the top of the screen, and then show it. I can't find any resources which discuss doing this.<issue_comment>username_1: You can toggle its visibility using an OnClickListener on the layout, or, if you have a layout that the user scrolls, you can experiment with OnScrollChangeListener. Upvotes: 1 <issue_comment>username_2: You can also use a Collapsing Toolbar ``` ``` Upvotes: 1 [selected_answer]
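A minimal sketch of the toggle idea from the first answer, assuming an `AppCompatActivity` with a support-library action bar; `R.id.root_layout` is a made-up id, and the click listener stands in for whatever gesture (scroll, swipe, pull-down) you actually wire up.

```java
import android.os.Bundle;
import android.support.v7.app.ActionBar;
import android.support.v7.app.AppCompatActivity;
import android.view.View;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        ActionBar bar = getSupportActionBar();
        if (bar != null) {
            bar.hide(); // start hidden
        }

        // Toggle on tap; replace with a scroll/gesture listener as needed
        findViewById(R.id.root_layout).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                ActionBar actionBar = getSupportActionBar();
                if (actionBar == null) {
                    return;
                }
                if (actionBar.isShowing()) {
                    actionBar.hide();
                } else {
                    actionBar.show();
                }
            }
        });
    }
}
```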
2018/03/22
336
1,228
<issue_start>username_0: I'm using Rails 5.1 with Minitest and Searchkick gem and in my system tests I need to have the data indexed in ElasticSearch to make sure the tests pass. If I add a breakpoint and inspect ``` class ActiveSupport::TestCase # Setup all fixtures in test/fixtures/*.yml for all tests in alphabetical order. fixtures :all require 'pry'; binding.pry # Add more helper methods to be used by all tests here... end ``` all my models have zero records, assuming I've recreated the database with: `rails db:drop db:create db:migrate` So how can I have the code `Model.reindex` running after the loading of fixtures? Note: I could use the `setup` but that way I will do a reindex in all needed models before each test, increasing the time.<issue_comment>username_1: You can use a class variable in your setup method like this: ``` class SomeTest setup do @@once ||= Model.reindex end end ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I had the same issue on the CI server. I fixed it by preloading the test database before running the tests. ```sh bin/rails db:fixtures:load ``` (Source: <https://api.rubyonrails.org/v5.1.0/classes/ActiveRecord/FixtureSet.html>) Upvotes: 0
2018/03/22
542
1,862
<issue_start>username_0: ``` import glob2 from datetime import datetime filenames = glob2.glob("*.txt") with open(datetime.now().strftime("%Y-%m-%d-%H-%M-%S-%f")+".txt", 'w') as file: for filename in filenames: with open(filename, "r") as f: file.write(f.read() + "\n") ``` I was working in python and came across this name glob, googled it and couldn't find any answer, what does glob do, why is it used for?<issue_comment>username_1: from [glob docs](https://docs.python.org/3/library/glob.html) "The glob module finds all the pathnames matching a specified pattern(...)" i skip the imports `import glob2` and `from datetime import datetime` get all the filenames in the directory where filename is any and it is extension is text ``` filenames = glob2.glob("*.txt") ``` open new file which name is current datetime in the format as specified in the strftime and open it with write access as variable 'file' ``` with open(datetime.now().strftime("%Y-%m-%d-%H-%M-%S-%f")+".txt", 'w') as file: ``` for each filenames in found files which names / paths are stored in filenames variable... ``` for filename in filenames: ``` with the filename open for read access as f: ``` with open(filename, "r") as f: ``` write all content from f into file and add \n to the end (\n = new line) ``` file.write(f.read() + "\n") ``` Upvotes: 2 <issue_comment>username_2: I also saw "glob2"-module used in a kaggle-notebook and researched my own answer in what is the difference to "glob". All features of "[glob2](https://github.com/miracle2k/python-glob2/)" are in the current included "glob"-implementation of python. So there is no reason to use "glob2" anymore. As for what glob does in general, [BlueTomato](https://stackoverflow.com/users/4450090/bluetomato) already provided a nice link and description. Upvotes: 2
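Since the second answer points out that `glob2`'s features are now in the standard library, a short illustration of plain `glob` (Python 3.5+), including the recursive `**` pattern that `glob2` was originally written for:

```python
import glob

top_level = glob.glob("*.txt")                       # .txt files in the current directory
everywhere = glob.glob("**/*.txt", recursive=True)   # ...including all subdirectories

print(top_level)
print(everywhere)
```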
2018/03/22
706
2,578
<issue_start>username_0: I am using TeamCity as CI tool. I want to exclude all the Entity Framework generated models. I've been trying different syntax and options for a while and nothing seems to do the trick. I read the documentation, read all the question on the topic I could find, but still nothing seems to work for me. I have a *Repository* project within my *App* solution. It has two classes - *RepositoryOne.cs* and *RepositoryTwo.cs.* I have a lot of classes within edmx file from EF. I don't want to included them in the code coverage. I tried having something that will only include files that contain Repository, but without success. I haven't tried to exclude single files, because they are more than a hundred. something like ``` +:App.Repository.RepositoryOne +:App.Repository.RepositoryTwo -:App.Repository.* ``` I know this will not work, but just trying to explain better.<issue_comment>username_1: If you are using dotCover, then there are two solutions. **First:** Move all your edmx classes to separate project and remove it from coverage on a assembly filters: ``` -:App.Repository ``` **Second:** Use Attribute filters to remove whole namespace from coverage: ``` -:App.Repository.RepositoryOne ``` Here is a screenshots how this could look like in TeamCity with assembly and attribute filters - just pick one option: [![TeamCity assemmbly and attribute filters](https://i.stack.imgur.com/kBYw6.png)](https://i.stack.imgur.com/kBYw6.png) More about dotCover: <https://confluence.jetbrains.com/display/TCD10/JetBrains+dotCover> More examples: <https://blog.jetbrains.com/dotnet/2010/12/10/coverage-with-dotcover-teamcity-mstest-nunit-or-mspec/> Upvotes: 1 <issue_comment>username_2: When you are using `dotCover` you can specify an xml file to describe the class or modules you want to analyse, or ignore: ``` xml version="1.0" encoding="utf-8" ? c:\nunit\nunit-console.exe C:\Sources\out\Debug\MyLib.dll C:\Sources\out\Debug\ coverage.xml \*Repository\* \*Test ``` This is a sample, feel free to match your needs. Using a configuration file, you will be able to get the code coverage in local if you have dotCover easily, and you will not depend of Teamcity to run it. For more informations about the configuration file, you can have a look on Console Runner Commands, inside dotCover [documentation](https://www.jetbrains.com/help/dotcover/dotCover__Console_Runner_Commands.html). Or, run: `dotcover cover` without parameters, to get the help on the command line, and get a sample of a configuration file. Upvotes: 0
2018/03/22
840
2,937
<issue_start>username_0: How do I clear, or edit an input box with `ngModelChange`? I have a pluncker: <https://embed.plnkr.co/oc1Q7lkEkXNxSag6Kcss/> My angular calls are in `template-driven-form.ts` This is my HTML: ``` (ngModelChange) Example: ======================== Number: ``` This the angular code: ``` expiration: string = ''; onChange(event) { if (this.expiration) { let expiration = this.expiration.toString(); expiration = expiration.replace(/[^0-9\\]/g, ''); if (expiration.length > 2) { expiration += '\\'; } this.expiration = expiration; } else { this.expiration = ''; } } ``` What I am trying to implement a expiration date for a credit card. If someone enters something other than a digit I should ignore that input. If I get more than 3 digits I want to add a `\`. But currently `ngModelChange` is not functioning the way I would expect. I put in letters and they are added to the input box. I try to update the model `expiration = ''` but the letter persist in the input even though I have `[(ngModel)]=expiration`. edit: I made an update to use tel, as `\` would not work on an input type `number`. But the line of code to update the model `this.expiration = expiration` does not work.
2018/03/22
461
1,269
<issue_start>username_0: I tried this solution to my list and I can't get what I want after sorting. I got list: ``` m_2_mdot_3_a_1.dat ro= 303112.12 m_1_mdot_2_a_0.dat ro= 300.10 m_2_mdot_1_a_3.dat ro= 221.33 m_3_mdot_1_a_1.dat ro= 22021.87 ``` I used `sort -k 2 -n >name.txt` I would like to get list from the lowest `ro` to the highest `ro`. What I did wrong? I got a sorting but by the names of 1 column or by last value but like: `1000, 100001, 1000.2` ... It sorted like by only 4 meaning numbers or something.<issue_comment>username_1: ``` cat test.txt | tr . , | sort -k3 -g | tr , . ``` The following link gave a good answer [Sort scientific and float](https://stackoverflow.com/questions/26339232/sort-scientific-and-float) In brief, 1. you need -g option to sort on decimal numbers; 2. the -k option start from 1 not 0; 3. and by default locale, ***sort*** use **,** as seperator for decimal instead of **.** However, be careful if your name.txt contains **,** characters Upvotes: 2 [selected_answer]<issue_comment>username_2: Since there's a space or a tab between `ro=` and the numeric value, you need to sort on the 3rd column instead of the 2nd. So your command will become: ``` cat input.txt | sort -k 3 -n ``` Upvotes: 0
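Putting the two answers together, a locale-proof one-liner (the input file name is a placeholder): force the C locale so `.` is the decimal separator, and sort on the third whitespace-separated field, which is the number after `ro=`.

```sh
LC_ALL=C sort -k3,3 -g data.txt > name.txt
```

With the locale fixed, `-n` would also work for these plain decimals; `-g` additionally copes with scientific notation.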
2018/03/22
1,002
3,280
<issue_start>username_0: First, a little of scenario. I have a tkinter window with a single button on it. This button is white until I hover over it, in which case it turns orange. My question is: How can I make the transition between the white and orange smooth, like a fade-in and fade-out. My code so far: ``` from tkinter import * from functools import partial root = Tk() def bg_config(widget, bg, fg, event): widget.configure(background=bg, foreground=fg) #Fading effect here btn = Button(root, text="Button", relief=GROOVE, bg="white") btn.bind("", partial(bg\_config, btn, "#f47142", "white")) btn.bind("", partial(bg\_config, btn, "white", "black")) bt.pack() root.mainloop() ``` I do have wxPython library, if that will help. Are there any other GUI libraries or methods that could make these kind of tasks easier?<issue_comment>username_1: There is nothing in tkinter to directly support this. You will need to do it by creating a function that runs every couple of milliseconds and slowly changes the color. Upvotes: 1 <issue_comment>username_2: This can be achieved by iterating over the difference in the *rgb* values of two different colors *(orange, white)*. Also, there are other python libraries like [colour](https://pypi.org/project/colour/) which makes the job much easier. Here, I made a function that uses [colour](https://pypi.org/project/colour/) library to **fade-in and fade-out different color options of widgets.** ```py def fade(widget, smoothness=4, cnf={}, **kw): """This function will show faded effect on widget's different color options. Args: widget (tk.Widget): Passed by the bind function. smoothness (int): Set the smoothness of the fading (1-10). background (str): Fade background color to. foreground (str): Fade foreground color to.""" kw = tk._cnfmerge((cnf, kw)) if not kw: raise ValueError("No option given, -bg, -fg, etc") if len(kw)>1: return [fade(widget,smoothness,{k:v}) for k,v in kw.items()][0] if not getattr(widget, '_after_ids', None): widget._after_ids = {} widget.after_cancel(widget._after_ids.get(list(kw)[0], ' ')) c1 = tuple(map(lambda a: a/(65535), widget.winfo_rgb(widget[list(kw)[0]]))) c2 = tuple(map(lambda a: a/(65535), widget.winfo_rgb(list(kw.values())[0]))) colors = tuple(colour.rgb2hex(c, force_long=True) for c in colour.color_scale(c1, c2, max(1, smoothness*100))) def worker(count=0): if len(colors)-1 <= count: return widget.config({list(kw)[0] : colors[count]}) widget._after_ids.update( { list(kw)[0]: widget.after( max(1, int(smoothness/10)), worker, count+1) } ) worker() ``` --- *Here is an example to properly use it.* [![enter image description here](https://i.stack.imgur.com/qVvdD.gif)](https://i.stack.imgur.com/qVvdD.gif) ```py from tkinter import * from functools import partial import colour root = Tk() def bg_config(widget, bg, fg, event): fade(widget, smoothness=5, fg=fg, bg=bg) btn = Button(root, text="Button", relief=GROOVE, bg="white") btn.bind("", partial(bg\_config, btn, "#f47142", "white")) btn.bind("", partial(bg\_config, btn, "white", "black")) btn.pack(padx=20, pady=20) root.mainloop() ``` Upvotes: 2
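A minimal sketch of the timer-based approach from the first answer, using only tkinter's `after()`: interpolate between the two colours in small steps; the step count and delay are arbitrary choices.

```python
import tkinter as tk

def hex_to_rgb(colour):
    colour = colour.lstrip("#")
    return tuple(int(colour[i:i + 2], 16) for i in (0, 2, 4))

def fade_bg(widget, start, end, steps=20, delay=15, step=0):
    """Fade widget's background colour from start to end (hex strings)."""
    r1, g1, b1 = hex_to_rgb(start)
    r2, g2, b2 = hex_to_rgb(end)
    t = step / steps
    colour = "#%02x%02x%02x" % (
        int(r1 + (r2 - r1) * t),
        int(g1 + (g2 - g1) * t),
        int(b1 + (b2 - b1) * t),
    )
    widget.configure(background=colour)
    if step < steps:
        widget.after(delay, fade_bg, widget, start, end, steps, delay, step + 1)

root = tk.Tk()
btn = tk.Button(root, text="Button", relief=tk.GROOVE, bg="white")
btn.bind("<Enter>", lambda e: fade_bg(btn, "#ffffff", "#f47142"))
btn.bind("<Leave>", lambda e: fade_bg(btn, "#f47142", "#ffffff"))
btn.pack(padx=30, pady=30)
root.mainloop()
```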
2018/03/22
607
1,687
<issue_start>username_0: 2d array, how to accept white space in input? in 1d array i think the right code is `cin.getline(array,5)`but in 2d array i cant figure it out what is right parameter. here is my code ``` #include void display(); char array[2][5]; using namespace std; int main(){ for (int x = 0; x < 2; ++x) { for (int y = 0; y < 5; ++y) { display(); cout << "Enter a value: "; cin>>array[x][y]; //i want to accept space in input. cin.getline(array[x][y],?) system("cls"); } } display(); } void display(){ for(int x = 0; x<2; x++){ for(int y = 0; y<5; y++){ cout<<" " < ``` lastly, how can i limit the input in cin>>? for example it will only allow 1 character input. ty in advance<issue_comment>username_1: **how to accept white space in input?** The problem was with your logic. You were trying to store a `string` or `char*` simply to a char. Even though its a 2D array, it will not work like that. You need either a `char*` or `std::string` for that, something like follows. ``` #include using namespace std; void display(); string array[2][5]; int main() { for (int x = 0; x < 2; ++x) { for (int y = 0; y < 5; ++y) { display(); cout << "Enter a value for ["< ``` Hope this was the case, any doubts, just ask. Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` #include void display(); char array[2][5]; using namespace std; int main() { for (int x = 0; x < 2; ++x) { for (int y = 0; y < 5; ++y) { display(); cout << "Enter a value: "; cin>>array[x][y]; if(array[x][y]==" "; break; system("cls"); } } display(); } void display() { for(int x = 0; x<2; x++) { for(int y = 0; y<5; y++) { cout<<" " < ``` Upvotes: 0
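A compact, compilable version of the accepted answer's idea, for reference: store `std::string`s (rather than single `char`s) in the 2-D array and read each entry with `std::getline`, which keeps embedded spaces.

```cpp
#include <iostream>
#include <string>

int main() {
    const int ROWS = 2, COLS = 5;
    std::string grid[ROWS][COLS];

    for (int x = 0; x < ROWS; ++x) {
        for (int y = 0; y < COLS; ++y) {
            std::cout << "Enter a value for [" << x << "][" << y << "]: ";
            std::getline(std::cin, grid[x][y]);  // accepts spaces, stops at newline
        }
    }

    for (int x = 0; x < ROWS; ++x) {
        for (int y = 0; y < COLS; ++y) {
            std::cout << grid[x][y] << " | ";
        }
        std::cout << '\n';
    }
}
```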
2018/03/22
586
1,564
<issue_start>username_0: When subsetting a matrix or DF, it is possible to reference row columns, such as `df1[1:5, 3:10]`, or `df3[2:4, ]`. **Is there any way to do this with a raster? That is, can I clip just rows 500:700, for example from a raster object?** I have tried using `rasterFromCells()`, but it doesn't give me the result I want (and it seems like there should be a more simple solution given R's other slick subsetting methods). **Example:** ``` r <- raster(ncols = 50, nrow = 50) r[] <- 1:ncell(r) # I would like to subset the bottom 50 rows of cells, but keep it as a raster. # However, this returns a numeric object. rSub <- r[30:50, 1:50] ``` Thanks!<issue_comment>username_1: I don't find the question very clear. However, is this what you are looking for? ``` subR <- crop(r, extent(r, 30, 50, 1, 50)) plot(subR) ``` The function `crop()` of `raster` package does the trick because allow you to subset the raster object using rows and columns. Upvotes: 4 [selected_answer]<issue_comment>username_2: I prefer using crop as shown by username_1. There is another way, using `drop=FALSE` ``` library(raster) r <- raster(ncols = 10, nrow = 10) values(r) <- 1:ncell(r) rSub <- r[3:5, 2:3, drop=FALSE] rSub #class : RasterLayer #dimensions : 3, 2, 6 (nrow, ncol, ncell) #resolution : 36, 18 (x, y) #extent : -144, -72, 0, 54 (xmin, xmax, ymin, ymax) #coord. ref. : +proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0 #source : memory #names : layer #values : 22, 43 (min, max) ``` Upvotes: 2
2018/03/22
415
1,220
<issue_start>username_0: What formula can I use to get a count of emoji and characters in a single cell? For example, In cells, A1,A2 and A3: ``` ✋️ ?? ``` Total Count of characters in each cell(**Desired Output**): ``` 3 5 5 ```<issue_comment>username_1: For the given emojis, This will work well: ``` =LEN(REGEXREPLACE(A13,".",".")) ``` * MID/LEN considers each emoji as 2 separate characters. * REGEX will consider them as one. * But even REGEX will fail with a complex emoji like this: ‍‍‍ === This contains a literal man emoji, a woman emoji,a girl emoji and a boy emoji-all joined by a ZeroWidthJoiner. You could even swap the boy for a another girl with this formula: ``` =SUBSTITUTE("‍‍‍‍","","") ``` It'll become like this: ‍‍‍‍ ==== Upvotes: 2 <issue_comment>username_2: `=COUNTA(FILTER( SPLIT(REGEXREPLACE(A1,"(.)","#$1"),"#"), SPLIT(REGEXREPLACE(A1,"(.)","#$1"),"#")<>"" ))` Based on the answer by @I'-'I Some emojis contain from multiple emojis joined by `char(8205)`: ‍‍‍‍‍ [![enter image description here](https://i.stack.imgur.com/Tk6sj.png)](https://i.stack.imgur.com/Tk6sj.png) The result differs and depends on a browser you use. I wonder, how do we count them? Upvotes: 1
2018/03/22
904
3,526
<issue_start>username_0: In Access I have five related tables that I'm trying to get information from, but I'm not sure how to write this query. I'm not even really sure how to start. I have: ``` tblEmployee tblCourseCatelog tblSessions tblInstructorDeliverables ----------- ---------------- ----------- ------------------------- EmpID (PK) CatelogID (PK) SessionID(pk) ID (PK) Name CourseName CatelogID EmpID DateAndTime CatelogID tblInstructorSessions --------------------- ID (pk) Instructor (fk tblInstructorDeliverables.ID) SessionID ``` `tblEmployees` is a list of all employees. `tblCourseCatelog` is a table of all courses that are offered. `tblSessions` is a list of courses that have been scheduled. `tblInstructorDeliverables` is a table of employees who can instruct and what course they can instruct. `tblInstructorSessions` is a table of sessions that instructors have been assigned to instruct. I'm trying to make a listbox on a form that will be populated with the name of instructors who are eligible to teach a certain course. The trick is that I'm passing `SessionID` to the form, not `CatelogID`. The reason I'm doing this is to that I can assign the instructors to that session, so I will need that number. e.g. A session had been setup. It is `SessionID` 1805. It is a first aid course (`CatelogID` 7). `frmAssignInstructors.OpenAgs = 1805`. I want to now select and display in the listbox all instructors that can instruct CatelogID 7 by figuring out that SessionID 1805 is a first aid course. I've only had one coffee today and I can use some help! Thanks!<issue_comment>username_1: Form frmAssignInstructors OnLoad event code: ``` ' Execute query, retrieve CatelogID Me.RecordSource = "SELECT SessionID, CatelogID FROM tblSessions WHERE SessionID = " _ & Me.OpenArgs & ";" ' Form has a Control txtCatelogID whose ControlSource is CatelogID Me.ListBox.RowSource = "SELECT ID, EmpID, CatelogID FROM " _ & "tblInstructorDeliverables WHERE CatelogID = " & txtCatelogID.Value & ";" ``` This way, you first load the Sessions data into the form RecordSource, then construct the ListBox RowSource from the retrieved CatelogID value. Upvotes: 2 [selected_answer]<issue_comment>username_2: There's more an issue of database modelling that on implementation. You could create a table relating tblCourseCatAlog and tblInstructorDeliverables, let's call it instructor\_x\_course, because it is a N x N relation and because in my point of view this information should not be provided by the Session entity. You can create this table with the following columns: CatalogID (FK) and instructorID (FK). This table will contain information of which instructor is able to teach what course and vice-versa. After that, it is going to be easy to filter classes that can be given by a certain instructor. Create a query using this new table and instructor's one, in order to bring instructor details. Use the Access wizard for the listbox control to source your listbox to this query. There's quicker ways to solve your problem as it is, but experience shows that "patches" are expensive in maitenance. Finally, consider your information flow, because it seems a bit confusing. You are giving instructor parameters to a form that will bring information of more instructors that are able to teach the same course? If it is the point, ok, but try to keep your application robust and minimalist. Good luck. Upvotes: 0
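For reference, a sketch of the join the accepted answer builds in two steps, written as one Access query with the table and field names from the question; 1805 stands in for the `SessionID` passed through `OpenArgs` and would normally be concatenated in VBA. Access wants the extra parentheses around stacked `INNER JOIN`s.

```sql
SELECT DISTINCT e.EmpID, e.[Name]
FROM ((tblSessions AS s
INNER JOIN tblInstructorDeliverables AS d ON d.CatelogID = s.CatelogID)
INNER JOIN tblEmployee AS e ON e.EmpID = d.EmpID)
WHERE s.SessionID = 1805;
```

The result of this query is what the form's listbox `RowSource` would be set to.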
2018/03/22
665
2,008
<issue_start>username_0: When this query returns me the "Center" register, there is a word at all the fields with the same name that i don't want to display. I could make it via PHP but I need to use it on the db. Basics I know, but i haven't done sql for a long time ``` SELECT o.id_order, o.reference AS Ref, c.firstname AS Name, c.lastname AS Last Name, pl.`name` AS Center, od.product_name AS Product, od.product_quantity AS Quant, ROUND(od.product_price * 1.21,2) AS Price, o.date_add AS `Date` FROM ps_orders AS o INNER JOIN ps_order_detail AS od ON od.id_order = o.id_order INNER JOIN ps_customer AS c ON c.id_customer = o.id_customer INNER JOIN ps_product_lang AS pl ON pl.id_product = od.product_id WHERE pl.id_lang = 1 ORDER BY od.id_order_detail DESC ``` When this returns me Center data, all fields have the preposition "The" in front, returning something like: ``` Center The Odoo Team Center The Dev house ``` Then I need to show something like ``` Center Odoo Team Center Dev house ```<issue_comment>username_1: Something like this should work if you always wants to remove the first four chars: ``` SELECT SUBSTRING(center, 4, LENGTH(center)-3) FROM YOURTABLE; ``` Upvotes: 0 <issue_comment>username_2: You can use `REPLACE`: ``` SELECT o.id_order, o.reference AS Ref, c.firstname AS Name, c.lastname AS Last Name, REPLACE(pl.`name`,"The ","") AS Center, od.product_name AS Product, od.product_quantity AS Quant, ROUND(od.product_price * 1.21,2) AS Price, o.date_add AS `Date` FROM ps_orders AS o INNER JOIN ps_order_detail AS od ON od.id_order = o.id_order INNER JOIN ps_customer AS c ON c.id_customer = o.id_customer INNER JOIN ps_product_lang AS pl ON pl.id_product = od.product_id WHERE pl.id_lang = 1 ORDER BY od.id_order_detail DESC ``` > > **Documentation:** <https://dev.mysql.com/doc/refman/5.7/en/replace.html> > > > **Demo:** <http://sqlfiddle.com/#!9/a5640/91> > > > Upvotes: 3 [selected_answer]
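If the word to strip is always a leading "The ", `TRIM(LEADING ...)` is a slightly safer alternative to `REPLACE`, since it only touches the start of the value and leaves any "The" appearing later in a name alone. Only the changed expression is shown here; the rest of the query stays the same.

```sql
SELECT TRIM(LEADING 'The ' FROM pl.`name`) AS Center
FROM ps_product_lang AS pl
WHERE pl.id_lang = 1;
```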
2018/03/22
220
808
<issue_start>username_0: I have a Bash script to build my Jekyll sites using different config files depending on environment variables, etc. Is it possible to specify a Bash script as build command in Netlify? So far, my efforts have resulted in ``` 3:57:24 PM: Executing user command: /opt/repo/build.sh 3:57:24 PM: /usr/local/bin/build: line 32: /opt/repo/build.sh: No such file or directory ```<issue_comment>username_1: It turns out that the repo is not in `/opt/repo` but in `/opt/build/repo` Upvotes: 0 <issue_comment>username_2: The build commands in a Netlify deploy start at the root of your repository location, so you should be using a relative path `./build.sh` rather than an absolute path. This would allow for a Netlify container location change in the future. Upvotes: 4 [selected_answer]
2018/03/22
1,350
4,920
<issue_start>username_0: I have `ListView`, where I have `ImageView`. I need to get the image from url and show in `ImageView`, but that's not working, the image is not visible. In that `ListView` I have a `TextView` and `CheckBox` too, but you not need it because that works. I'm using `Glide`. So what's the problem? I set in glide placeholders and it loads the placeholders. I've done debug and I saw that the glide gets the image URL. But the image doesn't load. Ok here's the item code. ``` public class LanguageItem { String imagePath; LanguageItem(String imagePath,) { this.imagePath = imagePath; } public String getImagePath() { return imagePath; } ``` There are textView and checkbox too, but I'm not showing it to you, because that works fine. Here the adapter. ``` public class LanguageAdapter extends BaseAdapter { private Context context; private LayoutInflater lInflater; private ArrayList objects; LanguageAdapter(Context context, ArrayList itemObj) { this.context = context; objects = itemObj; lInflater = (LayoutInflater) context .getSystemService(Context.LAYOUT\_INFLATER\_SERVICE); } //amount of elements @Override public int getCount() { return objects.size(); } //element by position @Override public Object getItem(int position) { return objects.get(position); } //id by position @Override public long getItemId(int position) { return position; } @Override public View getView(int position, View convertView, ViewGroup parent) { View view = convertView; if (view == null) { view = lInflater.inflate(R.layout.language\_items, parent, false); } ImageView imageView = (ImageView) view.findViewById(R.id.imageView); Glide.with(context).load(objects.get(position).getImagePath()).thumbnail(0.5f).crossFade().into(imageView); return view; } ``` And here's the fragment. I'm doing my work in fragment. ``` public class FragmentLanguage extends BaseFragment { private static final String IMAGE = "IMAGE"; private ApiClient apiClient; private ArrayList objS; private LanguageAdapter adapter; private View mainView; private ListView listView; @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { mainView = inflater.inflate(R.layout.languages, container, false); listView = (ListView) mainView.findViewById(R.id.language\_list); apiClient = ApiClient.getInstance(); //calling methods fillData(); showResult(); return mainView; } public void fillData() { objS = new ArrayList<>(); getLanguageCall(); } public void getLanguageCall() { Call getLanguage = apiClient.getLanguage(SharedPreferencesManager.getInstance().getAccessToken()); getLanguage.enqueue(new Callback() { @Override public void onResponse(Call call, Response response) { if (response.isSuccessful()) { try { String data = response.body().string(); JSONArray array = new JSONArray(data); for (int i = 0; i < array.length(); i++) { JSONObject object = array.getJSONObject(i); String languageName = object.getString("name"); String path = object.getString("image\_path"); String real\_path = "https://supportop.eu-gb.mybluemix.net" + path.substring(1, path.length()); Toast.makeText(context, real\_path, Toast.LENGTH\_SHORT).show(); objS.add(new LanguageItem(languageName,real\_path, false)); } adapter = new LanguageAdapter(getActivity(), objS); listView.setAdapter(adapter); } catch (IOException | JSONException e) { e.printStackTrace(); } } } @Override public void onFailure(Call call, Throwable t) { Toast.makeText(context, "Error", Toast.LENGTH\_SHORT).show(); } }); } } ``` Ok here's the code. 
I did a debug run and the image URL is retrieved successfully, but Glide does not load it. Thank you for reading. Ok, and here's the layout. ``` ``` And here is the ListView part. ``` ```<issue_comment>username_1: Use RequestOptions with Glide: ``` RequestOptions options = new RequestOptions() .centerCrop() .placeholder(R.mipmap.ic_launcher_round) .error(R.mipmap.ic_launcher_round); Glide.with(this).load(image_url).apply(options).into(imageView); ``` Upvotes: 2 <issue_comment>username_2: You can use `Glide` or `Picasso`. From an activity the code will look like this: **Picasso** ``` Picasso.get().load(url).resize(50, 50).centerCrop().into(imageView) ``` **Glide** ``` Glide.with(context) .load("http://inthecheesefactory.com/uploads/source/glidepicasso/cover.jpg") .into(imageView); ``` Upvotes: 2 <issue_comment>username_3: Try using http instead of https. I think that's a server problem. Upvotes: 4 [selected_answer]<issue_comment>username_4: I'm a total Android beginner and my problem was that I was `missing the INTERNET permission` (as I could read in the Logcat tab after an hour-long hassle). So try adding this to your `AndroidManifest.xml`: ``` ... ``` Upvotes: 2 <issue_comment>username_5: I just added it and it worked finally. Upvotes: 1
2018/03/22
1,014
3,749
<issue_start>username_0: I send a JSON object from my mangoDB to the html page in this way: ``` router.get('/index', function (req, res, next) { GenoverseInstance.find({name: req.query.name}, function (err, instance) { if (err) { res.send(err); throw err; } else if (instance.length) { console.log('Object loaded'); // object of the user console.log(instance[0]); res.render('index', {object: instance[0]}); } }); }); ``` I can use it in the html like this: ``` .containerCustom .head h1 | #{object.name} ``` But I can not use it in my javascript which is included in the html page: script. ``` alert(object.name); ``` How is it possible? Thanks<issue_comment>username_1: `object` is only defined in your Pug template and used to generate HTML that is then sent to the browser. After the HTML is generated, this `object` is consumed and disappears. It has nothing to do with the page's JS code. If you want this data to be available in the JS code, then : from the generated page, make another (Ajax) request to your server, asking for this same data. Upvotes: 2 <issue_comment>username_2: this is because your response is saved in local scope and you don't pass that response to global scope where u can access it from outside. I just make 1 snippet for you. Check it, i hope this will help you. Also if you don't understand scopes i suggest you to go and read some articles, like [w3school](https://www.w3schools.com/js/js_scope.asp) or [this](https://hackernoon.com/understanding-javascript-scope-1d4a74adcdf5). Also if you don't know what is asynchronous request read about them too. ```js /* ############################### */ // So if you want to access response from your request you must save them or pass them // This is one way to have access to your response outside by saving to global variable // Creating variable where we will hold our response from request // Global scope start here var data = ''; function getResponse(res) { // Here is opened Local scope where u have no access from outside // and now here u have the response from your request console.log('function data', res); // you can do stuff here with that response data = res; } setTimeout(function() { var response = [1,2,3]; // Now here we will save response to our data so we can access it from global scope // But remember that is async request data = response; console.log("local scope response ", response); }, 1000); setTimeout(function() { console.log("global scope data", data); // here data hold ur response data from your async request }, 1100); /* ############################### */ // This is another way to have access to your response outside by passing it to function function getResponse(res) { // and now here u have the response from your request console.log('function data', res); // you can do stuff here with that response data = res; } setTimeout(function() { var response = [1,2,3]; // other way pass response to your function and do what u want with that data getResponse(response); console.log("bracked scope response ", response); }, 1000); ``` Upvotes: 0 <issue_comment>username_3: As shown in the comments this link is very useful [reference](https://stackoverflow.com/questions/8698534/how-to-pass-variable-from-jade-template-file-to-a-script-file), thanks to @IsaacGodinez. It's possible to use this line of code to get the entire object: ``` var data = !{JSON.stringify(object).replace(/<\//g, '<\\/')}; ``` Or if you just want an element of the object: ``` var name = "#{object}"; ``` Upvotes: 1 [selected_answer]
2018/03/22
3,815
11,710
<issue_start>username_0: I'm a novice programmer learning C++ through a tutorial and I want to know how I can take a string, take its first letter, compare it to the first letters of 2 other strings and then sort them alphabetically. I don't want code written for me. I want to understand how to do this because other solutions were difficult to understand. I know that strings are just arrays of char so there should be a method to get the first index. **UPDATE** So this is my attempt: ``` #include #include using namespace std; void valueOutput(string firstName, string secondName, string thirdName){ cout << "\n"; cout << firstName << endl; cout << secondName << endl; cout << thirdName << endl; } int main(){ string name1, name2, name3; cout<<"Enter 3 names: "<>name1; cin>>name2; cin>>name3; if( (name1[0] < name2[0] && name2[0] < name3[0]) || (name1[0] < name3[0] && name3[0] < name2[0]) || (name2[0] < name1[0] && name1[0] < name3[0]) || (name2[0] < name3[0] && name3[0] < name1[0]) || (name3[0] < name1[0] && name1[0] < name2[0]) || (name3[0] < name2[0] && name2[0] < name1[0])) {valueOutput(name1, name2, name3);} else{ return 0; } } ``` My input was: Steinbeck Hemingway Fitzgerald but the output is the exactly in the same order. I want to sort them alphabetically.<issue_comment>username_1: > > I know that strings are just arrays of char so there should be a > method to get the first index. > > > It seems you mean string literals. If you have three declarations like these ``` char *s1 = "Onur"; char *s2 = "Ozbek"; char *s3 = "Hello"; ``` then you can compare first characters of the string literals the following ways ``` if ( s1[0] == s2[0] ) ``` or ``` if ( *s2 == *s3 ) ``` The same expressions can be used if instead of pointers you declared arrays ``` char s1[] = "Onur"; char s2[] = "Ozbek"; char s3[] = "Hello"; ``` In fact you can even use the following expressions ``` if ( "Onur"[0] == "Ozbek"[0] ) ``` or ``` if ( *"Onur" == *"Ozbek" ) ``` or even like ``` if ( 0["Onur"] == 0["Ozbek"] ) ``` to compare first characters of string literals directly. Upvotes: 2 <issue_comment>username_2: ``` #include int main() { std::string str1, str2, str3; std::cout << "Enter First String : "; getline(std::cin, str1); std::cout << "Enter Second String : "; getline(std::cin, str2); std::cout << "Enter Third String : "; getline(std::cin, str3); // The code is descriptive by itself till now // Now we gonna compare each of them. if(str1[0] < str2[0]) { // Once the control flow get inside this if statement that means str1 is smaller than str2 if (str1[0] < str3[0] && str3[0] < str2[0]) std::cout << str1 << std::endl << str3 << std::endl << str2; // In here we get that str1 is smaller than str3 (and str1 is smaller than str2 as well), so the smallest is str1 and we print that. and then we compared str3 with str 2, if str3 is smaller than str2 then we will print secondly str3 and thirdly str2. else if (str1[0] < str3[0] && str2[0] < str3[0]) std::cout << str1 << std::endl << str2 << std::endl << str3; // Here we get that str1 is smaller than str2 and str3 so we firstly print str1 and then we compared str2 with str3, if str2 is smaller than str3 then we will secondly print str2 and thirdly print str3. else std::cout << str3 << std::endl << str1 << std::endl << str2; // Now both the conditions mentioned above are wrong that means str3 is smaller than str1 and from the very first condition, we get that str1 is smaller than str2. That means smallest is str3 then str1 then str2 and we printed all of them in this order. 
} else { // else will be executed when str2 will be smaller than str1. So till now we get that str2 is smaller than str1. Now remember that str2 is smaller than str1. if (str2[0] < str3[0] && str3[0] < str1[0]) std::cout << str2 << std::endl << str3 << std::endl << str1; // Now here str2 proved to be smaller than str3 (and we already know that str2 is smaller than str1), So str2 is the smallest and we printed firstly str2. Then we compared str3 with str1, if str3 is smaller than str1 then we will secondly print str3 and thirdly print str1. else if (str2[0] < str3[0] && str1[0] < str3[0]) std::cout << str2 << std::endl << str1 << std::endl << str3; // Now here str2 proved to be smaller than str3 (and we already know that str2 is smaller than str1), So str2 is the smallest and we printed firstly str2. Then we compared str1 with str3, if str1 is smaller than str3 then we will secondly print str1 and thirdly print str3. else std::cout << str3 << std::endl << str2 << std::endl << str1; // Now both the above conditions are false that means str3 is smaller than str2 and as we know that str2 is smaller than str1, that means str3 is the smallest, so we firstly printed str3 and then str2 and at last str1. } return 0; } ``` This is whole code I think that you can understand it... And if you cant understand any particular line, you can freely ask.. Upvotes: 1 <issue_comment>username_3: You can get a particular position using operator[indexNumber] But if you want to sort, you should use `std::sort` from #include with a vector of strings `std::vector` //EDIT: note that a and A are different for a sorting or comparation, search ascii table I will put a code example do not read further if you do not want the solution ``` #include #include #include #include int main() { std::vector stringArray; stringArray.reserve(4); stringArray.push\_back("Hi"); stringArray.push\_back("Hello world"); stringArray.push\_back("New c++"); stringArray.push\_back("Animal"); std::sort(stringArray.begin(), stringArray.end()); for(auto&a : stringArray) { std::cout << a << '\n'; } } ``` Upvotes: 2 <issue_comment>username_4: If you are aware of STL container `std::vector`, this job is much easier. If you are looking for something simple, here it is, hopes that the comments help you to understand. ``` #include #include #include #include int main() { int size = 3; //size of your vector std::vector vec; //initialize the vec with string type vec.reserve(size); // just to reserve the memory // input iterator takes strings from std console-size times-and put it back in to vec std::copy\_n(std::istream\_iterator(std::cin), size, back\_inserter(vec)); // this will do your job as per requirement std::sort(vec.begin(), vec.end()); // just to print the result std::copy(vec.begin(),vec.end(), std::ostream\_iterator(std::cout,"\n")); return 0; } ``` Upvotes: 2 [selected_answer]<issue_comment>username_5: Lets suppose that your strings are named st1 , st2 , st3 and you want to perform the above mentioned operation on them. A quick way to do it would be using std::vector. You push all string first index values to the vector and then sort it.A clear cut implementation would be something like this : ``` vector v; v.push\_back(st1[0]); v.push\_back(st2[0]); v.push\_back(st3[0]); sort(v.begin() , v.end()); ``` You can read more about vectors here : <https://www.geeksforgeeks.org/vector-in-cpp-stl/> Upvotes: 1 <issue_comment>username_6: Figured it out. 
I basically had to call the function at each if statement: ``` #include #include using namespace std; void valueOutput(string firstName, string secondName, string thirdName){ cout << "\n"; cout << firstName << endl; cout << secondName << endl; cout << thirdName << endl; } int main(){ string name1, name2, name3; cout<<"Enter 3 names: "<>name1; cin>>name2; cin>>name3; if(name1[0] < name2[0] && name2[0] < name3[0]){ valueOutput(name1, name2, name3); } else if(name1[0] < name3[0] && name3[0] < name2[0]){ valueOutput(name1, name3, name2); } else if(name2[0] < name1[0] && name1[0] < name3[0]){ valueOutput(name2, name1, name3); } else if(name2[0] < name3[0] && name3[0] < name1[0]){ valueOutput(name2, name3, name1); } else if(name3[0] < name1[0] && name1[0] < name2[0]){ valueOutput(name3, name1, name2); } else if(name3[0] < name2[0] && name2[0] < name1[0]){ valueOutput(name3, name2, name1); } } ``` Upvotes: 0 <issue_comment>username_7: If you don't want the code, I'll just tell you the logic to do the same. 1. Hashing ---------- You can use hashing to sort your strings. The idea behind this is that *at the position of the first character of your (i)th string you store i*. For eg: consider your strings are as follows :: `aaa, bbb, ccc` Now at position `[ith_string[0] - 'a'] you store i` (here **0** position corresponds to **'a'**, **1** position corresponds to **'b'**,.. and so on). In other words you do this `array[str1[0] - 'a'] = 1`, and so on for every string. So your array will look like this :: `array = {1, 2, 3}` Then you can just print the string using the position you stored in our hashing array. I know this seems a bit difficult, but I would suggest you see some tutorials about hashing, you'll understand it then. 2. Sorting ---------- You can also use sorting, but then you need to store the location of your string. You can use `pair` to store the string and the location of your string. Then you can use @VaradBhatnagar's solution to sort the strings and depending on the position, you can print it in that order. In case you're wondering what I just said, please see the below code for reference. I'm sure you'll understand this.(If you don't please feel free to ask doubts). 
``` #include using namespace std; void hashing(string str[], int n){ int arr[26] = {0}; // Because there are only 26 alphabets for(int i = 0 ; i < n ; i++){ arr[str[i][0] - 'a'] = i + 1; // we should not store 0, as 0 defines "there was no string starting from this alphaber" } for(int i = 0 ; i < n ; i++){ if(arr[i] != 0){ cout << str[arr[i] - 1] << endl; } } } void sorting(string str[], int n){ std::vector > v; for (int i = 0 ; i < n ; i++){ v.push\_back(make\_pair(str[i][0], i)); } sort(v.begin(), v.end()); // this will sort on the basis of 1st argument in our pair for(int i = 0 ; i < n ; i++){ cout << str[v[i].second] << endl; } } int main(int argc, char const \*argv[]) { int n; string str[30]; // it can not be larger than 26, because of the question cin >> n; for(int i = 0 ; i < n ; i++){ cin >> str[i]; } cout << "Result of hashing" << endl; hashing(str, n); cout << "Result of sorting" << endl; sorting(str, n); return 0; } ``` ### Input ``` 3 cccc bb aaaa ``` ### Output ``` Result of hashing aaaa bb cccc Result of sorting aaaa bb cccc ``` Upvotes: 1 <issue_comment>username_8: [enter image description here](https://i.stack.imgur.com/c8aLX.png)Lets suppose your 3 names is name1 and name2 and name3 and you know that string is just an array of char so you can compare each char using its Ascii number and dont forget to take in consideration the capital and small letters as the difference between them in ascii is 32 and small litters is greater than capital letters so you will convert between them as in code. there isn't any complex function here so it's so easy to understand ``` #include using namespace std; void comparing(string& name1,string& name2); void comparing(string& name1,string& name2,string& name3); int main(){ string name1, name2, name3; cout<<"Enter 3 names: "<>name1; cin>>name2; cin>>name3; comparing(name1,name2,name3); } void comparing(string& name1,string& name2) { string t; for(int i=0;i=122){ x=x-32; } if(y<=97 && y>=122){ y=y-32; } if(x>y){ t=name1; name1=name2; name2=t; break; }} } void comparing(string& name1,string& name2,string& name3) { comparing(name1,name2); comparing(name2,name3); comparing(name1,name3); comparing(name1,name2); cout << name1 << " " < ``` Upvotes: 0
2018/03/22
3,660
11,576
<issue_start>username_0: I'm currently facing a problem that I'm not able to resovle yet, but I hope I can do it with your help. I currently developp an application with gstreamer to playback different kind of files : video and photo (avi and jpg respectively). The user has to have the possibility to switch between those different files. I have achieved this but by creating a new pipeline if the file format is different. There, screen randomly blinks between two files loading. Now, I've played with valve just for jpg files and it works like a charm. But, I'm stuck at the step to implement video files, I don't know how to swith between two video files : the code below doesn't work for video files, it freezes: ``` gst-launch-1.0 filesrc name=photosrc ! jpegdec ! valve name=playvalve drop=false ! imxg2dvideosink ``` Then further in my code, I drop the valve, set differents elements to ready state, change location of filesrc and return to playing state. I take a look a input-selector but it appears that non-read file still playing when one switches to the other (cf doc). Is it possible to set an input as ready to avoid this behavior ? Thanks a lot for helping
2018/03/22
1,742
5,144
<issue_start>username_0: I would like to get the feature names of a data set after it has been transformed by SKLearn OneHotEncoder. In [active\_features\_ attribute in OneHotEncoder](https://stackoverflow.com/a/33596950) one can see a very good explanation how the attributes `n_values_`, `feature_indices_` and `active_features_` get filled after `transform()` was executed. My question is: For e.g. DataFrame based input data: ``` data = pd.DataFrame({"a": [0, 1, 2,0], "b": [0,1,4, 5], "c":[0,1,4, 5]}).as_matrix() ``` How does the code look like to get from the original feature names `a`, `b` and `c` to a list of the transformed feature names (like e.g: `a-0`,`a-1`, `a-2`, `b-0`, `b-1`, `b-2`, `b-3`, `c-0`, `c-1`, `c-2`, `c-3` or `a-0`,`a-1`, `a-2`, `b-0`, `b-1`, `b-2`, `b-3`, `b-4`, `b-5`, `b-6`, `b-7`, `b-8` or anything that helps to see the assignment of encoded columns to the original columns). Background: I would like to see the feature importances of some of the algorithms to get a feeling for which feature have the most effect on the algorithm used.<issue_comment>username_1: You can use `pd.get_dummies()`: ``` pd.get_dummies(data["a"],prefix="a") ``` will give you: ``` a_0 a_1 a_2 0 1 0 0 1 0 1 0 2 0 0 1 3 1 0 0 ``` which can automatically generates the column names. You can apply this to all your columns and then get the columns names. No need to convert them to a numpy matrix. So with: ``` df = pd.DataFrame({"a": [0, 1, 2,0], "b": [0,1,4, 5], "c":[0,1,4, 5]}) data = df.as_matrix() ``` the solution looks like: ``` columns = df.columns my_result = pd.DataFrame() temp = pd.DataFrame() for runner in columns: temp = pd.get_dummies(df[runner], prefix=runner) my_result[temp.columns] = temp print(my_result.columns) >>Index(['a_0', 'a_1', 'a_2', 'b_0', 'b_1', 'b_4', 'b_5', 'c_0', 'c_1', 'c_4', 'c_5'], dtype='object') ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: If I understand correctly you can use `feature_indices_` to identify which columns correspond to which feature. e.g. ``` import pandas as pd from sklearn.preprocessing import OneHotEncoder data = pd.DataFrame({"a": [0, 1, 2,0], "b": [0,1,4, 5], "c":[0,1,4, 5]}).as_matrix() ohe = OneHotEncoder(sparse=False) ohe_fitted = ohe.fit_transform(data) print(ohe_fitted) print(ohe.feature_indices_) # [ 0 3 9 15] ``` From the above `feature_indices_` we know if we spliced the OneHotEncoded data from `0:3` we would get the features corresponding to the first column in `data` like so: ``` print(ohe_fitted[:,0:3]) ``` Each column in the spliced data represents a value in the first feature. The first column is 0, the second 1 and the third column is 2. To illustrate this on the spliced data, the column labels would look like: ``` a_0 a_1 a_2 [[ 1. 0. 0.] [ 0. 1. 0.] [ 0. 0. 1.] [ 1. 0. 0.]] ``` Note that features are sorted first before they are encoded. Upvotes: 2 <issue_comment>username_3: There is a OneHotEncoder that does all the work for you. Package sksurv has a OneHotEncoder that will return a pandas Dataframe with all the column names set-up for you. Check it out. Make sure you set-up an environment to play with the encoder to ensure it doesn't break your current environment. This encoder saved me a lot of time and effort. 
[scikit-survival GitHub](https://github.com/sebp/scikit-survival) [OneHotEncoder Documentation](https://scikit-survival.readthedocs.io/en/latest/generated/sksurv.preprocessing.OneHotEncoder.html#sksurv.preprocessing.OneHotEncoder) Upvotes: 0 <issue_comment>username_4: You can do that with the open source package feature-engine:

```
import pandas as pd
from sklearn.model_selection import train_test_split
from feature_engine.encoding import OneHotEncoder

# load titanic data from openML
data = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl')

# divide into train and test
X_train, X_test, y_train, y_test = train_test_split(
    data[['sex', 'embarked']],  # predictors for this example
    data['survived'],           # target
    test_size=0.3,              # percentage of obs in test set
    random_state=0)             # seed to ensure reproducibility

ohe_enc = OneHotEncoder(
    top_categories=None,
    variables=['sex', 'embarked'],
    drop_last=True)

ohe_enc.fit(X_train)

X_train = ohe_enc.transform(X_train)
X_test = ohe_enc.transform(X_test)

X_train.head()
```

You should see this output returned:

```
      sex_female  embarked_S  embarked_C  embarked_Q
501            1           1           0           0
588            1           1           0           0
402            1           0           1           0
1193           0           0           0           1
686            1           0           0           1
```

More details about feature-engine here:

<https://www.trainindata.com/feature-engine>

<https://github.com/feature-engine/feature_engine>

<https://feature-engine.readthedocs.io/en/latest/>

Upvotes: 2 <issue_comment>username_5: `OneHotEncoder` now has a method `get_feature_names`. You can use `input_features=data.columns` to match to the training data. Upvotes: 0
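For instance, a minimal sketch of that approach (assuming scikit-learn 0.20 or later, where `OneHotEncoder.get_feature_names` exists; `df` is the original DataFrame from the question, before `.as_matrix()`):

```
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({"a": [0, 1, 2, 0], "b": [0, 1, 4, 5], "c": [0, 1, 4, 5]})

ohe = OneHotEncoder(sparse=False)
encoded = ohe.fit_transform(df)

# map the encoded columns back to the original column names
feature_names = ohe.get_feature_names(input_features=df.columns)
print(feature_names)
# ['a_0' 'a_1' 'a_2' 'b_0' 'b_1' 'b_4' 'b_5' 'c_0' 'c_1' 'c_4' 'c_5']
```

The resulting names can then be matched against, for example, a model's `feature_importances_`.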
2018/03/22
1,489
4,582
<issue_start>username_0: I have three branches Master, Branch1 and Branch2, I am currently working on Branch2. I have done some local changes but I want to commit these changes to a new branch (Branch3). After doing some research I see that I should create a new command using **"git checkout -b [name\_of\_your\_new\_branch]"** I am scared that if I do a checkout new branch all my local changes will be gone. Can someone help confirm what is the safest way to commit and push to a new branch when working on branch2.
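For reference, a minimal command sketch of the usual approach (branch names are placeholders): `git checkout -b` creates the new branch from your current position on Branch2 and keeps your uncommitted local changes in the working tree, so they are not lost:

```
git checkout -b Branch3        # create and switch to Branch3; local edits are carried over
git status                     # confirm the changes are still there
git add .
git commit -m "Describe the changes"
git push -u origin Branch3
```

Branch2 itself is left untouched, since nothing was committed to it.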
2018/03/22
907
2,382
<issue_start>username_0: I have these columns: ``` text.NANA text.22 text.32 1 Female RNDM_MXN95.tif No NA 12 Male RNDM_QOS38.tif No NA 13 Female RNDM_WQW90.tif No NA 14 Male RNDM_BKD94.tif No NA 15 Male RNDM_LGD67.tif No NA 16 Female RNDM_AFP45.tif No NA ``` I want to create a column that only has the barcode that starts with `RNDM_` and ends with `.tif`, but not including `.tif`. The tricky part is to get rid of the gender information that is also in the same column. There are a random amount of spaces between the gender information and the `RNDM_`: ``` text.NANA text.22 text.32 BARCODE 1 Female RNDM_MXN95.tif No NA RNDM_MXN95 12 Male RNDM_QOS38.tif No NA RNDM_QOS38 13 Female RNDM_WQW90.tif No NA RNDM_WQW90 14 Male RNDM_BKD94.tif No NA RNDM_BKD94 15 Male RNDM_LGD67.tif No NA RNDM_LGD67 16 Female RNDM_AFP45.tif No NA RNDM_AFP45 ``` I made a very poor attempt with this, but it didn't work: ``` dfrm$BARCODE <- regexpr("RNDM_", dfrm$text.NANA) # [1] 8 6 9 7 7 8 9 9 8 8 9 9 6 6 7 8 9 8 # attr(,"match.length") # [1] 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 # attr(,"useBytes") # [1] TRUE ``` Please help. Thanks!<issue_comment>username_1: You can use sapply() and strsplit to do it easy, let me show you: `sapply(strsplit(dfrm$text.NANA, "_"),"[", 1)` That should work. Edit: `sapply(strsplit(x, "[ .]+"),"[", 2)` Upvotes: 0 <issue_comment>username_2: So you just want to remove the file extension? Use [`file_path_sans_ext`](https://stat.ethz.ch/R-manual/R-devel/library/tools/html/fileutils.html): ``` dfrm$BARCODE = file_path_sans_ext(dfrm$text.NANA) ``` If there’s more stuff in front, you can use the following regular expression to extract just the suffix: ``` dfrm$BARCODE = stringr::str_match(dfrm$text.NANA, '(RNDM_.*)\\.tif')[, 2] ``` Note that I’m using the {stringr} package here because the base R functions for extracting regex matches are terrible. Nobody uses them. I strongly recommend *against* using `strsplit` here because it’s underspecified: from reading the code it’s absolutely not clear what the purpose of that code is. Write code that is self-explanatory, not code that requires explanation in a comment. Upvotes: 3 [selected_answer]
2018/03/22
334
1,100
<issue_start>username_0: In React - I have an input mask that turns a user input into: (###) ###-#### Unfortunately, that data can't go to the DB in that format, so I need to strip the mask so it saves as ##########. I'm trying to form a RegEx string to accomplish this in a string.replace to no avail. I've tried a few attempts, and finally got it to remove the first paren '(', I thought adding the other characters I needed to remove would work as I go, but it's not. ``` const number = value.replace(/\([\(\)]\)/, ""); ``` **TL;DR** Can someone assist with a regular expression to turn `(###) ###-####` into `##########`? Any supporting documentation as to **why** it works would be greatly appreciated as well.<issue_comment>username_1: ```js const value = '(123) 4348-43492' const number = value.replace(/\D/g, ""); console.log(number); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: How about replacing non-digits with `''` ```js let value = "(111) 111-1111"; let number = value.replace(/[^\d]/g, ""); console.log(number); ``` You could also use `/\D/g` Upvotes: 2
2018/03/22
238
738
<issue_start>username_0: I have to match all `-` inside the following pattern ``` "word-word": #expected result find one - "word-word" #expected result no - find because the : is missing in the end pattern "word-word-word": #expected result find two - "word-word #expected result no - find because the end pattern is ": ```
2018/03/22
506
1,154
<issue_start>username_0: For the subset argument i want to specify the first n-1 columns. How'll I do that? For example: in the following dataset ``` 0 1 2 3 4 5 6 0 0 12 1 99 23 2 75 1 0 12 1 99 23 2 66 2 5 12 1 99 23 2 66 ``` I want the result to be 1st and 3 rd row only: ``` 0 1 2 3 4 5 6 0 0 12 1 99 23 2 75 1 5 12 1 99 23 2 66 ``` If I do something like the following I get error: ``` df.drop_duplicates(subset=[0:df.shape[1]-1],keep='first',inplace=True) ```<issue_comment>username_1: You're close, but you can index on the column names, it's easier. ``` df.drop_duplicates(subset=df.columns[:-1], keep='first') 0 1 2 3 4 5 6 0 0 12 1 99 23 2 75 2 5 12 1 99 23 2 66 ``` Where, ``` df.columns[1:].tolist() ['0', '1', '2', '3', '4', '5'] ``` This generalises to any dataFrame regardless of what its column names are. Upvotes: 2 <issue_comment>username_2: You can using `duplicated` ``` df[~df.iloc[:,:-1].duplicated()] Out[53]: 0 1 2 3 4 5 6 0 0 12 1 99 23 2 75 2 5 12 1 99 23 2 66 ``` Upvotes: 3 [selected_answer]
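Tying this back to the original attempt, the same subset also works with the in-place form used in the question (a small sketch, assuming the frame is named `df` as above):

```
# keep the first occurrence, comparing on every column except the last
df.drop_duplicates(subset=df.columns[:-1], keep='first', inplace=True)
```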
2018/03/22
415
1,082
<issue_start>username_0: I need to import an Audit Command Language(ACL) table created in another project. The tables are saved out to .FIL (ex. FY12P12.FIL) files. When I use the GUI to import the table the resulting data is a jumbled mess. I tried using this command: ``` IMPORT LAYOUT "\\mypath\FY12P12" to FY12P12_Table ``` resulting in this error: File\mypath\FY12P1217.LAYOUT cannot be found Is there any way to import this .FIL file and get good data?
2018/03/22
750
2,461
<issue_start>username_0: I have a sql query where I should have something like

```
select * from transactionDetails
WHERE OrderID in (400376, 400379)
AND IF TransactionDate <> ProcessingDate
       TransactionId in (2,3,9,14,15)
    ELSE
       TransactionId in (2,3,9)
```

But this gives me an error at IF and TransactionId. Then I tried

```
select * from transactionDetails
WHERE OrderID in (400376, 400379)
AND ((TransactionDate <> ProcessingDate AND TransactionId in (2,3,9,14,15))
  OR (TransactionDate = ProcessingDate AND TransactionId in (2,3,9)))
```

But this gives me the same result for both <> and = conditions Can someone tell me what I am doing wrong? Thanks<issue_comment>username_1: Just organize the parentheses better, like a math sentence

```
select * from transactionDetails
WHERE OrderID in (400376, 400379)
AND (
      (TransactionDate <> ProcessingDate AND TransactionId in (2,3,9,14,15))
   OR (TransactionDate = ProcessingDate AND TransactionId in (2,3,9))
    )
```

Upvotes: 0 <issue_comment>username_2: It is impossible for both of those to return the same. `<>` and `=` are mutually exclusive. I suspect a problem with your testing or understanding. Well if one of the dates is null then they would both return false. No way they can both return true.

```
(
     (TransactionDate <> ProcessingDate AND TransactionId in (2, 3, 9, 14, 15))
  OR (TransactionDate = ProcessingDate AND TransactionId in (2, 3, 9))
)
```

I will go out on a limb here and assert

```
where TransactionDate <> ProcessingDate and TransactionDate = ProcessingDate
```

Will return zero rows every time Upvotes: 2 [selected_answer]<issue_comment>username_3: In addition to the above answers, you might want to try using `UNION ALL` instead of `OR` and check which one is faster:

```
SELECT * FROM transactionDetails
WHERE OrderID in (400376, 400379)
AND TransactionDate <> ProcessingDate
AND TransactionId in (2,3,9,14,15)

UNION ALL

SELECT * FROM transactionDetails
WHERE OrderID in (400376, 400379)
AND TransactionDate = ProcessingDate
AND TransactionId in (2,3,9)
```

Upvotes: 0 <issue_comment>username_4: As an alternative solution you can use this.

```
select * from transactionDetails
WHERE OrderID in (400376, 400379)
AND (
      TransactionId in (2,3,9)
   OR (TransactionDate <> ProcessingDate AND TransactionId IN (14,15))
    )
```

Upvotes: 0
2018/03/22
1,574
5,822
<issue_start>username_0: I have the following config that works fine for loading a bunch of files into BigQuery: ``` config= { 'configuration'=> { 'load'=> { 'sourceUris'=> 'gs://my_bucket/my_files_*', 'schema'=> { 'fields'=> fields_array }, 'schemaUpdateOptions' => [{ 'ALLOW_FIELD_ADDITION'=> true}], 'destinationTable'=> { 'projectId'=> 'my_project', 'datasetId'=> 'my_dataset', 'tableId'=> 'my_table' }, 'sourceFormat' => 'NEWLINE_DELIMITED_JSON', 'createDisposition' => 'CREATE_IF_NEEDED', 'writeDisposition' => 'WRITE_TRUNCATE', 'maxBadRecords'=> 0, } }, } ``` This is then executed with the following where `client` is pre-initialised: ``` result = client.execute( api_method: big_query.jobs.insert, parameters: { projectId: 'my_project', datasetId: 'my_dataset' }, body_object: config ) ``` I am now trying to write the equivalent to create an [external / federated data source](https://cloud.google.com/bigquery/external-data-sources) instead of loading the data. I need to do this to effectively create staging tables for ETL purposes. I have successfully done this using the BigQuery UI but need to run in code as it will eventually be a daily automated process. I've having a bit of trouble with the API docs and can't find any good examples to refer to. Can anyone help? Thanks in advance!<issue_comment>username_1: By creating an external data source, do you mean create a table that refers to an external data source? In this case you can use bigquery.tables.insert and fill out the [externalDataConfiguraiton](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration). The table can then be used in queries to read from the external data source. If you only want to use the external data source in one query, you can attach a temporary external table with the query, by putting the table definition to [tableDefinitions](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions). In command line it looks like this: `bq query --external_table_definition=avroTable::AVRO=gs://path-to-avro 'SELECT * FROM avroTable'` Upvotes: 2 <issue_comment>username_2: For anyone attempting the same, here's what I used to get it working. There are not many working examples online and the docs take some deciphering, so hope this helps someone else! ``` config= { "kind": "bigquery#table", "tableReference": { "projectId": 'my_project', "datasetId": 'my_dataset', "tableId": 'my_table' }, "externalDataConfiguration": { "autodetect": true, "sourceUris": ['gs://my_bucket/my_files_*'], 'sourceFormat' => 'NEWLINE_DELIMITED_JSON', 'maxBadRecords'=> 10, } } ``` The documentation for `externalDataConfiguration` can be found in the BigQuery [REST API reference](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource) and ["Try this API"](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/insert#try-it) section for `bigquery.tables.insert`. 
Then as pointed out in username_1's answer you run `bigquery.tables.insert` instead of `bigquery.jobs.insert` ``` result = client.execute( api_method: big_query.tables.insert, parameters: { projectId: my_project, datasetId: my_dataset }, body_object: config ) ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: Use idiomatic Cloud libraries when possible =========================================== Use the BigQuery module in the [idiomatic Ruby client](https://github.com/GoogleCloudPlatform/google-cloud-ruby) for GCP, which is Generally Available, instead of [google-api-ruby-client](https://github.com/google/google-api-ruby-client), which is both in "maintenance mode only" and "alpha". You can find this recommendation [here](https://github.com/google/google-api-ruby-client#user-content-working-with-google-cloud-platform-apis) and [here](https://cloud.google.com/bigquery/docs/reference/libraries#client-libraries-install-ruby). Authentication: =============== You can define project and access using [environment variables](https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud/v0.51.1/guides/authentication). How to create an External Data Source object ============================================ This is [an example](http://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud-bigquery/v1.1.0/google/cloud/bigquery/project?method=external-instance) of creating an External Data Source with `bigquery.external`. I have slightly modified it to add relevant configurations from your solution. ``` bigquery = Google::Cloud::Bigquery.new json_url = "gs://my_bucket/my_files_*" json_table = bigquery.external csv_url do |json| json.autodetect = true json.format = "json" json.max_bad_records = 10 end ``` The object configuration methods are [here](http://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud-bigquery/v1.1.0/google/cloud/bigquery/external/datasource). For example: `autodetect`, `max_bad_records`, `urls`, etc. How to query it: ================ ``` data = bigquery.query "SELECT * FROM my_ext_table", external: { my_ext_table: json_table } data.each do |row| puts row[:name] end ``` **Note:** Also, both `writeDisposition` and `createDisposition` are only used for load/copy/query jobs which modify permanent BigQuery tables and wouldn't make much sense for an External Data Source. In fact they don't appear neither in the [REST API reference](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource) nor in ["Try this API" section](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/insert) for `externalDataConfiguration`. Upvotes: 2
2018/03/22
410
1,682
<issue_start>username_0: How do you handle blank values for SSAS Tabular?. Now i am using ssas model 1400. In SSas Multidimensional, we had "unkown member". Can i have something similar using ssas Tabular?.<issue_comment>username_1: In SSAS Multidimensional you could control whether the unknown member was visible, hidden, or it throws an error if a fact points to an invalid dimension. In Tabular if a fact row points to a dimension key that doesn't exist then a new blank row is automatically inserted into the dimension table and the fact row is tied to it. If you aren't happy with this behavior then you need to change the data in SQL. For example, assign the problem rows a -1 dimension key and physically insert a -1 row into the dimension wording it however you like. There is a good [blog post](https://www.skylinetechnologies.com/Blog/Skyline-Blog/November_2017/how-quickly-detect-tabular-referential-integrity) which outlines how you can detect relational integrity issues like this. Upvotes: 2 <issue_comment>username_2: Greg explains the default behavior very clearly. I would add that i have yet come across a project where I dont add an "Unknown" record in most dimensions. Ideally the data will be very clean & there wont be any unknowns, but thats not reality unfortunately. Where this happens depends on your data, systems involved and the process surrounding them but it usually happens in one of a few places: 1. in the source system. 2. during ETL, check if an unknown record exists, if not, create it. 3. add one time during deployment/release. Its almost always better to build in the behavior you want, instead of depending on defaults. Upvotes: 0
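To illustrate the ETL variant described above, a hedged T-SQL sketch (the table and column names here are made up for the example):

```
-- seed a -1 "Unknown" member once, if it is not already there
IF NOT EXISTS (SELECT 1 FROM dbo.DimProduct WHERE ProductKey = -1)
    INSERT INTO dbo.DimProduct (ProductKey, ProductName)
    VALUES (-1, 'Unknown');

-- point orphaned fact rows at the Unknown member
UPDATE f
SET f.ProductKey = -1
FROM dbo.FactSales AS f
LEFT JOIN dbo.DimProduct AS d
    ON d.ProductKey = f.ProductKey
WHERE d.ProductKey IS NULL;
```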
2018/03/22
1,106
2,114
<issue_start>username_0: I have a data frame with the intercepts and slopes for six lines. There are other questions that address this (i.e., this [question](https://stackoverflow.com/questions/45287549/does-geom-abline-plot-data-multiple-times)), but when I follow the same approach, it doesn't seem to work. When I try the following, a plot with only a single line is returned: ``` library(ggplot2) library(tibble) d <- tibble::tribble( ~ID, ~b0, ~b1, 1L, -0.253642820580212, 0.0388815148531228, 2L, -0.247859980353316, 0.0462798786249876, 3L, -0.241628306421253, 0.0418616653609702, 4L, -0.476161762130615, 0.0216251842526953, 5L, -0.372079433686108, 0.0564612163378217, 6L, -0.0983318344106016, 0.0759661386473856 ) ggplot(d, aes(intercept = b0, slope = b1)) + geom_abline() + xlim(0, 10) + ylim(0, 10) ``` [![plot with one line](https://i.stack.imgur.com/QSk7D.png)](https://i.stack.imgur.com/QSk7D.png) How can I plot the six lines associated with the six intercepts and slopes?<issue_comment>username_1: Here is what I would do: ``` library(tibble) d <- tibble::tribble( ~ID, ~b0, ~b1, 1L, -0.253642820580212, 0.0388815148531228, 2L, -0.247859980353316, 0.0462798786249876, 3L, -0.241628306421253, 0.0418616653609702, 4L, -0.476161762130615, 0.0216251842526953, 5L, -0.372079433686108, 0.0564612163378217, 6L, -0.0983318344106016, 0.0759661386473856 ) ggplot(d) + geom_abline(aes(intercept = b0, slope = b1, group = "ID")) + xlim(0, 10) + ylim(0, 10) ``` Not sure about the limits though Upvotes: 2 <issue_comment>username_2: You just want to move the `aes` to `geom_abline`. That is ``` ggplot(d) + geom_abline(aes(intercept = b0, slope = b1)) + xlim(0, 10) + ylim(-1, 1) ``` I hope this helps! Upvotes: 2 <issue_comment>username_3: ``` ggplot(d) + geom_abline(aes(intercept = b0, slope = b1, color=factor(ID))) + xlim(0, 100) + ylim(0, 10) ``` [![enter image description here](https://i.stack.imgur.com/T977u.png)](https://i.stack.imgur.com/T977u.png) Upvotes: 2
2018/03/22
614
1,956
<issue_start>username_0: how to count the number of rows with data based on the filter applied? All I can find is methods like `xlUp, xlDown` which I cant apply to this as it will give me last row as opposed the number of rows with filtered data. example ``` 1 animal age 2 dog 10 3 cat 15 ``` I apply the filter on cat and get the following table: ``` 1 animal age 3 cat 15 ``` with xlUp or down it will tell me last row number is 3, but obviously there is only 1 row with filtered data<issue_comment>username_1: The worksheet's [SUBTOTAL function](https://support.office.com/en-us/article/subtotal-function-7b027003-f060-4ade-9040-e478765b9939) can count visible data in a column. ``` dim i as long i = application.subtotal(103, columns(1)) debug.print i ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: What about this?, It is counting the visible cells in the first column in a filtered range. ``` Sub test() data_visible_rows = ActiveSheet.AutoFilter.Range.Columns(1).SpecialCells(xlCellTypeVisible).Count - 1 End Sub ``` Upvotes: 2 <issue_comment>username_3: Suppose you have data in Range A1:C3 [![enter image description here](https://i.stack.imgur.com/A5ReV.png)](https://i.stack.imgur.com/A5ReV.png) and that you applied the filter manually [![enter image description here](https://i.stack.imgur.com/fhwRF.png)](https://i.stack.imgur.com/fhwRF.png) then the statement ``` dim rng as Range set rng = Range("$A$2:$C$3").SpecialCells(xlCellTypeVisible) ``` would return a reference to a Range with visible cells only. So you leave out the header. you can then call ``` rng.Count / rng.Columns.Count ``` to obtain the desired result. Of course you could do it in one go without declaring anything. My previous explanation was for instructive purposes only. ``` Range("$A$2:$C$3").SpecialCells(xlCellTypeVisible).Count / Range("$A$2:$C$3").Columns.Count ``` This should work. Upvotes: 0
2018/03/22
1,503
5,701
<issue_start>username_0: Consider the following method `count` that maps type level natural numbers to value level ones: ``` {-# LANGUAGE DataKinds , KindSignatures , PolyKinds , FlexibleInstances , FlexibleContexts , ScopedTypeVariables #-} module Nat where data Nat = Z | S Nat data Proxy (a :: k) = Proxy class Count a where count :: a -> Int instance Count (Proxy Z) where count _ = 0 instance Count (Proxy n) => Count (Proxy (S n)) where count _ = succ $ count (Proxy :: Proxy n) ``` It seems to work in repl: ``` λ count (Proxy :: Proxy (S(S(S Z))) ) 3 ``` For the recursion to terminate, there must be some indication at run time of the type of the `Proxy`, but types are supposed to be erased at run time. I can even replace `data` with `newtype` in the `Proxy` definition: ``` newtype Proxy (a :: k) = Proxy () ``` — Which would oblige it to have the same memory representation every time, so that it were `Coercible`. With that in mind: 1. I totally do not understand how a method gets dispatched. I would theorize that, either: * A table of form *(Type, Method name) ⟶ Function* is generated by the compiler. Then, at run time, all the objects are tagged with their type, and a method is a higher order function that looks at the type tag and looks up the corresponding function in the table. But people say types are completely erased during compilation, so this does not add up. * A table of form *Method name ⟶ Function* is attached to every object, and a method invocation is represented as *Method name*. Then, a function application looks up the corresponding *Function* and applies it when it is forced. To save space, the table may be shared by all the members of the type, but then it is no different from having objects tagged with type. * A table of form *(Method name, Instance index) ⟶ Function* is generated by the compiler, and tables of form *(Method name -> Instance index)* are attached to objects at run time. This means an object does not know its type, but knows the classes it belongs to, and the correct choice of instance. I do not know if there are any advantages to this approach.  So, I do not understand how the run time system determines the right choice for the method instance if the objects are not tagged with their type in some way, direct or indirect. People all around are talking about some dictionary passing stuff, but I totally do not get it: * What are the keys? * What are the values? * Where does the dictionary reside? (On the heap, in the program text, still elsewhere?) * Who has pointers to the dictionary?...Et cetera. 2. Even if there is a trick in place that allows for the choice of method instance without tagging objects with types, there are only 2 instances of `Count`, so the choice of a method may only carry 1 bit of information. (For example, there may be a `Proxy` with a tag that says *"apply methods from instance A1 to me"*, and the method instance in A1 retags the `Proxy` with *"apply methods from instance A0 to me"*.) This is clearly not enough. There must be something at run time that ticks down every time the recursive instance is applied. Can you walk me through the execution of this code, or throw in some links that describe the applicable particulars of the runtime system?<issue_comment>username_1: Type classes are desugared to records. Everything happens at compile time. 
``` data Count a = Count { count :: a -> Int } instance_Count_ProxyZ :: Count (Proxy Z) instance_Count_ProxyZ = Count { count = \_ -> 0 } instance_Count_ProxySn :: Count (Proxy n) -> Count (Proxy (S n)) instance_Count_ProxySn context = Count { count = \_ -> succ (count context (Proxy :: Proxy n)) } ``` Whenever we call `count :: Count n => n -> Int`, the desugarer (that runs after the typechecker) looks at the inferred type for `n`, and tries to construct a record of type `Count n`. So if we write `count (Proxy :: Proxy (S (S (S Z))))`, we need a record of type `Count (S (S (S Z)))`, and the only matching instance is `Count (Proxy n) -> Count (Proxy (S n))`, with `n ~ S (S Z)`. This means we now have to construct its argument, of type `Count (Proxy (S (S Z)))`, and so on. Note that this is also what happens in the process of desugaring the application of `count` in the instance for `Proxy (S n)`. After this process there are no type classes left, everything is just records. Upvotes: 4 [selected_answer]<issue_comment>username_2: Whenever a constraint turns up at the LHS of a function declaration, like ``` count :: (Count a) => a -> Int ``` it's as it were syntactic sugar for ``` count' :: CountDictionary a -> a -> Int ``` where `CountDictionary a` is a runtime-suitable (but singleton – the compiler always chooses exactly one instance for each type!) representation of, indeed, the *methods* of the `Count` class, i.e. ``` data CountDictionary a = CountDict { countMethod :: a -> Int } ``` Before I elaborate further, let me rewrite all without those ugly proxies in favour of `TypeApplications`: ``` {-# LANGUAGE AllowAmbiguousTypes, TypeApplications, ScopedTypeVariables, UnicodeSyntax #-} class Count a where count :: Int ⇒ count' :: CountDictionary a -> Int w/ data CountDictionary a = CountDict Int instance Count Z where count = 0 instance ∀ n . Count n => Count (S n) where count = succ $ count @n ``` Now when you write `count @(S(S(S Z)))`, it's represented by ``` count @(S(S(S Z))) = count' ( CountDict (succ $ count @(S Z)) ) = count' ( CountDict (succ $ count' (CountDict (succ $ count @Z))) ) = count' ( CountDict (succ $ count' (CountDict (succ $ count' (CountDict 0)))) ) ``` Upvotes: 3
2018/03/22
500
1,838
<issue_start>username_0: Is it possible to write Strings to a file in utf-32 format? For example: the RandomAccessFile class only offers the writeUTF() Method, which writes the String in a modifed UTF-8 format. Lets say my task is to write every existing unicode character into a file :).<issue_comment>username_1: You should convert your string to bytes in the UTF-32 format, and then write write those bytes to your random file ``` RandomAccessFile file = ... String str = "Hi"; byte[] bytes = str.getBytes("UTF-32"); file.write(bytes); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You could Use `BufferedWriter`: ``` public class SampleCode { public static void main(String[] args) throws IOException { String aString = "File contents"; BufferedWriter out = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("outfilename"), "UTF-32")); try { out.write(aString); } finally { out.close(); } } } ``` Or you could use > > Analogously, the class java.io.OutputStreamWriter acts as a bridge between characters streams and bytes streams. Create a Writer with this class to be able to write bytes to the file: > > > ``` Writer out = new OutputStreamWriter(new FileOutputStream(outfile), "UTF-32"); ``` Or you can also use String format like below: ``` public static String convertTo32(String toConvert){ for (int i = 0; i < toConvert.length(); ) { int codePoint = Character.codePointAt(toConvert, i); i += Character.charCount(codePoint); //System.out.printf("%x%n", codePoint); String utf32 = String.format("0x%x%n", codePoint); return utf32; } return null; } ``` See [How can I convert UTF-16 to UTF-32 in java?](https://stackoverflow.com/questions/36393811/). Upvotes: 1
2018/03/22
434
1,682
<issue_start>username_0: I'm also using webpack to transpile and bundle onto public. What are the pros and cons of keeping image assets in public vs non-public. Which project structure is better for a react app? ``` public/ -images/ --favicon.ico --(other-image-files...) -index.html -bundle.js src/ -components/ -style/ -images/ -utils/ ```<issue_comment>username_1: Overall, the idea is that images in the public directory are accessible by URL outside your app. Anything in src will only be built in if you load it through webpack import. I generally keep things in src unless it is publicly shared. Upvotes: 1 <issue_comment>username_2: The `src` directory has a long history of source code being compiled, linked, and eventually ending up as an executable. Given today's build tools, that concept is still current. With `webpack`, `brunch`, or `parcel`, you never really "serve" anything directly from a `public` directory - everything is transpiled, bundled, etc. into a `public` or `dist` output directory. With that in mind, I would submit that `src` is completely appropriate for images and even your `index.html` file (noting that the `index.html` that ends up in your distribution folder is likely different that the original "source" `index.html` that you created). I hope that helps. Not sure if it is *better*, but I offer it as a point of view given your question. Upvotes: 2 <issue_comment>username_3: <https://facebook.github.io/create-react-app/docs/using-the-public-folder> The recommended approach in the CRA docs is to put images in `src` and load it by importing it. Using `public` folder should be only for exceptional cases. Upvotes: -1
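To make the difference concrete, a small sketch (file names are illustrative, assuming a webpack/CRA-style setup):

```
// src: the bundler processes the file, fingerprints it, and fails the build if it is missing
import logo from './images/logo.png';
const Header = () => <img src={logo} alt="logo" />;

// public: served as-is by URL and bypasses the build entirely
const PlainImage = () => <img src="/images/favicon.ico" alt="favicon" />;
```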
2018/03/22
676
2,529
<issue_start>username_0: This statement needs to output the # of days absent + '.00'. Currently the statement works to return a value of 500 if the student was absent 5 days. I need the output be formatted like 5.00. My issue is that every time I add the decimal '.00' to this statement I get:

> 
> Conversion failed when converting the varchar value '.00' to data type int.
> 
> 

Again it runs fine without the period. How do I cast this statement to include the period and return both statements as int so they can be subtracted from one another?

```
Cast(cast(left(ISNULL(CONVERT(varchar,(select CONVERT(int,(SUM(dayenrolled))) from attsum
 where suniq = sd.suniq and trkuniq = ss.trkuniq and ddate between ss.edate and ISNULL(ss.xdate,GETDATE()))),'')+'.00',5)as varchar(6)) as int)
-
Cast(cast(Left(ISNULL(CONVERT(varchar,(select CONVERT(int,(SUM(dayapportion))) from attsum
 where suniq = sd.suniq and trkuniq = ss.trkuniq and ddate between ss.edate and ISNULL(ss.xdate,GETDATE()))),'')+'.00',5)as varchar(6))as int)[ESSA # Days Absent],
--Statement needs a period to output .00 left of the # of days absent and to start on line 2.
```
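Setting the T-SQL details aside for a moment, the output being asked for is just an integer difference rendered with two decimal places. A small Python sketch of that target format follows, with made-up sample values (`days_enrolled` and `days_apportioned` are hypothetical):

```python
# Illustration of the desired output only: a whole-number day count
# displayed with two decimal places, and the difference of two such counts.
days_enrolled = 180    # hypothetical sample value
days_apportioned = 175  # hypothetical sample value

days_absent = days_enrolled - days_apportioned
print(f"{days_absent:.2f}")  # -> 5.00 (numeric subtraction first, format last)
```

However it ends up being written in SQL, the subtraction should happen on numeric values, and the '.00' presentation should be applied only when the result is displayed.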
2018/03/22
591
1,814
<issue_start>username_0: I have a directory with several types of files. How can I count the number of files in a directory with 2 types of extensions (.txt and .csv)? In my search I found how to count with only one certain extension<issue_comment>username_1: Assume `path` is the path to your folder. Then ``` import os # get list of files list_of_files = os.listdir(path) # txt files num_txt = len([x for x in list_of_files if x.endswith(".txt")]) # csv files num_csv = len([x for x in list_of_files if x.endswith(".csv")]) ``` Upvotes: 2 <issue_comment>username_2: A better variant of Yilun answer (which is already nice since it doesn't scan the directory twice like `len(glob.glob("*.csv"))` and `len(glob.glob("*.txt"))` would do for instance). That one doesn't create an extra list (faster) using `sum` (booleans are summed as 0 or 1) and a generator comprehension: ``` import os # get list of files list_of_files = os.listdir(path) # txt files num_txt = sum(x.endswith(".txt") for x in list_of_files) # csv files num_csv = sum(x.endswith(".csv") for x in list_of_files) ``` gencomps+sum is cool, but it still loops/tests twice on `list_of_files`. Good old loop isn't that bad after all (at least it scans once and shortcuts): ``` num_txt, num_csv = 0,0 for x in list_of_files: if x.endswith(".txt"): num_txt += 1 elif x.endswith(".csv"): num_csv += 1 ``` BTW to count both at the same time, use the tuple param capability of `endswith` ``` # csv & txt files num_txt_csv = sum(x.endswith((".csv",".txt")) for x in list_of_files) ``` Upvotes: 2 <issue_comment>username_3: You can use regex to filter the filename: ``` import os import re txt_or_csv = [f for f in os.listdir(path) if re.search(r'.*\.(txt|csv)$', f)] print(len(txt_or_csv)) ``` Upvotes: 2 [selected_answer]
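As a further variant alongside the answers above, `pathlib` plus `collections.Counter` counts every extension in a single pass; this assumes the same non-recursive, single-directory scan as `os.listdir`.

```python
from collections import Counter
from pathlib import Path

path = "."  # directory to scan (assumption: non-recursive, like os.listdir)

# Count files per extension in one pass.
counts = Counter(p.suffix for p in Path(path).iterdir() if p.is_file())
num_txt = counts.get(".txt", 0)
num_csv = counts.get(".csv", 0)
print(num_txt, num_csv, num_txt + num_csv)
```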
2018/03/22
714
2,375
<issue_start>username_0: I want to display a textView in popup Window when I click a View , but the calculation needs time, so I make the calculation in AsyncTask, but how to show the popup Window immediately after the AsyncTask process is finished?

```
public void onClick(View widget) {
 MyAsyncTask asyncTask = new MyAsyncTask(new AsyncResponse() {
 @Override
 public void processFinish(Object output) {
 meaning_result = (String) output;
 }
 });
 asyncTask.execute("xxxxx");
 showPopupWindow(widget);
}
```

This is my first thought but the `showPopupWindow(widget)` executes first and the `meaning_result` has not assigned yet. How to make `showPopupWindow(widget)` runs once the meaning\_result is assigned?
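The general pattern here, whatever the platform, is to trigger the UI action from the completion callback instead of right after starting the background work; in the code above that means calling `showPopupWindow(widget)` inside `processFinish` rather than immediately after `asyncTask.execute(...)` (assuming `processFinish` is invoked from `onPostExecute`, it already runs on the UI thread). A language-neutral sketch of the callback shape, in Python with a plain thread, with illustrative names only:

```python
import threading

def slow_calculation(text):
    # stands in for the AsyncTask's background work
    return text.upper()

def run_async(work, arg, on_finish):
    # run `work` off the main flow, then hand the result to the callback
    def target():
        result = work(arg)
        on_finish(result)  # the popup would be shown here, not earlier
    threading.Thread(target=target).start()

def show_popup(result):
    print("show popup with:", result)

run_async(slow_calculation, "xxxxx", show_popup)
```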
2018/03/22
888
3,187
<issue_start>username_0: I have problem with gson. In object model I add SeriableName Proguard: ``` # For using GSON @Expose annotation -keepattributes *Annotation* # Gson specific classes -dontwarn sun.misc.** -keep class com.google.gson.stream.** { *; } -keepattributes EnclosingMethod # Application classes that will be serialized/deserialized over Gson -keep class com.smartmedia.musicplayer.api.AppSetting { *; } # Prevent proguard from stripping interface information from TypeAdapterFactory, # JsonSerializer, JsonDeserializer instances (so they can be used in @JsonAdapter) -keep class * implements com.google.gson.TypeAdapterFactory -keep class * implements com.google.gson.JsonSerializer -keep class * implements com.google.gson.JsonDeserializer ``` Log crash: ``` java.lang.AssertionError: java.lang.NoSuchFieldException: DESTROYED at com.google.gson.internal.bind.TypeAdapters$EnumTypeAdapter.(SourceFile:791) at com.google.gson.internal.bind.TypeAdapters$30.create(SourceFile:817) at com.google.gson.Gson.getAdapter(SourceFile:423) at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.createBoundField(SourceFile:115) at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.getBoundFields(SourceFile:164) at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.create(SourceFile:100) at com.google.gson.Gson.getAdapter(SourceFile:423) at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.createBoundField(SourceFile:115) at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.getBoundFields(SourceFile:164) at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.create(SourceFile:100) at com.google.gson.Gson.getAdapter(SourceFile:423) at com.google.gson.Gson.fromJson(SourceFile:887) at com.google.gson.Gson.fromJson(SourceFile:853) at com.google.gson.Gson.fromJson(SourceFile:802) at com.google.gson.Gson.fromJson(SourceFile:774) ```<issue_comment>username_1: Oh, snap, I missed that. Your problem is not related to Gson. One of the classes that you are trying to create using `Gson.fromJson()` is being obfuscated from your code. Can you generate unobfuscated log? Basically, your problem is that one of your classes is missing the field `DESTROYED` that was probably renamed by Proguard. Another option is that your Json data is incorrect and it contains the field `DESTROYED` while it should not be in your code. Upvotes: 1 <issue_comment>username_2: ``` # Application classes that will be serialized/deserialized over Gson -keep class com.smartmedia.musicplayer.api.AppSetting { *; } ``` This is not sufficient. You need to protect the members inside the class as well while using proguard to obfuscate your code. In your case I would like to suggest the following proguard rule to be added in your `proguard-rules.pro`. ``` -keepclassmembers class com.smartmedia.musicplayer.api.AppSetting.** { *; } ``` Hope that helps. Upvotes: 1 <issue_comment>username_3: I had the same issue. "DESTROYED" should be one of the enum types you define. In the proguard file, add the following: ``` -keepclassmembers enum * { *; } ``` Upvotes: 0
2018/03/22
508
1,436
<issue_start>username_0: I have JSON data like this sample:

```
[{"Food":"Orange T1","Total":3},
 {"Food":"Blue T2","Total":1},
 {"Food":"Green T3","Total":1},
 {"Food":"White T4","Total":4}]
```

and I want to convert it to an array of arrays like

```
[['Orange T1', 3], ['Blue T2', 1], ['Green T3', 1],['White T4', 4]]
```

How do I do this? I will use `console.log()` to show the resulting data.<issue_comment>username_1: You could map the values of each object.

```js
var array = [{ Food: "Orange T1", Total: 3 }, { Food: "Blue T2", Total: 1 }, { Food: "Green T3", Total: 1 }, { Food: "White T4", Total: 4 }],
    result = array.map(Object.values);

console.log(result);
```

If you do not want to rely on the insertion order of the objects' values, you could use explicit keys and their values.

ES6

```js
var array = [{ Food: "Orange T1", Total: 3 }, { Food: "Blue T2", Total: 1 }, { Food: "Green T3", Total: 1 }, { Food: "White T4", Total: 4 }],
    result = array.map(({ Food, Total }) => [Food, Total]);

console.log(result);
```

ES5

```js
var array = [{ Food: "Orange T1", Total: 3 }, { Food: "Blue T2", Total: 1 }, { Food: "Green T3", Total: 1 }, { Food: "White T4", Total: 4 }],
    result = array.map(function (o) { return [o.Food, o.Total]; });

console.log(result);
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: 

```
var newArr = JSON.parse(yourJson).map((item) => {return [item.Food, item.Total]})

console.log(newArr);
```

Upvotes: 1
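The same reshaping is easy to express in other languages as well; for comparison, here is a Python version of the "list of objects to list of [Food, Total] pairs" transformation (illustration only, the JavaScript answers above match the question's environment):

```python
import json

raw = '''[{"Food":"Orange T1","Total":3},
          {"Food":"Blue T2","Total":1},
          {"Food":"Green T3","Total":1},
          {"Food":"White T4","Total":4}]'''

# Parse the JSON, then map each object to a [Food, Total] pair.
pairs = [[item["Food"], item["Total"]] for item in json.loads(raw)]
print(pairs)  # [['Orange T1', 3], ['Blue T2', 1], ['Green T3', 1], ['White T4', 4]]
```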
2018/03/22
484
1,507
<issue_start>username_0: I need to show an image at the bottom of a web page after rows of data (dynamic) have been displayed. See diagram [![Diagram](https://i.stack.imgur.com/r5D4P.png)](https://i.stack.imgur.com/r5D4P.png)

I have the following HTML code:

```
 // rows of data. could be between 1 and 10 rows.
 ![](../../images/imagee.png)
```

The CSS:

```
#wrapper {
 width: 1080px;
 position: relative;
}

.data_area {
 min-height: 120px;
 width: 100%;
 clear: left;
 float: left;
}

.logoImage {
 text-align: center;
 margin-right: auto;
 margin-left: auto;
 margin-bottom: 10px;
}

.wrapperImage {
 text-align: center;
 vertical-align: middle;
}
```

I need the image div to stay at the bottom of the page even if there is only one row of data.<issue_comment>username_1: Can you use `absolute` positioning? You could do something like this:

```
.logoImage {
 position: absolute;
 bottom: 25px;
}
```

Is that the effect you want? Or if you want it fixed when the user scrolls and kept `25px` from the bottom:

```
.logoImage {
 position: fixed;
 bottom: 25px;
}
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: If you always want your div to stay visible on screen, you need to position that div as fixed, and its parent div needs to stay relatively positioned.

```
.logoImage {
 position: fixed;
 bottom: 25px;
}
```

Your question is not really clear, but maybe this can fix your issue.

Upvotes: 0
2018/03/22
1,892
4,824
<issue_start>username_0: I try: ``` ffmpeg -re -i ./2898654.mp4 -b:a:0 32k -b:a:1 64k -b:v:0 1000k -b:v:1 3000k \ -map 0:a -map 0:a -map 0:v -map 0:v -f hls \ -var_stream_map "a:0,agroup:aud_low a:1,agroup:aud_high v:0,agroup:aud_low v:1,agroup:aud_high" \ -master_pl_name master.m3u8 \ ./out_%v.m3u8 ``` error info: ``` ffmpeg version 3.4.2-1~16.04.york0.2 Copyright (c) 2000-2018 the FFmpeg developers built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609 configuration: --prefix=/usr --extra-version='1~16.04.york0.2' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared libavutil 55. 78.100 / 55. 78.100 libavcodec 57.107.100 / 57.107.100 libavformat 57. 83.100 / 57. 83.100 libavdevice 57. 10.100 / 57. 10.100 libavfilter 6.107.100 / 6.107.100 libavresample 3. 7. 0 / 3. 7. 0 libswscale 4. 8.100 / 4. 8.100 libswresample 2. 9.100 / 2. 9.100 libpostproc 54. 7.100 / 54. 7.100 Unrecognized option 'var_stream_map'. Error splitting the argument list: Option not found ```<issue_comment>username_1: Your `ffmpeg` is too old. The `-var_stream_map` option was added on 2017-11-20 in commit 92a32d0, but the FFmpeg 3.4 release was created on 2017-10-11. New features are not added to releases, so that is why 3.4.2 also does not include this option. You can [download a recent `ffmpeg` version](https://johnvansickle.com/ffmpeg/) from the git master branch, or [compile](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu). Upvotes: 3 [selected_answer]<issue_comment>username_2: ffmpeg has stopped maintaining the builds after version 3 on apt repo. They moved to snap. So, doing `sudo apt install ffmpeg` will install a version 3.x.x. It was `ffmpeg version 3.4.11-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers` for my case as I write this answer. I could not use snap too because I was using Ubuntu over wsl. When I tried the snap command, I got this: ``` $ sudo snap install ffmpeg Interacting with snapd is not yet supported on Windows Subsystem for Linux. This command has been left available for documentation purposes only. ``` The only option was to just download the binary from their releases page and manually rename and move to /usr/bin/. 
``` $ wget https://github.com/eugeneware/ffmpeg-static/releases/download/b4.4.1/linux-x64 $ sudo mv linux-x64 /usr/bin/ffmpeg $ chmod +x /usr/bin/ffmpeg ``` After this, I was able to call ffmpeg ``` $ ffmpeg ffmpeg version 4.4.1-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2021 the FFmpeg developers built with gcc 8 (Debian 8.3.0-6) configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg libavutil 56. 70.100 / 56. 70.100 libavcodec 58.134.100 / 58.134.100 libavformat 58. 76.100 / 58. 76.100 libavdevice 58. 13.100 / 58. 13.100 libavfilter 7.110.100 / 7.110.100 libswscale 5. 9.100 / 5. 9.100 libswresample 3. 9.100 / 3. 9.100 libpostproc 55. 9.100 / 55. 9.100 Hyper fast Audio and Video encoder usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}... ``` Upvotes: 0
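When it is unclear which ffmpeg build is actually on the PATH, checking the reported version programmatically can save a round of trial and error. Here is a small sketch using only the Python standard library; it assumes the `ffmpeg` binary is callable, and since the version-string format varies between builds, the parse may fail and is handled loosely.

```python
import re
import subprocess

# Ask the local ffmpeg for its version banner and pull out "major.minor".
out = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True).stdout
match = re.search(r"ffmpeg version n?(\d+)\.(\d+)", out)
if match:
    major, minor = int(match.group(1)), int(match.group(2))
    print(f"ffmpeg {major}.{minor} detected")
    if (major, minor) < (4, 0):
        print("likely too old for -var_stream_map (added after the 3.4 release)")
else:
    print("could not parse the ffmpeg version banner:", out.splitlines()[:1])
```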
2018/03/22
1,246
3,541
<issue_start>username_0: I am trying to work with an if statement and check if the row values are NaN or not. It turns out to be more difficult that I thought here is an example: ``` df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C'], 'data1': range(6), 'data2': ['A1', 'B1', 'NaN', 'A1', 'B1','NaN']}, columns = ['key', 'data1', 'data2']) def set_perf(row): if ("C" in row['key']) & (row['data2']=="NaN"): return row['data1'] else: return 1 df['NewColumn'] = df.apply(set_perf, axis=1) ``` the output is ``` key data1 data2 NewColumn 0 A 0 A1 1 1 B 1 B1 1 2 C 2 NaN 2 3 A 3 A1 1 4 B 4 B1 1 5 C 5 NaN 5 ``` The output gives me what I am looking for meaning that I am able to identify the NaN value by adding another condition in the if statement (row['data2']=="NaN") I have applied exactly the same logic in my original dataset but it didnt work. Here is a snapshot ``` NewPerfColumn sec_type tran_type LDI Bucket Alpha vs Markit 0 1.000 GOVT BB NaN 3283.400526 1 1.000 GOVT BB NaN 6710.130364 2 1.000 GOVT BB NaN 3266.912122 3 1.000 GOVT BB NaN 113401.946471 4 1.000 GOVT BB NaN 1938.494818 5 1.000 GOVT BB NaN 9505.724498 6 1.000 GOVT BB NaN 192.196620 7 1.000 MUNITAX RRP NaN -97968.750000 ``` when I add (row['LDI Bucket']=="NaN" ) in the if condition the value "NaN" is not recognizable. here are the distinct values of column "LDI Bucket" ``` data['LDI Bucket'].unique() array([nan, u'0-3m', u'3-6m', u'6-9m', u'9m-1y'], dtype=object) ``` Have I missed anything?<issue_comment>username_1: > > Have I missed anything? > > > Yes. In your MWE, you've represented `NaN` as a string... it's not. It's a float, and represents a certain mathematical quantity that is not equal to any other quantity, including itself. `"NaN" == "NaN"` is true, but `NaN == NaN` is not. This is the underlying cause of your issue. Here's the naive fix, use `pd.isnull` to test for NaNness. ``` def set_perf(row): if ("C" in row['key']) and pd.isnull(row['data2']): return row['data1'] else: return 1 ``` And here's the better fix, use `np.where` and vectorize your function. ``` df['NewColumn'] = np.where( df['key'].str.contains('C') & df['data2'].isnull(), df['data1'], 1 ) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You can use numpy package or if statement like ``` if pd.isnull(row[0]): print("do it more") ``` pandas isnull method will handle your Nan value. Upvotes: -1 <issue_comment>username_3: If it is `'NaN'` ``` np.where((df.key.apply(lambda x : 'C' in x))&(df['data2']=='NaN'),df['data1'],1) Out[58]: array([1, 1, 2, 1, 1, 5], dtype=int64) ``` If it is `np.NaN` ``` np.where((df.key.apply(lambda x : 'C' in x))&(df['data2'].isnull()),df['data1'],1) Out[58]: array([1, 1, 2, 1, 1, 5], dtype=int64) ``` Upvotes: 1 <issue_comment>username_4: & is a bitwise and for operations e.g. ``` In [5]: 1 & 3 Out[5]: 1 ``` "and" is what you are looking for, so the if line should be: ``` if ("C" in row['key']) and (row['data2']=="NaN"): ``` Upvotes: 0
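The crux of the accepted answer, that a float NaN compares unequal to everything including itself while the string "NaN" is ordinary text, can be verified directly with a short self-contained check:

```python
import numpy as np
import pandas as pd

nan = float("nan")
print(nan == nan)        # False: NaN never equals anything, not even itself
print("NaN" == "NaN")    # True: these are ordinary strings

s = pd.Series(["A1", "B1", np.nan])
print((s == "NaN").tolist())  # [False, False, False] -> the string comparison finds nothing
print(s.isnull().tolist())    # [False, False, True]  -> pd.isnull / .isnull() does
```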
2018/03/22
894
3,340
<issue_start>username_0: I've noticed that, in addition to initialization, I'm able to assign initializer lists to STL containers such as std::array and std::vector. For example: ``` #include #include #include using namespace std; int main() { array arr; vector vec(4); arr = {{1, 2, 3, 4}}; vec = {4, 3, 2, 1}; cout << "arr: "; for (auto elem : arr) cout << elem << " "; cout << "\nvec: "; for (auto elem : vec) cout << elem << " "; cout << endl; } ``` I'm compiling this code on Clang 3.8.0 with just the -std=C++11 flag. I'm trying to discern whether this behavior is defined by the C++11 standard, or just the compiler. I've been trying to make my way through the relevant parts of the standard (and cppreference.com when the language in the standard gets too complex) and so far have come up with this: **Initializer Lists** 5.17.9 - a brace-init-list may appear on the right hand side of an assignment defined by a user-defined assignment operator **std::array** 23.3.2.2: class array relies on implicitly-declared special member functions ... to conform to container requirements **std::vector** *vector& operator=( std::initializer\_list ilist );* * Replaces the contents with those identified by initializer list ilist. (Since C++11) From the syntax of the overloaded assignment operator for std::vector it seems clear that assignment by an initializer list is supported. So I'm wondering if passing an initializer list to the overloaded assignment operator implicitly defined for STL containers (std::array in my example) is defined behavior? As a bonus, is std::array the only STL container with an implicitly defined overloaded assignment operator? I've looked at answers to related questions on SO such as: [how to assign an array from an initializer list](https://stackoverflow.com/questions/30178879/how-to-assign-an-array-from-an-initializer-list) [Error: Assigning to an array from an initializer list](https://stackoverflow.com/questions/15603158/error-assigning-to-an-array-from-an-initializer-list) However the answers given do not coincide with behavior I'm getting from my compiler, or what I'm interpreting from the standard. Also, I'm looking for an answer to a more general question than just assigning list initializers to std::array.<issue_comment>username_1: See [defect #1527](http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#1527), which changed the wording in **[expr.ass]/9** from *"an assignment defined by a user-defined assignment operator"* to *"an assignment to an object of class type"* - that is, the operator doesn't have to be user-defined. I assume the compiler you use has implemented the resolution for this defect. `std::array` has implicitly-defined copy-assignment `operator=(const std::array&)` - that's the one being called, with argument being a `std::array` temporary constructed via aggregate initialization. Upvotes: 3 [selected_answer]<issue_comment>username_2: There is no assignment operator defined for `std::array` which accepts an `initializer_list`. However the *argument* of the implicitly defined assignment operator (which is an `std::array` itself) can be *constructed* from an initializer list. And this is exactly what happens here. Note that this doesn't work for built-in arrays, those cannot be assigned at all. Upvotes: 2
2018/03/22
916
2,601
<issue_start>username_0: Below query example shows the actual result of what I want my query to fetch. I was wondering if there was any better/efficient way of writing it: ``` with x as (select 'A' institution, 100 value_x, 40 value_y from dual union all select 'B' institution, 200 value_x, 70 value_y from dual union all select 'C' institution, 10 value_x, 95 value_y from dual) select institution, case when sum(value_x) over (partition by null) != 0 then round((value_x/sum(value_x) over (partition by null))*100,2) else 0 end value_x_percent, case when sum(value_y) over (partition by null) != 0 then round((value_y/sum(value_y) over (partition by null))*100,2) else 0 end value_y_percent from x ``` Any advice/suggestion is welcome, however please explain why your query is better than what I am doing. Thanks in advance.<issue_comment>username_1: You may try `RATIO_TO_REPORT` function :[**docs**](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions124.htm) ``` with x as (select 'A' institution, 100 value_x, 40 value_y from dual union all select 'B' institution, 200 value_x, 70 value_y from dual union all select 'C' institution, 10 value_x, 95 value_y from dual) SELECT institution , ROUND ( 100 * (RATIO_TO_REPORT(value_x) OVER ()), 2) AS value_x_percent , ROUND ( 100 * (RATIO_TO_REPORT(value_y) OVER ()), 2) AS value_y_percent FROM x; ``` [**Demo**](http://sqlfiddle.com/#!4/e0ee9e/580) Upvotes: 2 <issue_comment>username_2: Try this: ``` with x as (select 'A' institution, 100 value_x, 40 value_y union all select 'B' institution, 200 value_x, 70 union all select 'C' institution, 10 value_x, 95 value_y ) select *, value_x/cast((select sum(value_x) from x) as float) x_percentofwhole, value_y/cast((select sum(value_y) from x) as float) y_percentofwhole from x ``` Upvotes: 0 <issue_comment>username_3: I prefer `NULL` to `0` for the case when the denominator is zero. So, I would be inclined to use: ``` with x as ( select 'A' institution, 100 value_x, 40 value_y from dual union all select 'B' institution, 200 value_x, 70 value_y from dual union all select 'C' institution, 10 value_x, 95 value_y from dual ) select institution, round((value_x / nullif(sum(value_x) over (), 0) * 100, 2) as value_x_percent, round((value_y / nullif(sum(value_y) over (), 0)) * 100, 2) value_y_percent from x; ``` You can use `coalesce()` to get `0` back. As for performance, this should be fine. I doubt any other method would have better performance. And the code is already pretty concise. Upvotes: 2 [selected_answer]
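The computation RATIO_TO_REPORT performs, each value as a share of its column total, is easy to mirror outside the database when sanity-checking results. Here is a pandas sketch of the same numbers (illustration only, not Oracle syntax):

```python
import pandas as pd

df = pd.DataFrame({"institution": ["A", "B", "C"],
                   "value_x": [100, 200, 10],
                   "value_y": [40, 70, 95]})

for col in ["value_x", "value_y"]:
    total = df[col].sum()
    # Mirror the SQL: percentage of the column total, guarded against a zero total.
    df[col + "_percent"] = (df[col] / total * 100).round(2) if total != 0 else None

print(df)
```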
2018/03/22
560
1,657
<issue_start>username_0: I am using JMeter 3.3 and Groovy and have an IF condition which filters according to the response code. Here is what I am doing, and it works:

```
${__jexl3(${code} != 000)}
```

Now I want to add an AND or an OR to this condition, for instance doing this:

```
${__jexl3(${code} != 000)} && ${__jexl3(${code} != 901)}
```

but this does not seem to work. What is the proper way of adding a logic operator?<issue_comment>username_1: If you change the statement to

```
${__jexl3(${code} != 000 && ${code} != 000)}
```

it will work (i.e. you pull both conditions under the same *jexl3* evaluation).

The thing is, you don't need jexl3 evaluation at all. Your *If Controller* will use JavaScript by default, and thus can be configured like this:

[![enter image description here](https://i.stack.imgur.com/ttFvs.png)](https://i.stack.imgur.com/ttFvs.png)

So your code can be

```
${code} != 000 && ${code} != 000
```

(of course it doesn't make much sense to put the same condition there twice, but I assume it's an example)

Upvotes: 3 [selected_answer]<issue_comment>username_2: * If you want [JEXL](http://commons.apache.org/proper/commons-jexl/) you need to use a single function call rather than 2 separate calls:

```
${__jexl3("${code}" != "000" && "${code}" != "901" ,)}
```

* If you want to use [Groovy](http://www.groovy-lang.org/) - refer the variable as `vars.get('code')` like:

```
${__groovy((!vars.get('code').equals('000') && !vars.get('code').equals('901')),)}
```

More information: [6 Tips for JMeter If Controller Usage](https://www.blazemeter.com/blog/six-tips-for-jmeter-if-controller-usage)

Upvotes: 3
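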
2018/03/22
2,980
9,868
<issue_start>username_0: I'm currently learning Spark and I came across a problem that says give two text file find the books with a text review of more than 100 words and filter the results to only show the category of horror. Here is an example of my two text files. **BookInformation.data:** Within this data file I have 4 Keys. ``` userName, price, categories, title ``` Each key has a value and each key is separated by a `,` as the delimiter. Some keys use a String Value while others use an Integer Value. ``` {"username": "JAMES250", "price": 19.20, "categories": "Horror", "title": "Friday the 13th"} {"username": "Bro2KXA1", "price": 09.21, "categories": "Fantasy", "title": "Wizard of Oz"} {"username": "LucyLu1272", "price": 18.69, "categories": "Fiction", "title": "Not Real"} {"username": "6302049040", "price": 08.86, "categories": "Fantasy", "title": "Fantastic"} ... etc ... ``` **ReviewerText.data** Within this data file I have 5 Keys. ``` reviewerID, userName, reviewText, overall, reviewTime ``` Each key has a value and each key is separated by a `,` as the delimiter. Some keys use a String Value while others use an Integer Value. ``` {"reviewerID": "A1R3P8MRFSN4X3", "username": "JAMES250", "reviewText": "Wow what a book blah blah… END", "overall": 4.0, "reviewTime": "08 9, 1997"} {"reviewerID": "AVM91SKZ9M58T", " username ": " Bro2KXA1 ", "reviewText": "Different Blah Blah Blah Blah… So on… END", "overall": 5.0, "reviewTime": "08 10, 1997"} {"reviewerID": "A1HC72VDRLANIW", " username ": "DiffUser09", "reviewText": "Another Review Blah Blah Blah Blah… So on… END", "overall": 1.0, "reviewTime": "08 19, 1997"} {"reviewerID": "A2XBTS97FERY2Q", " username ": "MyNameIs01", "reviewText": "I love books. END", "overall": 5.0, "reviewTime": "08 23, 1997"} ... etc ... ``` My Goal here is simple. 1. First I want to check **ReviewInformation.data** for any `reviewText` more than 100 words. 2. Once I have found every `reviewText` with more than 100 words I want to sort the results in order of `overall` rating; starting from 5 to 1. Then I need to also print the corresponding `Title` to each one as well. 3. After that I need to restart the filter and I only need to filter out the `categories` from **BookInformation.data** to show only the `Horror` category. 4. Then calculate the average number of words that appear within the `reviewText` for the `Horror` category. **Code:** So far what I have is that I am creating a Key:Value array for each line entry in each file. The goal here is to create an Array I can parse for any Key and receive its Value. ``` package main.scala import org.apache.spark.{SparkConf, SparkContext} import scala.io.StdIn.readLine import scala.io.Source object ReviewDataSpark { def main(args: Array[String]) { //Create a SparkContext to initialize Spark val conf = new SparkConf() conf.setMaster("local") conf.setAppName("Word Count") val sc = new SparkContext(conf) val metaDataFile = sc.textFile("/src/main/resources/BookInformation.data") val reviewDataFile = sc.textFile("/src/main/resources/ReviewText.data") reviewDataFile.flatMap { line => { val Array(label, rest) = line split "," println(Array) val items = rest.trim.split("\\s+") println(items) items.map(item => (label.trim -> item)) } } metaDataFile.flatMap { line => { val Array(label, rest) = line split "," println(Array) val items = rest.trim.split("\\s+") println(items) items.map(item => (label.trim -> item)) } } } } ``` **Issues:** So my main issue with the code is that I do not believe I am using flatMap correctly. 
I can't seem to spilt the Keys and Values into a Array of Keys. My code just prints out: `Process finished with exit code 0` It doesn't seem correct. **EDIT:** So i updated my code to use JSON library. ``` val jsonColName = "json" // intermediate column name where we place each line of source data val jsonCol = col(jsonColName) // its reusable ref val metaDataSet = spark.read.textFile("src/main/resources/BookInformation.data") .toDF(jsonColName).select(get_json_object(jsonCol, "$.username") .alias("username"), get_json_object(jsonCol, "$.price") .alias("price"), get_json_object(jsonCol, "$.categories") .alias("categories"), get_json_object(jsonCol, "$.title") .alias("title")) val reviewDataSet = spark.read.textFile("src/main/resources/reviewText.data") .toDF(jsonColName).select(get_json_object(jsonCol, "$.reviewerID") .alias("reviewerID"), get_json_object(jsonCol, "$.username") .alias("username"), get_json_object(jsonCol, "$.reviewText") .alias("reviewText"), get_json_object(jsonCol, "$.overall") .alias("overall").as[Double], get_json_object(jsonCol, "$.reviewTime") .alias("reviewTime")) reviewDataSet.show() metaDataSet.show() ``` Then I was able to merge thanks to the information. ``` val joinedDataSets = metaDataSet.join(reviewDataSet, Seq("username")) joinedDataSets.show() ``` Now my next step is to be able to count the number of words inside `joinedDataSets` in the column `ReviewText` and only keep those that are above 100 words. How can I filter the JSON object from the key `reviewText` and then count all the entries and remove the ones with less than 100 Words.<issue_comment>username_1: I would suggest you to create two dataframes and load the data from files into two dfs: 1.One with books ``` (val books_df = spark.read.json("/some/path/.json")) ``` 2.One with reviewers ``` (val reviewers_df = spark.read.json("/some/path/.json")) ``` Join the dfs as tables based on book.username == reviewer.username to get a third JOINED\_DF. ``` (val joined_df = books_df.join(reviewers_df, Seq("usernamer"), type_of_join)) ``` Now you can filter the df as per categories and get the word count too. For using the flat-map correctly for word count, i would suggest you to refer : <https://stackoverflow.com/a/37680132/4482149>. Upvotes: 0 <issue_comment>username_2: First of all, you need to load the data from the files in a structured manner. Each line of the source files can be parsed as JSON and the information should be placed properly in the respective columns. 
For example, to load and parse `BookInformation.data`: ``` import org.apache.spark.sql.functions._ // necessary for col, get_json_object functions and others below val session = SparkSession.builder().appName("My app") .master("local[*]") .getOrCreate() val bookInfoFilePath = // path to BookInformation.data val jsonColName = "json" // intermediate column name where we place each line of source data val jsonCol = col(jsonColName) // its reusable ref val bookInfoDf = session.read.textFile(bookInfoFilePath).toDF(jsonColName).select( get_json_object(jsonCol, "$.username").alias("username"), get_json_object(jsonCol, "$.price").alias("price"), get_json_object(jsonCol, "$.categories").alias("categories"), get_json_object(jsonCol, "$.title").alias("title") ) ``` So now we have a book information DataFrame containing properly structured data: ``` bookInfoDf.show() +----------+-----+----------+---------------+ | username|price|categories| title| +----------+-----+----------+---------------+ | JAMES250| 19.2| Horror|Friday the 13th| | Bro2KXA1| 9.21| Fantasy| Wizard of Oz| |LucyLu1272|18.69| Fiction| Not Real| |6302049040| 8.86| Fantasy| Fantastic| +----------+-----+----------+---------------+ ``` The answers to Q3 and Q4 become quite obvious to obtain. ``` val dfQuestion3 = bookInfoDf.where($"categories" === "Horror") dfQuestion3.show() +--------+-----+----------+---------------+ |username|price|categories| title| +--------+-----+----------+---------------+ |JAMES250| 19.2| Horror|Friday the 13th| +--------+-----+----------+---------------+ ``` For Q4, you'll have to join `bookInfoDf` with the DataFrame loaded from `ReviewerText.data`, using `username` column, then aggregate (`.agg`) the data on average length of `reviewText` column (`avg` and `length` functions). To load `ReviewerText.data`, you can proceed exactly by analogy with how `bookInfoDf` was loaded above. `overall` column should be converted to numeric using `.as[Double]` after `.alias` call. *Update* > > I had a question about how to count the number of words within a JSON Key/Value. For example, in the key reviewText I have create and merged both BookInformation and ReviewText into one dataset. Now If i wanted to loop through each reviewText and count the number of words then filter either keep or remove depending on the amount of words within the Key's Value how would I go about doing that? I'm trying to learn how to extract value > > > One of possible ways to do it is by caculating the number of words and storing it in a dedicated column: ``` // reviewerTextDf is the DataFrame with original data from ReviewerText.data val dfWithReviewWordsCount = reviewerTextDf.withColumn("nb_words_review", size(split($"reviewText", "\\s+"))) dfWithReviewWordsCount.show() ``` Which gives the folowing: ``` +--------------+--------+--------------------+-------+-----------+---------------+ | reviewerID|username| reviewText|overall| reviewTime|nb_words_review| +--------------+--------+--------------------+-------+-----------+---------------+ |A1R3P8MRFSN4X3|JAMES250|Wow what a book b...| 4.0| 08 9, 1997| 7| | AVM91SKZ9M58T| null|Different Blah Bl...| 5.0|08 10, 1997| 8| |A1HC72VDRLANIW| null|Another Review Bl...| 1.0|08 19, 1997| 9| |A2XBTS97FERY2Q| null| I love books. END| 5.0|08 23, 1997| 4| +--------------+--------+--------------------+-------+-----------+---------------+ ``` Upvotes: 2 [selected_answer]
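For the remaining step in the question, keeping only rows whose `reviewText` runs past 100 words, the same `size(split(...))` idea extends naturally to a filter. A PySpark sketch of that step follows; the column names mirror the thread, the input path is a placeholder, and this is an untested sketch rather than the thread's own code:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("review-length-filter").getOrCreate()

# joined data with a reviewText column, as built earlier in the thread
joined = spark.read.json("joinedDataSets.json")  # placeholder source for the sketch

with_counts = joined.withColumn(
    "nb_words_review", F.size(F.split(F.col("reviewText"), r"\s+"))
)
long_reviews = with_counts.filter(F.col("nb_words_review") > 100)

# sort the surviving reviews by overall rating, highest first, with the title
long_reviews.orderBy(F.col("overall").desc()).select("title", "overall").show()
```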
2018/03/22
1,564
5,666
<issue_start>username_0: Hi I'm reviewing one sample in react redux and in action file I'm seeing this function

```
function login(email, password) {
 return dispatch => {
 dispatch(request({ email }));

 userService.login(email, password)
 .then(
 data => {
 dispatch(success(data.user));
 history.push('/');
 },
 error => {
 dispatch(failure(error));
 dispatch(alertActions.error(error));
 }
 );
 };

 function request(user) { return { type: userConstants.LOGIN_REQUEST, user } }
 function success(user) { return { type: userConstants.LOGIN_SUCCESS, user } }
 function failure(error) { return { type: userConstants.LOGIN_FAILURE, error } }
}
```

it's clear that this function is returning dispatch file to call reducer the only part I don't understand it correctly is how it's defines some functions after return and using it in the return. this function is working properly but if I can do something to make it better please tell me.
2018/03/22
1,919
5,998
<issue_start>username_0: I have two dataframes, and I want to do a lookup much like a Vlookup in excel.

```
df_orig.head()

 A
0 3
1 4
2 6
3 7
4 8

df_new

 Combined Length Group_name
0 [8, 9, 112, 114, 134, 135] 6 Group 1
1 [15, 16, 17, 18, 19, 20] 6 Group 2
2 [15, 16, 17, 18, 19] 5 Group 3
3 [16, 17, 18, 19, 20] 5 Group 4
4 [15, 16, 17, 18] 4 Group 5
5 [8, 9, 112, 114] 4 Group 6
6 [18, 19, 20] 3 Group 7
7 [28, 29, 30] 3 Group 8
8 [21, 22] 2 Group 9
9 [28, 29] 2 Group 10
10 [26, 27] 2 Group 11
11 [24, 25] 2 Group 12
12 [3, 4] 2 Group 13
13 [6, 7] 2 Group 14
14 [11, 14] 2 Group 15
15 [12, 13] 2 Group 16
16 [0, 1] 2 Group 17
```

How can I add the values in `df_new["Group_name"]` to `df_orig["A"]`?

The `"Group_name"` must be based on the lookup of the values from `df_orig["A"]` in `df_new["Combined"]`.

So it would look like:

```
df_orig.head()

 A Looked_up
0 3 Group 13
1 4 Group 13
2 6 Group 14
3 7 Group 14
4 8 Group 1
```

Thank you!
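One common way to express this kind of list-membership lookup in pandas is to explode the `Combined` lists into one row per value and then merge on that value. Here is a sketch with a trimmed-down copy of the data; it is one of several possible approaches:

```python
import pandas as pd

df_orig = pd.DataFrame({"A": [3, 4, 6, 7, 8]})
df_new = pd.DataFrame({
    "Combined": [[8, 9, 112, 114, 134, 135], [3, 4], [6, 7]],
    "Group_name": ["Group 1", "Group 13", "Group 14"],
})

# One row per (value, group) pair, then an ordinary left merge on the value.
lookup = df_new.explode("Combined").rename(columns={"Combined": "A"})
lookup["A"] = lookup["A"].astype(int)  # explode leaves an object-dtype column behind
result = df_orig.merge(lookup[["A", "Group_name"]], on="A", how="left")
print(result)
```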
2018/03/22
1,787
6,462
<issue_start>username_0: I'm trying to write a program that checks if a circle contains another circle, if a certain point is inside a circle, or the one I'm having trouble with, if a circle overlaps with another circle. ``` import javafx.scene.shape.Circle; public class Problem10_11 { public static void main(String[] args) { //Create circle with certain parameters. Circle2D c1 = new Circle2D(2, 2, 5.5); //Create output which will be tested by all our methods. System.out.println("The area for circle 1 is " +c1.getArea()+ " and its perimeter is " + c1.getPerimeter()); System.out.println("Is (3,3) contained within circle 1? " + c1.contains(3, 3)); System.out.println("Does circle 1 contain circle 2? " + c1.contains(new Circle2D(4,5,10.5))); System.out.println("Does circle 1 overlap with circle 3? " + c1.overlaps(new Circle2D(3, 5, 2.3))); } } class Circle2D { double x; //first parameter double y; //second parameter double radius; //third parameter Circle2D() { } public Circle2D(double x, double y, double radius) { this.x = x; this.y = y; this.radius = radius; } public void setX(double x) { this.x = x; //set x } public double getX() { return x; //grab x } public void setY(double y) { this.y = y; //set y } public double getY() { return y; //grab y } public void setRadius(double radius) { this.radius = radius; //set radius } public double getRadius() { return radius; //grab radius } public double getArea() { double area = Math.PI*radius*radius; //formula for area return area; } public double getPerimeter() { double perimeter = 2*Math.PI*radius; //formula for perimeter return perimeter; } public boolean contains(double x, double y) { //Use distance formula to check if a specific point is within our circle. double distance = Math.sqrt(Math.pow(this.x - x, 2) + (Math.pow(this.y - y, 2))); if (distance <= radius * 2) return true; else { return false; } } public boolean contains(Circle2D circle) { //Use distance formula to check if a circle is contained within another circle. double distance = Math.sqrt(Math.pow(circle.getX() - x, 2) + (Math.pow(circle.getY() - y, 2))); if (distance <= (this.radius - circle.radius)) { return true; } else { return false; } } public boolean overlaps(Circle2D circle) { //Use distance formula to check if a circle overlaps with another circle. double distance = Math.sqrt(Math.pow(circle.getX() - x, 2) + (Math.pow(circle.getY() - y, 2))); } } ``` So my overlap method is all the way at the bottom but I don't really have anything inside because I'm not sure exactly what to do. I tried this : ``` if (distance <= radius) return true; else return false; ``` but that didnt work. So I'm not sure what else to try. FYI, I'm trying to check if c1 overlaps with a circle with parameters (3, 5, 2.3). I appreciate any suggestions/advice.<issue_comment>username_1: if the distance between the centres of the circles is less than the sum of the radius of the two circles then they are overlapping. ``` double minDistance = Math.max(circle.getRadius(),this.radius) - Math.min(circle.getRadius(),this.radius); if (distance <= (this.radius + circle.getRadius()) && distance>= minDistance) return true; else return false; ``` Upvotes: 2 <issue_comment>username_2: 1.- you have to to place both circles in space, give them some cordinates 2.- you have to get the vectors from 2 circles. 3.- you have to normailze those vectors and get the correct distance in units, im gonna use pixels. 
4.- Finally, you have to check whether the distance between those 2 vectors is less than the two radii added together; if so, they are overlapping.

Here is a link where this is explained in more detail: <https://gamedevelopment.tutsplus.com/tutorials/when-worlds-collide-simulating-circle-circle-collisions--gamedev-769>. Actually, this is something very common in game development when we want to check circle collisions (for 2D games).

Upvotes: 0 <issue_comment>username_3: You can refer to [Relative position of two circles](http://mathinschool.com/page/473.html).

```
public boolean overlaps(Circle2D circle) {
 // Use the distance formula to check if a circle overlaps with another circle.
 double distance = Math.sqrt(Math.pow(circle.getX() - x, 2) + (Math.pow(circle.getY() - y, 2)));
 return distance <= (this.radius + circle.radius) && distance >= Math.abs(this.radius - circle.radius);
}
```

Upvotes: 3 <issue_comment>username_4: Most answers here are wrong. There are three cases to consider:

1. [![enter image description here](https://i.stack.imgur.com/xUKBf.png)](https://i.stack.imgur.com/xUKBf.png) Circles overlap and the center of the smaller circle is inside the bigger circle
2. [![enter image description here](https://i.stack.imgur.com/9kNpa.png)](https://i.stack.imgur.com/9kNpa.png) Circles overlap and the center of the smaller circle is outside the bigger circle
3. [![enter image description here](https://i.stack.imgur.com/nig8w.png)](https://i.stack.imgur.com/nig8w.png) The two circles touch at their borders

The algorithm is this (in Java):

1. Calculate the distance between centers:

```
 double centersDistance = Math.sqrt(Math.pow(x2 - x1, 2) + Math.pow(y2 - y1, 2));
```

2. Check if one circle contains another:

```
 boolean blueContainsRed = blue.radius > centersDistance + red.radius;
 boolean redContainsBlue = red.radius > centersDistance + blue.radius;
```

3. Check if they overlap:

```
 boolean circlesOverlap = centersDistance <= blue.radius + red.radius;
```

the `<=` would make sure that true is returned for case 3 (when borders only touch). If you don't want that, use `<`.

4. So the final formula would be:

```
 return !blueContainsRed && !redContainsBlue && circlesOverlap;
```

This table might also prove useful (from <https://planetcalc.com/8098/>):

[![enter image description here](https://i.stack.imgur.com/YY9Bt.png)](https://i.stack.imgur.com/YY9Bt.png)

Upvotes: 1
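Because the answers differ on whether containment should count as an overlap, it can help to see the two definitions side by side. The geometry is language-independent, so here is a small Python check using the sum and difference of the radii:

```python
import math

def circles_intersect(x1, y1, r1, x2, y2, r2):
    """Boundaries cross or touch: |r1 - r2| <= d <= r1 + r2."""
    d = math.hypot(x2 - x1, y2 - y1)
    return abs(r1 - r2) <= d <= r1 + r2

def circles_share_area(x1, y1, r1, x2, y2, r2):
    """Any common interior at all (containment counts as overlapping)."""
    d = math.hypot(x2 - x1, y2 - y1)
    return d < r1 + r2

# The question's c1 = (2, 2, 5.5) against (3, 5, 2.3): the small circle sits
# entirely inside the big one, so the two definitions disagree.
print(circles_intersect(2, 2, 5.5, 3, 5, 2.3))   # False
print(circles_share_area(2, 2, 5.5, 3, 5, 2.3))  # True
```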
2018/03/22
4,944
14,415
<issue_start>username_0: How do I initialize weights and biases of a network (via e.g. He or Xavier initialization)?<issue_comment>username_1: Single layer ============ To initialize the weights of a single layer, use a function from [`torch.nn.init`](https://pytorch.org/docs/master/nn.init.html). For instance: ``` conv1 = torch.nn.Conv2d(...) torch.nn.init.xavier_uniform(conv1.weight) ``` Alternatively, you can modify the parameters by writing to `conv1.weight.data` (which is a [`torch.Tensor`](http://pytorch.org/docs/master/tensors.html#torch.Tensor)). Example: ``` conv1.weight.data.fill_(0.01) ``` The same applies for biases: ``` conv1.bias.data.fill_(0.01) ``` `nn.Sequential` or custom `nn.Module` ------------------------------------- Pass an initialization function to [`torch.nn.Module.apply`](http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply). It will initialize the weights in the entire `nn.Module` recursively. > > **apply(*fn*):** Applies `fn` recursively to every submodule (as returned by `.children()`) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init). > > > Example: ``` def init_weights(m): if isinstance(m, nn.Linear): torch.nn.init.xavier_uniform(m.weight) m.bias.data.fill_(0.01) net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2)) net.apply(init_weights) ``` Upvotes: 9 [selected_answer]<issue_comment>username_2: Sorry for being so late, I hope my answer will help. To initialise weights with a `normal distribution` use: ``` torch.nn.init.normal_(tensor, mean=0, std=1) ``` Or to use a `constant distribution` write: ``` torch.nn.init.constant_(tensor, value) ``` Or to use an `uniform distribution`: ``` torch.nn.init.uniform_(tensor, a=0, b=1) # a: lower_bound, b: upper_bound ``` You can check other methods to initialise tensors [here](https://pytorch.org/docs/stable/nn.html#torch-nn-init) Upvotes: 4 <issue_comment>username_3: ``` import torch.nn as nn # a simple network rand_net = nn.Sequential(nn.Linear(in_features, h_size), nn.BatchNorm1d(h_size), nn.ReLU(), nn.Linear(h_size, h_size), nn.BatchNorm1d(h_size), nn.ReLU(), nn.Linear(h_size, 1), nn.ReLU()) # initialization function, first checks the module type, # then applies the desired changes to the weights def init_normal(m): if type(m) == nn.Linear: nn.init.uniform_(m.weight) # use the modules apply function to recursively apply the initialization rand_net.apply(init_normal) ``` Upvotes: 4 <issue_comment>username_4: We compare different mode of weight-initialization using the same neural-network(NN) architecture. -------------------------------------------------------------------------------------------------- ### All Zeros or Ones If you follow the principle of [Occam's razor](https://en.wikipedia.org/wiki/Occam's_razor), you might think setting all the weights to 0 or 1 would be the best solution. This is not the case. With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust. 
``` # initialize two NN's with 0 and 1 constant weights model_0 = Net(constant_weight=0) model_1 = Net(constant_weight=1) ``` * After 2 epochs: [![plot of training loss with weight initialization to constant](https://i.stack.imgur.com/jpjTk.png)](https://i.stack.imgur.com/jpjTk.png) ``` Validation Accuracy 9.625% -- All Zeros 10.050% -- All Ones Training Loss 2.304 -- All Zeros 1552.281 -- All Ones ``` ### Uniform Initialization A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution) has the equal probability of picking any number from a set of numbers. Let's see how well the neural network trains using a uniform weight initialization, where `low=0.0` and `high=1.0`. Below, we'll see another way (besides in the Net class code) to initialize the weights of a network. To define weights outside of the model definition, we can: > > 1. Define a function that assigns weights by the type of network layer, *then* > 2. Apply those weights to an initialized model using `model.apply(fn)`, which applies a function to each model layer. > > > ``` # takes in a module and applies the specified weight initialization def weights_init_uniform(m): classname = m.__class__.__name__ # for every Linear layer in a model.. if classname.find('Linear') != -1: # apply a uniform distribution to the weights and a bias=0 m.weight.data.uniform_(0.0, 1.0) m.bias.data.fill_(0) model_uniform = Net() model_uniform.apply(weights_init_uniform) ``` * After 2 epochs: [![enter image description here](https://i.stack.imgur.com/rTTP9.png)](https://i.stack.imgur.com/rTTP9.png) ``` Validation Accuracy 36.667% -- Uniform Weights Training Loss 3.208 -- Uniform Weights ``` General rule for setting weights -------------------------------- The general rule for setting the weights in a neural network is to set them to be close to zero without being too small. > > Good practice is to start your weights in the range of [-y, y] where `y=1/sqrt(n)` > > (n is the number of inputs to a given neuron). > > > ``` # takes in a module and applies the specified weight initialization def weights_init_uniform_rule(m): classname = m.__class__.__name__ # for every Linear layer in a model.. 
if classname.find('Linear') != -1: # get the number of the inputs n = m.in_features y = 1.0/np.sqrt(n) m.weight.data.uniform_(-y, y) m.bias.data.fill_(0) # create a new model with these weights model_rule = Net() model_rule.apply(weights_init_uniform_rule) ``` below we compare performance of NN, weights initialized with uniform distribution [-0.5,0.5) versus the one whose weight is initialized using **general rule** * After 2 epochs: [![plot showing performance of uniform initialization of weight versus general rule of initialization](https://i.stack.imgur.com/AAmh6.png)](https://i.stack.imgur.com/AAmh6.png) ``` Validation Accuracy 75.817% -- Centered Weights [-0.5, 0.5) 85.208% -- General Rule [-y, y) Training Loss 0.705 -- Centered Weights [-0.5, 0.5) 0.469 -- General Rule [-y, y) ``` normal distribution to initialize the weights --------------------------------------------- > > The normal distribution should have a mean of 0 and a standard deviation of `y=1/sqrt(n)`, where n is the number of inputs to NN > > > ``` ## takes in a module and applies the specified weight initialization def weights_init_normal(m): '''Takes in a module and initializes all linear layers with weight values taken from a normal distribution.''' classname = m.__class__.__name__ # for every Linear layer in a model if classname.find('Linear') != -1: y = m.in_features # m.weight.data shoud be taken from a normal distribution m.weight.data.normal_(0.0,1/np.sqrt(y)) # m.bias.data should be 0 m.bias.data.fill_(0) ``` below we show the performance of two NN one initialized using **uniform-distribution** and the other using **normal-distribution** * After 2 epochs: [![performance of weight initialization using uniform-distribution versus the normal distribution](https://i.stack.imgur.com/144Mn.png)](https://i.stack.imgur.com/144Mn.png) ``` Validation Accuracy 85.775% -- Uniform Rule [-y, y) 84.717% -- Normal Distribution Training Loss 0.329 -- Uniform Rule [-y, y) 0.443 -- Normal Distribution ``` Upvotes: 7 <issue_comment>username_5: If you see a deprecation warning (@username_1)... ``` def init_weights(m): if type(m) == nn.Linear: torch.nn.init.xavier_uniform_(m.weight) m.bias.data.fill_(0.01) net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2)) net.apply(init_weights) ``` Upvotes: 1 <issue_comment>username_6: **To initialize layers, you typically don't need to do anything.** PyTorch will do it for you. If you think about it, this makes a lot of sense. Why should we initialize layers, when PyTorch can do that following the latest trends? For instance, the [`Linear`](https://github.com/pytorch/pytorch/blob/af7dc23124a6e3e7b8af0637e3b027f3a8b3fb76/torch/nn/modules/linear.py#L101) layer's `__init__` method will do [Kaiming He](http://kaiminghe.com/) initialization: ``` init.kaiming_uniform_(self.weight, a=math.sqrt(5)) if self.bias is not None: fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight) bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0 init.uniform_(self.bias, -bound, bound) ``` Similarly, this holds for other layers types. For e.g., `Conv2d`, check [here](https://github.com/pytorch/pytorch/blob/029a968212b018192cb6fc64075e68db9985c86a/torch/nn/modules/conv.py#L49). **NOTE:** The advantage of proper initialization is faster training speed. If your problem requires special initialization, you can still do it afterwards. 
Upvotes: 6 <issue_comment>username_7: Iterate over parameters ======================= If you cannot use `apply` for instance if the model does not implement `Sequential` directly: Same for all ------------ ``` # see UNet at https://github.com/milesial/Pytorch-UNet/tree/master/unet def init_all(model, init_func, *params, **kwargs): for p in model.parameters(): init_func(p, *params, **kwargs) model = UNet(3, 10) init_all(model, torch.nn.init.normal_, mean=0., std=1) # or init_all(model, torch.nn.init.constant_, 1.) ``` Depending on shape ------------------ ``` def init_all(model, init_funcs): for p in model.parameters(): init_func = init_funcs.get(len(p.shape), init_funcs["default"]) init_func(p) model = UNet(3, 10) init_funcs = { 1: lambda x: torch.nn.init.normal_(x, mean=0., std=1.), # can be bias 2: lambda x: torch.nn.init.xavier_normal_(x, gain=1.), # can be weight 3: lambda x: torch.nn.init.xavier_uniform_(x, gain=1.), # can be conv1D filter 4: lambda x: torch.nn.init.xavier_uniform_(x, gain=1.), # can be conv2D filter "default": lambda x: torch.nn.init.constant(x, 1.), # everything else } init_all(model, init_funcs) ``` You can try with `torch.nn.init.constant_(x, len(x.shape))` to check that they are appropriately initialized: ``` init_funcs = { "default": lambda x: torch.nn.init.constant_(x, len(x.shape)) } ``` Upvotes: 3 <issue_comment>username_8: If you want some extra flexibility, **you can also set the weights manually**. Say you have input of all ones: ``` import torch import torch.nn as nn input = torch.ones((8, 8)) print(input) ``` ``` tensor([[1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1.]]) ``` And you want to make a dense layer with no bias (so we can visualize): ``` d = nn.Linear(8, 8, bias=False) ``` Set all the weights to 0.5 (or anything else): ``` d.weight.data = torch.full((8, 8), 0.5) print(d.weight.data) ``` The weights: ``` Out[14]: tensor([[0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000], [0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000], [0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000], [0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000], [0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000], [0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000], [0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000], [0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000]]) ``` All your weights are now 0.5. Pass the data through: ``` d(input) ``` ``` Out[13]: tensor([[4., 4., 4., 4., 4., 4., 4., 4.], [4., 4., 4., 4., 4., 4., 4., 4.], [4., 4., 4., 4., 4., 4., 4., 4.], [4., 4., 4., 4., 4., 4., 4., 4.], [4., 4., 4., 4., 4., 4., 4., 4.], [4., 4., 4., 4., 4., 4., 4., 4.], [4., 4., 4., 4., 4., 4., 4., 4.], [4., 4., 4., 4., 4., 4., 4., 4.]], grad_fn=) ``` Remember that each neuron receives 8 inputs, all of which have weight 0.5 and value of 1 (and no bias), so it sums up to 4 for each. Upvotes: 4 <issue_comment>username_9: Cuz I haven't had the enough reputation so far, I can't add a comment under > > the answer posted by *username_6* in *Jun 26 '19 at 13:16*. 
> > > ```py def reset_parameters(self): init.kaiming_uniform_(self.weight, a=math.sqrt(3)) if self.bias is not None: fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight) bound = 1 / math.sqrt(fan_in) init.uniform_(self.bias, -bound, bound) ``` But I wanna point out that actually we know some assumptions in the paper of *Kaiming He*, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, are not appropriate, though it looks like the deliberately designed initialization method makes a hit in practice. E.g., within the subsection of *Backward Propagation Case*, they assume that $w\_l$ and $\delta y\_l$ are independent of each other. But as we all known, take the score map $\delta y^L\_i$ as an instance, it often is $y\_i-softmax(y^L\_i)=y\_i-softmax(w^L\_ix^L\_i)$ if we use a typical cross entropy loss function objective. So I think the true underlying reason why *He's Initialization* works well remains to unravel. Cuz everyone has witnessed its power on boosting deep learning training. Upvotes: 2 <issue_comment>username_10: Here is the better way, just pass your whole model ``` import torch.nn as nn def initialize_weights(model): # Initializes weights according to the DCGAN paper for m in model.modules(): if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.BatchNorm2d)): nn.init.normal_(m.weight.data, 0.0, 0.02) # if you also want for linear layers ,add one more elif condition ``` Upvotes: 2
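As a brief sketch alongside the answers above (not taken from any of them): the later comments discuss He/Kaiming initialization mostly in the abstract, so here is a minimal hedged example of applying it explicitly through `Module.apply`, in the spirit of the earlier `apply`-based answers. The network shape, layer sizes, and the `init_kaiming` name are arbitrary placeholders.

```python
import torch.nn as nn

def init_kaiming(m):
    # Conv layers: fan_out + relu is a common convention for ReLU networks
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    # Linear layers: default fan_in mode
    elif isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
        nn.init.zeros_(m.bias)

# Arbitrary placeholder network, only to demonstrate Module.apply
net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(),
                    nn.Linear(16 * 30 * 30, 10))
net.apply(init_kaiming)
```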
2018/03/22
1,114
4,066
<issue_start>username_0: I have a WordPress site and there is Contact form 7 plugin and I want to add attr to submit button to disable double submission. Now I have this code to prevent double submission ``` $(document).on('click', '.wpcf7-submit', function(e){ if( $('.ajax-loader').hasClass('is-active') ) { e.preventDefault(); return false; } }); ``` but I want to add attr disabled while form sending or getting error response for better user experience<issue_comment>username_1: ``` $(document).on('click', '.wpcf7-submit', function(e){ $(this).prop('disabled',true); }); ``` Upvotes: 0 <issue_comment>username_2: This will disable the button and submit the form. You need to re-call submit after disabling the button. ``` jQuery( '.wpcf7-submit' ).click(function() { jQuery( this ).attr( 'disabled', true ); jQuery( this ).submit(); }); ``` This will re-enable the button if there's an error with the submission. ``` document.addEventListener( 'wpcf7invalid', function() { jQuery( '.wpcf7-submit' ).attr( 'disabled', false ); }, false ); ``` Upvotes: 2 <issue_comment>username_3: Improving on username_2's answer - ``` $('.wpcf7-form').on('submit', function() { $(this).find('.wpcf7-submit').attr('disabled', true); }); ``` This would disable the submit button when clicked on it. Now to get that activated again after success or failure you would need to remove the attribute after the submission is complete(whether success or failure). Since the plugin developer is a bit whimsical about how the events work, I am writing this solution for first quarter of 2019 - ``` $('.wpcf7').on('wpcf7submit', function (e) { $(this).find('.wpcf7-submit').removeAttr('disabled'); }); ``` where '.wpcf7' is the parent container of the form, '.wpcf7-form' is the form itself. The '**wpcf7submit**' is event listener that the DOM listens to, after the form gets submitted(irrespective of the fact that is valid or invalid). Upvotes: 3 <issue_comment>username_4: I implemented this script to help prevent multiple submissions. The biggest difference from the others is that it works with multiple CF7 forms on each page. Basically, it disables the form and the submit button on submit (since a form can also be submitted with an Enter-press), adds a new label "Please Wait.." to the submit button, and re-enables them if there are input errors. Also, not dependent on jQuery (Vanilla JS). 
``` (function () { var elems = document.querySelectorAll('.wpcf7'); if (!elems.length) { return false; } var forms = document.querySelectorAll('.wpcf7-form'); if (!forms.length) { return false; } function _evtFormSubmit() { this.disabled = true; var submitBtn = this.querySelector('button[type="submit"]'); submitBtn.disabled = true; submitBtn.setAttribute('data-default-text', submitBtn.innerText); submitBtn.innerHTML = 'Please wait...'; } function _evtInvalid(e) { var thisForm = document.querySelector('#' + e.detail.id + ' form'); thisForm.disabled = false; var submitBtn = thisForm.querySelector('button[type="submit"]'); setTimeout(function() { submitBtn.disabled = false; submitBtn.innerHTML = '' + submitBtn.getAttribute('data-default-text') + ''; }, 600); // give it a bit of time in case it is a fast submit } for(var i = forms.length-1; i >= 0; i--) { forms[i].addEventListener('submit', _evtFormSubmit, false); } for(i = elems.length-1; i >= 0; i--) { elems[i].addEventListener('wpcf7invalid', _evtInvalid, false); } })(); ``` *Note: if you have more than one submit button (why?), this only affects the first button in the form.* Upvotes: 0 <issue_comment>username_5: For future people who are looking for a solution here. Simple SCSS/CSS option without Javascript need. For me is work pefect. It always works reliably for me. **(2022)** ``` .wpcf7-form { &.submitting { .wpcf7-submit { pointer-events: none; } } } ``` Upvotes: 0
2018/03/22
618
1,832
<issue_start>username_0: I've downloaded the last Unity (2018.1) and changed the scripting runtime version to 4.x. But I can't find which System.Data.dll I should include in my project (I need it for my System.Data.SQLite library). If I check in Unity's folders There are a lot of folders in MonoBleedingEdge/lib/mono : 4.0, 4.0-api, 4.5, 4.5.1-api ... I tried with a few System.Data.dll found in those folders I always get that "Loading script assembly "Assets/Plugins/System.Data.dll" failed!" when I run my game. Do you know why ? Or is there a possibility to have more details (like version of the dll expected)<issue_comment>username_1: What if you try replacing `System.Data` here `/Applications/Unity/Hub/Editor/2018.1.0b11/Unity.app/Contents/MonoBleedingEdge/lib/mono/unity` **Note:** I've Unity HUb installed but the path should be similar Upvotes: 0 <issue_comment>username_2: My Weeks Journey: ----------------- * [Stack Overflow](https://stackoverflow.com/questions/52050951/how-do-i-import-mysql-connector-into-unity-project/52067993#52067993) * [Unity Answers](https://answers.unity.com/questions/1546955/how-do-i-import-mysql-connector-from-nuget-into-un.html?childToView=1547402#answer-1547402) Solution: --------- It took me a week to finally figure it out... You need to use: * I18N.dll * I18N.West.dll * I18N.\*.dll (Optional, they are region specific) * System.Data.dll from C:\Program Files\Unity\Editor\Data\Mono\lib\mono\2.0 ***NOT*** the BleedingEdge path. Then it will work without errors... confirmed in the latest Unity 2018.2.6f1 Upvotes: 1 <issue_comment>username_3: The thing that worked for me was following this [Unity Forum](https://forum.unity.com/threads/c-compression-zip-missing.577492/page-2) thread and using a csc.rsp as Unity Suggests. You can see my answer there. Upvotes: 0
2018/03/22
240
943
<issue_start>username_0: I got this layout in my app: ``` ``` Now my problem is that the text field between the plus and the minus button is dynamically set. So sometimes it is empty and sometimes there is content in it. The same goes for the price on the right side. Sometimes it is a longer string, sometimes shorter. Because of this the buttons get moved, as you can see in this picture: [![enter image description here](https://i.stack.imgur.com/Kegrz.png)](https://i.stack.imgur.com/Kegrz.png) I would like to prevent this by setting a bigger minimum distance between these two buttons from the beginning, and the same for the right button and the right screen edge. But I have no clue how to do this.<issue_comment>username_1: You can set a fixed width for the TextView instead of setting wrap\_content, as follows: ``` ``` Upvotes: 0 <issue_comment>username_2: Try using the `android:minWidth=""` option on your TextView Upvotes: 2 [selected_answer]
2018/03/22
926
3,406
<issue_start>username_0: I am using swagger hub to [create this API](https://app.swaggerhub.com/apis/PHP-Point-Of-Sale/PHP-Point-Of-Sale/1.0); but it doesn't support multi files in the UI so I am unsure if I am doing this right My goal is to have the following ``` item:{Json describe the item} images[] = images for item posted as an array titles[] = Parallel array to images that has the title for image alt_texts[] = Parallel array to images that has the alt text for image ``` This has to be multipart since it is files; but am unsure if I setup the structure correctly. Swagger/Open API Code ``` post: summary: Add a new item to the store description: '' operationId: addItem requestBody: content: multipart/form-data: schema: $ref: '#/components/schemas/NewItemWithImage' description: Item object that needs to be added to the store required: true NewItemWithImage: type: object properties: item: $ref: '#/components/schemas/NewItem' images[]: type: array items: type: string format: binary titles[]: type: array items: type: string alt_texts[]: type: array items: type: string variation_ids[]: type: array items: type: string required: - item ```<issue_comment>username_1: According to [File Upload](https://swagger.io/docs/specification/describing-request-body/file-upload/) section in OpenAPI 3 specification: **File Upload** > > Files use a `type: string` schema with `format: binary` or `format: base64`, > depending on how the file contents will be encoded. > > > **Multiple File Upload** > > Use the > multipart media type to define uploading an arbitrary number of files > (an array of files): > > > ```yaml requestBody: content: multipart/form-data: schema: type: object properties: filename: type: array items: type: string format: binary ``` You current definition corresponds to the specification precisely: ```yaml requestBody: content: application/json: schema: $ref: '#/components/schemas/NewItemWithImageUrl' multipart/form-data: schema: $ref: '#/components/schemas/NewItemWithImage' NewItemWithImage: type: object properties: item: $ref: '#/components/schemas/NewItem' images[]: type: array items: type: string format: binary titles[]: type: array items: type: string ... ``` Upvotes: 3 <issue_comment>username_2: The code for swagger-ui is failing as of today in curlify.js ``` if (v instanceof win.File) { curlified.push( `"${k}=@${v.name}${v.type ? `;type=${v.type}` : ""}"` ) } else { curlified.push( `"${k}=${v}"` ) } ``` curlify.js is not taking the array into account and is sending: `curl -X POST "http://localhost:9122/upload-all" -H "accept: */*" -H "Content-Type: multipart/form-data" -F "my-attachment=[object File],[object File]"` and not something like: `curl -X POST "https://foo/v1/api/upload/" -H "accept: application/json" -H "Content-Type: multipart/form-data" -F "myfile=@bla;type=application/zip" -F "myfile=@foo;type=application/zip"` Upvotes: 1
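Purely as an illustration of what a request matching the `NewItemWithImage` schema above might look like from a client, here is a hedged Python `requests` sketch; the URL, file names, and item payload are made-up placeholders, and the field names simply mirror the schema in the question rather than any confirmed endpoint.

```python
# Hypothetical client-side request matching the multipart schema above.
# The URL, file paths and item fields are placeholders, not a real API.
import json
import requests

url = "https://example.com/api/v1/items"

parts = [
    ("item", (None, json.dumps({"name": "Sample item"}), "application/json")),
    ("images[]", ("front.jpg", open("front.jpg", "rb"), "image/jpeg")),
    ("images[]", ("back.jpg", open("back.jpg", "rb"), "image/jpeg")),
    ("titles[]", (None, "Front view")),
    ("titles[]", (None, "Back view")),
    ("alt_texts[]", (None, "Item seen from the front")),
    ("alt_texts[]", (None, "Item seen from the back")),
]

response = requests.post(url, files=parts)
print(response.status_code, response.text)
```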
2018/03/22
674
2,429
<issue_start>username_0: I am testing an application in browser and I need to check if the website is using `jquery` or `angularJs` or `angular` For `jQuery` I can check for `$` function and for `angularJs` I can check for `window.angular` function. But is there anything similar for `angular 2/4/5` to check if its a `angular` application in browser?
2018/03/22
1,371
4,198
<issue_start>username_0: After learning how to setup a environment with html, js, ajax, php and mysql I'm able to insert single instances into a table. I'm trying to get it working with multiple entries into the table for quite a while now. I was reading several documentation pages and recommendations on here but couldn't really figure out what I'm missing. To clarify: What I'm trying to do is to insert 2 of the same entity. I input `SafetyStock` and `LotSize` of the `Material Pack` thereupon, insert another Material named PET with different There is no problem with getting the variables from ajax but i don't get the `$stmt->bind_param` line. What do I have to write there to make the code running? I tried to just have two lines of `stmt` binding, but then it only wrote the last entry into the database. Another thing I did was execute the `stmt` and then overwrite the variables and then do execute again, wasn't working as well. my actual insert.php: ``` php $servername = "localhost"; $username = "root"; $password = ""; $dbname = "scm"; // Create connection $conn = new mysqli($servername, $username, $password, $dbname); // Check connection if ($conn-connect_error) { die("Connection failed: " . $conn->connect_error); } // prepare and bind settingssupplychaincomponents $stmt = $conn->prepare("INSERT INTO test1 (settingsSupplyChainComponentsId, settingsSupplyChainId, componentId, safetyStockW, lotSizeW) VALUES (?, ?, ?, ?, ?), (?, ?, ?, ?, ?)"); $stmt->bind_param("iiiii", $settingsSupplyChainComponentsId, $settingsSupplyChainId, $pack, $sspack, $lspack); //Pack // set parameters for components $settingsSupplyChainComponentsId = ''; $settingsSupplyChainId = $_POST['ssci']; // set parameters for pack $pack = 1; $sspack = $_POST['sspack']; $lspack = $_POST['lspack']; // set parameters for PET $pet = 2; $sspet = $_POST['sspet']; $lspet = $_POST['lspet']; $values = array(); $stmt->execute($values); echo "New records created successfully"; $stmt->close(); $conn->close(); ?> ```<issue_comment>username_1: you'll need to repeat the variables that should be inserted in both rows. 
``` $stmt->bind_param("iiiiiiiiii", $settingsSupplyChainComponentsId, $settingsSupplyChainId, $pack, $sspack, $lspack, $settingsSupplyChainComponentsId, $settingsSupplyChainId, $pet, $sspet, $lspet); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: I had this problem, there was no way of doing multiple entries, instead I used several $stmt, below is an example: ``` $sql = "SELECT TOP 1 SALDOS_DEUDA.FECHA_SALDOS FROM [dbo].[SALDOS_DEUDA] WHERE FECHA_SALDOS = (SELECT MAX(FECHA_SALDOS) FROM dbo.SALDOS_DEUDA);"; $sql1 = "SELECT SUM(SALDO_DEUDA) / 1000000 AS SALDO_TOTAL FROM SALDOS_DEUDA WHERE FECHA_SALDOS = (SELECT MAX(FECHA_SALDOS) FROM dbo.SALDOS_DEUDA);"; $sql2 = "SELECT (SUM(CASE WHEN TIPO_DEUDA = 'E' THEN SALDO_DEUDA ELSE 0 END) / SUM(SALDO_DEUDA)) AS PORCENTAJE_EXTERNO FROM SALDOS_DEUDA WHERE FECHA_SALDOS = (SELECT MAX(FECHA_SALDOS) FROM dbo.SALDOS_DEUDA);"; $sql3 = "SELECT SUM(MONTO) AS TASA_FIJA FROM dbo.RIESGO_MERCADO_TASA WHERE ANIO = (SELECT MAX(ANIO) FROM dbo.RIESGO_MERCADO_TASA) AND MES = (SELECT MAX(MES) FROM dbo.RIESGO_MERCADO_TASA WHERE ANIO = (SELECT MAX(ANIO) FROM dbo.RIESGO_MERCADO_TASA) ) AND TIPOTASA_ID = 1;"; $sql4 = "SELECT SUM(MONTO) AS MONEDA_LOCAL FROM dbo.RIESGO_MERCADO_TASA WHERE ANIO = (SELECT MAX(ANIO) FROM dbo.RIESGO_MERCADO_TASA) AND MES = (SELECT MAX(MES) FROM dbo.RIESGO_MERCADO_TASA WHERE ANIO = (SELECT MAX(ANIO) FROM dbo.RIESGO_MERCADO_TASA) ) AND MONEDA_ID = 13;"; $sql5 = "SELECT (SUM(P.PROMEDIO * P.SALDO) / SUM(SALDO)) AS CPP FROM dbo.PROMEDIO_PONDERADO P INNER JOIN dbo.CG_CATEGORIA_PONDERADO C ON P.CATEGORIA_ID = C.CATEGORIA_ID WHERE FECHA = (SELECT MAX(FECHA) FROM dbo.PROMEDIO_PONDERADO) AND C.TIPO = 1;"; $stmt = sqlsrv_query( $conn, $sql); $stmt1 = sqlsrv_query( $conn, $sql1); $stmt2 = sqlsrv_query( $conn, $sql2); $stmt3 = sqlsrv_query( $conn, $sql3); $stmt4 = sqlsrv_query( $conn, $sql4); $stmt5 = sqlsrv_query( $conn, $sql5); ``` Upvotes: 0
2018/03/22
931
2,492
<issue_start>username_0: I'm trying to deploy a locally written OpenFaaS function to Minishift. My YAML file is: ``` provider: name: faas gateway: http://gateway-openfaas.10.10.80.33.nip.io functions: test: lang: python handler: ./test image: 172.30.1.1:5000/test ``` 172.30.1.1:5000 is the result of calling ``` minishift openshift registry ``` When I access the OpenFaaS UI through the Openshift Console, I can deploy functions properly from there. I can also see the function I tried to deploy locally there, but the Docker image is not in the Minishift Docker registry. To push my image there, I'm trying to use the command: ``` faas-cli push -f ./test.yml ``` Unfortunately, I receive the following error: ``` PS D:\projects> faas-cli push -f ./test.yml [0] > Pushing test. The push refers to a repository [172.30.1.1:5000/test] 8124325a272a: Preparing 2fbb584cb870: Preparing e6dd715c8997: Preparing ac20ff3419a9: Preparing 18adf8f88cf9: Preparing ab495c1b9bd4: Waiting 09c56bd3ad6c: Waiting a291b1700920: Waiting 8a65d1376e5b: Waiting 155a0aa5c33a: Waiting 8984417f4638: Waiting a7d53ea16e81: Waiting e53f74215d12: Waiting unauthorized: repository name "test" invalid: it must be of the format / 2018/03/22 11:30:53 ERROR - Could not execute command: [docker push 172.30.1.1:5000/test] ``` What am I doing incorrectly? Any assistance is appreciated.<issue_comment>username_1: [Looks like](https://docs.openshift.com/enterprise/3.0/install_config/install/docker_registry.html#access "Access the OpenShift Registry") you need a project name before your image name, like `openshift` in the following example ``` $ docker push 172.30.124.220:5000/openshift/busybox ... cf2616975b4a: Image successfully pushed Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55 ``` Try changing the image section in your test.yml to be `172.30.1.1:5000/openfaas/test` or similar. You may also need to get an access token and use it with `docker login`, as described in the above link, if you have not already done so: ``` $ oc whoami -t $ docker login -u -e -p ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: @username_1 is right. One need to include project name as part of any application image. Also, I would suggest you to try $(minishift openshift registry) since if you give you registry IP:Port. See more detail See more <https://docs.openshift.org/latest/minishift/openshift/openshift-docker-registry.html>. Upvotes: 0
2018/03/22
493
1,742
<issue_start>username_0: I am using Hibernate 5.2.14 through Spring Boot 2.0 with MySQL 5.7.20. I am letting Hibernate generate my Schema (`ddl-auto=update`, I am aware to only use this during development phase) and I am unable to make Hibernate generate a `TIMESTAMP` column in the Schema. Things I have tried (in Kotlin): ``` @Column var modifiedAt: Instant? @Column @Type(type = "timestamp") var modifiedAt: Instant? @Column @Type(type = "Instant") var modifiedAt: Instant? @Column @Temporal(TIMESTAMP) var modifiedAt: Date? @Column @Type(type = "timestamp") var modifiedAt: Date? ``` All these generate a `DATETIME` column in the database. How do I instruct Hibernate to create a `TIMESTAMP` column? I am aware I could use `columnDefinition = "TIMESTAMP"`, however this is just ramming raw SQL down Hibernate's throat, which seems wrong. Is it really the only way?<issue_comment>username_1: This is how I handle create and update timestamps. ``` @Column(nullable = false) @Temporal(TemporalType.TIMESTAMP) var created: Date = Date() @Column(nullable = false) @Temporal(TemporalType.TIMESTAMP) var modified: Date = Date() @PreUpdate protected fun onUpdate() { this.modified = Date() } ``` Upvotes: 0 <issue_comment>username_2: As it turns out the Hibernate people are much smarter than me and actually read the MySQL documentation- As it also turns out, `TIMESTAMP` in MySQL sucks, mostly for being 32-Bit and therefor susceptible to the year-2038-bug. `DATETIME` is therefor the lesser evil and you just need to make sure to store everything in UTC. Link to Hibernate Forums: <https://discourse.hibernate.org/t/why-does-hibernate-orm-uses-datetime-by-default-on-mysql-instead-of-timestamp/422> Upvotes: 3 [selected_answer]
2018/03/22
1,381
3,756
<issue_start>username_0: I have a pandas data frame counted and grouped by specific columns. ``` import pandas as pd df = pd.DataFrame({'x':list('aaabbbbbccccc'),'y':list('2225555577777'), 'z':list('1312223224432')}) # df.groupby(['x','y','z'])['z'].count() # or df.groupby(['x','y','z'])['z'].agg(['count']) # or df.groupby(['x','y','z'])['z'].count().reset_index(name='counts') ``` Results is; ``` x y z counts 0 a 2 1 2 1 a 2 3 1 2 b 5 2 4 3 b 5 3 1 4 c 7 2 2 5 c 7 3 1 6 c 7 4 2 ``` How can I convert the result to following form? ``` x y 1 2 3 4 0 a 2 2 0 1 0 1 b 5 0 4 1 0 2 c 7 0 2 1 2 ```<issue_comment>username_1: You will need to use `unstack` + `reset_index`: ``` (df.groupby(['x','y','z'])['z'] .count() .unstack(-1, fill_value=0) .reset_index() .rename_axis(None, axis=1) ) x y 1 2 3 4 0 a 2 2 0 1 0 1 b 5 0 4 1 0 2 c 7 0 2 1 2 ``` Note, you can replace `df.groupby(['x','y','z'])['z'].count()` with `df.groupby(['x','y','z']).size()` for compactness, but beware that `size` also counts NaNs. Upvotes: 3 [selected_answer]<issue_comment>username_2: Something like `crosstab` ``` pd.crosstab([df.x,df.y],df.z).reset_index() Out[81]: z x y 1 2 3 4 0 a 2 2 0 1 0 1 b 5 0 4 1 0 2 c 7 0 2 1 2 ``` Upvotes: 2 <issue_comment>username_3: *PROJECT*/**KILL** `<--` (read: project overkill) --- This uses Pandas `factorize` to get integer representations of unique values. The function `pd.factorize` returns the passed array in its new integer form as well as an array of what the unique values were. If we do this for two arrays whose positions correspond to each other, we can perform a cross tabulation using Numpy's `bincount`. Bin counting simply increments a "bin" each time a value is encountered that represents the "bin". `np.bincount` assumes the bins are array indices from `0:`. So this starts to make sense if I want to bin count a single array of integers. But how do I handle a two dimensional array? I have to shift my integer values such that they start counting in a "new row". I also have to figure out what "new row" that is. The integer values from the unique row values represent the row. The number of unique column values represent the "shift" ``` tups = list(zip(df.x, df.y)) i, r = pd.factorize(tups) j, c = pd.factorize(df.z) n, m = len(r), len(c) b = np.bincount(i * m + j, minlength=n * m).reshape(n, m) pd.DataFrame( np.column_stack([r.tolist(), b]), columns=['x', 'y'] + c.tolist() ) x y 1 3 2 4 0 a 2 2 1 0 0 1 b 5 0 1 4 0 2 c 7 0 1 2 2 ``` --- With sorting the z's Notice that I used Pandas `factorize`. I could have used Numpy's `unique` to do some of this. However, there are two reasons why I didn't. One, `np.unique` sorts values by default when returning the inverse array (that's what gives me the factorization). Sorting has a time complexity of `O(n * log(n))` and can be a hit on performance for larger arrays. The second reason is that Numpy would require some additional annoying handling to perform the task on tuples. However, in this case, I wanted to see the z columns presented in the same order OP had them. That required that I sort while factorizing. I still didn't want to use Numpy so I just used the `sort` flag in `pd.factorize` ``` tups = list(zip(df.x, df.y)) i, r = pd.factorize(tups) j, c = pd.factorize(df.z, sort=True) n, m = len(r), len(c) b = np.bincount(i * m + j, minlength=n * m).reshape(n, m) pd.DataFrame( np.column_stack([r.tolist(), b]), columns=['x', 'y'] + c.tolist() ) x y 1 2 3 4 0 a 2 2 0 1 0 1 b 5 0 4 1 0 2 c 7 0 2 1 2 ``` Upvotes: 2
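As a further sketch along the same lines (assuming the same `df` as in the question), the count table can also be built with `pivot_table` after adding a helper count column; the helper column name `counts` is just a placeholder.

```python
import pandas as pd

df = pd.DataFrame({'x': list('aaabbbbbccccc'),
                   'y': list('2225555577777'),
                   'z': list('1312223224432')})

out = (df.assign(counts=1)
         .pivot_table(index=['x', 'y'], columns='z',
                      values='counts', aggfunc='sum', fill_value=0)
         .reset_index()
         .rename_axis(None, axis=1))
print(out)
```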
2018/03/22
1,384
4,081
<issue_start>username_0: [enter image description here](https://i.stack.imgur.com/pCoKR.png) I have two tables : 1. UserInfo 2. Skill and the join table between them called `UserSkill` as you can see at the right part of the diagram. I want to know whoever knows or is skillful in Java, what else he is skillful at. I mean for example I know java, Go, PHP, python and user number 2 knows java and python and CSS. So the answer to the question: whoever knows java what else he knows would be GO, PHP, Python and CSS. It's like recommendation systems for example whoever but this product what else do they bought? Like what we have in amazon .. What would be the best query for this ? Thank you More information: **UserInfo** ``` U-id U-name 1 A 2 B 3 C ``` **SkillInfo** ``` S-id S-Name 1 Java 2 GO 3 PHP 4 Python 5 CSS ``` **UserSkill**: ``` U-id S-id 1 1 1 2 1 3 1 4 2 1 2 4 2 5 ```
2018/03/22
469
2,144
<issue_start>username_0: I've got a stateful service running in a Service Fabric cluster that I now know fails to honor a cancellation token passed into it. My fault. I'm ready to release the fix, but during the upgrade process, I'm expecting the service replica on the faulty primary node to get stuck since it won't honor the token passed in. I can use `Restart-ServiceFabricDeployedCodePackage` or even `Restart-ServiceFabricNode` to manually take down the stuck replica, but that will result in a brief service interruption during the upgrade process. Is there any way to release this fix with zero downtime?<issue_comment>username_1: This is not possible for a stateful service using the Service Fabric infrastructure, you will need to have downtime on the upgrade. Once you have a version that supports the cancellation token then you will be fine. That said, depending on the use of the state, and if you have a load balancer between your clients and the service, you can stand up another service instance on the new fixed version and use the load balancer to drain your traffic across to then new version, upgrade the old, drain back to it and then drop the second service you created. This will allow for a zero downtime scenario. Upvotes: 3 [selected_answer]<issue_comment>username_2: The only workarounds I can think of are worse since they turn off parts of health checks during upgrades and "force" the process to come down. This doesn't make things more graceful or improve downtime, and has a side effect of potentially causing other health issues to be ignored. There's always *some* downtime, even with the fully rolling upgrades, since swapping a primary to another node is never instantaneous and callers need to discover the new location. With those commands, you're just converting a more graceful shutdown and cleanup into a failure, which results in the same primary swap. Shouldn't be a huge difference since clients (and SF) have to deal with failure normally anyway. I'd keep using those commands since they give you good manual control over which replicas/processes to poke when things get stuck. Upvotes: 1
2018/03/22
990
3,296
<issue_start>username_0: I have an array: ``` ["apple", "banana", "animal", "car", "angel"] ``` I want to push elements that start with `"a"` into separate arrays. I want to return: ``` ["apple"], ["animal"], ["angel"] ``` I have only been able to make it work if I push them into an empty array that I pre-created.<issue_comment>username_1: ``` array_of_arrays = [] your_array.each do |ele| if ele.starts_with?("a") array_of_arrays << ele.to_a end end ``` Upvotes: 0 <issue_comment>username_2: The simplest I can come up with is: ``` arr = %w{ apple banana animal car angel } arr.map {|i| i.start_with?('a') ? [i] : nil }.compact => [["apple"], ["animal"], ["angel"]] ``` Upvotes: 0 <issue_comment>username_3: Here is some code I got to do this in console: ``` > arr = ["apple", "banana", "animal", "car", "angel"] => ["apple", "banana", "animal", "car", "angel"] > a_words = [] => [] arr.each do |word| a_words << word if word.chars.first == 'a' end => ["apple", "banana", "animal", "car", "angel"] > a_words => ["apple", "animal", "angel"] ``` If you wanted to do something more complex than first letter you might want to use a regex like: ``` if word.matches(/\Aa/) # \A is beginning of string ``` Upvotes: 0 <issue_comment>username_4: Generally in order to pick elements from array that match some specific conditione use `select` method. `select` returns an array of all elements that matched critera or an empty list in case neither element has matched example: ``` new_array = array.select do |element| return_true_or_false_depending_on_element(element) end ``` now when we would like to put every element in its own array we could you another array method that is available on array - `map` which takes every element of an array and transforms it in another element. In our case we will want to take every matching string and wrap it in array `map` usage: ``` new_array = array.map do |element| element_transformation(element) # in order to wrap element just return new array with element like this: [element] end ``` coming back to your question. in order to verify whether a string starts with a letter you could use `start_with?` method which is available for every string glueing it all together: ``` strings = ["apple", "banana", "animal", "car", "angel"] result = strings.select do |string| string.start_with?("a") end.map do |string_that_start_with_a| [string_that_start_with_a] end puts result ``` Upvotes: 2 <issue_comment>username_5: Here's a golfed down version: ``` array.grep(/\Aa/).map(&method(:Array)) ``` I might consider my audience before putting something this clever into production, since it can be a little confusing. [`Array#grep`](https://ruby-doc.org/core-2.5.0/Enumerable.html#method-i-grep) returns all elements that match the passed regular expression, in this case `/\Aa/` matches strings that begin with `a`. `\A` is a regular expression token that matches the beginning of the string. You could change it to `/\Aa/i` if you want it to be case insensitive. The `&method(:Array)` bit grabs a reference to the kernel method [`Array()`](https://ruby-doc.org/core-2.5.0/Kernel.html#method-i-Array) and runs it on each element of the array, wrapping each element in the array in its own array. Upvotes: 1
2018/03/22
586
2,499
<issue_start>username_0: I'm creating periodic snapshots of my EBS volume using a Scheduled Cron expression rule (thanks, [<NAME>](https://stackoverflow.com/users/628267/john-c)). My data is all *binary,* and I suspect that the automatic compression AWS performs on my data - will actually *enlarge* the resulting snapshots. **Is there a way to instruct AWS to *not* employ compression when creating snapshots (so I could compare the snapshot's size with/without compression)?** Note: [Creating an Amazon EBS Snapshot](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html) seems to indicate that using compression is mandatory.<issue_comment>username_1: You have no control over the compression used for EBS snapshots. EBS snapshots are incremental (except for the first snapshot). That data is compressed based on AWS's own heuristics. You have no visibility into the actual compressed data's size. When you're looking at an EBS snapshot, the snapshot's "size" will always be reported as the originating EBS volume's size, regardless of the actual size of the snapshot. Upvotes: 3 [selected_answer]<issue_comment>username_2: I don't think EBS snapshots are now compressed (I am not sure if they were earlier) and I could not find any reference to compression in AWS documentation as well. That is why the size of initial snapshot is same as the size of the volume. And after first snapshot, other snapshots are incremental so only the blocks on the device that have changed or added after last snapshot are saved in the new snapshot. You can refer the blog on how the [ebs snapshots backup & restore](https://n2ws.com/blog/ebs-snapshot/how-do-ebs-snapshots-ebs-restore-work) work. Upvotes: 0 <issue_comment>username_3: In referencing the CUR database which is at any given time reporting to two days prior, you can pull associated cost metrics including actual snapshot size. AWS DOES NOT MAKE THIS EASY. AWS provides mechanisms that will calculate change between snapshots in order to provide cost metrics through use of tags. For the oldest or what should be the largest snapshot, this is compressed and per the AWS engineer I am chatting with right now, we should expect to see standard compression ratios. He told me the compression should not be much different than zipping. So, when I look at my first snapshot, I am seeing volume variances between 99.99% and 0% with an average of 86% compression for a fairly large Oracle EC2 instance with 20+ volumes. Upvotes: 0
2018/03/22
1,131
3,887
<issue_start>username_0: I have a procedure, that in theory, should be skipping whitespace using a look\_ahead loop. Problem is, it's not working, if there's any whitespace in the input file, it is adding it to the array of records. I think my logic is correct, but could use another pair of eyes to let me know what I'm missing, and why it's not working. ``` PROCEDURE Read(Calc: OUT Calculation) IS EOL: Boolean; C: Character; I: Integer := 1; BEGIN LOOP LOOP Look_Ahead(C, EOL); EXIT WHEN EOL or C /= ' '; Get(C); END LOOP; EXIT WHEN ADA.Text_IO.END_OF_FILE; Look_Ahead(C, EOL); IF Is_Digit(C) THEN Calc.Element(I).Kind := Number; Get(Calc.Element(I).Int_Value); ELSE Calc.Element(I).Kind := Symbol; Get(Calc.Element(I).Char_Value); END IF; Calc.Len := Calc.Len+1; IF Calc.Element(I).Char_Value = '=' THEN EXIT; END IF; I := I+1; END LOOP; END Read; ``` EDIT: If any of the other procedures, the code for the record etc is needed for an answer, let me know and I will post it.<issue_comment>username_1: For `Ada.Text_IO.Look_Ahead`, [ARM A.10.7(8)](http://www.ada-auth.org/standards/rm12_w_tc1/html/RM-A-10-7.html#p8) says > > Sets End\_Of\_Line to True if at end of line, including if at end of page or at end of file; in each of these cases the value of Item is not specified. Otherwise, End\_Of\_Line is set to False and Item is set to the next character (**without consuming it**) from the file. > > > (my emphasis) and I think the "without consuming it" is key. Once `Look_Ahead` has found an interesting character, you need to call `Get` to retrieve that character. I hacked this little demo together: I left end-of-file to exception handling, and I called `Skip_Line` once end-of-line’s been seen because just `Get` wasn’t right (sorry not to be more precise!). ```ada with Ada.Text_IO; with Ada.IO_Exceptions; procedure Justiciar is procedure Read is Eol: Boolean; C: Character; begin -- Instead of useful processing, echo the input to the output -- replacing spaces with periods. Outer: loop Inner: loop Ada.Text_IO.Look_Ahead (C, Eol); exit Outer when Eol; -- C is undefined exit Inner when C /= ' '; Ada.Text_IO.Get (C); -- consume the space Ada.Text_IO.Put ('.'); -- instead of the space for visibility end loop Inner; Ada.Text_IO.Get (C); -- consume the character which isnt a space Ada.Text_IO.Put (C); -- print it (or other processing!) end loop Outer; Ada.Text_IO.Skip_Line; -- consume the newline Ada.Text_IO.New_Line; -- clear for next call end Read; begin loop Ada.Text_IO.Put ("reading: "); Read; end loop; exception when Ada.IO_Exceptions.End_Error => null; end Justiciar; ``` Upvotes: 2 <issue_comment>username_2: Usually it's better to read an entire line and parse it than to try to parse character by character. The latter is usually more complex, harder to understand, and more error prone. So I'd suggest something like ``` function De_Space (Source : String) return String is Line : Unbounded_String := To_Unbounded_String (Source); begin -- De_Space Remove : for I in reverse 1 .. Length (Line) loop if Element (Line, I) = ' ' then Delete (Source => Line, From => I, Through => I); end if; end loop Remove; return To_String (Line); end De_Space; Line : constant String := De_Space (Get_Line); ``` You can then loop over `Line'range` and parse it. Since I'm not clear if ``` Get(C); Get(Calc.Element(I).Int_Value); Get(Calc.Element(I).Char_Value); ``` represent 1, 2, or 3 different procedures, I can't really help with that part. Upvotes: 1
2018/03/22
937
3,220
<issue_start>username_0: i want to get all the users with there Iteration in a project. Lets say Project 'Precient' has 9 distinct users with 20 iteration so i want distinct users with all the Iteration in the project `WIQL C#`.it's related to the Question. [WIQL query to get all the team and the users in a Project?](https://stackoverflow.com/questions/49092830/wiql-query-to-get-all-the-team-and-the-users-in-a-project) but does not help me fully
2018/03/22
723
2,541
<issue_start>username_0: The title sums it up pretty much. I want to do it via the console using the `bundler. There are several changes on the way rails has handled things such as the frontend. More details over [here](https://naildrivin5.com/blog/2016/05/17/announcing-rails-6-an-imagined-roadmap.html). Will I have to rewrite frontend of my app again? Will it be enough to just update the bins and executables manually? Or run `$ rails app:update` to overwrite old stuff.<issue_comment>username_1: Latest version — Rails 5.1.5 released February 14, 2018 . Rails 6 is NOT released !! And if it was the case bundler should make it clean Upvotes: 1 <issue_comment>username_2: The post you linked is just a joke. Anyway, Rails 6 is under development (since January 30 I believe). Just wait. <https://github.com/rails/rails/blob/master/version.rb> [![enter image description here](https://i.stack.imgur.com/Zc9LC.jpg)](https://i.stack.imgur.com/Zc9LC.jpg) Upvotes: 1 <issue_comment>username_3: Just run on your terminal: ``` gem update rails ``` That worked for me. Upvotes: 1 <issue_comment>username_4: Well, it may have been a joke back when you originally posted this question, but Rails 6.0.0 is now a reality! [Released Aug. 15, 2019](https://rubyonrails.org/). I would recommend running `gem update rails` as a first try, but "updating" to Rails 6 was not so easy for my Windows 10 system. If you used railsinstaller.org to install Rails previously, you will not be able to install Rails 6 with it - or update - at the time of this writing. In order to update to Rails 6 you *must* be running ruby 2.4.4 or greater, but railsinstaller.org is limited to ruby 2.3 max. Check your version with `ruby -v`. I used [this tutorial](https://medium.com/ruby-on-rails-web-application-development/how-to-install-rubyonrails-on-windows-7-8-10-complete-tutorial-2017-fc95720ee059) to do a fresh install of Ruby 2.6 and Rails 6 while updating gem to 3.0. I would recommend installing everything fresh so you know where all of your dependencies are and how they work. It will also be **much** easier to update each individual component this way -- unless you opt to use tools like [rvm](https://rvm.io/), which by nature will be easier. If you're having issues and want a fresh install of the latest Ruby and Rails, read the article I linked :) I only posted this answer because I know railsinstaller.org was recommended to a lot of people (like me) as an easy install method back when it was being maintained. Upvotes: 3 [selected_answer]
2018/03/22
682
2,755
<issue_start>username_0: I've made a function that calls on the FireBase database and will return a MutableList. However, when I try to make it return on a specific line, it says it requires a Unit instead of the MutableList. ``` fun firebaseCollect(key: String): MutableList<CustomList> { var ref = FirebaseDatabase.getInstance().getReference(key) var lessonList = mutableListOf<CustomList>() ref.addValueEventListener(object: ValueEventListener{ override fun onCancelled(p0: DatabaseError?) { } override fun onDataChange(p0: DataSnapshot?) { if (p0!!.exists()) { lessonList.clear() for (index in p0.children) { val lesson = index.getValue(CustomList::class.java) lessonList.add(lesson!!) } return lessonList } } }) return lessonList } ``` `Type mismatch. Required: Unit, Found: MutableList< CustomList >` is found at the first return lessonList since what I am asking for it to return *is* a MutableList not a Unit. I am confused as to why this happens. The last return would give an empty list. It is currently my first jab at FireBase and this is a practice I am doing. The rules for read and write have been set to public as well. How should I recode the function that I am able to return the data from FireBase into the function and passed back to the caller?<issue_comment>username_1: Data is loaded asynchronously from Firebase. Once the data is fetched the method `onDatachange()` is invoked. You are returning `lessonList` inside onDatachange(). Return type of `onDatachange()` is `void`**(Unit in kotlin)**. **This is the reason for the type mismatch error.** For returning the result from the method `onDatachange()` try [this](https://stackoverflow.com/a/47853774/4247543). Upvotes: 1 <issue_comment>username_2: Firebase APIs are asynchronous. For your case, that means `addValueEventListener` returns immediately. Then, some time later, the listener you passed to it will be invoked with the data you're looking for. Your return statement in the callback doesn't actually return any data to the caller. In fact, you can't return anything from that callback. At the bottom of your function, when you `return lessonList`, you're actually returning an initially empty list to the caller, which may change later when the data finally arrives. To get a better sense of how your code works, put log lines in various places, and see for yourself the order in which the code is invoked. You can read more about why Firebase APIs are asynchronous by [reading this article](https://medium.com/google-developers/why-are-the-firebase-apis-asynchronous-e037a6654a93). The bottom line is that you'll need to interact with the asynchronous APIs using asynchronous programming techniques. Don't try to make them synchronous. Upvotes: 3 [selected_answer]
2018/03/22
665
2,274
<issue_start>username_0: I know a similar question has been asked/answered several times. But please do read on .. I am trying to create a Class from a string value as described in "[Convert string to Python Class Object](https://stackoverflow.com/questions/1176136/convert-string-to-python-class-object)" in Python 3.6. **utils.py** ``` class Foo(object): def __init__(self): print("In the constructor of Foo") def What(self): print("so what ... ") class FooParam(object): def __init__(self, v): self.value = v print("In the constructor of FooParam") def What(self): print("Value=" % self.value) print("So what now ...") ``` **welcome.py** ``` def TEST1(): m = importlib.import_module("utils") c = getattr(m, "Foo") c.What() if __name__ == '__main__': TEST1() ``` **Error** ``` TypeError: What() missing 1 required positional argument: 'self' ``` So what am I doing wrong ? Also how can I create an object of "FooParam" and pass a value to the constructor.<issue_comment>username_1: Once you import the module just access with the variable you stored imported module: ``` m = importlib.import_module("utils") foo = m.Foo() foo.What() ``` `import_module` performs the same steps as `import`. This `c = getattr(m, "Foo")` line of code is equivalent `f = Foo` so that means you are not creating an instance instead you are getting a reference to that class. Upvotes: 2 <issue_comment>username_2: I suspect that c is the class Foo but not an instance of the class. This is equivalent to simply calling ``` Foo.what() ``` Which is why self is not defined! Whereas what you want is to create an instance of the class (giving it a 'self' property), then call its method, i.e. ``` foo_instance = Foo() foo_instance.What() ``` so try replacing c.What() with.. ``` foo_instance = c() foo_instance.What() ``` for FooParam: ``` #import the class FooParam c = getattr(m, "FooParam") #create an instance of the class, initializing its values (and self) fooparam_instance = c(3.14) #call its method! fooparam_instance.What() ``` on the whole I would rename the variable c, to something like foo\_import and fooparam\_import respectively :) Upvotes: 2 [selected_answer]
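To round this out with a sketch (using the question's `utils` module as-is): a small helper that generalizes the accepted answer by building an instance from a module name and class name; the helper name `make_instance` is just something chosen here for illustration.

```python
import importlib

def make_instance(module_name, class_name, *args, **kwargs):
    """Import module_name, look up class_name on it, and return an instance."""
    module = importlib.import_module(module_name)
    cls = getattr(module, class_name)
    return cls(*args, **kwargs)

# With the question's utils.py on the path:
foo = make_instance("utils", "Foo")        # prints "In the constructor of Foo"
foo.What()                                 # prints "so what ... "

param = make_instance("utils", "FooParam", 3.14)
print(param.value)                         # 3.14
```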
2018/03/22
515
1,813
<issue_start>username_0: I go to a View, submit data via POST, but the redirect cannot find the Controller method. What am I doing wrong here? After submitting the form I get: ``` 404 error: cannot find page. URL is: http://localhost:52008/InternalController/UpdateCardFormPost ``` Snippet from InternalController.cs: ``` public ActionResult UpdateCardFormView() { var CardToUpdate = new CardView(); return View(CardToUpdate);//return implementation of Cards.cshtml with the empty model that was passed to it } [HttpPost] [ValidateAntiForgeryToken] public ActionResult UpdateCardFormPost(CardView c) { CardModelIO.WriteCard(c);//@TODO: IMPLEMENT return View("CardDetailView", c); } ``` UpdateCardFormView.cshtml (the view with the form I am submitting): ``` @using LeanKit.API.Client.Library.TransferObjects @model CardView @Html.BeginForm("UpdateCardFormPost", "InternalController", FormMethod.Post) @Html.TextBoxFor(c => c.AssignedUserName); ``` Heres the CardDetailView.cshtml (the view I should be redirected to): ``` @using LeanKit.API.Client.Library.TransferObjects @model IEnumerable CardView j = Model; j.AssignedUserId ```<issue_comment>username_1: you are missing closing form tag you should do it like ``` using (@Html.BeginForm("UpdateCardFormPost", "InternalController", FormMethod.Post)) { ... } ``` Upvotes: 0 <issue_comment>username_2: You've specified the controller name as InternalController but it's probably just called "Internal". Try changing `@Html.BeginForm("UpdateCardFormPost", "InternalController", FormMethod.Post)` to `@Html.BeginForm("UpdateCardFormPost", "Internal", FormMethod.Post)` Upvotes: 2 [selected_answer]<issue_comment>username_3: @using(Html.BeginForm()) { ``` @Html.TextBoxFor(c => c.AssignedUserName); ``` } Upvotes: 0
2018/03/22
2,181
6,856
<issue_start>username_0: I've got a batch file that I use to restore databases. These are `.bak` files created by other people outside our department, and the database names are predictably unpredictable, as they are usually named for our various customers. ``` SET servername=XXXXXX SET mssqldir=C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL SET datapath=%mssqldir%\DATA SET dbfile=%~1 SqlCmd -E -S %servername% -Q "RESTORE DATABASE [MyDatabase] FROM DISK = N'%dbfile%' WITH FILE = 1, MOVE 'Customer' TO N'%datapath%\Customer.mdf', MOVE 'Customer_log' TO '%datapath%\Customer.ldf', NOUNLOAD, STATS = 10" ``` As stated above, the problem with the above command is that `Customer` is not always `Customer`. So when I run it, I get something like: > > Msg 3234, Level 16, State 2, Server XXXXXX, Line 1 > > Logical file 'Customer' is not part of database 'MyDatabase'. Use RESTORE FILELISTONLY to list the logical file names. > > > Msg 3013, Level 16, State 1, Server XXXXXX, Line 1 > > RESTORE DATABASE is terminating abnormally. > > > If I try to restore without the MOVE clauses, the restore tries to put files "back" where they originally came from--like paths that contain other people's home directory: > > Msg 5133, Level 16, State 1, Server XXXXXX, Line 1 > > Directory lookup for the file "C:\TEMP\Not.Me\WidgetsRUs.mdf" failed with the operating system error 2(The system cannot find the file specified.). > > > I'm hoping there is a magic way to basically say: `MOVE '*.*' TO '"%datapath%'` Any ideas?<issue_comment>username_1: As suggested, Use **RESTORE FILELISTONLY** to get database and log name ``` @echo off SetLocal EnableDelayedExpansion EnableExtensions set "servername=XXXXXX" set "mssqldir=C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL" set "datapath=%mssqldir%\DATA" set "dbfile=%~1" set "command=" & set "restore=" set "database=" & set "databaselog=" set "command=%command%DECLARE @Table TABLE (LogicalName varchar(128),[PhysicalName] varchar(128),[Type] varchar,[FileGroupName] varchar(128)," set "command=%command%[Size] varchar(128),[MaxSize] varchar(128),[FileId] varchar(128),[CreateLSN] varchar(128),[DropLSN] varchar(128)," set "command=%command%[UniqueId] varchar(128),[ReadOnlyLSN] varchar(128),[ReadWriteLSN] varchar(128),[BackupSizeInBytes] varchar(128)," set "command=%command%[SourceBlockSize] varchar(128),[FileGroupId] varchar(128),[LogGroupGUID] varchar(128),[DifferentialBaseLSN] varchar(128)," set "command=%command%[DifferentialBaseGUID] varchar(128),[IsReadOnly] varchar(128),[IsPresent] varchar(128),[TDEThumbprint] varchar(128));" set "command=%command%DECLARE @LogicalNameData varchar(128),@LogicalNameLog varchar(128);" set "command=%command%INSERT INTO @table EXEC('RESTORE FILELISTONLY FROM DISK='''+'%dbfile%'+''' ');" set "command=%command%SET @LogicalNameData=(SELECT LogicalName FROM @Table WHERE Type='D');" set "command=%command%SET @LogicalNameLog=(SELECT LogicalName FROM @Table WHERE Type='L');" set "command=%command%SELECT @LogicalNameData,@LogicalNameLog;" set "restore=%restore%RESTORE DATABASE [MyDatabase] FROM DISK = N'%dbfile%' WITH FILE = 1, " set "restore=%restore%MOVE '%database%' TO N'%datapath%\Customer.mdf', " set "restore=%restore%MOVE '%databaselog%' TO '%datapath%\Customer.ldf', " set "restore=%restore%NOUNLOAD, STATS = 10" for /f "skip=2 usebackq tokens=1,2* delims= " %%a in (`sqlcmd -h-1 -b -E -S %servername% -Q "%command%"`) do if not defined database set "database=%%a" & set "databaselog=%%b" echo %database% echo %databaselog% if 
not exist "%datapath%" md "%datapath%">nul sqlcmd -E -S %servername% -Q "%restore%" EndLocal exit/B 1 ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Based on the answer by @Elzooilogico. I thought I'd just include my final version in case it adds some clarity for someone else. ``` @ECHO OFF SET servername=XXXXXXX SET mssqldir=C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL SET datapath=%mssqldir%\DATA SET dbfile=%mssqldir%\Backup\default.bak SET isDefault=true IF NOT "%~1"=="" ( SET dbfile=%~1 SET isDefault=false ) ECHO Target file: %dbfile% ECHO. ECHO Closing connections ... SqlCmd -E -S %servername% -Q "ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE" ECHO. ECHO Removing old database ... SqlCmd -E -S %servername% -Q "DROP DATABASE MyDatabase" SET command= SET command=%command%DECLARE @FileListTable TABLE ( SET command=%command% [LogicalName] NVARCHAR(128), SET command=%command% [PhysicalName] NVARCHAR(260), SET command=%command% [Type] CHAR(1), SET command=%command% [FileGroupName] NVARCHAR(128), SET command=%command% [Size] NUMERIC(20,0), SET command=%command% [MaxSize] NUMERIC(20,0), SET command=%command% [FileId] BIGINT, SET command=%command% [CreateLSN] NUMERIC(25,0), SET command=%command% [DropLSN] NUMERIC(25,0), SET command=%command% [UniqueId] UNIQUEIDENTIFIER, SET command=%command% [ReadOnlyLSN] NUMERIC(25,0), SET command=%command% [ReadWriteLSN] NUMERIC(25,0), SET command=%command% [BackupSizeInBytes] BIGINT, SET command=%command% [SourceBlockSize] INT, SET command=%command% [FileGroupID] INT, SET command=%command% [LogGroupGUID] UNIQUEIDENTIFIER, SET command=%command% [DifferentialBaseLSN] NUMERIC(25,0), SET command=%command% [DifferentialBaseGUID] UNIQUEIDENTIFIER, SET command=%command% [IsReadOnly] BIT, SET command=%command% [IsPresent] BIT, SET command=%command% [TDEThumbprint] VARBINARY(32), SET command=%command% [SnapshotUrl] NVARCHAR(360) SET command=%command%); SET command=%command%INSERT INTO @FileListTable EXEC('RESTORE FILELISTONLY FROM DISK = ''%dbfile%'''); SET command=%command%SELECT [LogicalName], [Type] FROM @fileListTable; SET command=%command%DECLARE @LogicalNameData varchar(128), @LogicalNameLog varchar(128); SET command=%command%SET @LogicalNameData=(SELECT LogicalName FROM @FileListTable WHERE Type='D'); SET command=%command%SET @LogicalNameLog=(SELECT LogicalName FROM @FileListTable WHERE Type='L'); SET command=%command%RESTORE DATABASE [MyDatabase] FROM DISK = N'%dbfile%' WITH FILE = 1, SET command=%command% MOVE @LogicalNameData TO N'%datapath%\MyDatabase.mdf', SET command=%command% MOVE @LogicalNameLog TO N'%datapath%\MyDatabase.ldf', SET command=%command% NOUNLOAD, STATS = 10; ECHO. ECHO Resotring database from file ... SqlCmd -E -S %servername% -Q "%command%" ECHO. ECHO Changing Owner ... SqlCmd -E -S %servername% -d "MyDatabase" -Q "EXEC sp_changedbowner 'sa'" ECHO. ECHO. IF "%isDefault%"=="false" ( PAUSE REM timeout /t 3 ) ``` Upvotes: 0
2018/03/22
1,152
3,828
<issue_start>username_0: I want to create an array that has incremental random steps, I've used this simple code. ``` t_inici=(0:10*rand:100); ``` The problem is that the random number keeps unchangable between steps. Is there any simple way to change the seed of the random number within each step?<issue_comment>username_1: My first approach to this would be to generate N-2 samples, where N is the desired amount of samples randomly, sort them, and add the extrema: ``` N=50; endpoint=100; initpoint=0; randsamples=sort(rand(1, N-2)*(endpoint-initpoint)+initpoint); t_inici=[initpoint randsamples endpoint]; ``` However not sure how "uniformly random" this is, as you are "faking" the last 2 data, to have the extrema included. This will somehow distort pure randomness (I think). If you are not necessarily interested on including the extrema, then just remove the last line and generate N points. That will make sure that they are indeed random (or as random as MATLAB can create them). Upvotes: 2 <issue_comment>username_2: **If you have a set number of points**, say `nPts`, then you could do the following ``` nPts = 10; % Could use 'randi' here for random number of points lims = [0, 10] % Start and end points x = rand(1, nPts); % Create random numbers % Sort and scale x to fit your limits and be ordered x = diff(lims) * ( sort(x) - min(x) ) / diff(minmax(x)) + lims(1) ``` This approach always includes your end point, which a `0:dx:10` approach would not necessarily. --- **If you had some maximum number of points**, say `nPtsMax`, then you could do the following ``` nPtsMax = 1000; % Max number of points lims = [0,10]; % Start and end points % Could do 10* or any other multiplier as in your example in front of 'rand' x = lims(1) + [0 cumsum(rand(1, nPtsMax))]; x(x > lims(2)) = []; % remove values above maximum limit ``` This approach may be slower, but is still fairly quick and better represents the behaviour in your question. Upvotes: 3 <issue_comment>username_3: Here is an alternative solution with "uniformly random" ``` [initpoint,endpoint,coef]=deal(0,100,10); t_inici(1)=initpoint; while(t_inici(end) ``` In my point of view, it fits your attempts well with unknown steps, start from 0, but not necessarily end at 100. Upvotes: 2 <issue_comment>username_4: From your code it seems you want a uniformly random step that varies between each two entries. This implies that the number of entries that the vector will have is unknown in advance. A way to do that is as follows. This is similar to [username_3's answer](https://stackoverflow.com/questions/49434189/define-a-vector-with-random-steps/49434789#49434789) but adds entries in batches instead of one by one, in order to reduce the number of loop iterations. 1. Guess a number of required entries, `n`. Any value will do, but a large value will result in fewer iterations and will probably be more efficient. 2. Initiallize result to the first value. 3. Generate `n` entries and concatenate them to the (temporary) result. 4. See if the current entries are already too many. 5. If they are, cut as needed and output (final) result. Else go back to step 3. Code: ``` lower_value = 0; upper_value = 100; step_scale = 10; n = 5*(upper_value-lower_value)/step_scale*2; % STEP 1. The number 5 here is arbitrary. % It's probably more efficient to err with too many than with too few result = lower_value; % STEP 2 done = false; while ~done result = [result result(end)+cumsum(step_scale*rand(1,n))]; % STEP 3. 
Include % n new entries ind_final = find(result>upper_value,1)-1; % STEP 4. Index of first entry exceeding % upper_value, if any if ind_final % STEP 5. If non-empty, we're done result = result(1:ind_final-1); done = true; end end ``` Upvotes: 2
2018/03/22
1,140
4,577
<issue_start>username_0: I've tried using MassTransit to publish a message to a topic named `events` in an Azure Service Bus. I have problems configuring MassTransit to use my predefined topic `events`, instead it creates a new topic named by the namespace/classname for the message type. So I wonder how to specify which topic to use instead of creating a new one. This is the code I've tested with: ``` using System; using System.Threading.Tasks; using MassTransit; using MassTransit.AzureServiceBusTransport; using Microsoft.ServiceBus; namespace PublisherNameSpace { public class Publisher { public static async Task PublishMessage() { var topic = "events"; var bus = Bus.Factory.CreateUsingAzureServiceBus( cfg => { var azureServiceBusHost = cfg.Host(new Uri("sb://.servicebus.windows.net"), host => { host.OperationTimeout = TimeSpan.FromSeconds(5); host.TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider( "RootManageSharedAccessKey", "" ); }); cfg.ReceiveEndpoint(azureServiceBusHost, topic, e => { e.Consumer(); }); }); await bus.Publish(new TestMessage { TestString = "testing" }); } } public class TestConsumer : IConsumer { public Task Consume(ConsumeContext context) { return Console.Out.WriteAsync("Consuming message"); } } public class TestMessage { public string TestString { get; set; } } } ```<issue_comment>username_1: If you want to consume from a specific topic, create a subscription endpoint instead of a receive endpoint, and specify the topic and subscription name in the configuration. The simplest form is shown in the unit tests: <https://github.com/MassTransit/MassTransit/blob/develop/tests/MassTransit.Azure.ServiceBus.Core.Tests/Subscription_Specs.cs> Upvotes: 3 [selected_answer]<issue_comment>username_2: The accepted answer clears up the subscription side: ``` cfg.SubscriptionEndpoint( host, "sub-1", "my-topic-1", e => { e.ConfigureConsumer(provider); }); ``` For those wondering how to get the bus configuration right on the publish side, it should look like: ``` cfg.Message(x => { x.SetEntityName("my-topic-1"); }); ``` You can then call publish on the bus: ``` await bus.Publish(message); ``` Thanks to @ChrisPatterson for [pointing this out to me](https://stackoverflow.com/questions/57593881/correct-way-to-publish-and-subscribe-to-an-explicit-service-bus-topic-with-masst)! Upvotes: 4 <issue_comment>username_3: I was able to send to an Azure Service Bus Topic using the **\_sendEndpointProvider.GetSendEndpoint(new Uri("topic:shape"));** where... "shape" is the topic name. ``` public class MassTransitController : ControllerBase { private readonly ILogger \_logger; private readonly ISendEndpointProvider \_sendEndpointProvider; public MassTransitController(ILogger logger, ISendEndpointProvider sendEndpointProvider) { \_logger = logger; \_sendEndpointProvider = sendEndpointProvider; } [HttpGet] public async Task Get() { try { var randomType = new Random(); var randomColor = new Random(); var shape = new Shape(); shape.ShapeId = Guid.NewGuid(); shape.Color = ShapeType.ShapeColors[randomColor.Next(ShapeType.ShapeColors.Count)]; shape.Type = ShapeType.ShapeTypes[randomType.Next(ShapeType.ShapeTypes.Count)]; var endpoint = await \_sendEndpointProvider.GetSendEndpoint(new Uri("topic:shape")); await endpoint.Send(shape); return Ok(shape); } catch (Exception ex) { throw ex; } } } ``` I also was able to get a .NET 5 Worker Consumer working with code like this... where the subscription "sub-all" would catch all shapes.. I'm going to make a blog post / git repo of this. 
``` public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureServices((hostContext, services) => { services.AddMassTransit(x => { x.UsingAzureServiceBus((context, cfg) => { cfg.Host("Endpoint=sb://******"); cfg.SubscriptionEndpoint( "sub-all", "shape", e => { e.Handler(async context => { await Console.Out.WriteLineAsync($"Shape Received: {context.Message.Type}"); }); e.MaxDeliveryCount = 15; }); }); }); services.AddMassTransitHostedService(); }); ``` Upvotes: 0
2018/03/22
1,248
3,997
<issue_start>username_0: I want to retrieve different categories from a news website. I am using BeautifulSoup to get title of articles from right side. How can I loop to various categories available on the left side of the website? I just started learning this kind of code so much behind understanding how it works. Any help would be appreciated.This is the website I am working on. <http://query.nytimes.com/search/sitesearch/#/>\*/ Below is my code which returns the headlines of various articles from the right side: ``` import json from bs4 import BeautifulSoup import urllib from urllib2 import urlopen from urllib2 import HTTPError from urllib2 import URLError import requests resp = urlopen("https://query.nytimes.com/svc/add/v1/sitesearch.json") content = resp.read() j = json.loads(content) articles = j['response']['docs'] headlines = [ article['headline']['main'] for article in articles ] for article in articles: print article['headline']['main'] ```<issue_comment>username_1: If I understood you correctly, you can get those articles by changing the api query like this: ``` import requests data_range = ['24hours', '7days', '30days', '365days'] news_feed = {} with requests.Session() as s: for rng in data_range: news_feed[rng] = s.get('http://query.nytimes.com/svc/add/v1/sitesearch.json?begin_date={}ago&facet=true'.format(rng)).json() ``` And access the values like this: ``` print(news_feed) #or print(news_feed['30days']) ``` **EDIT** To query aditional pages, you may try this: ``` import requests data_range = ['7days'] news_feed = {} news_list = [] page = 1 with requests.Session() as s: for rng in data_range: while page < 20: #this is limited to 120 news_list.append(s.get('http://query.nytimes.com/svc/add/v1/sitesearch.json?begin_date={}ago&page={}&facet=true'.format(rng, page)).json()) page += 1 news_feed[rng] = news_list for new in news_feed['7days']: print(new) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: First of all, instead of using `urllib` + `json` to parse the JSON response, you can use the [`requests`](http://docs.python-requests.org/en/master/) module and its built-in [`.json()`](http://docs.python-requests.org/en/master/user/quickstart/#json-response-content) function. Example: ``` import requests r = requests.get("https://query.nytimes.com/svc/add/v1/sitesearch.json") json_data = r.json() # rest of the code is same ``` Now, to scrape the `Date Range` tabs, first, go to `Developer Tools` > `Network` > `XHR`. Then, click on any of the tabs. For example, if you click on the `Past 24 Hours` tab, you'll see an AJAX request made to this URL: ``` http://query.nytimes.com/svc/add/v1/sitesearch.json?begin_date=24hoursago&facet=true ``` If you click on `Past 7 Days`, you'll see this URL: ``` http://query.nytimes.com/svc/add/v1/sitesearch.json?begin_date=7daysago&facet=true ``` In general, you can format these URLs using this: ``` url = "http://query.nytimes.com/svc/add/v1/sitesearch.json?begin_date={}&facet=true" past_24_hours = url.format('24hoursago') r = requests.get(past_24_hours) data = r.json() ``` This will get you all the NEWS items in the JSON object `data`. 
For example, you can get the NEWS titles like this: ``` for item in data['response']['docs']: print(item['headline']['main']) ``` Output: ``` Austrian Lawmakers Vote to Hinder Smoking Ban in Restaurants and Bars Soccer-Argentine World Cup Winner Houseman Dies Aged 64 Response to UK Spy Attack Not Expected at EU Summit: French Source Florida Man Reunites With Pet Cat Lost 14 Years Ago Citigroup Puts Restrictions on Gun Sales EU Exemptions From U.S. Steel Tariffs 'Possible but Not Certain': French Source Trump Initiates Trade Action Against China Trump’s Trade Threats Put China’s Leader on the Spot Poland Plans Concessions in Judicial Reforms to Ease EU Concerns: Lawmaker Florida Bridge Collapse Victim's Family Latest to Sue ``` Upvotes: 1
2018/03/22
1,014
3,811
<issue_start>username_0: Given the following: ``` interface Component {} interface Composite { getChildren(): component[]; } class StackSegment implements Component {} class Stack implements Composite { getChildren(): StackSegment[] { return new Array(); } } ``` Why does the following cause a compiler error? ``` class Bar = Stack> { } ``` The error that I get (on Stack as the default value of composite) says that StackSegment is not assignable to type 'component'.<issue_comment>username_1: The problem, I think, is that the second type parameter depends on the first. I mean, if the first parameter is of type `P` then the second must be of type `Composite` (assuming `P` implments `Component`). But as you are assigning default types, you could do this: ``` let b = new Bar(); ``` Here I don't pass a second type parameter, so TypeScript assumes it is `Stack`. However, `Stack` does not extend `P`. A default value for a type parameter not always enforces the type constraints, and so such a type cannot be declared. Upvotes: 1 <issue_comment>username_2: Changing the line of code that causes the compiler error as follows: ``` class Bar = Stack> {} ``` eliminates the error. Now the question is, will it work as expected when overridden. Upvotes: 0 <issue_comment>username_3: Note: in what follows I'm using short uppercase identifiers for type parameters, as the normal convention. You can replace `CN` with `component` and `CS` with `composite` at your pleasure (but I'd recommend against using regular identifiers for generic type parameters) --- I'm not sure what your use cases are, but default type parameters don't set constraints. In your code, ``` class Bar = Stack> { // error } ``` The type parameter `CN` is not *required* to be `StackSegment`, and thus it's possible for `Stack` not to meet the constraint of `Composite`. One way to deal with it is to make the composite just `Composite` instead of `Stack`: ``` class Bar = Composite> { // okay } ``` If you *really* want to see the default as `Stack` (e.g., if `Stack` has some extra methods that just `Composite` doesn't), then you can do this: ``` class Bar = Stack & Composite> { // okay } ``` since `Stack & Composite` is the same structural type as `Stack`. But again, I don't know your use case. As of now, you get stuff like: ``` interface OtherComponent extends Component { thingy: string; } class OtherComposite implements Composite { getChildren(): OtherComponent[] { throw new Error("Method not implemented."); } } new Bar(); // Bar as desired new Bar(); // also makes sense // but this is a Bar> new Bar(); // wha? // which is weird. ``` Do you ever intend to use just one default parameter? If not, maybe there's a better way to represent the generics. But without more use case information I wouldn't know how to advise you. Good luck. --- EDIT: An ugly syntax that works the way I think you want is to specify both types in one parameter, like this: ``` class Bar]=[StackSegment, Stack], CN extends CC[0] = CC[0], CS extends CC[1] = CC[1]> { } ``` In the above, `CC` is a two-tuple of types with the constraint you want (the second parameter must be compatible with `Composite<>` of the first parameter), and it defaults to the pair `[StackSegment, Stack]` as desired. (The `CN` and `CS` types are just there for convenience so you don't need to use `CC[0]` for the component type and `CC[1]` for the composite type). 
Now the behavior is something like this: ``` new Bar(); // CN is StackSegment and CS is Stack, as desired new Bar<[OtherComponent, OtherComposite]>(); // also makes sense ``` But you can't easily break it like before: ``` new Bar<[OtherComponent]>(); // error new Bar<[OtherComponent, Stack]>(); // error ``` Okay, good luck again! Upvotes: 1
2018/03/22
1,277
4,442
<issue_start>username_0: I've read this question [Load data from txt with pandas](https://stackoverflow.com/questions/21546739/load-data-from-txt-with-pandas). However, my data format is a little bit different. Here is the example of the data: ``` product/productId: B003AI2VGA review/userId: A141HP4LYPWMSR review/profileName: <NAME> "Rainbow Sphinx" review/helpfulness: 7/7 review/score: 3.0 review/time: 1182729600 review/summary: "There Is So Much Darkness Now ~ Come For The Miracle" review/text: Synopsis: On the daily trek from Juarez, Mexico to ... product/productId: B003AI2VGA review/userId: A328S9RN3U5M68 review/profileName: <NAME> review/helpfulness: 4/4 review/score: 3.0 review/time: 1181952000 review/summary: Worthwhile and Important Story Hampered by Poor Script and Production review/text: THE VIRGIN OF JUAREZ is based on true events... . . ``` I intend to do a sentiment analysis so I want to get only the `text` and `score` row in each section. Does anybody know how to do this using pandas? Or do I need to read the file and analyse each line to extract the review and rating?
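Since the thread above contains no pandas-specific answer, here is a minimal sketch of one way to read this layout into a DataFrame. It assumes the file is named `reviews.txt`, that every field sits on its own line, and that each record begins with a `product/productId` line; all three are assumptions, since the excerpt doesn't show the exact layout.

```python
import pandas as pd

records, current = [], {}
with open("reviews.txt", encoding="utf-8") as fh:     # file name is an assumption
    for line in fh:
        line = line.strip()
        if not line:
            continue                                  # skip blank separator lines
        key, sep, value = line.partition(":")
        if not sep:
            continue                                  # ignore lines without a "key: value" shape
        key, value = key.strip(), value.strip()
        if key == "product/productId" and current:    # a repeated productId starts a new record
            records.append(current)
            current = {}
        current[key] = value
if current:
    records.append(current)

df = pd.DataFrame(records)[["review/text", "review/score"]]
df.columns = ["text", "score"]
df["score"] = df["score"].astype(float)
print(df.head())
```

From there the `text` and `score` columns can be fed straight into whatever sentiment-analysis step comes next.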
2018/03/22
831
3,051
<issue_start>username_0: I have a search bar that passes data to the server. I am taking the sentence sent and breaking it into individual words. I am then comparing a column against each word in the sentence. ``` $term = filter_var($input['term'], FILTER_SANITIZE_STRING); $terms = explode(" ", $term); $size = sizeof($terms); $posts = DB::select('SELECT * FROM cars WHERE color = ?', $terms[0] || $terms[1] || $terms[2] || $terms[3] || $terms[4] ); ``` What is the proper way to bind with multiple parameters on one bind? This way would get messy, as I would want to search additional columns. ``` for ($i=0; $i < $size ; $i++) { $posts = DB::select('SELECT * FROM cars WHERE color = ? AND WHERE model =?', $terms[$i], $terms[$i],); } ```<issue_comment>username_1: You should use In to search between various items, and if it's a search, a OR operator would work better: ``` $posts = DB::select('SELECT * FROM cars WHERE color in (?) or model in (?)', implode(',', $terms), implode(',', $terms)); ``` Upvotes: -1 <issue_comment>username_2: > > What is the proper way to bind with multiple parameters on one bind. > > > Think of this rule: You can use a parameter in an SQL query in place of **one single scalar value**. That is, where you would normally use in your SQL statement one numeric constant, one quoted string constant, or one quoted date constant, you can replace that one query element with one parameter. Parameters **can not** be used in place of: * Lists of multiple values * SQL expressions * SQL keywords * Identifiers like table names, column names, or database names If you want to compare your `color` column to multiple values, you need multiple parameter placeholders. ``` $posts = DB::select('SELECT * FROM cars WHERE color IN (?, ?, ?, ?)'); ``` It doesn't work to pass a string containing a comma-separated list of values to a single placeholder. You end up with a query that works as if you had written it this way: ``` SELECT * FROM cars WHERE color IN ('12,34,56,78'); ``` This query will run without error, but it won't give you want you want. In a numeric context, the string `'12,34,56,78'` has a numeric value of 12. It ignores all the rest of the characters in the string after the first non-numeric character `,`. So it will succeed in searching for color 12, but it will fail to find the other colors. --- PDO makes it easy to deal with lists of values, because when it is time to supply the values for a parameterized query, you can simply pass an array to the `PDOStatement::execute()` function. If you don't know how many color values you need to search for, you can use PHP builtin functions to make a list of question mark placeholders that is the same length as your array of color values: ``` $list_of_question_marks = implode(',', array_fill(1, count($color_values), '?')); $sql = "SELECT * FROM cars WHERE color IN ($list_of_question_marks)" $stmt = $pdo->prepare($sql); $stmt->execute($color_values); ``` Upvotes: 4 [selected_answer]
2018/03/22
1,990
5,149
<issue_start>username_0: Let's say I have a tibble. ``` library(tidyverse) tib <- as.tibble(list(record = c(1:10), gender = as.factor(sample(c("M", "F"), 10, replace = TRUE)), like_product = as.factor(sample(1:5, 10, replace = TRUE))) tib # A tibble: 10 x 3 record gender like_product 1 1 F 2 2 2 M 1 3 3 M 2 4 4 F 3 5 5 F 4 6 6 M 2 7 7 F 4 8 8 M 4 9 9 F 4 10 10 M 5 ``` I would like to dummy code my data with 1's and 0's so that the data looks more/less like this. ``` # A tibble: 10 x 8 record gender_M gender_F like_product_1 like_product_2 like_product_3 like_product_4 like_product_5 1 1 0 1 0 0 1 0 0 2 2 0 1 0 0 0 0 0 3 3 0 1 0 1 0 0 0 4 4 0 1 1 0 0 0 0 5 5 1 0 0 0 0 0 0 6 6 0 1 0 0 0 0 0 7 7 0 1 0 0 0 0 0 8 8 0 1 0 1 0 0 0 9 9 1 0 0 0 0 0 0 10 10 1 0 0 0 0 0 1 ``` My workflow would require that I know a range of variables to dummy code (i.e. `gender:like_product`), but don't want to identify EVERY variable by hand (there could be hundreds of variables). Likewise, I don't want to have to identify every level/unique value of every variable to dummy code. I'm ultimately looking for a `tidyverse` solution. I know of several ways of doing this, but none of them that fit perfectly within tidyverse. I know I could use mutate... ``` tib %>% mutate(gender_M = ifelse(gender == "M", 1, 0), gender_F = ifelse(gender == "F", 1, 0), like_product_1 = ifelse(like_product == 1, 1, 0), like_product_2 = ifelse(like_product == 2, 1, 0), like_product_3 = ifelse(like_product == 3, 1, 0), like_product_4 = ifelse(like_product == 4, 1, 0), like_product_5 = ifelse(like_product == 5, 1, 0)) %>% select(-gender, -like_product) ``` But this would break my workflow rules of needing to specify every dummy coded output. I've done this in the past with model.matrix, from the `stats` package. ``` model.matrix(~ gender + like_product, tib) ``` Easy and straightforward, but I want a solution in the tidyverse. **EDIT:** Reason being, I still have to specify every variable, and being able to use select helpers to specify something like `gender:like_product` would be much preferred. I think the solution is in `purrr` ``` library(purrr) dummy_code <- function(x) { lvls <- levels(x) sapply(lvls, function(y) as.integer(x == y)) %>% as.tibble } tib %>% map_at(c("gender", "like_product"), dummy_code) $record [1] 1 2 3 4 5 6 7 8 9 10 $gender # A tibble: 10 x 2 F M 1 1 0 2 0 1 3 0 1 4 1 0 5 1 0 6 0 1 7 1 0 8 0 1 9 1 0 10 0 1 $like\_product # A tibble: 10 x 5 `1` `2` `3` `4` `5` 1 0 1 0 0 0 2 1 0 0 0 0 3 0 1 0 0 0 4 0 0 1 0 0 5 0 0 0 1 0 6 0 1 0 0 0 7 0 0 0 1 0 8 0 0 0 1 0 9 0 0 0 1 0 10 0 0 0 0 1 ``` This attempt produces a list of tibbles, with the exception of the excluded variable `record`, and I've been unsuccessful at combining them all back into a single tibble. Additionally, I still have to specify every column, and overall it seems clunky. Any better ideas? Thanks!!<issue_comment>username_1: An alternative to `model.matrix` is using the package `recipes`. This is still a work in progress and is not yet included in the tidyverse. At some point it might / will be included in the [tidyverse packages](https://www.tidyverse.org/packages/). I will leave it up to you to read up on recipes, but in the step `step_dummy` you can use special selectors from the `tidyselect` package (installed with `recipes`) like the selectors you can use in `dplyr` as `starts_with()`. I created a little example to show the steps. Example code below. But if this is handier I will leave up to you as this has already been pointed out in the comments. 
The function `bake()` uses model.matrix to create the dummies. The difference is mostly in the column names and of course in the internal checks that are being done in the underlying code of all the separate steps. ``` library(recipes) library(tibble) tib <- as.tibble(list(record = c(1:10), gender = as.factor(sample(c("M", "F"), 10, replace = TRUE)), like_product = as.factor(sample(1:5, 10, replace = TRUE)))) dum <- tib %>% recipe(~ .) %>% step_dummy(gender, like_product) %>% prep(training = tib) %>% bake(newdata = tib) dum # A tibble: 10 x 6 record gender_M like_product_X2 like_product_X3 like_product_X4 like_product_X5 1 1 1. 1. 0. 0. 0. 2 2 1. 1. 0. 0. 0. 3 3 1. 1. 0. 0. 0. 4 4 0. 0. 1. 0. 0. 5 5 0. 0. 0. 0. 0. 6 6 0. 1. 0. 0. 0. 7 7 0. 1. 0. 0. 0. 8 8 0. 0. 0. 1. 0. 9 9 0. 0. 0. 0. 1. 10 10 1. 0. 0. 0. 0. ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: In case you don't want to load any additional packages, you could also use pivot\_wider statements like this: ``` tib %>% mutate(dummy = 1) %>% pivot_wider(names_from = gender, values_from = dummy, values_fill = 0) %>% mutate(dummy = 1) %>% pivot_wider(names_from = like_product, values_from = dummy, values_fill = 0, names_glue = "like_product_{like_product}") ``` Upvotes: 0
2018/03/22
774
2,245
<issue_start>username_0: Same configuration is working with Windows 10, but when I tried to run the same configuration on Ubuntu, it's showing an error. [error message while running with jenkins](https://i.stack.imgur.com/HpkVi.png)
2018/03/22
789
2,815
<issue_start>username_0: I want to link glfw and glew to my project for graphics programming. Adding glfw was pretty straight forward, I followed the instructions on their website. Creating a window with glfw worked perfectly. However, I can't see what's wrong with my CMakeLists.txt for adding GLEW. The program gives the error: "GL/glew.h: No such file or directory". My CMakeLists.txt: ``` cmake_minimum_required( VERSION 3.5 ) project(Starting) find_package( OpenGL REQUIRED ) set( GLFW_BUILD_DOCS OFF CACHE BOOL "" FORCE ) set( GLFW_BUILD_TESTS OFF CACHE BOOL "" FORCE ) set( GLFW_BUILD_EXAMPLES OFF CACHE BOOL "" FORCE ) add_subdirectory( ${PROJECT_SOURCE_DIR}/GLEW/build/cmake ) add_subdirectory( ${PROJECT_SOURCE_DIR}/GLFW ) add_executable( Starting ${PROJECT_SOURCE_DIR}/src/main.cxx ) target_link_libraries( Starting glew32s glfw ) ``` I've tried giving it the names GLEW, glew, glew32 instead but nothing changed. The library is downloaded from here: <https://github.com/Perlmint/glew-cmake> If it has any importance, this is the batch file with which I run my CMakeLists.txt (located in a build folder inside my project source directory): ``` @echo off cmake -G"Unix Makefiles" -DCMAKE_BUILD_TYPE=Debug .. make all ``` Looking at OpenGL projects on github didn't help since almost all of them are using visual studio. It would be great if someone could tell me what I got wrong.<issue_comment>username_1: Your issue is you're forgetting to add the GLEW include directories to your project. You can use `target_include_directories` or `include_directories`, the only difference being where you put it in your `CMakeLists.txt` and the syntax. I prefer `target_include_directories` so your CMakeLists.txt after adding it would look like this: ``` cmake_minimum_required( VERSION 3.5 ) project(Starting) find_package( OpenGL REQUIRED ) set( GLFW_BUILD_DOCS OFF CACHE BOOL "" FORCE ) set( GLFW_BUILD_TESTS OFF CACHE BOOL "" FORCE ) set( GLFW_BUILD_EXAMPLES OFF CACHE BOOL "" FORCE ) add_subdirectory( ${PROJECT_SOURCE_DIR}/GLEW/build/cmake ) add_subdirectory( ${PROJECT_SOURCE_DIR}/GLFW ) add_executable( Starting ${PROJECT_SOURCE_DIR}/src/main.cxx ) target_include_directories(Starting PRIVATE ${PROJECT_SOURCE_DIR}/GLEW/include ) target_link_libraries( Starting glew32s glfw ) ``` Upvotes: 1 <issue_comment>username_2: While username_1's suggestion will likely work, there is a find script included with CMake for GLEW, assuming you are using a new enough version, so you should be using that instead of including paths manually. Just add the following: ``` find_package(GLEW 2.0 REQUIRED) target_link_libraries(Starting GLEW::GLEW) ``` This will find GLEW on your system then both link with the necessary libraries and add the necessary include directories. Upvotes: 3
2018/03/22
509
1,937
<issue_start>username_0: We would like to work with the datagrid but we have an issue with the buttons in all our site. Once Clarity is installed it is not possible anymore to make small square buttons. In a kind of way the minimum width of every button seems to be overridden by something in your system. Is there a way to separate distinct components of the Clarity Design System? Or maybe exclude some of them? Are you aware of this issue with button minimum width? Do you know a way to prevent this behavior? Many thanks.
2018/03/22
409
1,155
<issue_start>username_0: Hi, I am creating 3 Web APIs and a Gateway, and I'm using Docker in Visual Studio 2017 (.NET Core). The projects compile fine and I see the images were created. [![enter image description here](https://i.stack.imgur.com/GSvWZ.png)](https://i.stack.imgur.com/GSvWZ.png) But when I try to go to the URLs <http://LocalHost:9002> or <http://LocalHost:9000> these don't work. I have this docker compose: [![enter image description here](https://i.stack.imgur.com/h9b0G.png)](https://i.stack.imgur.com/h9b0G.png) Do I need to do something else?<issue_comment>username_1: ``` instead of http://LocalHost:9002 use http://localhost:57978 instead of http://LocalHost:9000 use http://localhost:46429 ``` Explanation 0.0.0.0:57978->8041/tcp means that host port 57978 is mapped to container port 8041 0.0.0.0:46429->8043/tcp means that host port 46429 is mapped to container port 8043 You can use this command to inspect your connections ``` docker inspect container_name ``` Upvotes: 1 <issue_comment>username_2: Maybe you can try to add the "ports" in your docker-compose for each service. Example: ports: - "9002:80" Upvotes: 0
2018/03/22
564
2,084
<issue_start>username_0: I want to add a product short description by default whenever is the new product is being created. All the products will have the same short description, so there is no point keep copying and pasting it. So it should just be there when I click on the add a new product. I would appreciate any help. ``` add_filter( 'woocommerce_short_description', 'single_product_short_description', 10, 1 ); function single_product_short_description( $post_excerpt ) { global $product; if ( is_single( $product->id ) ) $post_excerpt = '' . \_\_( "Article only available in the store.", "woocommerce" ) . ' ' . $post_excerpt; return $post_excerpt; } ``` I found the above code but couldn't get it to work :( Thank you. Regards, Emre.<issue_comment>username_1: This will work to automatically override anything put into a product's short description field on the front-end only. It will not add the text to the backend field itself, which is good because it keeps it globalized if you need to change it later. ``` add_filter( 'woocommerce_short_description', 'filter_woocommerce_short_description', 10, 1 ); function filter_woocommerce_short_description( $post_excerpt ) { $post_excerpt = '' . \_\_( "Article only available in the store.", "woocommerce" ) . ' '; return $post_excerpt; } ``` Upvotes: 0 <issue_comment>username_2: * Add this code inside your themes function.php * Change the short description content as per your need - *"Here goes your short desc."* ``` add_filter( 'wp_insert_post_data' , 'cdx_add_product_short_desc' , '99', 1 ); function cdx_add_product_short_desc( $data ) { //only for product post type if($data['post_type'] == 'product' ) { //only if short description is not present if( '' == trim($data['post_excerpt']) ): $short_desc = '**Here goes your short desc**.'; $data['post_excerpt'] = $short_desc ; endif; } // Returns the modified data. return $data; } ``` Upvotes: 2
2018/03/22
988
4,034
<issue_start>username_0: Back in Xcode 9, there was a build option called "Clean Build Folder..." (`⌥``⇧``⌘``K`), which deleted all files in the build folder, only leaving the folder behind with no contents. Since then, this behavior was removed, the menu item's title changed to "Clean Build Folder", and now behaving like the old "Clean" used to. `xcodebuild` has a build option called `clean` which simply does the same thing as Xcode's "Clean Build Folder" (`⌘``⇧``K`), which leaves stuff around. **Is there any way to delete all files in the build folder via a scriptable command?** --- What I've tried so far: ```bash xcodebuild clean -workspace "My Workspace.xcworkspace" -scheme "My Scheme" ``` This, as I said, doesn't actually clean everything up. For that, I added this bodge to my build script: ``` export IS_XCODE_CACHE_FOLDER_PRESENT="`ls -la ~/Library/Developer/ | grep -x "Xcode"`" if [ 0 -ne "$IS_XCODE_CACHE_FOLDER_PRESENT" ]; then echo "Xcode cache folder should not be present at build time! Attempting to delete..." rm -rf "~/Library/Developer/Xcode" RM_RESULT=$? if [ 0 -ne "$RM_RESULT" ]; then echo "FAILED to remove Xcode cache folder!" exit $RM_RESULT fi fi ```<issue_comment>username_1: I faced a similar requirement. So after trying for several hours, I resolved to a custom script instead of using Xcode's run script. So instead of using Xcode to run the app on the simulator I use my script which in turn first cleans the build folder, then builds the project, then installs and finally launches the app in the simulator. Here is what I am using as a quick script: ``` # Delete Build directory rm -rf ./build/Build # pod install pod install # Build project xcrun xcodebuild -scheme Example -workspace Example.xcworkspace -configuration Debug -destination 'platform=iOS Simulator,name=iPhone 11 Pro Max,OS=13.1' -derivedDataPath build # Install App xcrun simctl install "iPhone 11 Pro Max" ./build/Build/Products/Debug-iphonesimulator/Example.app/ # Launch in Simulator xcrun simctl launch "iPhone 11 Pro Max" com.ihak.arpatech.Example ``` Note: See [this question I posted](https://stackoverflow.com/questions/59139258/automatically-clean-the-project-on-run-in-xcode) to know the issue I was facing. Upvotes: 2 <issue_comment>username_2: You can add `clean` action. ``` xcodebuild clean build -workspace "My Workspace.xcworkspace" -scheme "My Scheme" ``` see more in `man xcodebuild` ``` action ... Specify one or more actions to perform. Available actions are: build Build the target in the build root (SYMROOT). This is the default action, and is used if no action is given. build-for-testing Build the target and associated tests in the build root (SYMROOT). This will also produce an xctestrun file in the build root. This requires speci- fying a scheme. analyze Build and analyze a target or scheme from the build root (SYMROOT). This requires specifying a scheme. archive Archive a scheme from the build root (SYMROOT). This requires specifying a scheme. test Test a scheme from the build root (SYMROOT). This requires specifying a scheme and optionally a destination. test-without-building Test compiled bundles. If a scheme is provided with -scheme then the command finds bundles in the build root (SRCROOT). If an xctestrun file is provided with -xctestrun then the command finds bundles at paths specified in the xctestrun file. installsrc Copy the source of the project to the source root (SRCROOT). 
install Build the target and install it into the target's installation directory in the distribution root (DSTROOT). clean Remove build products and intermediate files from the build root (SYMROOT). ``` Upvotes: -1
2018/03/22
2,026
7,028
<issue_start>username_0: I am trying to read a txt file that is nested within a parent zip file. The folder structure is like below: **Parent Zip File:** `ParentFile.zip` **Contents:** `ParentFolder/Subfolder1/Subfolder2/File1.zip` `File1.zip` contains `File1.txt` which I am trying to read in memory. I checked the documentation for `Archive::Zip and Archive::Zip::MemberRead`. I couldn't find a method that returns a new zip object from the members list so that i could use the below method. ``` $fh = Archive::Zip::MemberRead->new($zipObj, "File1.txt"); ``` The file I am trying to read is 200MB and I need to loop through 300 such files. `ParentFile.zip` is located on a network drive and i only have read access to it. I am trying to find out how i can extract the zip file to my local drive. I found below approaches but doesn't seem to help when i have a nested structure. ``` use strict; use Archive::Zip; my $destinationDirectory = 'C:\test'; my $zipObj = Archive::Zip->new('\\NetworkDrive\ParentFile.zip'); #SourceFile Path #Cannot do below - No write permission on the network drive $zipObj->extractMember('ParentFolder/Subfolder1/Subfolder2/File1.zip') #Cannot do below as well since i have a folder structure foreach my $member ($zip-> members()){ my $extractName = $member->fileName; $member->extractToFileNamed("$destinationDirectory/$extractName"); } ```<issue_comment>username_1: > > The file I am trying to read is 200MB and I don't want to extract. > > > You'll probably have to extract it. If the inner ZIP file has been deflated, there's no choice -- you can't seek within a deflated stream, and ZIP archives cannot be read without seeking. (The table of contents is stored at the end of the archive.) If the inner ZIP file is stored (i.e, not compressed), it might be technically possible to treat the stored content as a ZIP archive, but I'm not aware of any way to make Archive::Zip do that. Upvotes: 0 <issue_comment>username_2: 200MB isn't a big file, and you shouldn't be anticipating bottlenecks in your code before you have done some timings `File1.zip` has been doubly-compressed into `ParentFile.zip`. There is no way to extract the information from the former without expanding at least the relevant part of the latter Unless `File1.zip` is enormous (the zip format allows many gigabytes of simple data to be compressed to a few hundred bytes) you should simply extract the entire file and process it in a second step If you're desperate, then there are ways to extract a list of the items within a zip file without access to the entire contents, but I don't think that is going to help you Upvotes: 1 <issue_comment>username_3: You can use [IO::Uncompress::Unzip](http://search.cpan.org/%7Epmqs/IO-Compress-2.074/lib/IO/Uncompress/Unzip.pm) to work with a nested zip file without having to uncompress and store any of the enclosing zip files to disk. Here is an example to show how it works. In my test setup I have a zip file called `outer.zip` that contains `inner.zip`. ``` $ unzip -l outer.zip Archive: outer.zip Length Date Time Name --------- ---------- ----- ---- 185 03-23-2018 12:53 inner.zip --------- ------- 185 1 file ``` `inner.zip` contains the file we want to get access to. ``` $ unzip -l inner.zip Archive: inner.zip Length Date Time Name --------- ---------- ----- ---- 14 03-23-2018 12:53 payload.txt --------- ------- 14 1 file ``` In this instance it just contains a few lines of text. 
``` $ cat payload.txt line 1 line 2 ``` The script below will read the payload data from the `inner zip` and write to `output.txt`. ``` #!/usr/bin/perl use warnings; use strict; use IO::Uncompress::Unzip qw(unzip) ; my $outer = "outer.zip"; my $inner = "inner.zip"; my $data = "payload.txt"; my $output = "output.txt"; my $z = new IO::Uncompress::Unzip $outer, Name => $inner or die "Cannot open $outer\n"; unzip $z => $output, Name => $data or die "Cannot unzip $inner"; ``` This is what I see in `output.txt` ``` $ cat output.txt line 1 line 2 ``` **Points to Note** 1. The `$z` object returned from the `IO::Uncompress::Unzip` constructor is a Perl filehandle that will read `outer.zip` in *streaming mode*. The `Name => $inner` parameters tell it that we are only interested in the `inner.zip` entry in `outer.zip`. 2. The `$z` filehandle is then used in a call to the `unzip` method to read the `payload.txt` entry of `inner.zip`. 3. [IO::Uncompress::Unzip](http://search.cpan.org/%7Epmqs/IO-Compress-2.074/lib/IO/Uncompress/Unzip.pm) is a *streaming* uncompressor. That means you get the ability to access a nested zip file (potentially to any depth), without having to store any data from the enclosing zip files to disk. 4. The use of [IO::Uncompress::Unzip](http://search.cpan.org/%7Epmqs/IO-Compress-2.074/lib/IO/Uncompress/Unzip.pm) does not mean that you get access to a nested zip file without having to incur the cost of uncompressing the enclosing zip file. In this case the data from all the nested layers of zip files is uncompressed in-memory a bit at a time as it is needed. 5. Running a streaming unzip does come with a health warning. Most zip files can be uncompressed in *streaming* mode, but there are exceptions. Handle with care. Recursive Uncompression ----------------------- Taking the above example one step further, you can use the fact that the Perl `IO::Compress::*` modules all return a real Perl filehandle to create a recursive script that will walk well-formed nested zip files to any depth. This script below, `nested-unzip`, uses a derivative module of `IO::Uncompress::Unzip` called [Archive::Zip::SimpleUnzip](https://metacpan.org/pod/Archive::Zip::SimpleUnzip) to do the work. All it does is list the members of all the zip files found. ```perl #!/usr/bin/perl use strict; use warnings; use Archive::Zip::StreamedUnzip qw($StreamedUnzipError) ; sub walk { my $unzip = shift ; my @unzip_path = @{ shift() }; while (my $member = $unzip->next()) { my $name = $member->name(); print " " x @unzip_path . "$name\n" ; if ($name =~ /\.zip$/i) { if ($member->isEncrypted()) { print " " x @unzip_path . "$name ENCRYPTED\n" ; next; } my $fh = $member->open(); my $newunzip = new Archive::Zip::StreamedUnzip $fh or die "Cannot open '$name': $StreamedUnzipError"; walk($newunzip, [@unzip_path, $name]); } } } my $zipfile = $ARGV[0]; my $unzip = new Archive::Zip::StreamedUnzip $zipfile or die "Cannot open '$zipfile': $StreamedUnzipError"; print "$zipfile\n" ; walk($unzip, [$zipfile]) ; ``` Running that against `outer.zip` gives ``` $ perl nested-unzip oute.zip outer.zip inner.zip payload.txt ``` Upvotes: 3
2018/03/22
384
1,379
<issue_start>username_0: I'm not sure but after setting a value in struct I'm getting nil when trying to read the variable: ``` struct MainStruct : Decodable{ var array : [InternalArray]? } struct InternalArray : Decodable{ var firstName : String? var lastName : String? var Number : Int? } var testing: MainStruct? testing?.array![0].firstName = "TEST" print("test value \(testing?.array![0].firstName!)") ``` prints **nil**<issue_comment>username_1: You haven't initialised testing. Upvotes: 2 <issue_comment>username_2: This should work ``` var testing: MainStruct? = MainStruct() testing?.array = [] testing?.array!.append(InternalArray()) testing?.array![0].firstName = "TEST" ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: First of all you are using too many optionals. Three(!) issues: 1. `Testing` is not initialized. 2. `InternalArray` is not initialized. 3. You cannot access an array with index subscription if there is no item at given index (causes an exception). --- ``` struct MainStruct : Decodable { var array = [InternalArray]() } struct InternalArray : Decodable { var firstName : String? var lastName : String? var Number : Int? } var testing = MainStruct() testing.array.append(InternalArray()) testing.array[0].firstName = "TEST" print("test value \(testing.array[0].firstName!)") ``` Upvotes: 3
2018/03/22
564
1,876
<issue_start>username_0: I have some terrible data that I need to turn into something that means something. I realise this isn't the best way to ask; I haven't used StackOverflow in ages and I'm not sure of the syntax. I have tried to write some queries, but really I am not sure where to start with this sort of query, so any help would be much appreciated. Thanks.

I have a header table which links to a details table. In the details table I have 3 sets of 2 records where I want to get the difference between two columns.

**Header Table**

```
headerId
1
2
```

**Detail Table**

```
detailid|headerId|name|totalElapsedMs
1|1|Request1|100
2|1|Response1|1000
3|1|Request2|1100
4|1|Response2|1800
5|1|Request3|2000
6|1|Response3|2600
```

**Results**

I want to subtract the rows that match each other and then pivot them up to the header row as shown below.

```
headerId|Request1ElapsedMs|Request2ElapsedMs|Request3ElapsedMs
1|900|700|600
```
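One way to get that exact shape is conditional aggregation. The sketch below is untested and makes two assumptions beyond what is shown above: the detail table is called `Detail`, and there are always exactly three Request/Response pairs named as in the sample data.

```sql
SELECT d.headerId,
       MAX(CASE WHEN d.name = 'Response1' THEN d.totalElapsedMs END)
     - MAX(CASE WHEN d.name = 'Request1'  THEN d.totalElapsedMs END) AS Request1ElapsedMs,
       MAX(CASE WHEN d.name = 'Response2' THEN d.totalElapsedMs END)
     - MAX(CASE WHEN d.name = 'Request2'  THEN d.totalElapsedMs END) AS Request2ElapsedMs,
       MAX(CASE WHEN d.name = 'Response3' THEN d.totalElapsedMs END)
     - MAX(CASE WHEN d.name = 'Request3'  THEN d.totalElapsedMs END) AS Request3ElapsedMs
FROM Detail AS d          -- assumed table name
GROUP BY d.headerId;      -- one output row per headerId
```

Each `MAX(CASE ...)` picks out a single named row per `headerId`, so subtracting the Request value from the matching Response value reproduces the 900/700/600 figures from the sample.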
2018/03/22
1,106
3,744
<issue_start>username_0: [View Grades Page Example](https://i.stack.imgur.com/5NeRx.jpg)

So as the title suggests, I have a problem with PHP and AJAX giving me repeating rows. I just want to display the total no. of units and the GPA, but for every subject there is, it adds an extra row.

AJAX:

```
$.ajax({
    traditional: true,
    url: "getGrades.php",
    method: "GET",
    data: {
        id: id,
        semester: semester,
        year: year
    },
    dataType: "JSON",
    success: function(data){
        $("#gradeBody").children().remove();
        $("#gradeFoot").children().remove();
        console.log(data);
        $("#gradeLoad").css("display", "block");
        for(var i = 0; i < data.length; i++){
            $("#gradeTable tbody").append('| '+data[i].subjectCode+' | '+data[i].subjectName+' |> '+data[i].units+' |> '+data[i].mg+' |> '+data[i].units+' | ');
            $("#gradeTable tfoot").append('| | |> **Total No. of Units:** '+data[i].un+' |> **GPA** |> '+data[i].ave+' | ');
        }
    }
});
```

PHP:

```
<?php
require("connect.php");

$id = $_GET['id'];
$year = $_GET['year'];
$semester = $_GET['semester'];

$query = "
    SELECT *, SUM(s.units) AS un, ROUND(AVG(g.fg), 2) AS ave
    FROM subjectschedule AS ss
    JOIN subject AS s ON s.subjectID = ss.subjectid
    JOIN grados as g ON g.subjectid = s.subjectID
    WHERE g.studentid = '$id'
    AND ss.academic_year_start = '$year'
    AND ss.semester = '$semester'
    GROUP BY ss.subSchedID
";

$retval = mysqli_query($db, $query);

$data = array();
while($row = mysqli_fetch_assoc($retval)){
    $data[] = $row;
}

echo json_encode($data);
?>
```

The image I uploaded shows the current result, how do I prevent this from repeating?<issue_comment>username_1: You can simply do it by appending the Sum and GPA at the end after the loop like this

```js
$.ajax({
  traditional: true,
  url: "getGrades.php",
  method: "GET",
  data: {
    id: id,
    semester: semester,
    year: year
  },
  dataType: "JSON",
  success: function(data) {
    $("#gradeBody").children().remove();
    $("#gradeFoot").children().remove();
    console.log(data);
    $("#gradeLoad").css("display", "block");

    for (var i = 0; i < data.length; i++) {
      $("#gradeTable tbody").append('| ' + data[i].subjectCode + ' | ' + data[i].subjectName + ' |> ' + data[i].units + ' |> ' + data[i].mg + ' |> ' + data[i].units + ' | ');
    }

    $("#gradeTable tfoot").append('| | < td class = "course" > < /td>> **Total No. of Units:** ' + data[0].un + ' < /td> |> < td class = "course" > < strong > GPA < /strong> < /td>> ' + data[0].ave + ' | < /tr>');
  }
});
```

Upvotes: 1 <issue_comment>username_2: try this out. I guess you need to put the footer out of the for loop.

```
$.ajax({
    traditional: true,
    url: "getGrades.php",
    method: "GET",
    data: {
        id: id,
        semester: semester,
        year: year
    },
    dataType: "JSON",
    success: function(data){
        $("#gradeBody").children().remove();
        $("#gradeFoot").children().remove();
        console.log(data);
        $("#gradeLoad").css("display", "block");
        for(var i = 0; i < data.length; i++){
            $("#gradeTable tbody").append('| '+data[i].subjectCode+' | '+data[i].subjectName+' |> '+data[i].units+' |> '+data[i].mg+' |> '+data[i].units+' | ');
        }
        $("#gradeTable tfoot").append('| | |> **Total No. of Units:** '+data[i].un+' |> **GPA** |> '+data[i].ave+' | ');
    }
});
```

Upvotes: 0
2018/03/22
962
3,412
<issue_start>username_0: What type should I give the event parameter of the `setEndpointState` method of this component? I tried setting it to `React.FormEvent` but then I get a type error in the setState function saying that I am missing props `name`, `hostname` and `description`. Is the type of the event wrong, or is there a way to write the setState function differently?

```
import * as React from "react";
import EditBasicInfoForm from "./EditBasicInfoForm";
import { Endpoint } from "../../Endpoints/model";

interface EditBasicInfoProps {
  editBasicInfo: (endpoint: Endpoint) => void;
  ocid: string;
}

interface EditBasicInfoState {
  name: string;
  hostname: string;
  description: string;
}

export class EditBasicInfo extends React.Component<EditBasicInfoProps, EditBasicInfoState> {
  constructor(props: any) {
    super(props);
    this.state = {
      name: "",
      hostname: "",
      description: ""
    };
    this.setEndpointState = this.setEndpointState.bind(this);
    this.editBasicInfo = this.editBasicInfo.bind(this);
  }

  public setEndpointState(e: React.FormEvent): void {
    const target = e.currentTarget;
    const value = target.value;
    const name = target.name;

    this.setState({
      [name]: value
    });
  }

  public editBasicInfo(): void {
    const endpoint: any = {
      name: this.state.name,
      hostname: this.state.hostname,
      description: this.state.description,
      ocid: this.props.ocid,
    };

    this.props.editBasicInfo(endpoint);
  }

  public render(): JSX.Element {
    return (
      <>
    );
  }
}

export default EditBasicInfo;
```
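For handlers attached to text inputs, `React.ChangeEvent<HTMLInputElement>` is usually a better fit than `React.FormEvent`, and the computed-key object can be asserted to a `Pick` of the state type so that `setState` accepts it. The sketch below is untested against this exact component; it reuses the `EditBasicInfoState` interface from the code above and assumes the rendered inputs have `name` attributes matching the state keys.

```tsx
// Hypothetical replacement for setEndpointState (not the original author's code).
public setEndpointState(e: React.ChangeEvent<HTMLInputElement>): void {
    const { name, value } = e.currentTarget;
    // The assertion tells the compiler that the single computed key is one of
    // the EditBasicInfoState keys ("name" | "hostname" | "description").
    this.setState({ [name]: value } as Pick<EditBasicInfoState, keyof EditBasicInfoState>);
}
```

Depending on the compiler version and strictness settings, a double assertion (`as unknown as Pick<...>`) may be needed instead. The corresponding change in the child form would be to type its `onChange` props as `(e: React.ChangeEvent<HTMLInputElement>) => void`.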
2018/03/22
1,336
6,215
<issue_start>username_0: I have a g suites account and applications associated with my e-mails. I was looking at the Laravel mail functions but I do not see any option to log in to gmail smtp with xoauth auth type. I was using PHPMailer with codeigniter and I had to use clientId, clientSecret and refreshToken to send emails via smtp.gmail.com Is there any chance I can authenticate using xoauth with native laravel swiftmailer?<issue_comment>username_1: Since Laravel doesn't have available configuration to set AuthMode then we need to tweak it a little bit. 1. Register a new Mail service provider in `config/app.php`: ``` // ... 'providers' => [ // ... // Illuminate\Mail\MailServiceProvider::class, App\MyMailer\MyMailServiceProvider::class, // ... ``` 2. `app/MyMailer/MyMailServiceProvider.php` should create your own `TransportManager` class: ``` ``` namespace App\MyMailer; class MyMailServiceProvider extends \Illuminate\Mail\MailServiceProvider { public function registerSwiftTransport() { $this->app['swift.transport'] = $this->app->share(function ($app) { return new MyTransportManager($app); }); } } ``` ``` 3. In the `app/MyMailer/MyTransportManager.php` we can provide additional configuration to the `SwiftMailer`: ``` ``` php namespace App\MyMailer; class MyTransportManager extends \Illuminate\Mail\TransportManager { /** * Create an instance of the SMTP Swift Transport driver. * * @return \Swift_SmtpTransport */ protected function createSmtpDriver() { $transport = parent::createSmtpDriver(); $config = $this-app->make('config')->get('mail'); if (isset($config['authmode'])) { $transport->setAuthMode($config['authmode']); } return $transport; } } ``` ``` 4. Last thing to do is to provide mail configuration with `authmode` set to `XOAUTH2` and `password` to your access token: ``` ``` php return array( /* |-------------------------------------------------------------------------- | Mail Driver |-------------------------------------------------------------------------- | | Laravel supports both SMTP and PHP's "mail" function as drivers for the | sending of e-mail. You may specify which one you're using throughout | your application here. By default, Laravel is setup for SMTP mail. | | Supported: "smtp", "mail", "sendmail" | */ 'driver' = 'smtp', /* |-------------------------------------------------------------------------- | SMTP Host Address |-------------------------------------------------------------------------- | | Here you may provide the host address of the SMTP server used by your | applications. A default option is provided that is compatible with | the Postmark mail service, which will provide reliable delivery. | */ 'host' => 'smtp.gmail.com', /* |-------------------------------------------------------------------------- | SMTP Host Port |-------------------------------------------------------------------------- | | This is the SMTP port used by your application to delivery e-mails to | users of your application. Like the host we have set this value to | stay compatible with the Postmark e-mail application by default. | */ 'port' => 587, /* |-------------------------------------------------------------------------- | Global "From" Address |-------------------------------------------------------------------------- | | You may wish for all e-mails sent by your application to be sent from | the same address. Here, you may specify a name and address that is | used globally for all e-mails that are sent by your application. 
| */ 'from' => array('address' => '<EMAIL>', 'name' => 'user'), /* |-------------------------------------------------------------------------- | E-Mail Encryption Protocol |-------------------------------------------------------------------------- | | Here you may specify the encryption protocol that should be used when | the application send e-mail messages. A sensible default using the | transport layer security protocol should provide great security. | */ 'encryption' => 'tls', /* |-------------------------------------------------------------------------- | SMTP Server Username |-------------------------------------------------------------------------- | | If your SMTP server requires a username for authentication, you should | set it here. This will get used to authenticate with your server on | connection. You may also set the "password" value below this one. | */ 'username' => '<EMAIL>', /* |-------------------------------------------------------------------------- | SMTP Server Password |-------------------------------------------------------------------------- | | Here you may set the password required by your SMTP server to send out | messages from your application. This will be given to the server on | connection so that the application will be able to send messages. | */ 'password' => '<PASSWORD>', /* |-------------------------------------------------------------------------- | Sendmail System Path |-------------------------------------------------------------------------- | | When using the "sendmail" driver to send e-mails, we will need to know | the path to where Sendmail lives on this server. A default path has | been provided here, which will work well on most of your systems. | */ 'sendmail' => '/usr/sbin/sendmail -bs', /* |-------------------------------------------------------------------------- | Mail "Pretend" |-------------------------------------------------------------------------- | | When this option is enabled, e-mail will not actually be sent over the | web and will instead be written to your application's logs files so | you may inspect the message. This is great for local development. | */ 'pretend' => false, 'authmode' => 'XOAUTH2', ); ``` ``` Upvotes: 1 <issue_comment>username_2: Best way to handle this is with App Passwords. You set the app password up in Google account security settings, then use that password in the .env for mail\_password. It may ask you to setup two-factor, that's okay as the app password will bypass that Upvotes: 0
2018/03/22
499
1,694
<issue_start>username_0: I'm switching from Jquery AJAX to react-dropzone & Axios, I'd like to upload a file to my Django server, I have no issue posting a blob url of the image on the server but I want to get it under `request.FILES` but I am getting an empty queryset.

```
request.FILES : <!--- empty
request.POST : <QueryDict: {}> <!--- able to get a blob url
```

---

Here's what my axios configuration looks like :

```
const temporaryURL = URL.createObjectURL(step3.drop[0]);

var fd = new FormData();
fd.append('image', temporaryURL);

axios({
    method: 'post',
    url: SITE_DOMAIN_NAME + '/business-card/collect/',
    data: fd,
    headers: {
        "X-CSRFToken": CSRF_TOKEN,
        "content-type": "application/x-www-form-urlencoded"
    }
}).then(function (response) {
    console.log(response)
    URL.revokeObjectURL(temporaryURL);
}).catch(function (error) {
    console.log(error)
});
```

I am receiving the file on a classBasedView on POST request. How can I upload the file? Where am I wrong?

**Edit**

I also tried "application/form-data", doesn't solve the problem<issue_comment>username_1: the problem came from the `content-type` as it was using `"application/form-data"` instead of `"multipart/form-data"`.

Upvotes: 3 [selected_answer]<issue_comment>username_2: I am answering in case, someone comes here by searching on google:

```
let formData = new FormData();
formData.append('myFile', file);
formData.append('otherParam', 'myValue');

axios({
    method: 'post',
    url: 'myUrl',
    data: formData,
    headers: {
        'content-type': 'multipart/form-data'
    }
}).then(function (response) {
    // on success
}).catch(function (error) {
    // on error
});
```

Upvotes: 0
2018/03/22
1,187
3,734
<issue_start>username_0: The problem is that each time I click on the button in my app, then whenever one of my `EditText` or both are empty, the app will crash. The idea is that I will write the calories in the `EditText` called caloriesIn and it will put out an `int` in the `caloriesOut` which is a textfield. The same idea goes for "fat". The problem just to sum up is that if I write something in the calories, but don't write anything in fat, or just don't write anything in either of them, the app will crash.

My Code:

```
public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Button button = (Button) findViewById(R.id.button);
        button.setOnClickListener(
                new View.OnClickListener() {
                    @Override
                    public void onClick(View view) {
                        EditText caloriesIn = (EditText) findViewById(R.id.caloriesIn);
                        EditText fatIn = (EditText) findViewById(R.id.fatIn);
                        TextView caloriesOut = (TextView) findViewById(R.id.caloriesOut);
                        TextView fatOut = (TextView) findViewById(R.id.fatOut);

                        int calories = Integer.parseInt(caloriesIn.getText().toString());
                        int fat = Integer.parseInt(fatIn.getText().toString());

                        int caloriesResult = calories;
                        int fatResult = fat;

                        caloriesOut.setText(caloriesResult + "");
                        fatOut.setText(fatResult + "");
                    }
                });
    }
}
```

Crash report:

> 03-22 17:20:02.512 22193-22193/ottolopez.healthynote I/Choreographer: Skipped 47 frames! The application may be doing too much work on its main thread.
> 03-22 17:20:02.556 22193-22193/ottolopez.healthynote V/View: dispatchProvideAutofillStructure(): not laid out, ignoring 0 children of 1073741833
> 03-22 17:20:02.561 22193-22193/ottolopez.healthynote I/AssistStructure: Flattened final assist data: 2936 bytes, containing 1 windows, 11 views
> 03-22 17:20:05.047 22193-22193/ottolopez.healthynote D/AndroidRuntime: Shutting down VM
> 03-22 17:20:05.049 22193-22193/ottolopez.healthynote E/AndroidRuntime: FATAL EXCEPTION: main
> Process: ottolopez.healthynote, PID: 22193
> java.lang.NumberFormatException: For input string: ""
> at java.lang.Integer.parseInt(Integer.java:620)
> at java.lang.Integer.parseInt(Integer.java:643)
> at ottolopez.healthynote.MainActivity$1.onClick(MainActivity.java:28)
> at android.view.View.performClick(View.java:6294)
> at android.view.View$PerformClick.run(View.java:24770)
> at android.os.Handler.handleCallback(Handler.java:790)
> at android.os.Handler.dispatchMessage(Handler.java:99)
> at android.os.Looper.loop(Looper.java:164)
> at android.app.ActivityThread.main(ActivityThread.java:6494)
> at java.lang.reflect.Method.invoke(Native Method)
> at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438)
> at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807)
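The stack trace shows the immediate cause: `Integer.parseInt("")` throws a `NumberFormatException` when an `EditText` is blank. One way to guard against that is to check the two strings before parsing. This is only a sketch of the `onClick` body, reusing the view references from the code above; the toast message and the call to `trim()` are assumptions of mine, not part of the question.

```java
// Inside onClick, after looking up the views (names as in the question):
String caloriesText = caloriesIn.getText().toString().trim();
String fatText = fatIn.getText().toString().trim();

if (TextUtils.isEmpty(caloriesText) || TextUtils.isEmpty(fatText)) {
    // Skip parsing when either field is blank instead of crashing.
    Toast.makeText(MainActivity.this, "Please enter both calories and fat",
            Toast.LENGTH_SHORT).show();
    return;
}

int calories = Integer.parseInt(caloriesText);
int fat = Integer.parseInt(fatText);

caloriesOut.setText(String.valueOf(calories));
fatOut.setText(String.valueOf(fat));
```

`android.text.TextUtils` and `android.widget.Toast` need to be imported. `Integer.parseInt()` can still throw if the user types something non-numeric, so restricting the fields to a numeric input type (or wrapping the parse in a try/catch) is worth considering as well.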
2018/03/22
1,054
2,975
<issue_start>username_0: I have the following square DataFrame: ``` In [104]: d Out[104]: a b c d e a inf 5.909091 8.636364 7.272727 4.454545 b 7.222222 inf 8.666667 7.666667 1.777778 c 15.833333 13.000000 inf 9.166667 14.666667 d 4.444444 3.833333 3.055556 inf 4.833333 e 24.500000 8.000000 44.000000 43.500000 inf ``` this is modified distance matrix, representing pairwise distance between objects ['a','b','c','d','e'], where each row is divided by a coefficient (weight) and all diagonal elements artificially set to `np.inf`. How may I get a list/vector of indices like as follows in an *efficient* (*vectorized*) way: ``` d # index of minimal element in the column `a` a # index of minimal element in the column `b` (excluding already found indices: [d]) b # index of minimal element in the column `c` (excluding already found indices: [d,a]) c # index of minimal element in the column `d` (excluding already found indices: [d,a,b]) ``` I.e. in the first column we had found index `d`, so when we search for a minimum in the second column we are excluding row with index `d` (found previously in the first column) - this would be `a`. When we are looking for the minimum in the third column we are excluding rows with indices found previously (`['d','a']`) - this would be `b`. When we are looking for the minimum in the fourth column we are excluding rows with indices found previously (`['d','a','b']`) - this would be `c`. I don't need diagonal (`inf`) elements, so the resulting list/vector will contain `d.shape[0] - 1` elements. --- I.e. the resulting list will look like: `['d','a','b','c']` or in case of Numpy solution the corresponding numerical indices: `[3,0,1,2]` It's not a problem to do it using slow `for loop` solution, but I can't wrap my head around a vectorized (fast) solution...<issue_comment>username_1: Here is my solution, which I'm sure is not the best one: resulting list: ``` res = [] ``` main function, that will search for a minimum in a column, excluding previously found indices and adding found index to `res`: ``` def f(col): ret = col.loc[~col.index.isin(res)].idxmin() if ret not in res: res.append(ret) ``` apply function to each column: ``` _ = d.apply(f) ``` result: ``` In [55]: res Out[55]: ['d', 'a', 'b', 'c', 'e'] ``` excluding last element: ``` In [56]: res[:-1] Out[56]: ['d', 'a', 'b', 'c'] ``` Upvotes: 1 <issue_comment>username_2: A loop is the only solution I can see here. But you can use `numpy` + `numba` to optimise. ``` from numba import jit @jit(nopython=True) def get_min_lookback(A, res): for i in range(A.shape[1]): res[i] = np.argmin(A[:, i]) A[res[i], :] = np.inf return res arr = df.values get_min_lookback(arr, np.zeros(arr.shape[1], dtype=int)) # array([3, 0, 1, 2, 0]) ``` Upvotes: 3 [selected_answer]
2018/03/22
472
1,290
<issue_start>username_0: My PC is running Ubuntu 17.10 and has GTK+ 3.22. When compiled from this environment the binaries cannot be run under Ubuntu 16.04, since only GTK+ 3.18 is compatible with Ubuntu 16.04. How do I compile for a lower GTK runtime level?
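The usual way to target an older GTK runtime is to build on the oldest distribution you intend to support (for example in an Ubuntu 16.04 chroot, container, or VM), because the compiler and linker bind the binary to whatever GTK and glibc versions are present at build time. When one code base has to cope with several GTK versions, GTK's own version macros and functions can help. The snippet below is a generic diagnostic, not tied to any code from the question; it prints the GTK version the binary was built against versus the one it is actually running with:

```c
#include <gtk/gtk.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    /* Version of the GTK headers used at compile time. */
    printf("Built against GTK+ %d.%d.%d\n",
           GTK_MAJOR_VERSION, GTK_MINOR_VERSION, GTK_MICRO_VERSION);

    /* Version of the GTK shared library loaded at run time. */
    printf("Running with GTK+ %u.%u.%u\n",
           gtk_get_major_version(),
           gtk_get_minor_version(),
           gtk_get_micro_version());

    return 0;
}
```

Guards such as `#if GTK_CHECK_VERSION(3, 22, 0)` let the same source build against both 3.18 and 3.22 headers, but they do not make a binary that was built against 3.22 run on 3.18; for that, building in a 16.04 environment is the most reliable route.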