2018/03/19
<issue_start>username_0: I was hoping to experiment with cl-async to run a series of external programs with a large combinations of command line arguments. However, I can't figure out how to read the stdout of the processes launched with `as:spawn`. I would typically use uiop which makes it easy to capture the process output: ``` (let ((p (uiop:launch-program ... :output :stream))) (do-something-else-until-p-is-done) (format t "~a~%" (read-line (uiop:process-info-output p)))) ``` I've tried both `:output :pipe` and `:output :stream` options to `as:spawn` and executing `(as:process-output process-object)` in my exit-callback shows the appropriate pipe or async-stream objects but I can't figure out how to read from them. Can anyone with experience with this library tell how to accomplish this?<issue_comment>username_1: You can check the error messages from Google Map [here](https://developers.google.com/maps/documentation/static-maps/error-messages). In your case, the size parameter is not within the expected range of numeric values, or is missing from the request. Upvotes: 0 <issue_comment>username_2: Template literals use `$`, eg `${varNameHere}` - not `#{location.coords.lat}`. Try: `src="http://maps.googleapis.com/maps/api/staticmap?center=${location.coords.lat},${location.coords.lng}&zoom=17&size=400x350&sensor=false&markers=${location.coords.lat},${location.coords.lng}&scale=2&key=<KEY>"` Upvotes: 2
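username_2's point about interpolation syntax can be illustrated outside JavaScript as well; here is the same URL construction sketched with Python f-strings, which play the role of JS template literals (the coordinates below are made up for the demo, and the API key parameter is omitted):

```python
# f-strings interpolate values the way ${...} does in JS template literals.
coords = {"lat": 40.7648, "lng": -73.9808}  # hypothetical coordinates
url = (
    "http://maps.googleapis.com/maps/api/staticmap"
    f"?center={coords['lat']},{coords['lng']}"
    "&zoom=17&size=400x350"
)
print(url)
```

As in the JS case, the key point is that the interpolation only happens inside the dedicated literal syntax; a plain string containing `${...}` is passed through verbatim.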
2018/03/19
<issue_start>username_0: I have an AWS step-function/state-machine of Lambda functions written primarily in Javascript (although one is in Java) and I'd like to manage the error processing better. I have no problem with having an error condition being *caught* and then forwarded to another state in the flow. So for instance, the following state definition in my state machine passes execution to the `NotifyOfError` state where I am able to email and sms appropriately about the error state. ``` Closure: Type: Task Resource: >- arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:xxx-services-${opt:stage}-transportClosure Next: WaitForCloudWatch Catch: - ErrorEquals: - "States.ALL" ResultPath: "$.error-info" Next: NotifyOfError ``` However, rather than hand ALL errors to this one state there are a few errors I'd like handle differently. So at first I thought that if I threw a Javascript/Node error with a given "name" then that name would be something I could branch off of in the *ErrorEquals* configuration. Example: ``` catch(e) { if (e.message.indexOf('something') !== -1) { e.name = "SomethingError"; throw e; } ``` but soon realized that name was only being prepended to the `Cause` portion of the step-function and not something that would branch. I then tried extending the base Error class like so: ``` export default class UndefinedAssignment extends Error { constructor(e: Error) { super(e.message); this.stack = e.stack; } } ``` but throwing this error actually did nothing, meaning that by the time it showed up in the Step Function the Error's type was still just "Error": ``` "error-info": { "Error": "Error", "Cause": "{\"errorMessage\":\"Error: the message",\"errorType\":\"Error\",\"stackTrace\":[\"db.set.catch.e (/var/task/lib/prepWorker/Handler.js:247:23)\",\"process._tickDomainCallback (internal/process/next_tick.js:135:7)\"]}" } ``` So I'm still unclear how I can distinguish errors sourced in Node that are *branchable* within the step function. 
> > **Note:** with Java, it appears it *does* pickup the error class correctly (although I've done far less testing on the Java side) > > ><issue_comment>username_1: You should return thrown exception from Lambda using `callback`. Example Cloud Formation template creating both lambda and state machine: ``` AWSTemplateFormatVersion: 2010-09-09 Description: Stack creating AWS Step Functions state machine and lambda function throwing custom error. Resources: LambdaFunction: Type: AWS::Lambda::Function Properties: Handler: "index.handler" Role: !GetAtt LambdaExecutionRole.Arn Code: ZipFile: | exports.handler = function(event, context, callback) { function SomethingError(message) { this.name = "SomethingError"; this.message = message; } SomethingError.prototype = new Error(); const error = new SomethingError("something-error"); callback(error); }; Runtime: "nodejs6.10" Timeout: 25 StateMachine: Type: AWS::StepFunctions::StateMachine Properties: RoleArn: !GetAtt StatesExecutionRole.Arn DefinitionString: !Sub - > { "Comment": "State machine for nodejs error handling experiment", "StartAt": "FirstState", "States": { "FirstState": { "Type": "Task", "Resource": "${ThrowErrorResource}", "Next": "Success", "Catch": [ { "ErrorEquals": ["SomethingError"], "ResultPath": "$.error", "Next": "CatchSomethingError" } ] }, "Success": { "Type": "Pass", "End": true }, "CatchSomethingError": { "Type": "Pass", "Result": { "errorHandlerOutput": "Huh, I catched an error" }, "ResultPath": "$.errorHandler", "End": true } } } - ThrowErrorResource: !GetAtt LambdaFunction.Arn LambdaExecutionRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: Service: - lambda.amazonaws.com Action: - sts:AssumeRole StatesExecutionRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: Service: - !Sub states.${AWS::Region}.amazonaws.com Action: sts:AssumeRole Policies: - 
PolicyName: ExecuteLambda PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Action: - lambda:InvokeFunction Resource: arn:aws:lambda:*:*:function:* ``` Essential part is Lambda Function definition: ``` exports.handler = function(event, context, callback) { function SomethingError(message) { this.name = "SomethingError"; this.message = message; } SomethingError.prototype = new Error(); const error = new SomethingError("something-error"); callback(error); }; ``` Custom error with custom name is defined here. Of course you can also simply overwrite name (but I do not recommend that): ``` exports.handler = function(event, context, callback) { var e = new Error(); e.name = "SomethingError"; callback(e); }; ``` Error returned like that will be passed to Step Functions without losing error name. I suggest creating some top `try-catch` statement in Lambda Function where you would simply call `callback` with error. Upvotes: 2 <issue_comment>username_2: Here's how I get Step Functions to report a custom error and message as its `Error` and `Cause`. Note I'm using the Node.js 8.10 Lambda runtime with `async` and `try/catch`. ```js exports.handler = async (event) => { function GenericError(name, message) { this.name = name; this.message = message; } GenericError.prototype = new Error(); try { // my implementation which might throw an error // ... } catch (e) { console.log(e); let error = new GenericError('CustomError', 'my message'); throw error; } }; ``` Note for simplicity I'm ignoring the error object from `catch(e)` here. You could also feed its `stack` into the GenericError if wanted. 
This lambda function returns: ```json { "errorMessage": "my message", "errorType": "CustomError", "stackTrace": [ "exports.handler (/var/task/index.js:33:28)" ] } ``` Step Functions turns this into: ```json { "error": "CustomError", "cause": { "errorMessage": "my message", "errorType": "CustomError", "stackTrace": [ "exports.handler (/var/task/index.js:33:28)" ] } } ``` in its `LambdaFunctionFailed` event history, and ultimately converts it again into this state output (depending on our `ResultPath` - here without any): ```json { "Error": "CustomError", "Cause": "{\"errorMessage\":\"my message\",\"errorType\":\"CustomError\",\"stackTrace\":[\"exports.handler (/var/task/index.js:33:28)\"]}" } ``` Upvotes: 3
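For comparison (not part of the original thread), the same idea can be sketched in Python: the branchable "error name" is just the exception's class name, which survives as long as you raise a distinct class rather than renaming a generic error.

```python
class SomethingError(Exception):
    """Custom error type; its class name is what a caller can branch on."""

def handler():
    try:
        raise ValueError("something went wrong")  # stand-in for the real failure
    except ValueError as exc:
        # Re-raise as a distinct class instead of mutating a generic error's name
        raise SomethingError(str(exc)) from exc

try:
    handler()
except Exception as exc:
    caught = type(exc).__name__
print(caught)  # SomethingError
```

This mirrors why extending `Error` without fixing `name` fails in Node: what the caller sees is the serialized error type, so the distinct identity has to exist at the point the error leaves the function.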
2018/03/19
<issue_start>username_0: I know this is something that I probably learned when I first started, but don't remember how it's done because I have never used it. I have an array that I am looping through and am not getting the desired results. I am trying to get the output to go like this: One Two Three Four Five Six Seven But it keeps coming out as One Two Three Four One Two Three Four Can someone tell me what I have done wrong? ```js var arr = [ "One", "Two", "Three", "Four", "Five", "Six", "Seven" ]; for (row = 0; row < arr.length; row++) { for (col = 0; col < 4; col++) { document.write(arr[col] + " "); } document.write(' '); } ```<issue_comment>username_1: You can multiply the row number by the size of a row: ``` var arr = [ "One", "Two", "Three", "Four", "Five", "Six", "Seven" ]; const rowSize = 4; for (row = 0; row < arr.length / rowSize; row++) { const startingIdx = row * rowSize; for (col = startingIdx; col < arr.length && col < startingIdx + rowSize; col++) { document.write(arr[col] + " "); } document.write(' '); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` for (row = 0, col = 0; row < arr.length; row++, col++) { if (col == 4) { col = 0; document.write(' '); } document.write(arr[row] + " "); } ``` Upvotes: 2 <issue_comment>username_3: Right now, the element of the array that you're writing to the page is determined by the current column (`col`), which continuously goes from 0 to 3 on each iteration of the outer row loop. You want to instead iterate over each element of the array and insert a line break every time the index passes a multiple of the column count. 
```js var arr = [ "One", "Two", "Three", "Four", "Five", "Six", "Seven" ]; for (var index = 0; index < arr.length; index++) { document.write(arr[index] + " "); if ((index + 1) % 4 == 0) { document.write(" "); } } ``` This prints each element of the array, and when the index (plus 1, because the array starts at 0, but we need to start at 1) is a multiple of 4 (meaning the end of a row has been reached), a line break tag is written to begin the next row of elements. Upvotes: 1 <issue_comment>username_4: You can make a minor modification: ```js var arr = [ "One", "Two", "Three", "Four", "Five", "Six", "Seven" ]; for (index = 0; index < arr.length; index++) { document.write(arr[index] + " "); if ((index + 1) % 4 == 0) { document.write(' '); } } //for (col = 0; col < 4; col++) { // // for (row = 0; row < arr.length; row++) { // document.write(arr[row] + " "); // } // // document.write("\n"); //} ``` Upvotes: 2 <issue_comment>username_5: You shouldn't use two loops; instead, test when you have to insert the line break: you can do that with the `%` operator. 
``` for (index = 0; index < arr.length; index++) { if (index !== 0 && index % 4 === 0) { document.write(' '); } document.write(arr[index] + " "); } ``` Upvotes: 1 <issue_comment>username_6: On each iteration, you're starting with `col` (actually, the array index) at 0: `for (col = 0; col < 4; col++) {` Try adjusting your original `arr` to account for the desired structure: ``` const arr = [ [ "One", "Two", "Three", "Four", ], [ "Five", "Six", "Seven", "Eight", ] ]; for (let rowIndex = 0; rowIndex < arr.length; rowIndex++) { const row = arr[rowIndex]; for (let colIndex = 0; colIndex < row.length; colIndex++) { document.write(row[colIndex] + " "); } document.write(' '); } ``` Upvotes: 1 <issue_comment>username_7: I'd advise avoiding multiple `document.write` calls, because they can quickly become a painful bottleneck if you have a bigger set of data: ``` var arr = [ "One", "Two", "Three", "Four", "Five", "Six", "Seven" ]; var table = ""; arr.forEach(function (item, i) { table += item + ((i + 1) % 4 !== 0 ? " " : " "); }); document.write(table); // One Two Three Four Five Six Seven  ``` Also, if you're confident with `Array.prototype.reduce` (and transpile your code for cross browser) then I would suggest having even less code by reducing the array into a string: ``` var table = arr.reduce( (reduced, item, i) => reduced + item + ((i + 1) % 4 !== 0 ? " " : " ") , "" ); ``` Upvotes: 1 <issue_comment>username_8: This might be easier to do using more declarative syntax. You can reduce the contents of the array to a string, and if you want to add a break tag on every `n`th element, you can use `modulus` to test whether the reduce loop index is a multiple of `n`. The array reduce method provides the previous value, the next value, and the loop index to the function callback. 
```js const arr = [ "One", "Two", "Three", "Four", "Five", "Six", "Seven" ]; const getString = (dataArr, breakOn) => { return dataArr.reduce((prev, next, index) => { let str = `${prev} ${next}`; if (index && index % breakOn === 0) { str = `${str} `; } return str; }, ''); } const container = document.getElementById('results'); container.innerHTML = getString(arr, 3); ``` Upvotes: 1
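The modulus approach the answers converge on can also be sketched language-neutrally; here it is in Python (not part of the original thread), with the line break rendered as a newline instead of a break tag:

```python
arr = ["One", "Two", "Three", "Four", "Five", "Six", "Seven"]
lines, row = [], []
for i, item in enumerate(arr):
    row.append(item)
    if (i + 1) % 4 == 0:  # a row of four is complete
        lines.append(" ".join(row))
        row = []
if row:  # flush the final, possibly short, row
    lines.append(" ".join(row))
result = "\n".join(lines)
print(result)
```

This is the same single-loop structure as username_3's and username_4's answers: one pass over the data, with `(i + 1) % 4` deciding where each row ends.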
2018/03/19
<issue_start>username_0: I know that you can specify data types when reading Excel files using `pd.read_excel` (as outlined [here](https://stackoverflow.com/questions/32591466/python-pandas-how-to-specify-data-types-when-reading-an-excel-file)). Can you do the same using `pd.ExcelFile`? I have the following code: ``` if ".xls" in name: xl = pd.ExcelFile(path + "\\" + name) for sheet in xl.sheet_names: xl_parsed = xl.parse(sheet) ``` When parsing the sheet, some of the values in the columns are displayed in scientific notation. I don't know the column names before loading so I need to import everything as string. Ideally I would like to be able to do something like `xl_parsed = xl.parse(sheet, dtype = str)`. Any suggestions?<issue_comment>username_1: I went with roganjosh's suggestion - open the excel first, get column names and then pass as converter. ``` if ".xls" in name: xl = pd.ExcelFile(path) sheetCounter = 1 for sheet in xl.sheet_names: ### Force to read as string ### column_list = [] df_column = pd.read_excel(path, sheetCounter - 1).columns for i in df_column: column_list.append(i) converter = {col: str for col in column_list} ################## xl_parsed = xl.parse(sheet, converters=converter) sheetCounter = sheetCounter + 1 ``` Upvotes: 1 <issue_comment>username_2: If you would prefer a cleaner solution, I used the following: ```py excel = pd.ExcelFile(path) for sheet in excel.sheet_names: columns = excel.parse(sheet).columns converters = {column: str for column in columns} data = excel.parse(sheet, converters=converters) ``` Upvotes: 2
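Why the conversion must happen at parse time (rather than on the resulting DataFrame) can be sketched without pandas at all: once a long number has been read as a float, the original digits are already gone.

```python
raw = "12345678901234567890"   # the digits as they sit in the cell
as_float = float(raw)          # what a numeric dtype turns them into
print(str(as_float))           # 1.2345678901234567e+19 -- notation and precision lost
print(raw)                     # a per-column str converter keeps the cell verbatim
```

This is why the accepted approach passes `converters={col: str ...}` into `parse`: the values are captured as text before any float conversion occurs.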
2018/03/19
<issue_start>username_0: I am combining two questions here because they are related to each other. Question 1: I am trying to use glob to open all the files in a folder but it is giving me "Syntax Error". I am using Python 3.xx. Has the syntax changed for Python 3.xx? Error Message: ``` File "multiple_files.py", line 29 files = glob.glob(/src/xyz/rte/folder/) SyntaxError: invalid syntax ``` Code: ``` import csv import os import glob from pandas import DataFrame, read_csv #extracting files = glob.glob(/src/xyz/rte/folder/) for fle in files: with open (fle) as f: print("output" + fle) f_read.close() ``` Question 2: I want to read input files, append "output" to the names and print out the names of the files. How can I do that? Example: Input file name would be - xyz.csv and the code should print output\_xyz.csv. Your help is appreciated.<issue_comment>username_1: This works in <http://pyfiddle.io>: Docs: <https://docs.python.org/3/library/glob.html> ``` import csv import os import glob # create some files for n in ["a","b","c","d"]: with open('{}.txt'.format(n),"w") as f: f.write(n) print("\nFiles before") # get all files files = glob.glob("./*.*") for fle in files: print(fle) # print file path,fileName = os.path.split(fle) # split name from path # open file for read and second one for write with modified name with open (fle) as f,open('{}{}output_{}'.format(path,os.sep, fileName),"w") as w: content = f.read() # read all w.write(content.upper()) # write all modified # check files afterwards print("\nFiles after") files = glob.glob("./*.*") # pattern for all files for fle in files: print(fle) ``` Output: ``` Files before ./d.txt ./main.py ./c.txt ./b.txt ./a.txt Files after ./d.txt ./output_c.txt ./output_d.txt ./main.py ./output_main.py ./c.txt ./b.txt ./output_b.txt ./a.txt ./output_a.txt ``` I am on Windows and would use [`os.walk` (docs)](https://docs.python.org/3/library/os.html#os.walk) instead. 
``` for d,subdirs,files in os.walk("./"): # deconstruct returned aktDir, all subdirs, files print("AktDir:", d) print("Subdirs:", subdirs) print("Files:", files) Output: AktDir: ./ Subdirs: [] Files: ['d.txt', 'output_c.txt', 'output_d.txt', 'main.py', 'output_main.py', 'c.txt', 'b.txt', 'output_b.txt', 'a.txt', 'output_a.txt'] ``` It also recurses into subdirs. Upvotes: 0 <issue_comment>username_2: Your first problem is that strings, including pathnames, need to be in quotes. This: ``` files = glob.glob(/src/xyz/rte/folder/) ``` … is trying to divide a bunch of variables together, but the leftmost and rightmost divisions are missing operands, so you've confused the parser. What you want is this: ``` files = glob.glob('/src/xyz/rte/folder/') ``` --- Your next problem is that this glob pattern doesn't have any globs in it, so the only thing it's going to match is the directory itself. That's perfectly legal, but kind of useless. And then you try to open each match as a text file. Which you can't do with a directory, hence the `IsADirectoryError`. The answer here is less obvious, because it's not clear what you want. * Maybe you just wanted all of the files in that directory? In that case, you don't want `glob.glob`, you want `listdir` (or maybe `scandir`): `os.listdir('/src/xyz/rte/folder/')`. * Maybe you wanted all of the files in that directory or any of its subdirectories? In that case, you could do it with `rglob`, but `os.walk` is probably clearer. * Maybe you did want all the files in that directory that match some pattern, so `glob.glob` is right—but in that case, you need to specify what that pattern is. For example, if you wanted all `.csv` files, that would be `glob.glob('/src/xyz/rte/folder/*.csv')`. --- Finally, you say "I want to read input files, append "output" to the names and print out the names of the files". Why do you want to read the files if you're not doing anything with the contents? You can do that, of course, but it seems pretty wasteful. 
If you just want to print out the filenames with output appended, that's easy: ``` for filename in os.listdir('/src/xyz/rte/folder/'): print('output'+filename) ``` Upvotes: 1
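username_2's two fixes (quote the pattern, then just transform the names) can be combined into one small self-contained sketch; the directory here is a temporary one created purely for the demo:

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as folder:
    # create a couple of demo input files
    for name in ("xyz.csv", "abc.csv"):
        with open(os.path.join(folder, name), "w") as f:
            f.write("demo\n")

    # a quoted pattern that actually contains a glob wildcard
    outputs = [
        "output_" + os.path.basename(p)
        for p in sorted(glob.glob(os.path.join(folder, "*.csv")))
    ]

print(outputs)  # ['output_abc.csv', 'output_xyz.csv']
```

Note the pattern is a string literal and ends in `*.csv`, so it matches files rather than the directory itself, which is what caused both the `SyntaxError` and the `IsADirectoryError` in the question.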
2018/03/19
<issue_start>username_0: How can I find file with highest alphabet character in the file's extension? Example of files my application creates: ``` $ find . -name 'L_*.[J-Z]' L_58420.K L_58420.J L_58420.M L_46657.J L_58420.N L_58420.P L_46657.N L_58420.Q L_46657.K L_58420.O L_46657.O L_46657.L L_46657.M L_58420.L ``` and I'd like to have returned : ``` L_58420.Q L_46657.O ``` Higher alphabet character is created only if file with previous character already exists, so it's possible to search/sort by date too.
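One way to compute the highest-letter extension per file stem, sketched in Python rather than shell (the file names are taken directly from the question):

```python
files = [
    "L_58420.K", "L_58420.J", "L_58420.M", "L_46657.J", "L_58420.N",
    "L_58420.P", "L_46657.N", "L_58420.Q", "L_46657.K", "L_58420.O",
    "L_46657.O", "L_46657.L", "L_46657.M", "L_58420.L",
]
best = {}
for name in files:
    stem, ext = name.rsplit(".", 1)
    # keep the lexicographically highest single-letter extension per stem
    if stem not in best or ext > best[stem]:
        best[stem] = ext
result = sorted(f"{stem}.{ext}" for stem, ext in best.items())
print(result)  # ['L_46657.O', 'L_58420.Q']
```

Because the question guarantees single uppercase letters in the range `[J-Z]`, plain string comparison on the extension is sufficient; no date-based sorting is needed.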
2018/03/19
<issue_start>username_0: I would like to retrieve reviews for a clinic in New York via the Yelp API. However, the API seems to return only the first three reviews. **My code** ``` # Finding reviews for a particular clinic import http.client import json import urllib.parse api_key= 'MY API KEY' API_HOST = 'https://api.yelp.com/reviews' SEARCH_PATH = '/v3/businesses/search' BUSINESS_PATH = '/v3/businesses/' # Business ID will come after slash. headers = { 'Authorization': 'Bearer %s' % api_key, } #need the following parameters (type dict) params = {'name':'MinuteClinic', 'address1':'241 West 57th St', 'city':'New York', 'state':'NY', 'country':'US'} param_string = urllib.parse.urlencode(params) conn = http.client.HTTPSConnection("api.yelp.com") conn.request("GET", "/v3/businesses/matches/best?"+param_string, headers=headers) res = conn.getresponse() data = res.read() data = json.loads(data.decode("utf-8")) print(data) b_id = data['businesses'][0]['id'] r_url = "/v3/businesses/" + b_id + "/reviews" #review request URL creation based on business ID conn.request("GET",r_url,headers=headers) rev_res = conn.getresponse() #response and read functions needed else error(?) rev_data = rev_res.read() yelp_reviews = json.loads(rev_data.decode("utf-8")) print(yelp_reviews) print(len(yelp_reviews)) ``` Is there a way to get all the reviews? Thank you so much.<issue_comment>username_1: As you may have seen on the [Yelp API documentation](https://www.yelp.com/developers/documentation/v3/business_reviews), there is currently no way to retrieve more than three reviews for a single business with the Business Reviews endpoint (`/businesses/{id}/reviews`) that you are using. The only accepted parameter for the Business Reviews endpoint is `locale`. 
In addition, the first sentence of the documentation for that endpoint is > > This endpoint returns up to three review excerpts for a given business ordered by [Yelp's default sort order](https://www.yelp-support.com/article/How-is-the-order-of-reviews-determined?). > > > So, at this time, it seems that Yelp only exposes via their API at most three reviews per business. Consider submitting a feature request to the [GitHub repository for the Yelp API](https://github.com/Yelp/yelp-fusion). Upvotes: 2 <issue_comment>username_2: I hate Yelp, and I also hate that Google follows suit and caps the number of reviews returned. The reviews are public; it's absurd that they aren't willing to give programmatic access to all of them, and they wonder why devs have to create workarounds to bypass these limitations. Anywho; I created a temp API key for one of my APIs; this one will fetch all the reviews you need from any Yelp profile; Example call: <http://api.reviewsmaker.com/yelp/?url=https://www.yelp.com/biz/chicha-brooklyn&api_key=<KEY>> Parameters: url - full URL of the Yelp business page you need to get the reviews for (required) api\_key - use the one in the above link, I provisioned it to expire (keep that in mind) rating - you can specify &rating=5 to only pull 5 star reviews, or &rating=2 to only pull 2 star reviews, etc; this is optional, leaving it blank will return all the reviews Go ahead and grab your clinic's stuff :) Upvotes: 0 <issue_comment>username_3: Yelp's [Fusion API](https://www.yelp.com/fusion) allows users to search for up to 1000 business listings for a keyword, but when it comes to reviews Yelp is not so generous. However, getting access to their API is next to impossible. I know many people who applied with no success. The only remaining option is to scrape the reviews from Yelp. While Yelp may claim that they do not "allow" any scraping of their data, they cannot enforce this as scraping of public data remains legal. 
The following technologies can be used to write a crawler for Yelp reviews: * [Scrapy](https://docs.scrapy.com) (Python) * Requests & lxml (Python) * [Cheerio](https://cheerio.js.org) (Node) If you don't have time and don't mind spending a few bucks. I have also built a service that scrapes Yelp reviews for you and returns them as an API response for any listing on Yelp. It is called [Yelp Reviews API](https://docs.unwrangle.com/yelp-reviews-api/) and can be used to scrape up to 10,000 reviews for free. Upvotes: 0
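The request plumbing from the question can be factored into a small helper that makes no network call; the endpoint path is the documented `/v3/businesses/{id}/reviews` one, and the business ID and key below are placeholders:

```python
def build_reviews_request(business_id: str, api_key: str):
    """Return (url, headers) for Yelp's business-reviews endpoint."""
    url = f"https://api.yelp.com/v3/businesses/{business_id}/reviews"
    headers = {"Authorization": f"Bearer {api_key}"}
    return url, headers

url, headers = build_reviews_request("some-business-id", "MY_API_KEY")
print(url)
```

Keeping URL construction separate from the `http.client` calls makes it easier to verify the request shape before spending API quota, though, per username_1's answer, even a correct request will return at most three review excerpts.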
2018/03/19
<issue_start>username_0: My code currently looks like: ``` foreach(var request in requestsList){ foreach(var myVar in request.anotherList){ if(myVar.hasPermissions()) { //do something } } } ``` `requestsList` is a `List<Request>`. `myVar.hasPermissions()` requires a connection to the database, so I want to minimize the number of calls to the database. I want to move it outside of the inner `foreach` loop and make only one call per request. I am trying to achieve something like this: ``` foreach(var request in requestsList){ //check for permissions boolean perm = myVar.hasPermissions(); //database call to check permissions foreach(var myVar in request.anotherList){ if(perm) { //do something } } } ``` All I want to do is to move the `hasPermissions()` call outside of the inner `foreach` loop. The problem I am facing is that I don't have access to `myVar` in the outer `foreach` loop. Both loops iterating over lists is making it difficult for me.<issue_comment>username_1: I think you are calling it the minimum number of times, in the top loop. If you think about it another way... ``` //First get all the 'another' guys: var allAnother = requestsList.SelectMany(r => r.anotherList).ToList(); //If you don't have to check them all var permitGuys = allAnother.Distinct().Where(a => a.hasPermissions()).ToList(); //Do something with them foreach(var permitGuy in permitGuys) { //Do something } ``` Upvotes: -1 <issue_comment>username_2: If `hasPermission` is relatively static, i.e. 
you are certain that it wouldn't change across the runs of the outer loop, you could cache permissions as you check them: ``` var cache = new Dictionary<object, bool>(); foreach(var request in requestsList){ foreach(var myVar in request.anotherList) { bool permitted; if (!cache.TryGetValue(myVar, out permitted)) { permitted = myVar.hasPermissions(); cache.Add(myVar, permitted); } if(permitted) { //do something } } } ``` This way you would make exactly one call to `hasPermissions` per distinct instance of `myVar`; all subsequent checks would come from the `cache` dictionary. Upvotes: 0 <issue_comment>username_3: Without much more detail about your classes, we can only speculate how best to solve the problem. Assuming your `anotherList` consists of something like a list of users, you could cache the result of the check so you don't check again for the same user. You can add a public field that uses `Lazy` to cache the result of calling `hasPermissions` - you have to initialize it in the constructors: ``` public class User { public bool hasPermissions() { // check database for permissions var ans = false; // ... return ans; } public Lazy<bool> cachedPermissions; public User() { UncachePermissions(); } public void UncachePermissions() => cachedPermissions = new Lazy<bool>(() => hasPermissions()); } ``` Now you can access the `cachedPermissions` instead of calling `hasPermissions`: ``` foreach (var request in requestsList) { foreach (var myVar in request.anotherList) { if (myVar.cachedPermissions.Value) { //do something } } } ``` and `hasPermissions` will only be called once per `User` object. If it is possible that multiple `User` objects exist for a single database call, then more details on your classes and methods would be needed. I added the `UncachePermissions` method to reset the cache as otherwise you could use really old values of `hasPermissions` which could cause issues. 
If that might be a common problem, you could cache outside the objects as part of the looping: ``` var permissionCache = new Dictionary<User, bool>(); foreach (var request in requestsList) { foreach (var myVar in request.anotherList) { bool permission; if (!permissionCache.TryGetValue(myVar, out permission)) { permission = myVar.hasPermissions(); permissionCache.Add(myVar, permission); } if (permission) { //do something } } } ``` Upvotes: 0
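The caching pattern in these answers maps directly onto a plain dictionary in Python as well (not part of the original thread); this sketch uses a stand-in permission check and counts "database" hits to show that each distinct value is looked up exactly once:

```python
calls = {"db": 0}

def has_permissions(user):
    # stand-in for the real database lookup
    calls["db"] += 1
    return user != "blocked"

requests_list = [["alice", "bob"], ["alice", "blocked", "bob"]]
cache = {}
allowed = []
for request in requests_list:
    for user in request:
        if user not in cache:  # only hit the "database" once per user
            cache[user] = has_permissions(user)
        if cache[user]:
            allowed.append(user)
print(calls["db"], allowed)  # 3 ['alice', 'bob', 'alice', 'bob']
```

Five loop iterations, but only three lookups: the same one-call-per-distinct-instance guarantee the `TryGetValue` version provides in C#.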
2018/03/19
<issue_start>username_0: Setup: Reservations can be assigned multiple Resources. A reservation-resource combo can have multiple SetUps. I tried to set up the model like this: ``` class SetUp < ApplicationRecord has_many :reservation_resource_set_ups, dependent: :destroy has_many :reservations, through: :reservation_resource_set_ups has_many :resources, through: :reservation_resource_set_ups end class Resource < ApplicationRecord has_many :reservation_resources, dependent: :destroy has_many :reservation_resource_set_ups, dependent: :destroy has_many :reservations, through: :reservation_resources has_many :set_ups, through: :reservation_resource_set_ups end class Reservation < ApplicationRecord has_many :reservation_resources, dependent: :destroy has_many :reservation_resource_set_ups, dependent: :destroy has_many :resources, through: :reservation_resources has_many :set_ups, through: :reservation_resource_set_ups end class ReservationResource < ApplicationRecord belongs_to :reservation belongs_to :resource has_many :reservation_resource_set_ups has_many :set_ups, through: :reservation_resource_set_ups end class ReservationResourceSetUp < ApplicationRecord belongs_to :reservation belongs_to :resource belongs_to :set_up end ``` Steps: 1. Create a reservation, assigning a resource, works: ``` res1 = Reservation.create(name:"res name") res1.resources << Resource.find(1) # resource with id = 1 exists ``` The reservations and reservation\_resources tables are updated correctly. 2. Assign a setup to the reservation\_resource, fails: ``` res1.resources.first.set_ups << SetUp.find(1) # set_ups with id = 1 exists ``` This fails with error `ActiveRecord::RecordInvalid (Validation failed: Reservation must exist)` Can you help point me in the right direction? Thanks! (Here's the schema, if helpful...) 
``` create_table "reservation_resource_set_ups", force: :cascade do |t| t.integer "reservation_id" t.integer "resource_id" t.integer "set_up_id" t.datetime "created_at", null: false t.datetime "updated_at", null: false t.index ["reservation_id"], name: "index_reservation_resource_set_ups_on_reservation_id" t.index ["resource_id"], name: "index_reservation_resource_set_ups_on_resource_id" t.index ["set_up_id"], name: "index_reservation_resource_set_ups_on_set_up_id" end create_table "reservation_resources", force: :cascade do |t| t.integer "reservation_id" t.integer "resource_id" t.text "comments" t.datetime "created_at", null: false t.datetime "updated_at", null: false t.index ["reservation_id"], name: "index_reservation_resources_on_reservation_id" t.index ["resource_id"], name: "index_reservation_resources_on_resource_id" end create_table "reservations", force: :cascade do |t| t.string "name" ... t.datetime "created_at", null: false t.datetime "updated_at", null: false t.index ["end_date"], name: "index_reservations_on_end_date" t.index ["repeat_end_date"], name: "index_reservations_on_repeat_end_date" t.index ["start_date"], name: "index_reservations_on_start_date" end create_table "resources", force: :cascade do |t| t.string "name" t.text "description" t.string "resource_type" t.text "location" t.integer "quantity", default: 1 t.datetime "created_at", null: false t.datetime "updated_at", null: false end create_table "set_ups", force: :cascade do |t| t.string "name" t.text "instructions" t.string "image" t.datetime "created_at", null: false t.datetime "updated_at", null: false end ```<issue_comment>username_1: I think you are calling it the minimum number of times, in the top loop. If you think about it another way... 
``` //First get all the 'another' guys: var allAnother = requestsList.SelectMany(anotherList => anotherList).ToList(); //If you don't have to check them all var permitGuys = allAnother.Distinct().Where(a => a.hasPermissions()).ToList(); //Do something with them foreach(var permitGuy in permitGuys) { //Do something } ``` Upvotes: -1 <issue_comment>username_2: If `hasPermission` is relatively static, i.e. you are certain that it wouldn't change across the runs of the outer loop, you could cache permissions as you check them: ``` IDictionary checked = new IDictionary(); foreach(var request in requestsList){ foreach(var myVar in request.anotherList) { bool permitted; if (!checked.TryGetValue(myVar, out permitted)) { permitted = myVar.hasPermissions(); checked.Add(myVar, permitted); } if(permitted) { //do something } } } ``` This way you would make exactly one call to `hasPermissions` per distinct instance of `myVar`; all subsequent checks would come from the `checked` cache. Upvotes: 0 <issue_comment>username_3: Without much more detail about your classes, we can only speculate how best to solve the problem. Assuming your `anotherList` consists of something like a list of users, you could cache the result of the check so you don't check again for the same user. You can add a public field that uses `Lazy` to cache the result of calling `hasPermissions` - you have to initialize it in the constructors: ``` public class User { public bool hasPermissions() { // check database for permissions var ans = false; // ... 
return ans; } public Lazy cachedPermissions; public User() { UncachePermissions(); } public void UncachePermissions() => cachedPermissions = new Lazy(() => hasPermissions()); } ``` Now you can access the `cachedPermissions` instead of calling `hasPermissions`: ``` foreach (var request in requestsList) { foreach (var myVar in request.anotherList) { if (myVar.cachedPermissions.Value) { //do something } } } ``` and `hasPermissions` will only be called once per `User` object. If it is possible that multiple `User` objects exist for a single database call, then more details on your classes and methods would be needed. I added the `UncachePermissions` method to reset the cache as otherwise you could use really old values of `hasPermissions` which could cause issues. If that might be a common problem, you could cache outside the objects as part of the looping: ``` var permissionCache = new Dictionary(); foreach (var request in requestsList) { foreach (var myVar in request.anotherList) { bool permission; if (!permissionCache.TryGetValue(myVar, out permission)) { permission = myVar.hasPermissions(); permissionCache.Add(myVar, permission); } if (permission) { //do something } } } ``` Upvotes: 0
2018/03/19
432
1,319
<issue_start>username_0: I have a list as follows: ``` readonly List<string> carMake = new List<string> { "Toyota", "Honda", "Audi", "Tesla" }; ``` I have a string at runtime which is as follows: ``` string strOutput = "Project1:Toyota:Corolla"; ``` Now, I would like to use strOutput and carMake to make sure the string has a correct car make. How do I do this using LINQ? I want to: * **return true** when strOutput = "Project1:Toyota:Corolla" (as Toyota is in the list) * **return false** when strOutput = "Project1:Foo:Corolla" (as Foo is not in the list)<issue_comment>username_1: Use the [Any()](https://msdn.microsoft.com/en-us/library/bb534972(v=vs.110).aspx) method with a predicate to check if any of the strings in the `carMake` list is contained inside the `strOutput`: ``` return carMake.Any(i => strOutput.Contains(i)); ``` OR, if your runtime string will always be in that format, you can split by ':' and compare to the value in the middle: ``` string runtimeValue = strOutput.Split(':')[1]; return carMake.Contains(runtimeValue); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: ``` List<string> carMake = new List<string> { "Toyota", "Honda", "Audi", "Tesla" }; string strOutput = "Project1:Toyota:Corolla"; string carBrand = strOutput.Split(':')[1]; bool result = carMake.Contains(carBrand); ``` Upvotes: 0
2018/03/19
1,404
3,661
<issue_start>username_0: I have multiple (type) inputs put inside a list `x` and I'm doing the `test train split` using: ``` x = [some_matrix, scalar_value, something_else, ...] x0_train, x0_test, x1_train, x1_test, ... , y_train, y_test = train_test_split(x[0],x[1],... , y, test_size=0.2, random_state=np.random, shuffle=True) ``` I managed to change the input parameters `x[0], x[1], ...` to `*x`: ``` x0_train, x0_test, x1_train, x1_test, ... , y_train, y_test = train_test_split(*x, y, test_size=0.2, random_state=np.random, shuffle=True) # But I have to manually repack x_train = [x0_train, x1_train] x_test = [x0_test, x1_test] ``` But is there a way to receive it without having to manually repack? What is the equivalent of: ``` *x_train, *x_test, y_train, y_test = train_test_split(*x, y, test_size=0.2, random_state=np.random, shuffle=True) ``` Or is there any other way to do this? For eg: constructing a dictionary and using \*\* to unpack, but I still have the same problem. What is the convention anyway (if one exists)?<issue_comment>username_1: Unpacking is just a way of allocating the elements of a list, tuple, or other iterable to several variables. The normal way to 'repack' is to collect those variables in a list (or tuple): ``` In [48]: a,b,c = [[1,2,3],3,[4,5]] In [49]: a Out[49]: [1, 2, 3] In [50]: b Out[50]: 3 In [51]: c Out[51]: [4, 5] In [52]: [a,b,c] Out[52]: [[1, 2, 3], 3, [4, 5]] ``` There's minimal cost to this since it is just Python playing with object pointers. No copies of big data blocks. I'm not familiar with the details of the `train_test_split` action. Your inputs and outputs suggest that is doing something like ``` alist = [(x[mask], x[~mask]) for x in xinput] alist = itertools.chain(*alist) ``` That is, it applies some sort of split, index or sliced, to each of the input `*args`, and then flattens the resulting list. Newer Pythons have some form of `*` or `...` unpacking, that allocates multiple items to a variable. 
I haven't used it much, so would have to look up the docs. But in this case I think you want to collect every other value into a list. I can see doing that with an iteration and list appends. Using one list comprehension is tricky if not impossible, but two is fine. '\*' syntax in unpacking: ``` In [55]: a, *b = [[1,2,3],3,[4,5]] In [56]: a Out[56]: [1, 2, 3] In [57]: b Out[57]: [3, [4, 5]] In [58]: [a,b] Out[58]: [[1, 2, 3], [3, [4, 5]]] In [59]: [a,*b] Out[59]: [[1, 2, 3], 3, [4, 5]] ``` You can't have 2 (or more) starred expressions in an assignment. --- Inspired by your list comprehensions, here's another way of collecting every other item in a list: ``` In [65]: *a, = [1,2,3],[4,5],[10,11,12],[13,14] In [66]: a Out[66]: [[1, 2, 3], [4, 5], [10, 11, 12], [13, 14]] In [67]: a[::2] Out[67]: [[1, 2, 3], [10, 11, 12]] In [68]: a[1::2] Out[68]: [[4, 5], [13, 14]] ``` Upvotes: 1 <issue_comment>username_2: This solves the *zigzag split* problem: ``` recv = [None for i in range(2*len(x))] *recv, y_train, y_test = train_test_split(*x, y, test_size=0.2, random_state=np.random, shuffle=True) # Edit: credits username_1 x_train = recv[::2] x_test = recv[1::2] ``` --- Also, if there's a way to copy references this will work too ``` x_train = [ None for _ in range(len(x))] x_test = [ None for _ in range(len(x))] recv = [item for sublist in zip(x_train, x_test) for item in sublist] # But unfortunately the above line gives only the values and not references # Hence doesn't work *recv, y_train, y_test = train_test_split(*x, y, test_size=0.2, random_state=np.random, shuffle=True) ``` Upvotes: 1 [selected_answer]
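The zigzag repacking in the accepted answer can be checked without scikit-learn. Here is a minimal self-contained sketch; `toy_split` is a stand-in of my own (an assumption, not the real `train_test_split`: it splits deterministically and does not shuffle), but it returns one (train, test) pair per input array in the same flattened order:

```python
def toy_split(*arrays, test_size=0.5):
    # Mimic train_test_split's output shape: for every input array,
    # append its train slice and then its test slice to one flat list.
    out = []
    for a in arrays:
        k = int(len(a) * (1 - test_size))
        out.extend([a[:k], a[k:]])
    return out

x = [[1, 2, 3, 4], [10, 20, 30, 40]]   # two inputs, like the x list above
y = ["a", "b", "c", "d"]

# The zigzag pattern: train parts sit at even indices, test parts at odd.
*recv, y_train, y_test = toy_split(*x, y)
x_train = recv[::2]   # train part of each input
x_test = recv[1::2]   # test part of each input
```

With the real `train_test_split` the same `[::2]` / `[1::2]` slicing applies, since it also returns one (train, test) pair per input array.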
2018/03/19
1,121
3,566
<issue_start>username_0: I want to move an IMG via an arrow key using jQuery. I do not know how to use a method on a document. I do not know whether or not to use parentheses at the end of the method in the switch statement. ``` function moveIMG(event) { var x = event.keyCode; switch(x) { case 37: doLeft; break; case 38: doTop; break; case 39: doRight; break; case 40: doBottom; break; } } function doTop() { $("div").animate({top: '+=100px'},1200); } function doBottom() { $("div").animate({bottom: '+=100px'},1200); } function doLeft() { $("div").animate({left: '+=100px'},1200); } function doRight() { $("div").animate({right: '+=100px'},1200); } ![](black.png) $(document).moveIMG() ```<issue_comment>username_1: You have to wrap your function inside a live listener (`on`) to catch all live events. Also use parenthesis when calling a function. ``` $(document).on("keydown",function(e) { var x = e.keyCode; switch(x) { case 37: doLeft(); break; case 38: doTop(); break; case 39: doRight(); break; case 40: doBottom(); break; } }) ``` Upvotes: 2 <issue_comment>username_2: Try using ``` $(document).on("keydown", moveIMG) ``` and add missing parens like `doLeft();` ```js function moveIMG(event) { var x = event.which; // use which instead in jQuery switch (x) { case 37: doLeft(); // Add parens break; case 38: doTop(); break; case 39: doRight(); break; case 40: doBottom(); break; } } function doTop() { $("div").animate({ top: '+=100px' }, 1200); } function doBottom() { $("div").animate({ bottom: '+=100px' }, 1200); } function doLeft() { $("div").animate({ left: '+=100px' }, 1200); } function doRight() { $("div").animate({ right: '+=100px' }, 1200); } $(document).on("keydown", moveIMG); ``` ```html ![](//placehold.it/50x50/000) ``` Switch is a mess. ================= Since from a personal preference I don't like `switch` statements here's my suggestion. 
By using a predefined Object literal to store your moves mapped to a *keyCode* integer: ```js var keyMoves = { 37: {left: '-=100px'}, 38: {top: '-=100px'}, 39: {left: '+=100px'}, 40: {top: '+=100px'}, }; function moveIMG (ev) { ev.preventDefault(); $('div').stop().animate(keyMoves[ev.which], 1200); } $(document).on('keydown', moveIMG); ``` ```html ![](//placehold.it/50x50/000) ``` <http://api.jquery.com/on/> <https://api.jquery.com/stop/> <https://developer.mozilla.org/en-US/docs/Web/API/Event/preventDefault> <https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Objects/Basics> Upvotes: 3 <issue_comment>username_3: To capture the arrow keys you need a `keydown` event on `document`: ``` $(document).on("keydown", function(e) { switch(e.keyCode) { case 37: // left doLeft(); break; case 38: // up doTop(); break; case 39: // right doRight(); break; case 40: // down doBottom(); break; default: return; // exit this handler for other keys } e.preventDefault(); // prevent the default action (scroll / move caret) }); ``` Upvotes: 1
2018/03/19
473
1,601
<issue_start>username_0: I'm hosting my first Shiny app from [www.shinyapps.io](http://www.shinyapps.io). My R script uses a GLM I created locally that I have stored as a .RDS file. How can I read this file into my application directly using a free file host such as Dropbox or Google Drive? (or another, better alternative?) ``` test<-readRDS(gzcon(url("https://www.dropbox.com/s/p3bk57sqvlra1ze/strModel.RDS?dl=0"))) ``` However, I get the error: ``` Error in readRDS(gzcon(url("https://www.dropbox.com/s/p3bk57sqvlra1ze/strModel.RDS?dl=0"))) : unknown input format ``` I assume this is because the URL doesn't lead directly to the file but rather to a Dropbox landing page? That being said, I can't seem to find any free file hosting sites that have that functionality. As always, I'm sure the solution is very obvious; any help is appreciated.<issue_comment>username_1: I figured it out. Hosted the file in a GitHub repository. From there I was able to copy the link to the raw file and placed that link in the `readRDS(gzcon(url()))` wrappers. Upvotes: 1 <issue_comment>username_2: Remotely reading using `readRDS()` can be disappointing. You might want to try this wrapper that saves the data set to a temporary location before reading it locally: ```r readRDS_remote <- function(file, quiet = TRUE) { if (grepl("^http", file, ignore.case = TRUE)) { # temp location file_local <- file.path(tempdir(), basename(file)) # download the data set download.file(file, file_local, quiet = quiet, mode = "wb") file <- file_local } readRDS(file) } ``` Upvotes: 0
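The download-then-read pattern in the second answer is not R-specific. Here is a hypothetical Python sketch of the same idea, where `read_remote` and `reader` are names of my own choosing (note that for Dropbox you would still need a direct-download link, e.g. `?dl=1` rather than `?dl=0`, since the latter serves an HTML landing page):

```python
import os
import tempfile
import urllib.request

def read_remote(url, reader):
    # Mirror readRDS_remote: fetch the file to a temporary location first,
    # then hand a real local file to the parser. (A real implementation
    # would also strip any query string before deriving the file name.)
    local = os.path.join(tempfile.mkdtemp(), os.path.basename(url))
    urllib.request.urlretrieve(url, local)
    with open(local, "rb") as fh:
        return reader(fh)
```

Many parsers want a seekable local file rather than a streaming connection, which is why this indirection helps.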
2018/03/19
282
1,088
<issue_start>username_0: I'm adding a flash message on form submit. After successful submission I'm redirecting to a route, then displaying the message via the base Twig template. But when the user goes to another location on the site and then clicks the "back" button in the browser, the message appears again. Is this normal behavior or am I doing something wrong?
2018/03/19
1,517
6,193
<issue_start>username_0: How do you protect and sanitize applications that take raw JSON bodies, typically output JSON responses, and don't use Spring Boot? I only saw one good example that might work, and it used JsonComponent. If we don't use JsonComponent, how do we filter a request to remove bad cross-site scripting tags from the entire JSON request body? Also, it would be OK to detect XSS tags in the request body and throw an error. Also looking for a global solution that might protect all input/output of JSON requests and add that code in one area. We could use JSR bean validation but we would have to hit all of the defined properties and variables. Is it possible to also look at the JSON payload for data which could include script tags?<issue_comment>username_1: Ok, finally I did it. I post my solution as a response instead of a comment; it is functional but not very robust. If you want me to improve it with exception handlers etc., let me know. The AntiXssDemoApplication.java is: ``` package com.melardev.stackoverflow.demos.antixssdemo; import com.melardev.stackoverflow.demos.antixssdemo.filters.AntiXssFilter; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.boot.web.servlet.ServletComponentScan; import org.springframework.context.annotation.Bean; import javax.servlet.Filter; @SpringBootApplication @ServletComponentScan public class AntiXssDemoApplication { public static void main(String[] args) { SpringApplication.run(AntiXssDemoApplication.class, args); } } ``` the AntiXssFilter ``` package com.melardev.stackoverflow.demos.antixssdemo.filters; import org.springframework.web.util.HtmlUtils; import javax.servlet.*; import javax.servlet.annotation.WebFilter; import java.io.IOException; @WebFilter(urlPatterns = "/*") public class AntiXssFilter implements Filter { @Override public void init(FilterConfig filterConfig) throws ServletException { System.out.println("Filter initialized"); } @Override public void doFilter(ServletRequest
servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException { String userInput = servletRequest.getParameter("param"); if (userInput != null && !userInput.equalsIgnoreCase(HtmlUtils.htmlEscape(userInput))) throw new RuntimeException(); filterChain.doFilter(servletRequest, servletResponse); } @Override public void destroy() { System.out.println("destroy"); } } ``` The Controller ``` package com.melardev.stackoverflow.demos.antixssdemo.controllers; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.ResponseBody; @Controller @RequestMapping("/") public class HomeController { @RequestMapping("/xss-reflected") @ResponseBody public String xssDemo(@RequestParam("param") String userInput) { return userInput; } } ``` The demo: 1. open the browser at `localhost:8080/xss-reflected?param=Look at this reflected content, works!!` 2. open the browser at `localhost:8080/xss-reflected?param=<h2>Look at this reflected content, works!!</h2>` At step 2, I have used the HTML tag h2.
You should see a RuntimeException thrown from the Filter. What happened is: the Filter intercepts all URLs (because of `urlPatterns = "/*"`), and for each interception doFilter is called. If the user supplied HTML content, then HtmlUtils.htmlEscape will return the filtered string; in other words, the returned string is different from the original one. This means the user supplied HTML in his JSON input, which is not what we expect, so we throw the exception. If the string returned by htmlEscape(userInput) is the same as the original string, the user has not supplied any HTML content; in that case we let the request pipeline flow as usual with filterChain.doFilter(servletRequest, servletResponse); I am not using a live XSS demo because Chrome will most likely protect you, since it is a very basic reflected XSS detected by anyone ... The Spring Boot skeleton project was downloaded from <https://start.spring.io/> with Web as the only starter dependency. Edit: Improved code Upvotes: 2 <issue_comment>username_2: First of all, the concept of protecting against vulnerabilities has nothing to do with Spring Boot, and XSS is one of those vulnerabilities. This vulnerability is protected against by implementing an `org.springframework.web.filter.OncePerRequestFilter`; depending on which framework you use and what kind of app you have, the filter registration & chaining process has to be implemented. The idea is to simply sanitize every incoming JSON body & call the next filter in the chain with the sanitized request body. If you have a Spring based project, you should first try to use the Spring Security dependencies and enable the default security features. [refer this question](https://stackoverflow.com/questions/31282379/how-to-use-spring-security-to-prevent-xss-and-xframe-attack) For the XSS protection offered by Spring Security, they have this disclaimer - > > Note this is not comprehensive XSS protection!
> > > In my case, I wrote a custom XSS protection filter implementing `org.springframework.web.filter.OncePerRequestFilter`. In this filter I have used this API: ```xml <dependency> <groupId>org.owasp.esapi</groupId> <artifactId>esapi</artifactId> </dependency> ``` In my code, I have listed possible attack patterns, but I guess there might be a better way to do it. Refer to these two on SO to know more about what I am talking about - [XSS filter to remove all scripts](https://stackoverflow.com/questions/31308968/xss-filter-to-remove-all-scripts) & [How to Modify QueryParam and PathParam in Jersey 2](https://stackoverflow.com/questions/32939919/how-to-modify-queryparam-and-pathparam-in-jersey-2/40591538#40591538) The answer by username_1 explains only the case for `@RequestParam`, and you have to extend that approach to handle the case when it's a JSON body. I have handled the case of a JSON body but can't share my code due to company copyright. Upvotes: 3 [selected_answer]
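The escape-and-compare trick the accepted filter relies on (input is rejected when `HtmlUtils.htmlEscape` changes it) is easy to port to other languages. Here is a rough Python sketch of the same idea applied to an already-decoded JSON payload; the function names are my own, and note that `html.escape` also escapes quotes by default, so legitimate apostrophes would be flagged too. A real filter would need a more careful policy:

```python
import html

def contains_markup(value):
    # If escaping changes the string, it held <, >, &, or quote characters.
    return html.escape(value) != value

def reject_if_markup(payload):
    # Walk the dicts/lists/strings of a decoded JSON body
    # and raise on any string value that contains markup.
    if isinstance(payload, str):
        if contains_markup(payload):
            raise ValueError("possible XSS content: %r" % payload)
    elif isinstance(payload, dict):
        for value in payload.values():
            reject_if_markup(value)
    elif isinstance(payload, list):
        for item in payload:
            reject_if_markup(item)
```

Unlike a servlet filter reading `getParameter`, this walks the whole body, which is what the question asks for in the JSON case.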
2018/03/19
999
3,100
<issue_start>username_0: I've been using the Measurement object to convert from mostly lengths. But I have a strange issue. If I convert from miles to feet I get almost the right answer. ``` import Foundation let heightFeet = Measurement(value: 6, unit: UnitLength.feet) // 6.0ft let heightInches = heightFeet.converted(to: UnitLength.inches) // 72.0 in let heightMeters = heightFeet.converted(to: UnitLength.meters) // 1.8288 m let lengthMiles = Measurement(value: 1, unit: UnitLength.miles) // 1.0 mi let lengthFeet = lengthMiles.converted(to: UnitLength.feet) // 5279.98687664042 ft // Should be 5280.0 ``` They all work except the last one lengthFeet. In my playground (Xcode Version 9.2 (9C40b)) it returns 5279.98687664042 ft. I also tested in a regular app build and same results. Any ideas what is going on?<issue_comment>username_1: You can see the definition of [`UnitLength` here.](https://github.com/apple/swift-corelibs-foundation/blob/main/Sources/Foundation/Unit.swift#L1210) Every unit of length has a name and a coefficient. The mile unit has a coefficient of `1609.34`, and the foot unit has a coefficient of `0.3048`. When represented as a `Double` (IEEE 754 Double precision floating point number), the closest representations are `1609.3399999999999` and `0.30480000000000002`, respectively. When you do the conversion `1 * 1609.34 / 0.3048`, you get `5279.9868766404197` rather than the expected `5280`. That's just a consequence of the imprecision of fixed-precision floating point math. This *could* be mitigated, if the base unit of length was a mile. This would be incredibly undesirable of course, because most of the world doesn't use this crazy system, but it could be done. Foot could be defined with a coefficient of `5280`, which can be represented precisely by `Double`. But now, instead of mile->foot being imprecise, meter->kilometer will be imprecise. You can't win, I'm afraid. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: The “miles” unit is defined incorrectly in the Foundation library, as can be seen with ``` print(UnitLength.miles.converter.baseUnitValue(fromValue: 1.0)) // 1609.34 ``` whereas the [correct value](https://en.wikipedia.org/wiki/Mile#International_mile) is `1609.344`. As a workaround for that flaw in the Foundation library you can define your “better” mile unit: ``` extension UnitLength { static var preciseMiles: UnitLength { return UnitLength(symbol: "mile", converter: UnitConverterLinear(coefficient: 1609.344)) } } ``` and using that gives the intended result: ``` let lengthMiles = Measurement(value: 1, unit: UnitLength.preciseMiles) let lengthFeet = lengthMiles.converted(to: UnitLength.feet) print(lengthFeet) // 5280.0 ft ``` Of course, as [username_1 said](https://stackoverflow.com/a/49373502/1187415), rounding errors can occur when doing calculations with the units, because the measurements use binary floating point values as underlying storage. But the reason for that “blatantly off” result is the wrong definition of the miles unit. Upvotes: 3
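The numbers in both answers can be reproduced with plain double arithmetic in any language; this small Python sketch shows that the truncated coefficient, not floating point itself, produces the value from the question:

```python
MILE_FOUNDATION = 1609.34   # the truncated coefficient reported above
MILE_EXACT = 1609.344       # the international mile in meters
FOOT = 0.3048               # exact by definition: 1 ft = 0.3048 m

feet_wrong = 1 * MILE_FOUNDATION / FOOT   # ~5279.9868766, as in the question
feet_right = 1 * MILE_EXACT / FOOT        # 5280 to within double rounding
```

The 0.004 m error in the mile coefficient translates to about 0.013 ft over one mile, which matches the 5279.98687664042 the question reports.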
2018/03/19
622
1,841
<issue_start>username_0: I have this check in my script: ``` [[ $KEY == contact@(s|_groups) ]] && CONFIG[$KEY]="$VALUE" ``` It is writing lines that contain contact\* from one file to an array. How can I add another check that will skip the xi\* values in that line and write it in the array? I tried something like: ``` [[ $KEY == contact@(s|_groups) ]] && [[ $VALUE != "xi*" ]] && CONFIG[$KEY]="$VALUE" ``` But it is not working for me. :/<issue_comment>username_1: > > The first file looks like this: > > > > ```none > … > contacts <NAME>,Mijo,nagiosadmin,Patrick,ximgersic > … > > ``` > > The second file needs to look like this: > > > > ```none > … > contacts <NAME>,Mijo,nagiosadmin,Patrick > … > > ``` > > So, without the xi\* in the contact\* lines. > > > Since the `xi*` is at the end of the `$VALUE`, you can simply use the [**`bash` Parameter Expansion**](https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html) *Remove matching suffix pattern*: ```none [[ $KEY == contact@(s|_groups) ]] && CONFIG[$KEY]="${VALUE%,xi*}" ``` > > xi\* values aren´t always at the end of the line > > > If the `xi*` is amid the `$VALUE` elements, you could use *Pattern substitution*: ```none [[ $KEY == contact@(s|_groups) ]] && CONFIG[$KEY]="${VALUE/,xi*([^,])}" ``` > > and if there are multiple xi\* values? > > > To delete multiple `xi*` elements, you just have to double the `/` above: ```none [[ $KEY == contact@(s|_groups) ]] && CONFIG[$KEY]="${VALUE//,xi*([^,])}" ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: `[[ $KEY == contact@(s|_groups) ]] && CONFIG[$KEY]="${VALUE//xi*([^,])}" && CONFIG[$KEY]="${VALUE//,xi*([^,])}"` This is the check that gave me the wanted results. :) Upvotes: 1
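For comparison, the filtering that the `${VALUE//,xi*([^,])}` expansion performs can be written out explicitly. Here is a small Python sketch; the sample names are made up, and unlike the bash pattern it also handles an xi\* element in the first position, where there is no leading comma:

```python
def drop_xi(value):
    # Drop every comma-separated element that starts with "xi".
    return ",".join(p for p in value.split(",") if not p.startswith("xi"))

print(drop_xi("anna,Mijo,nagiosadmin,Patrick,ximgersic"))
# anna,Mijo,nagiosadmin,Patrick
```

Split-filter-join makes the intent readable at the cost of a subprocess-free one-liner in the shell.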
2018/03/19
917
2,835
<issue_start>username_0: I'm trying ES6 template literals for multiline strings. The following one works well: ```js var customer = { name: "Foo" } var card = { amount: 7, product: "Bar", unitprice: 42 } var message = `Hello ${customer.name}, want to buy ${card.amount} ${card.product} for a total of ${card.amount * card.unitprice} bucks?` ``` Then I tried to apply it to a URL string as follows: ```js let topic = "pizza"; let url = `https://en.wikipedia.org/w/api.php?action=parse&section=0&prop=text&format=json&page=${topic}`; ``` The single line version works well. But when I changed it to multiline, it didn't work: ```js let topic = "pizza"; let url = `https://en.wikipedia.org/w/api.php ?action=parse&section=0&prop=text&format=json&page=${topic}`; or let url = `https://en.wikipedia.org/w/api.php\ ?action=parse&section=0&prop=text&format=json&page=${topic}`; ``` I used this URL to retrieve data: ```js let https = require("https"); https.get(url, res => {...}); ``` Can anyone tell me what's wrong with the multiline URL? How can I do it correctly? Thanks a lot.<issue_comment>username_1: String literals retain the exact structure you write your lines in. So writing on a new line effectively adds a `\n` to your string. ``` let url = `https://en.wikipedia.org/w/api.php\ ?action=parse&section=0&prop=text&format=json&page=${topic}`; ``` As a string actually looks like this: ``` let url = `https://en.wikipedia.org/w/api.php\\n?action=parse&section=0&prop=text&format=json&page=${topic}`; ``` That extra `\n` creates an incorrect request to the server. ```js let str = `I am a string literal` console.log(str.split('\n')) ``` edit: You can add an escape `\` to the end of each line to remove the `\n`. Thank you for the correction. ```js let str = `I\ am\ a\ string\ literal` console.log(str.split('\n')) ``` Upvotes: 0 <issue_comment>username_2: You have newline characters in your URL.
Template literal syntax allows you to add newline characters to your string if you have newline chars inside of ``, so your URL has newline characters in it. ``` console.log('\n' === ` `) // true ``` Either do a String.prototype.replace() on the url before making the HTTP call ``` let topic = "pizza"; let url = `https://en.wikipedia.org/w/api.php ?action=parse&section=0&prop=text&format=json&page=${topic}`; // removes newline characters from template string // (replace returns a new string, so reassign; the /g flag removes them all) url = url.replace(/\n/g, ''); // do your stuff here let https = require("https"); https.get(url, res => {...}); ``` Or escape the newline characters in the template string ``` let topic = "pizza"; // escape the newline character let url = `https://en.wikipedia.org/w/api.php\ ?action=parse&section=0&prop=text&format=json&page=${topic}`; // do your stuff here let https = require("https"); https.get(url, res => {...}); ``` Upvotes: 3 [selected_answer]
2018/03/19
601
2,449
<issue_start>username_0: If the label `title` contains a value from `stadiumName`, then the `numberOfRowsInSection` function will return `first!.count`. If the label `title` contains a value from `countryName`, then it will return `3`. ``` func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { let pfile = Bundle.main.path(forResource: "scheduling", ofType: "plist") let indexes = defaults.integer(forKey: "index") let indexes2 = defaults.integer(forKey: "index2") let stadiumName = stadia[indexes] let countryName = country[indexes2] let arrays = NSDictionary(contentsOfFile: pfile!) let first = arrays?.value(forKey: stadia[indexes]) as? [[String]] if (titles.text?.contains(stadiumName))!{ let returning = first!.count return returning } if (titles.text?.contains(countryName))!{ let returning = 3 return returning } } ``` However, I am faced with this error message: `Missing return in a function expected to return 'Int'` What can I do to ensure I can return a value conditionally without this error message?<issue_comment>username_1: It's saying that your "if" statements aren't exhaustive and there are scenarios where the end of the function will be reached without returning a value. The simple solution is to just add "return 0" at the end of the function. Upvotes: 3 [selected_answer]<issue_comment>username_2: You have 2 `if` statements that could both fail to be satisfied. Try this: ``` func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { let pfile = Bundle.main.path(forResource: "scheduling", ofType: "plist") let indexes = defaults.integer(forKey: "index") let indexes2 = defaults.integer(forKey: "index2") let stadiumName = stadia[indexes] let countryName = country[indexes2] let arrays = NSDictionary(contentsOfFile: pfile!) let first = arrays?.value(forKey: stadia[indexes]) as?
[[String]] if (titles.text?.contains(stadiumName))!{ let returning = first!.count return returning } else if (titles.text?.contains(countryName))!{ let returning = 3 return returning } else { return 0 } } ``` Upvotes: 1
2018/03/19
728
2,616
<issue_start>username_0: I'm trying to create my first Lambda function in Java. I want to start with a little example, reading an S3 input event. This is my code: ``` package com.amazonaws.lambda.alfreddo; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; public class LambdaFunctionHandler implements RequestHandler<S3Event, String> { @Override public String handleRequest(S3Event input, Context context) { context.getLogger().log("Input: " + input); // TODO: implement your handler return "Hello from Lambda!"; } } ``` But when I try to run it on the AWS Console I get the following error: ``` { "errorMessage": "Error loading method handleRequest on class com.amazonaws.lambda.alfreddo.LambdaFunctionHandler", "errorType": "java.lang.NoClassDefFoundError" } Error loading method handleRequest on class com.amazonaws.lambda.alfreddo.LambdaFunctionHandler: java.lang.NoClassDefFoundError java.lang.NoClassDefFoundError: com/amazonaws/services/lambda/runtime/events/S3Event at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) at java.lang.Class.privateGetPublicMethods(Class.java:2902) at java.lang.Class.getMethods(Class.java:1615) Caused by: java.lang.ClassNotFoundException: com.amazonaws.services.lambda.runtime.events.S3Event at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 4 more ``` I'm using the AWS Toolkit for Eclipse. Any help? Thanks!<issue_comment>username_1: The class `com/amazonaws/services/lambda/runtime/events/S3Event` isn't on your `ClassPath`.
If you are building a jar you have to make sure to add your dependencies, or, if you are running from the CLI, make sure to explicitly add the dependency location via `-cp /dir/to/location` Upvotes: 2 <issue_comment>username_2: It means that the dependencies required for the jar to work standalone are not included in the jar file. Thus the AWS SDK dependencies must be packaged in the jar. Check the link below for the AWS documentation. <https://docs.aws.amazon.com/lambda/latest/dg/java-create-jar-pkg-maven-and-eclipse.html> The important part is to use the maven-shade-plugin. Make sure it is being executed when you package the jar. Upvotes: 2 <issue_comment>username_3: I had a similar problem; adding the maven-shade-plugin and rebuilding with the goal `package shade:shade` solved the issue. Upvotes: 2
2018/03/19
674
2,360
<issue_start>username_0: So for a school project I have to create a playing field, and afterwards it has to be checked. My thought was to add a grid in Python, just a simple list ``` grid = ([[' ',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' ']]) ``` As you can see it's 6 elements, each containing 6 spaces. `grid.insert` only works for adding elements but not for these 'element elements'. The idea is that when ``` if x == 25 and y == 25: ``` then grid becomes: ``` ([['1',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' '], [' ',' ',' ',' ',' ',' ']]) ``` Is this even possible in Python? Or should I look for a different way of solving this? I hope this is enough information, else I will gladly supply you with more!<issue_comment>username_1: To access an element of a list, you can use `my_list[i]`, where `i` is the position of the element you want to access. To access an element within an element of a list, you can use `my_list[i][j]`, where `i` is the position of your highest level element and `j` is the position of the element within the element. Note that the first position of a list is `0` and not `1`. You can then change this element to whatever you want it to contain by setting it equal to something. Upvotes: 0 <issue_comment>username_2: If you know the point you're trying to change, you can just do it by directly referencing the point, e.g. ``` grid[0][1] = 1 ``` where the first bracket selects which inner list and the second bracket is the index within that list. Or, if you're trying to check the spaces for a value, you can use the same referencing: ``` if grid[0][1] == "1": Do Stuff ``` Otherwise I'm not sure what you're asking for...
Upvotes: 1 <issue_comment>username_3: First of all, you could create the grid using a nested comprehension such as ``` grid = [(["*"] * 6) for i in range(6)] ``` Then to assign a value just do ``` grid[0][0] = 1 ``` Maybe another idea for printing it out is ``` rowFormat = ("{:<5}") * 6 for i in range(len(grid)): print(rowFormat.format(*grid[i])) ``` Something like that. Hope that helps. Upvotes: 2 [selected_answer]
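A minimal runnable sketch pulling the answers above together (the mapping from pixel coordinates to grid indices is a hypothetical assumption, since the question does not say how big each cell is):

```python
# Build the 6x6 grid and mark a cell by direct index assignment
# (no insert needed, as the answers point out).
grid = [[" "] * 6 for _ in range(6)]

# Hypothetical mapping: assume each cell covers 50 pixels, so a point at
# x == 25, y == 25 falls in row 0, column 0.
x, y = 25, 25
row, col = y // 50, x // 50
grid[row][col] = "1"

print(grid[0])  # ['1', ' ', ' ', ' ', ' ', ' ']
```

Direct assignment (`grid[row][col] = value`) replaces the element in place, which is what the selected answer recommends.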
2018/03/19
453
1,701
<issue_start>username_0: I need to be able to edit content in a mat-chip. The process is pretty straightforward in HTML: ```html Editable content ``` [StackBlitz demo](https://stackblitz.com/edit/angular-vyx47b) You can properly edit the content, however you can't delete it with [DELETE] or [BACKSPACE]. However, you can cut the content (with the clipboard). I think this is due to how mat-chips handle keyboard events. From the [Angular Material Doc](https://material.angular.io/components/chips/api#MatChip), it indicates on the 'remove' method: > > Allows for programmatic removal of the chip. Called by the MatChipList when > the DELETE or BACKSPACE keys are pressed." > > > So is there a way to unbind these events from the mat-chip, so that I can edit the content using these keys? I don't intend to make chips deletable via keyboard anyway. I tried using [removable]="false" but it didn't do anything (see the StackBlitz). I thought that maybe I could disable all keyboard interaction, but there doesn't seem to be any way to do so in the section on [keyboard interaction](https://material.angular.io/components/chips/overview#keyboard-interaction).<issue_comment>username_1: You can intercept the bubbling key event by adding a keypress handler in your mat-chip content. Your Template: ``` ``` Your JavaScript / TypeScript: ``` onMatChipKeyPress(event) { event.stopImmediatePropagation(); } ``` Upvotes: 2 <issue_comment>username_2: ``` example data cancel ``` Though it's late, I wanted to share a simpler way to solve this: on element focus we instantly blur it. This will not affect any of the functionality pre-defined by Angular Material... Upvotes: 1
2018/03/19
644
2,565
<issue_start>username_0: Recently I came across a class containing a private unique pointer member variable. In the constructor it was initialized with make\_unique and in the destructor it was cleared with reset. I know that make\_unique performs a heap allocation. Is there any reason for this overhead? Why not use a "normal" member variable?<issue_comment>username_1: > > and in the destructor it was cleared with reset. > > > This is redundant. The implicitly generated destructor would have been sufficient. > > Is there any reason for this overhead? > > > Possibly. Not necessarily. Depends on why dynamic allocation was used. > > Why not use a "normal" member variable? > > > There can be reasons. It's impossible to tell those reasons with no hints about the definition, or usage, of the class. Upvotes: 1 <issue_comment>username_2: There are a few valid reasons to make a `unique_ptr` member variable of a class. Without seeing the class I can't say which, if any, apply; as I wrote in the comment, calling `reset` explicitly in the destructor seems pretty awful, so I don't think I would give the author the benefit of the doubt. Here are some of the reasons I can think of: 1. The object in question is runtime polymorphic. That is, you have a `unique_ptr` and you could actually be pointing to a `Derived`. 2. You want your class to be movable, but the member in question is immovable, or "sucks" to move for some reason. This comes up a lot with `std::mutex`; it's very awkward as a member variable. In C++17, immovable objects can be returned from functions so they aren't as bad. But in C++14 or prior, immovable objects are just really irritating. So your object can hold a `unique_ptr` instead and still be movable. Variants of this are types that are expensive to move (structs with tons of members) or don't obey proper exception safety guarantees (but this is very rarely valid IMHO). 3. You really want to reclaim the memory associated with the type.
I've seen this too: sometimes you do tons of complex configuration during initialization that you don't need later. If you have a `unique_ptr` or something like that, you can call `reset` so that once your program is up and running in the long-running, perf-critical part, you've returned a lot of memory. A bad but common reason to use `unique_ptr` is to defer initialization, i.e. you need to construct a member later than the object as a whole gets constructed. This is bad in general, but even if you need to do this, you can do it with `optional`. Upvotes: 4 [selected_answer]
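A small sketch of reason 2 above, with assumed names (this is not the class from the question): holding an immovable `std::mutex` behind a `unique_ptr` keeps the enclosing type movable, and the implicitly generated destructor releases it — no explicit `reset` in a destructor is needed.

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <utility>

// Counter is movable even though std::mutex is not, because the mutex
// lives behind a unique_ptr (which is itself movable).
class Counter {
public:
    Counter() : m_(std::make_unique<std::mutex>()) {}
    void bump() { std::lock_guard<std::mutex> lock(*m_); ++n_; }
    int value() const { return n_; }
private:
    std::unique_ptr<std::mutex> m_;
    int n_ = 0;
};

inline int demo() {
    Counter a;
    a.bump();
    Counter b = std::move(a);  // would not compile with a plain std::mutex member
    b.bump();
    return b.value();
}
```

Compile with `-std=c++14` or later for `std::make_unique`.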
2018/03/19
860
3,211
<issue_start>username_0: I'm trying to get a base64 string (using '@ionic-native/base64') of an image file URI from '@ionic-native/image-picker', but after running this code the image seems to be broken. Any suggestions? My html: ``` ![]() ``` Picker options: ``` this.pickerOptions = { maximumImagesCount: 1, width: 10, height: 10, quality: 100, outputType: 0 }; ``` My code: ``` chooseImage() { this.imagePicker.getPictures(this.pickerOptions).then((results) => { for (let i = 0; i < results.length; i++) { let filePath: string = results[i]; this.base64.encodeFile(filePath) .then((base64File: string) => { this.base64ImageChosen = base64File }, (err) => { console.log(err); }) .then((res) => this.myForm.patchValue({ imageChosen: this.base64ImageChosen }) ) } }, (err) => { }); } ```
2018/03/19
722
2,093
<issue_start>username_0: We have an invoice # format that is very strict and must match a certain format. We do this manually and I often get multiple variations, so I was trying to create data validation to control the entry of the invoice number. Here’s the format: YYYYMMDD-RNN 1. The invoice number is exactly 12 characters long 2. The first 4 characters are the year (full year, like 2018) 3. The next 2 characters are the month (like 02, must have the leading zero for 1 to 9) and must not allow higher than 12. 4. The next 2 characters are the day (like 08, must have the leading zero for 1 to 9) and must not allow higher than 31. 5. The next character is a “-“ 6. The next character is a region identifier. Allowable numbers are 0 to 9. 7. The final 2 characters are sequential #’s beginning with 01. Must have the leading zero for 1 to 9. Need a formula to validate this.<issue_comment>username_1: The following formula should work, but I couldn't insert it in custom data validation. I suspect there's a character limit. If anyone could shed some light, it would be nice. =IF(AND(LEFT(A1;4)\*1<=YEAR(TODAY());LEFT(A1;4)\*1>=2000);IF(AND(MID(A1;5;2)\*1>=1;MID(A1;5;2)\*1<=12);IF(AND(MID(A1;7;2)\*1>=1;MID(A1;7;2)\*1<=31);IF(MID(A1;9;1)="-";IF(AND(MID(A1;10;1)\*1>=0;MID(A1;10;1)\*1<=9);IF(AND(MID(A1;11;2)\*1>=1;MID(A1;11;2)\*1<=99);TRUE;FALSE);FALSE);FALSE);FALSE);FALSE);FALSE) P.S.: I assumed no invoices before the year 2000. Upvotes: 2 [selected_answer]<issue_comment>username_2: Try this UDF in a standard module code sheet.
``` Option Explicit Function invoiceCheck(rng As Range) As Boolean Dim tmp As String tmp = rng.Value2 'check length If Len(tmp) <> 12 Then Exit Function 'check valid date If Not IsDate(Join(Array(Mid(tmp, 5, 2), Mid(tmp, 7, 2), Left(tmp, 4)), "/")) Then Exit Function 'check date is today or earlier If CDate(Join(Array(Mid(tmp, 5, 2), Mid(tmp, 7, 2), Left(tmp, 4)), "/")) > Date Then Exit Function 'make sure there is a hyphen (Split would error without one) If InStr(tmp, "-") = 0 Then Exit Function tmp = Split(tmp, "-")(1) invoiceCheck = IsNumeric(tmp) End Function ``` Upvotes: 0
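For comparison only — a hedged sketch of the same seven rules outside Excel (Python here is an assumption for illustration, not part of the original thread), which makes the validation logic easy to test:

```python
import re
from datetime import datetime

# Rules from the question: 12 characters, a YYYYMMDD date, "-",
# a region digit 0-9, then a two-digit sequence beginning at 01.
_PATTERN = re.compile(r"^(\d{4})(\d{2})(\d{2})-(\d)(\d{2})$")

def valid_invoice(s):
    m = _PATTERN.match(s)
    if m is None:                      # wrong shape (this also enforces length 12)
        return False
    year, month, day = int(m.group(1)), int(m.group(2)), int(m.group(3))
    try:
        datetime(year, month, day)     # rejects month > 12, day > 31, etc.
    except ValueError:
        return False
    return int(m.group(5)) >= 1        # sequence starts at 01, so 00 is invalid

print(valid_invoice("20180319-101"))   # True
print(valid_invoice("20181301-101"))   # False: month 13
```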
2018/03/19
1,625
5,032
<issue_start>username_0: We've started to learn about stored procedures in class the other week, professor already gave a big assignment that is very complicated, and at this point I'm very confused. I'm not even sure what exactly am I trying to accomplish here. I need to create a procedure for the following select statement ``` SELECT * FROM Vehicle, VAN WHERE SEATCAPACITY > 5 AND MAXIMUMPAYLOAD > 5000; ``` This is what I have compiled and stored procedure was compiled successfully. ``` create or replace PROCEDURE GET_VAN_SP ( van_cursor OUT SYS_REFCURSOR ) AS BEGIN OPEN van_cursor for SELECT Vehicle.VINNUMBER VINNUMBER, Vehicle.MAKE MAKE, Vehicle.MODELKIND MODELKIND, Vehicle.YEARMADE YEARMADE, Vehicle.RENTALCATEGORYID RENTALCATEGORYID, Vehicle.COLOR COLOR, Vehicle.PLATENUMBER PLATENUMBER, Vehicle.MILEAGE MILEAGE, Vehicle.TRANSMISIONTYPE TRANSMISIONTYPE, Vehicle.SEATCAPACITY SEATCAPACITY, Vehicle.DAILYRENTALCOST DAILYRENTALCOST, Vehicle.VEHICLESTATUSID VEHICLESTATUSID, Vehicle.ASSIGNEDAGENCYID ASSIGNEDAGENCYID, Vehicle.CURRENTAGENCYID CURRENTAGENCYID, Vehicle.VEHICLETYPE VEHICLETYPE, Vehicle.PRICE PRICE, Vehicle.MPH MPH, Vehicle.HORSEPOWER HORSEPOWER, Vehicle.MPG MPG, VAN.VVINNUMBER VVINNUMBER, VAN.CARGOCAPACITY CARGOCAPACITY, VAN.MAXIMUMPAYLOAD MAXIMUMPAYLOAD FROM Vehicle, VAN WHERE SEATCAPACITY > 5 AND MAXIMUMPAYLOAD > 5000; END GET_VAN_SP; ``` This is the error while attempting to execute. > > Error starting at line : 37 in command - > BEGIN GET\_VAN\_SP(5); END; > Error report - > ORA-06550: line 1, column 52: > PLS-00306: wrong number or types of arguments in call to 'GET\_VAN\_SP' > ORA-06550: line 1, column 63: > PLS-00363: expression 'TO\_NUMBER(SQLDEVBIND1Z\_1)' cannot be used as an > assignment target > ORA-06550: line 1, column 52: > PL/SQL: Statement ignored > 06550. 00000 - "line %s, column %s:\n%s" > \*Cause: Usually a PL/SQL compilation error. 
> \*Action: > > ><issue_comment>username_1: > > Why is an integer passed to the procedure GET\_VAN\_SP when it expects a > parameter of type SYS\_REFCURSOR? > > > Upvotes: 0 <issue_comment>username_2: This: > > I'm not even sure what exactly am I trying to accomplish here > > > is the biggest problem of all. In my opinion, you should re-read the assignment as many times as necessary, until you know exactly what you should do. If you can't make it, you should consult the professor. *How* to do that job is another problem. OK, to get you started, here are two procedures that use the same `SELECT` statement as source of data. It is based on Scott's schema (as I don't have your tables, and you didn't provide test case). The first procedure accepts two `IN` parameters - department number and salary (which is similar to what you are doing). Note that I'm joining two tables, which is what you did not do (but should have): there are two tables in your query, VEHICLE and VAN - without join, you'll get Cartesian product. ``` SQL> set serveroutput on SQL> create or replace procedure p_test 2 (par_deptno in dept.deptno%type, 3 par_sal in emp.sal%type 4 ) 5 is 6 begin 7 for cur_r in (select d.dname, e.ename, e.sal 8 from dept d join emp e on e.deptno = d.deptno 9 where d.deptno = par_deptno 10 and e.sal > par_sal 11 ) 12 loop 13 dbms_output.put_line(cur_r.dname ||' '|| cur_r.ename ||' '|| cur_r.sal); 14 end loop; 15 end; 16 / Procedure created. SQL> begin 2 p_test(10, 2000); 3 end; 4 / ACCOUNTING KING 5000 ACCOUNTING CLARK 2450 PL/SQL procedure successfully completed. SQL> ``` The second one uses `refcursor` (as in your example - I'm not sure whether that's what you really need to do, because YOU don't know it either). Your procedure expects one `OUT` parameter (a `refcursor`), but you're passing an `IN` parameter, a `NUMBER` (5). Doesn't make much sense, does it? Note that I'm passing two `IN` and one `OUT` parameter, which is what Oracle expects. 
``` SQL> create or replace procedure p_test 2 (par_deptno in dept.deptno%type, 3 par_sal in emp.sal%type, 4 par_out out sys_refcursor 5 ) 6 is 7 begin 8 open par_out for 9 select d.dname, e.ename, e.sal 10 from dept d join emp e on e.deptno = d.deptno 11 where d.deptno = par_deptno 12 and e.sal > par_sal; 13 end; 14 / Procedure created. SQL> var l_out refcursor SQL> SQL> begin 2 p_test(10, 2000, :l_out); 3 end; 4 / PL/SQL procedure successfully completed. SQL> print l_out DNAME ENAME SAL -------------- ---------- ---------- ACCOUNTING KING 5000 ACCOUNTING CLARK 2450 SQL> ``` I hope it'll help; try to apply such a code to your case. Say if you can't make it work. Upvotes: 1
2018/03/19
948
3,354
<issue_start>username_0: I'm building a React Native app and I use a ScrollView for a header with a horizontal list of text. The issue is that the height of the ScrollView takes up half the screen. Even after declaring a height in the style, it stays as it is. Screen with the ScrollView: ```js {this.props.ExtendedNavigationStore.HeaderTitle ? : } {this.renderScrollableHeader()} /\* stack with dashboard screen \*/ ) } ``` styles ```js import {StyleSheet} from 'react-native' import {calcSize} from '../../utils' const Styles = StyleSheet.create({ container : { flex:1, backgroundColor:"#e9e7e8" }, scrollableView:{ height: calcSize(40), backgroundColor: '#000', }, textCategory:{ fontSize: calcSize(25), color:'#fff' }, scrollableButton:{ flex:1, margin:calcSize(30) } }) export default Styles ``` [![enter image description here](https://i.stack.imgur.com/YOiAq.png)](https://i.stack.imgur.com/YOiAq.png) As you can see, the black area is the ScrollView; I want it to be small. In the routes stack, for the dashboard screen, the style is: ```js const Style = StyleSheet.create({ container: { backgroundColor: '#9BC53D', flex: 1, justifyContent: 'space-around', alignItems: 'center' }, text: { fontSize: 35, color: 'white', margin: 10, backgroundColor: 'transparent' }, button: { width: 100, height: 75, margin: 20, borderWidth: 2, borderColor: "#ecebeb", justifyContent: "center", alignItems: "center", borderRadius: 40 } }) ```<issue_comment>username_1: There is an existing limitation with `ScrollView` where `height` cannot be provided directly. Wrap the `ScrollView` in another `View` and give the height to that `View`.
Like, ```js render() { return ( title1 title2 title3 title4 title5 ); } const styles = StyleSheet.create({ container: { flex: 1, paddingTop: Constants.statusBarHeight, backgroundColor: '#ecf0f1', }, }); ``` **snack sample:** <https://snack.expo.io/HkVDBhJoz> EXTRA: unlike height, providing width to a `ScrollView` will work correctly. Upvotes: 9 [selected_answer]<issue_comment>username_2: Considering that you are using a `fixed height` for the `header` and flex for the **Routes**, the *layout would not scale well across different devices and would look weird*. Therefore you may consider switching to `flex`. Here is an example adding `flexGrow` to the `styles` of the `ScrollView`, since it accepts [view props](https://facebook.github.io/react-native/docs/view.html#props) ```js Label Label Label Label Label ``` and here's the link to the [snack expo](https://snack.expo.io/BJ3-ATUiM) Upvotes: 2 <issue_comment>username_3: That worked for me: ``` ... ``` Upvotes: 4 <issue_comment>username_4: Use `flexGrow:0` inside the ScrollView style ``` ``` Upvotes: 5 <issue_comment>username_5: Use `maxHeight` inside the ScrollView style ``` ••• ``` Upvotes: 2 <issue_comment>username_6: Setting minHeight: int, maxHeight: int on the ScrollView should work, where int is the height in pixels. Example below: ``` ``` Upvotes: 2 <issue_comment>username_7: This worked for me .... Upvotes: 0
2018/03/19
573
2,192
<issue_start>username_0: I have written a Python script that I need to share with folks who may or may not have Python installed on their machine. As a dirty hack I figured I could copy my local Python 3.6 install into the same folder as the script I made, and then create a .bat file that runs python from the copied Python source, i.e. ``` Python36\python.exe script.py %* ``` In this way I could just send them the folder, and all they have to do is double-click the .bat file. Now this does work, but it takes about 2 - 5 mins for **script.py** to begin executing. How could I configure the copied Python source so that it runs like it "should"?<issue_comment>username_1: Are you using any libraries? A quick solution would be converting the Python script to an executable using [py2exe](http://www.py2exe.org/index.cgi/Tutorial). More details are also in this [post](https://stackoverflow.com/questions/5458048/how-to-make-a-python-script-standalone-executable-to-run-without-any-dependency). ``` from distutils.core import setup import py2exe setup(console=['sample.py']) ``` And then run the command ``` C:\Tutorial>python setup.py py2exe ``` Upvotes: 1 <issue_comment>username_2: In terms of speed there is little you can do. You could convert your Python script into a compiled extension; this greatly increases the speed of a Python script. [Cython](http://cython.org/) can do this, and once compiled you then proceed as you have already. Honestly you will notice little difference if you do this, and that is about the best you will do with that method. A better method is to turn it into an executable directly. What you are doing currently is: * The batch command starts and executes (this is slow by itself). This starts the Python interpreter. * The Python interpreter loads the file and then starts. --- You should use a tool such as [Cx\_Freeze](https://anthony-tuininga.github.io/cx_Freeze/) or [Pyinstaller](https://www.pyinstaller.org/) to convert your script into an executable; then it can be run just like any other application. You could also use [Cython](http://cython.org/) to achieve this. You can use installers as well. Upvotes: 3 [selected_answer]
2018/03/19
631
2,147
<issue_start>username_0: I have a pyspark data frame that looks like this: ``` df.show() +---+ |dim| +---+ |1x1| |0x0| |1x0| +---+ ``` The data type in `dim` is `str`. Now I want to separate `dim` into 2 columns, and have something like this: ``` df.show() +---+----+----+ |dim|dim1|dim2| +---+----+----+ |1x1| 1| 1| |0x0| 0| 0| |1x0| 1| 0| +---+----+----+ ``` I know that if I were to operate on a single string I'd just use the `split()` method in Python: `"1x1".split("x")`, but how do I simultaneously create multiple columns as a result of one column mapped through a split function?
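A hedged sketch of one common approach (not taken from this record's answers): in PySpark, `pyspark.sql.functions.split(df["dim"], "x").getItem(i)` inside `withColumn` builds each new column. The plain-Python snippet below mirrors that split-into-columns idea so it runs without a Spark session:

```python
# Stand-in for the DataFrame: each dict is a row with a "dim" string.
rows = [{"dim": "1x1"}, {"dim": "0x0"}, {"dim": "1x0"}]

# Equivalent in spirit to:
#   df.withColumn("dim1", F.split(df["dim"], "x").getItem(0)) \
#     .withColumn("dim2", F.split(df["dim"], "x").getItem(1))
for row in rows:
    row["dim1"], row["dim2"] = row["dim"].split("x")

print(rows[2])  # {'dim': '1x0', 'dim1': '1', 'dim2': '0'}
```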
2018/03/19
746
2,631
<issue_start>username_0: The problem seems to be with the compiler I'm using, though I'm fairly new to programming so I'm not sure how to mess with that (I'm using VSCode on Mac OSX). This is my header: ``` #ifndef STICKMAN_H #define STICKMAN_H class Stickman{ public: Stickman(); }; #endif ``` This is my source file: ``` #include "stickman.h" #include <iostream> using namespace std; Stickman::Stickman(){ cout << "Hello\n"; } ``` This is my main: ``` #include "stickman.h" #include <iostream> int main(){ Stickman figure; } ``` This is the ERROR message in the terminal: ``` Alexandres-MBP:Game alexandrecarqueja$ cd "/Users/alexandrecarqueja/Desktop/Game/" && g++ main.cpp -o main && "/Users/alexandrecarqueja/Desktop/Game/"main Undefined symbols for architecture x86_64: "Stickman::Stickman()", referenced from: _main in main-d38641.o ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) ```<issue_comment>username_1: It must be compiler specific, because I ran the code in Visual Studio and it built successfully. I would suggest you get the free express/community Visual Studio 2017 IDE software if you happen to have a Windows computer. The code looks fine, so I'm personally unsure of what may be causing your issue if it's not compiler related. If you only have a Mac computer, then I suggest maybe looking into other free compilers. Upvotes: 0 <issue_comment>username_2: You need to call this instead: ``` g++ main.cpp stickman.cpp -o main ``` which will also compile `stickman.cpp`. Then the linker will know what to do. Right now you have a `#include "stickman.h"` in your main, which declares the class but does not define it. The linker sees that a constructor is declared (in `stickman.h`), but does not see how it is implemented (`stickman.cpp` was not compiled). Hence it is not able to link with the constructor body.
Upvotes: 3 [selected_answer]<issue_comment>username_3: You also receive this error in vscode if your project has a path that includes spaces. As mentioned above you also need to compile all your cpp-files. To do this in vscode in e.g. macOS Catalina please see my answer here <https://stackoverflow.com/a/61331301/1071899> Basically you need to make a tasks.json file with the compiler specific flags. Here you need to include that all \*.cpp files should be compiled AND you need to escape the whitespaces by adding `"\"${workspaceFolder}\"/*.cpp",` instead of `"${file}",`. Take note of the two `\"`. This will make sure that your project path is surrounded by `""` and it will not complain about linker errors. Upvotes: 0
2018/03/19
1,559
4,587
<issue_start>username_0: I was trying to plot some predicted vs. actual data, something that resembles the following: ``` # Some random data x <- seq(1: 10) y_pred <- runif(10, min = -10, max = 10) y_obs <- y_pred + rnorm(10) # Faking a CI Lo.95 <- y_pred - 1.96 Hi.95 <- y_pred + 1.96 my_df <- data.frame(x, y_pred, y_obs, Lo.95, Hi.95) ggplot(my_df, aes(x = x, y = y_pred)) + geom_line(aes(colour = "Forecasted Data"), size = 1.2) + geom_point(aes(x = x, y = y_obs, colour = "Actual Data"), size = 3) + geom_ribbon(aes(ymin=Lo.95, ymax=Hi.95, x=x, linetype = NA, colour = "Confidence Interval"), alpha=0.2) + theme_grey() + scale_colour_manual( values = c("gray30", "blue", "red"), guide = guide_legend(override.aes = list( border=c(NA, NA, NA), fill=c("gray30", "white", "white"), linetype = c("blank", "blank", "solid"), shape = c(NA, 19, NA)))) ``` The plot looks like this: [![enter image description here](https://i.stack.imgur.com/pvtDy.png)](https://i.stack.imgur.com/pvtDy.png) The only issue I have with this plot is the red border surrounding the legend item symbol for the line (i.e. the forecasted data). Is there any way I can remove it without breaking the rest of my plot?<issue_comment>username_1: I think `geom_ribbon` was the problem. 
If we take its `color` & `fill` out of `aes`, everything looks fine ```r library(ggplot2) # Some random data x <- seq(1: 10) y_pred <- runif(10, min = -10, max = 10) y_obs <- y_pred + rnorm(10) # Faking a CI Lo.95 <- y_pred - 1.96 Hi.95 <- y_pred + 1.96 my_df <- data.frame(x, y_pred, y_obs, Lo.95, Hi.95) m1 <- ggplot(my_df, aes(x = x, y = y_pred)) + geom_point(aes(x = x, y = y_obs, colour = "Actual"), size = 3) + geom_line(aes(colour = "Forecasted"), size = 1.2) + geom_ribbon(aes(x = x, ymin = Lo.95, ymax = Hi.95), fill = "grey30", alpha = 0.2) + scale_color_manual("Legend", values = c("blue", "red"), labels = c("Actual", "Forecasted")) + guides( color = guide_legend( order = 1, override.aes = list( color = c("blue", "red"), fill = c("white", "white"), linetype = c("blank", "solid"), shape = c(19, NA)))) + theme_bw() + # remove legend key border color & background theme(legend.key = element_rect(colour = NA, fill = NA), legend.box.background = element_blank()) m1 ``` ![](https://i.stack.imgur.com/lzwEb.png) As we leave `Confidence Interval` out of `aes`, we no longer have its legend. One workaround is to create an invisible point and take one unused `geom` to manually create a legend key. Here we can use `size/shape` (credit to this [answer](https://stackoverflow.com/a/16535347/786542)) ```r m2 <- m1 + geom_point(aes(x = x, y = y_obs, size = "Confidence Interval", shape = NA)) + guides(size = guide_legend(NULL, order = 2, override.aes = list(shape = 15, color = "lightgrey", size = 6))) + # Move legends closer to each other theme(legend.title = element_blank(), legend.justification = "center", legend.spacing.y = unit(0.05, "cm"), legend.margin = margin(0, 0, 0, 0), legend.box.margin = margin(0, 0, 0, 0)) m2 ``` ![](https://i.stack.imgur.com/RPhD2.png) Created on 2018-03-19 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0). 
Upvotes: 4 [selected_answer]<issue_comment>username_2: A better way to address this question would be to specify `show.legend = F` option in the `geom_ribbon()`. This will eliminate the need for the second step for adding and merging the legend key for the confidence interval. Here is the code with slight modifications. ``` ggplot(my_dff, aes(x = x, y = y_pred)) + geom_line(aes(colour = "Forecasted Data"), size = 1) + geom_point(aes(x = x, y = y_obs, colour = "Actual Data"), size = 1) + geom_ribbon(aes(ymin=Lo.95, ymax=Hi.95, x=x, linetype = NA, colour = "Confidence Interval"), alpha=0.2, show.legend = F) + theme_grey() + scale_colour_manual( values = c("blue", "gray30", "red"))+ guides(color = guide_legend( override.aes = list(linetype = c(1, 1, 0)), shape = c(1, NA, NA), reverse = T)) ``` [My plot](https://i.stack.imgur.com/LTc8g.png) Credit to <https://stackoverflow.com/users/4282026/marblo> for their answer to similar question. Upvotes: 0
2018/03/19
1,282
3,856
<issue_start>username_0: I have an existing collection of variables a\_0,...,a\_45 where a\_i represents the amount of stuff I have on day i. I'd like to create a new collection of variables b\_0,...,b\_45 to represent the incremental change in stuff I have on day i (i.e. b\_k=a\_k-a\_(k-1) ). My approach: ``` data test; set dataset; array a a_0-a_45; array b b_0-b_45; b(1)=a(1); do i=2 to 45; b(i)=a(i)-a(i-1); end; run; ``` However my b variables just come out missing.<issue_comment>username_1: I think `geom_ribbon` was the problem. If we take its `color` & `fill` out of `aes`, everything looks fine ```r library(ggplot2) # Some random data x <- seq(1: 10) y_pred <- runif(10, min = -10, max = 10) y_obs <- y_pred + rnorm(10) # Faking a CI Lo.95 <- y_pred - 1.96 Hi.95 <- y_pred + 1.96 my_df <- data.frame(x, y_pred, y_obs, Lo.95, Hi.95) m1 <- ggplot(my_df, aes(x = x, y = y_pred)) + geom_point(aes(x = x, y = y_obs, colour = "Actual"), size = 3) + geom_line(aes(colour = "Forecasted"), size = 1.2) + geom_ribbon(aes(x = x, ymin = Lo.95, ymax = Hi.95), fill = "grey30", alpha = 0.2) + scale_color_manual("Legend", values = c("blue", "red"), labels = c("Actual", "Forecasted")) + guides( color = guide_legend( order = 1, override.aes = list( color = c("blue", "red"), fill = c("white", "white"), linetype = c("blank", "solid"), shape = c(19, NA)))) + theme_bw() + # remove legend key border color & background theme(legend.key = element_rect(colour = NA, fill = NA), legend.box.background = element_blank()) m1 ``` ![](https://i.stack.imgur.com/lzwEb.png) As we leave `Confidence Interval` out of `aes`, we no longer have its legend. One workaround is to create an invisible point and take one unused `geom` to manually create a legend key. 
Here we can use `size/shape` (credit to this [answer](https://stackoverflow.com/a/16535347/786542)) ```r m2 <- m1 + geom_point(aes(x = x, y = y_obs, size = "Confidence Interval", shape = NA)) + guides(size = guide_legend(NULL, order = 2, override.aes = list(shape = 15, color = "lightgrey", size = 6))) + # Move legends closer to each other theme(legend.title = element_blank(), legend.justification = "center", legend.spacing.y = unit(0.05, "cm"), legend.margin = margin(0, 0, 0, 0), legend.box.margin = margin(0, 0, 0, 0)) m2 ``` ![](https://i.stack.imgur.com/RPhD2.png) Created on 2018-03-19 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0). Upvotes: 4 [selected_answer]
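The day-over-day differencing asked about in the question above (b_0 = a_0, b_k = a_k - a_(k-1)) can be sketched in plain Python to pin down the intended logic; this is an illustration of the arithmetic only, not SAS code, and the sample values are invented:

```python
# Daily totals a_0 .. a_45, shortened here to a few invented values.
a = [300.0, 310.0, 200.0, 210.0, 300.0]

# b_0 = a_0, and b_k = a_k - a_(k-1) for k >= 1.
# Note the loop covers every element after the first, not stopping one short.
b = [a[0]] + [a[k] - a[k - 1] for k in range(1, len(a))]
print(b)  # [300.0, 10.0, -110.0, 10.0, 90.0]
```

The same boundary detail matters in the SAS version: the loop over the b array must visit every element after the first, or the last difference is never assigned.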
2018/03/19
574
2,292
<issue_start>username_0: I have an S3 location with the below directory structure and a Hive table created on top of it: ``` s3:/// / ``` Let's say I have a Spark program which writes data into the above table location, spanning multiple partitions, using the below line of code: ``` Df.write.partitionBy("orderdate").parquet("s3:/// /") ``` If another program such as a "Hive SQL query" or "AWS Athena query" starts reading data from the table at the same time: Do they consider temporary files being written? Does Spark lock the data file while writing into the S3 location? How can we handle such concurrency situations using Spark as an ETL tool?<issue_comment>username_1: Spark writes the output in a two-step process. First, it writes the data to a `_temporary` directory and then, once the write operation is complete and successful, it moves the files to the output directory. > > Do they consider temporary files being written? > > > As files starting with `_` are hidden files, you cannot read them from Hive or AWS Athena. > > Does Spark lock the data file while writing into the S3 location? > > > Locking or any other concurrency mechanism is not required because of Spark's simple two-step write process. > > How can we handle such concurrency situations using Spark as an ETL tool? > > > Again, via the same write-to-a-temporary-location mechanism. One more thing to note here: in your example above, after writing output to the output directory you need to add the partition to the Hive external table using an `Alter table add partition (...)` command or an `msck repair table tbl_name` command, otherwise the data won't be available in Hive. Upvotes: 1 <issue_comment>username_2: 1. No locks. Not implemented in S3 or HDFS. 2. The process of committing work in HDFS is not atomic; there's some renaming going on in job commit, which is fast but not instantaneous. 3. With S3, things are pathologically slow with the classic output committers, which assume rename is atomic and fast. 4.
The Apache S3A committers avoid the renames and only make the output visible in job commit, which is fast but not atomic. 5. Amazon EMR now has its own S3 committer, but it makes files visible as each task commits, so it exposes readers to incomplete output for longer. Upvotes: 2
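The write-to-temporary-then-move pattern described in the first answer can be sketched in plain Python. This is an illustration of the idea on a local filesystem, not Spark's actual committer code, and the file name and payload are invented; note the second answer's caveat that S3 itself has no atomic rename, so this models the HDFS-style behavior:

```python
import os
import tempfile

def two_step_write(out_dir, name, data):
    """Write to a hidden temporary file first, then rename into place,
    so readers never observe a partially written output file."""
    os.makedirs(out_dir, exist_ok=True)
    tmp_path = os.path.join(out_dir, "_temporary." + name)
    with open(tmp_path, "w") as f:
        f.write(data)  # readers skip files whose names start with "_"
    final_path = os.path.join(out_dir, name)
    os.replace(tmp_path, final_path)  # atomic rename on POSIX filesystems
    return final_path

out = tempfile.mkdtemp()
path = two_step_write(out, "part-00000.parquet", "rows...")
print(sorted(os.listdir(out)))  # only the committed file remains
```

A concurrent reader that ignores `_`-prefixed names either sees no file or the complete file, never a half-written one.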
2018/03/19
4,087
14,401
<issue_start>username_0: Problem ------- On a brand new react native project (created with `create-react-native-app`), the gradle build fails. Output ------ ### --debug `$ cd android/ $ ./gradlew build --debug` gives this output (truncated to the error point) ``` 16:17:09.777 [DEBUG] [com.android.build.gradle.internal.pipeline.TransformManager] InputStream: OriginalStream{jarFiles=[], folders=[], scopes=[SUB_PROJECTS], contentTypes=[CLASSES], dependencies=[prepareDebugDependencies, build dependencies configuration ':app:_debugApk' all dependencies]} 16:17:09.777 [DEBUG] [com.android.build.gradle.internal.pipeline.TransformManager] InputStream: OriginalStream{jarFiles=[], folders=[], scopes=[SUB_PROJECTS_LOCAL_DEPS], contentTypes=[CLASSES], dependencies=[prepareDebugDependencies, build dependencies configuration ':app:_debugApk' all dependencies]} 16:17:09.777 [DEBUG] [com.android.build.gradle.internal.pipeline.TransformManager] InputStream: OriginalStream{jarFiles=[], folders=[/Users/noel/w/crna-test/android/app/build/intermediates/classes/debug], scopes=[PROJECT], contentTypes=[CLASSES], dependencies=[compileDebugJavaWithJavac]} 16:17:09.777 [DEBUG] [com.android.build.gradle.internal.pipeline.TransformManager] OutputStream: IntermediateStream{rootLocation=/Users/noel/w/crna-test/android/app/build/intermediates/transforms/dex/debug, scopes=[PROJECT, PROJECT_LOCAL_DEPS, SUB_PROJECTS, SUB_PROJECTS_LOCAL_DEPS, EXTERNAL_LIBRARIES], contentTypes=[DEX], dependencies=[transformClassesWithDexForDebug]} 16:17:09.778 [DEBUG] [org.gradle.model.internal.registry.DefaultModelRegistry] Project :app - Registering model element 'tasks.transformClassesWithDexForDebug' (hidden = false) 16:17:09.793 [ERROR] [org.gradle.BuildExceptionReporter] 16:17:09.793 [ERROR] [org.gradle.BuildExceptionReporter] FAILURE: Build failed with an exception. 
16:17:09.793 [ERROR] [org.gradle.BuildExceptionReporter] 16:17:09.793 [ERROR] [org.gradle.BuildExceptionReporter] * What went wrong: 16:17:09.793 [ERROR] [org.gradle.BuildExceptionReporter] A problem occurred configuring project ':app'. 16:17:09.793 [ERROR] [org.gradle.BuildExceptionReporter] > java.lang.NullPointerException (no error message) 16:17:09.793 [ERROR] [org.gradle.BuildExceptionReporter] 16:17:09.793 [ERROR] [org.gradle.BuildExceptionReporter] * Try: 16:17:09.793 [ERROR] [org.gradle.BuildExceptionReporter] Run with --stacktrace option to get the stack trace. 16:17:09.795 [LIFECYCLE] [org.gradle.BuildResultLogger] 16:17:09.795 [LIFECYCLE] [org.gradle.BuildResultLogger] BUILD FAILED 16:17:09.795 [LIFECYCLE] [org.gradle.BuildResultLogger] 16:17:09.795 [LIFECYCLE] [org.gradle.BuildResultLogger] Total time: 6.915 secs ``` ### --stacktrace and the stacktrace is `$ ./gradlew build --stacktrace` ``` * Exception is: org.gradle.api.ProjectConfigurationException: A problem occurred configuring project ':app'. 
at org.gradle.configuration.project.LifecycleProjectEvaluator.addConfigurationFailure(LifecycleProjectEvaluator.java:79) at org.gradle.configuration.project.LifecycleProjectEvaluator.notifyAfterEvaluate(LifecycleProjectEvaluator.java:74) at org.gradle.configuration.project.LifecycleProjectEvaluator.evaluate(LifecycleProjectEvaluator.java:61) at org.gradle.api.internal.project.AbstractProject.evaluate(AbstractProject.java:540) at org.gradle.api.internal.project.AbstractProject.evaluate(AbstractProject.java:93) at org.gradle.execution.TaskPathProjectEvaluator.configureHierarchy(TaskPathProjectEvaluator.java:47) at org.gradle.configuration.DefaultBuildConfigurer.configure(DefaultBuildConfigurer.java:35) at org.gradle.initialization.DefaultGradleLauncher$2.run(DefaultGradleLauncher.java:124) at org.gradle.internal.Factories$1.create(Factories.java:22) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:53) at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:121) at org.gradle.initialization.DefaultGradleLauncher.access$200(DefaultGradleLauncher.java:32) at org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:98) at org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:92) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:63) at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:92) at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:83) at org.gradle.launcher.exec.InProcessBuildActionExecuter$DefaultBuildController.run(InProcessBuildActionExecuter.java:99) at 
org.gradle.tooling.internal.provider.ExecuteBuildActionRunner.run(ExecuteBuildActionRunner.java:28) at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35) at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:48) at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:30) at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:81) at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:46) at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:51) at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:28) at org.gradle.launcher.cli.RunBuildAction.run(RunBuildAction.java:43) at org.gradle.internal.Actions$RunnableActionAdapter.execute(Actions.java:173) at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:239) at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:212) at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:35) at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:24) at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:33) at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:22) at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:205) at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:169) at org.gradle.launcher.Main.doAction(Main.java:33) at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:45) at 
org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:55) at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:36) at org.gradle.launcher.GradleMain.main(GradleMain.java:23) at org.gradle.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:30) at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:127) at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:61) Caused by: java.lang.NullPointerException at com.android.build.gradle.internal.ndk.DefaultNdkInfo.findTargetPlatformVersionOrLower(DefaultNdkInfo.java:167) at com.android.build.gradle.internal.ndk.DefaultNdkInfo.findLatestPlatformVersion(DefaultNdkInfo.java:89) at com.android.build.gradle.internal.ndk.NdkHandler.getPlatformVersion(NdkHandler.java:131) at com.android.build.gradle.internal.ndk.NdkHandler.supports64Bits(NdkHandler.java:234) at com.android.build.gradle.internal.ndk.NdkHandler.getSupportedAbis(NdkHandler.java:297) at com.android.build.gradle.internal.transforms.StripDebugSymbolTransform.(StripDebugSymbolTransform.java:86) at com.android.build.gradle.internal.TaskManager.createStripNativeLibraryTask(TaskManager.java:1342) at com.android.build.gradle.internal.ApplicationTaskManager.createTasksForVariantData(ApplicationTaskManager.java:289) at com.android.build.gradle.internal.VariantManager.createTasksForVariantData(VariantManager.java:485) at com.android.build.gradle.internal.VariantManager$3.call(VariantManager.java:293) at com.android.build.gradle.internal.VariantManager$3.call(VariantManager.java:290) at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:156) at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:120) at com.android.build.gradle.internal.profile.SpanRecorders.record(SpanRecorders.java:44) at com.android.build.gradle.internal.VariantManager.createAndroidTasks(VariantManager.java:286) at com.android.build.gradle.BasePlugin$11.call(BasePlugin.java:688) at 
com.android.build.gradle.BasePlugin$11.call(BasePlugin.java:685) at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:156) at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:120) at com.android.build.gradle.BasePlugin.createAndroidTasks(BasePlugin.java:683) at com.android.build.gradle.BasePlugin$10.call(BasePlugin.java:608) at com.android.build.gradle.BasePlugin$10.call(BasePlugin.java:605) at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:156) at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:120) at com.android.build.gradle.BasePlugin.lambda$createTasks$1(BasePlugin.java:603) at org.gradle.internal.event.BroadcastDispatch$ActionInvocationHandler.dispatch(BroadcastDispatch.java:93) at org.gradle.internal.event.BroadcastDispatch$ActionInvocationHandler.dispatch(BroadcastDispatch.java:82) at org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:44) at org.gradle.internal.event.BroadcastDispatch.dispatch(BroadcastDispatch.java:79) at org.gradle.internal.event.BroadcastDispatch.dispatch(BroadcastDispatch.java:30) at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) at com.sun.proxy.$Proxy11.afterEvaluate(Unknown Source) at org.gradle.configuration.project.LifecycleProjectEvaluator.notifyAfterEvaluate(LifecycleProjectEvaluator.java:67) ... 44 more ``` .gradles -------- ### root ``` // Top-level build file where you can add configuration options common to all sub-projects/modules. 
buildscript { repositories { jcenter() } dependencies { classpath 'com.android.tools.build:gradle:2.2.3' // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files } } allprojects { repositories { mavenLocal() jcenter() maven { // All of React Native (JS, Obj-C sources, Android binaries) is installed from npm url "$rootDir/../node_modules/react-native/android" } } } ``` ### app ``` apply plugin: "com.android.application" import com.android.build.OutputFile project.ext.react = [ entryFile: "index.js" ] apply from: "../../node_modules/react-native/react.gradle" def enableSeparateBuildPerCPUArchitecture = false def enableProguardInReleaseBuilds = false android { compileSdkVersion 23 buildToolsVersion "23.0.1" defaultConfig { applicationId "com.crnatest" minSdkVersion 16 targetSdkVersion 22 versionCode 1 versionName "1.0" ndk { abiFilters "armeabi-v7a", "x86" } } splits { abi { reset() enable enableSeparateBuildPerCPUArchitecture universalApk false // If true, also generate a universal APK include "armeabi-v7a", "x86" } } buildTypes { release { minifyEnabled enableProguardInReleaseBuilds proguardFiles getDefaultProguardFile("proguard-android.txt"), "proguard-rules.pro" } } // applicationVariants are e.g. 
debug, release applicationVariants.all { variant -> variant.outputs.each { output -> // For each separate APK per architecture, set a unique version code as described here: // http://tools.android.com/tech-docs/new-build-system/user-guide/apk-splits def versionCodes = ["armeabi-v7a":1, "x86":2] def abi = output.getFilter(OutputFile.ABI) if (abi != null) { // null for the universal-debug, universal-release variants output.versionCodeOverride = versionCodes.get(abi) * 1048576 + defaultConfig.versionCode } } } } dependencies { compile fileTree(dir: "libs", include: ["*.jar"]) compile "com.android.support:appcompat-v7:23.0.1" compile "com.facebook.react:react-native:+" // From node_modules } // Run this once to be able to run the application with BUCK // puts all compile dependencies into folder libs for BUCK to use task copyDownloadableDepsToLibs(type: Copy) { from configurations.compile into 'libs' } ``` Environment ----------- * `npm ls react-native-scripts`: `(empty)` * `npm ls react-native`: `[email protected]` * `npm ls expo`: `(empty)` * `node -v`: `v8.3.0` * `npm -v`: `5.6.0` * `yarn --version`: `0.21.3` * `watchman version`: `"version": "4.7.0"` * Operating system: macOS 10.13.3 Reproducible Demo ----------------- <https://github.com/noelweichbrodt/crna-test><issue_comment>username_1: I had the exact same problem when building on Mac but not Windows (using Gradle `2.2.3` as well). The only thing I found that helped was to downgrade the Gradle version to `2.1.2`. Not 100% sure why this works, but it does. Hope this helps! Upvotes: 3 <issue_comment>username_2: I have no idea whether this will be helpful for someone or not, but my problem was resolved by adding the required variables (they were used in `build.gradle`) that were missing from .env. Upvotes: 0
2018/03/19
809
3,024
<issue_start>username_0: I know this has been asked before, but I have tried several of the posts and still cannot get it to work. I am being forced to use strict mode for URL redirects and no matter what I put for the domain, nothing works. ``` php <?php if(!session_id()){ session_start(); } // Include the autoloader provided in the SDK require_once __DIR__ . '/src/Facebook/autoload.php'; // Include required libraries use Facebook\Facebook; use Facebook\Exceptions\FacebookResponseException; use Facebook\Exceptions\FacebookSDKException; /* * Configuration and setup Facebook SDK */ $appId = '********'; //Facebook App ID $appSecret = '***************'; //Facebook App Secret $redirectURL = 'https://www.themathouse.com/'; //Callback URL $fbPermissions = array('email'); //Optional permissions $fb = new Facebook(array( 'app_id' => $appId, 'app_secret' => $appSecret, 'default_graph_version' => 'v2.2', )); // Get redirect login helper $helper = $fb->getRedirectLoginHelper(); // Try to get access token try { if(isset($_SESSION['facebook_access_token'])){ $accessToken = $_SESSION['facebook_access_token']; }else{ $accessToken = $helper->getAccessToken(); } } catch(FacebookResponseException $e) { echo 'Graph returned an error: ' . $e->getMessage(); exit; } catch(FacebookSDKException $e) { echo 'Facebook SDK returned an error: ' . $e->getMessage(); exit; } ?> ``` On the Facebook app I have themathouse.com as the app domain and <https://www.themathouse.com> as the Valid OAuth redirect URIs. When I try logging in with Facebook I get the following error: Graph returned an error: Can't Load URL: The domain of this URL isn't included in the app's domains. To be able to load this URL, add all domains and subdomains of your app to the App Domains field in your app settings.
Any help would be greatly appreciated.<issue_comment>username_1: Make sure that your redirect URL matches what is set in your App Settings, under **Facebook Login** -> **Settings** -> **Valid OAuth redirect URIs**. In this case, it seems to be `https://www.themathouse.com/` **EDIT:** Also, as it seems that you are using the PHP SDK, make sure that you are using the currently latest version, 5.6.2, as this one fixed an issue present on 5.6.1 and older that may affect you. Upvotes: 2 [selected_answer]<issue_comment>username_2: **THIS** worked for me!! (after fiddling with a bunch of stuff suggested on various forums, to no avail) I updated the 'FacebookRedirectLoginHelper.php' file: <https://github.com/facebook/php-graph-sdk/blob/5.x/src/Facebook/Helpers/FacebookRedirectLoginHelper.php> and voila! Pesky login error fixed :) (Oh, I also updated code for other recently changed files [within the last few months or so] in the Facebook PHP SDK, so you should do this as well: <https://github.com/facebook/php-graph-sdk>). Good luck! Upvotes: 1 <issue_comment>username_3: Update to the latest SDK; it will solve your problem. Upvotes: 0
2018/03/19
692
2,324
<issue_start>username_0: I am trying to install Tabula for Python, as it seems it is the way of extracting tables from PDFs. However, I am unable to install it. I am using Anaconda and have followed the steps on Tabula's Anaconda page (<https://anaconda.org/auto/tabula>) to attempt to install it: ``` conda install -c auto tabula ``` But I just get an error message: [link here](https://i.stack.imgur.com/NdtmX.png) As far as I'm aware, I have added the "auto" channel so it should be able to install it. But I guess I must be missing something. Any help much appreciated!<issue_comment>username_1: Since you are using Windows, and in the [link](https://anaconda.org/auto/tabula) you provide I only see Linux-64 and Linux-32, I think that installing Tabula with Conda can return errors. Activate your Conda environment and install Tabula using pip: ``` pip3 install tabula-py ``` **Note** As pointed out in a comment by chezou, the conda-forge way of installing Tabula seems not the best way to go if you want to keep it updated: > > Conda package is supported by someone else and it seems not maintained > well. > > As of Feb 24th, 2019, conda version is v1.1.1 while the latest > pypi package is 1.3.1. > > I would recommend installing via pip. > > > Upvotes: 3 [selected_answer]<issue_comment>username_2: Conda sources are limited to packages available in the channels you have set up. You will need to either: 1. Set up a channel in Conda that contains tabula. (I tried this with other packages but couldn't figure out a working method.) <https://conda.io/docs/user-guide/tasks/manage-environments.html> 2. Install tabula into your Anaconda environment from source. <https://docs.python.org/2/install/> 3. Use pip to install tabula in a Conda environment (you will need to install pip first). <https://github.com/ContinuumIO/anaconda-issues/issues/1429> Then: If tabula is on PyPI, this will probably work.
Upvotes: 0 <issue_comment>username_3: You could just try: ``` conda install -c conda-forge tabula-py ``` It works fine for me. Upvotes: 3 <issue_comment>username_4: <https://anaconda.org/conda-forge/tabula-py> To install this package with conda, run one of the following: ``` conda install -c conda-forge tabula-py conda install -c conda-forge/label/cf201901 tabula-py ``` Upvotes: 0
2018/03/19
495
1,645
<issue_start>username_0: Below are my Ansible tasks, which fetch the domain name and register the output to the item value so that I can use the variable across my playbook. ``` - name: Fetching the domain name shell: dnsdomainname | cut -d "." -f 1 register: domain_name - debug: msg: "DC detected {{domain_name}}" when: domain_name.stdout == item.key with_dict: {abc: 01, cde: 05} register: number == item.value ``` But it was throwing the error below: ``` fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'number' is undefined\n\n ``` Any help will be appreciated.
2018/03/19
429
1,444
<issue_start>username_0: Does anyone know if there is a way to have any webpage on our website to be defaulted to scale at 50% when someone tries printing from the browser? Or do I need to go in and create @print styles for printing? Everything looks perfect at 50% but for some reason scaled at 100% looks like the browser is trying to force a mobile view. Website is [VolleyballUSA.com](https://www.volleyballusa.com/professional-pole-pad/) Thanks in advance.
2018/03/19
808
2,475
<issue_start>username_0: I have this bash script that runs calculus operations for me. It starts off with "read"... How can I make Script A enter a value into Script B, read the output and dismiss Script B again? Example: ``` #ScriptA.sh a=12 b=4 [open Script B, enter $a and $b and read the result] echo "The result is [the result of Script B]." #ScriptB.sh read x y echo $(($x+$y)) ``` Desired Output: ``` bash ScriptA.sh The result is 16. ``` Of course it's about more complex maths, but you get the idea. Note that, for convenience purposes, I don't want to change the structure of Script B (read x y). But I hope that there are some guys here who can solve this problem. Thanks in advance.<issue_comment>username_1: You should do something like this: ``` #!/bin/bash a=12 b=4 result=$(echo $a $b | ./script_b.sh) echo "the result is $result" ``` Upvotes: 2 <issue_comment>username_2: Script B should work like `bc` does. Example: ``` echo `echo "4^3.5" | bash ScriptB.sh` [result] ``` Edit: I just came up with a part of the solution by myself and thought I'd share it: ``` # ScriptA.sh echo `echo "44 33" | bash ScriptB.sh` # ScriptB.sh read x y echo $(($x+$y)) ``` Output: ``` bash ScriptA.sh 77 ``` The next problem is that my ScriptB.sh looks a little more like this: ``` # ScriptB.sh until [[ 1 = 2 ]]; do echo Enter x and y read x y if [[ x = q ]]; then break 1 fi echo $(($x+$y)) done ``` This is in order to allow multiple inputs if I want to use ScriptB manually. If I let ScriptA use ScriptB in the above-mentioned way, the output looks like this: ``` bash ScriptA.sh b.sh: line 9: +: syntax error: operand expected (error token is "+") Enter x and y 77 Enter x and y ``` It seems that after ScriptA inputs 44 and 33 and hits enter, as it should, it hits enter again right away, triggering the syntax error message and ending ScriptB.
This is suboptimal, because in the case of the real ScriptB it will enter a "(standard\_in) 1: parse error" chain, resulting in no result at all. The solution to this problem would be to teach ScriptA to read what ScriptB prompts as the result and end it right after that, or to make it enter "q" as a second input instead of just hitting enter. Edit 2: Ok. Got it. Script A should look like this in order to work as desired: ``` e=2.7182818285 pi=3.141 a=$(printf "$e $pi \n q \n" | bash ScriptB.sh) a=${a:14:20} echo $a ``` Upvotes: 1 [selected_answer]
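The pipe-into-a-reading-script pattern used in this thread can be sketched with Python's `subprocess` module. A small inline Python child is used here as a stand-in for ScriptB (since the actual shell script is not available), so only the piping mechanics are demonstrated:

```python
import subprocess
import sys

# A stand-in child that behaves like ScriptB: read "x y", print x+y.
child_code = "import sys; x, y = sys.stdin.readline().split(); print(int(x) + int(y))"

# Equivalent of:  result=$(echo "44 33" | bash ScriptB.sh)
proc = subprocess.run(
    [sys.executable, "-c", child_code],
    input="44 33\n",   # what the child's read of stdin consumes
    capture_output=True,
    text=True,
)
result = proc.stdout.strip()
print(f"The result is {result}.")  # The result is 77.
```

The `input=` argument plays the role of `echo "44 33" |`, and capturing stdout plays the role of the `$( ... )` command substitution.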
2018/03/19
752
2,335
<issue_start>username_0: I have datasets which measure voltage values in a certain column. I'm looking for an elegant way to extract the rows that deviate from the mean value. There are a couple of groups in "volt\_id" and I'd like to have each group compute its own mean/std and use them to decide which rows deviate within each group. For example, I have the original dataset below. ``` time volt_id value 0 14 A 300.00 1 15 A 310.00 2 15 B 200.00 3 16 B 210.00 4 17 B 300.00 5 14 C 100.00 6 16 C 110.00 7 20 C 200.00 ``` After the algorithm runs, I'd only keep rows 4 and 7, which deviate strongly from their groups, as below. ``` time volt_id value 4 17 B 300.00 7 20 C 200.00 ``` I could do this if there were only a single group, but my code would be messy and lengthy doing this for multiple groups. I'd appreciate it if there's a simpler way to do this. Thanks,<issue_comment>username_1: You can compute and filter on the [zscore](https://en.wikipedia.org/wiki/Standard_score) of each `group` using `groupby`. Assuming you want only those rows which are 1 or more standard deviations away from the mean, ``` g = df.groupby('volt_id').value v = (df.value - g.transform('mean')) / g.transform('std') df[v.abs().ge(1)] time volt_id value 4 17 B 300.0 7 20 C 200.0 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: One way to do this would be using outliers: <http://www.mathwords.com/o/outlier.htm> You would need to define your interquartile range and first and third quartiles. You could then filter your data on a simple comparison. Quartiles are not the only way to determine outliers, however.
Here's a discussion comparing standard deviation and quartiles for locating outliers: <https://stats.stackexchange.com/questions/175999/determine-outliers-using-iqr-or-standard-deviation> Upvotes: 0 <issue_comment>username_3: Similar to @COLDSPEED's solution: ``` In [179]: from scipy.stats import zscore In [180]: df.loc[df.groupby('volt_id')['value'].transform(zscore) > 1] Out[180]: time volt_id value 4 17 B 300.0 7 20 C 200.0 ``` Upvotes: 1
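The group-wise z-score filter from the accepted answer can be sketched in plain Python on the question's own data, with no pandas dependency. It uses the sample standard deviation (pandas' default), and keeps rows at least one standard deviation from their group mean:

```python
from collections import defaultdict
from statistics import mean, stdev

rows = [
    (14, "A", 300.0), (15, "A", 310.0),
    (15, "B", 200.0), (16, "B", 210.0), (17, "B", 300.0),
    (14, "C", 100.0), (16, "C", 110.0), (20, "C", 200.0),
]

# Group values per volt_id, as groupby('volt_id') would.
groups = defaultdict(list)
for _, vid, value in rows:
    groups[vid].append(value)

# Per-group mean and sample standard deviation.
stats = {vid: (mean(vs), stdev(vs)) for vid, vs in groups.items()}

# Keep rows whose |z-score| within their group is >= 1.
kept = [r for r in rows
        if abs(r[2] - stats[r[1]][0]) / stats[r[1]][1] >= 1]
print(kept)  # [(17, 'B', 300.0), (20, 'C', 200.0)]
```

This reproduces the two rows the question wants to keep: the large values in groups B and C pass the threshold, while every value in group A sits well within one standard deviation of its group mean.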
2018/03/19
1,120
4,973
<issue_start>username_0: I know this has been asked 100 times already but none of the solutions seem to be working for me. I want to read the database of "user\_preferences" for the user that is signed in (userID) and read the gender/age/weight/height values and store them in the variables shown below. Currently it returns null on everything (the log statement and the values). I feel like I haven't got the path set up properly or something. Help would be great! ![db](https://gyazo.com/c0120b360355ff4e25bdb0d3eefe17c0.png) and my code ``` mAuth = FirebaseAuth.getInstance(); mFirebaseDatabase = FirebaseDatabase.getInstance(); myRef = mFirebaseDatabase.getReference(); userID = mAuth.getCurrentUser().getUid(); DatabaseReference testRef = myRef.child("user_preferences"); testRef.addValueEventListener(new ValueEventListener() { @Override public void onDataChange(DataSnapshot dataSnapshot) { for (DataSnapshot ds : dataSnapshot.getChildren()) { //loop through the firebase nodes UserPreferences userPreferences = new UserPreferences(); userPreferences.setAge(ds.child(userID).getValue(UserPreferences.class).getAge()); userPreferences.setHeight(ds.child(userID).getValue(UserPreferences.class).getHeight()); userPreferences.setWeight(ds.child(userID).getValue(UserPreferences.class).getWeight()); userPreferences.setGender(ds.child(userID).getValue(UserPreferences.class).getGender()); genderSet = userPreferences.getGender(); age = userPreferences.getAge(); height = userPreferences.getHeight(); weight = userPreferences.getWeight(); Log.d(TAG, "onDataChange: " + genderSet); // } } @Override public void onCancelled(DatabaseError databaseError) { } }); } ```
If you want to query just a single user, you should be specific about that in your query by adding the userID that you want to the query location: ``` DatabaseReference testRef = myRef.child("user_preferences").child(userID); testRef.addValueEventListener(...) ``` Also, these lines of code are confusing to me: ``` userPreferences.setAge(ds.child(userID).getValue(UserPreferences.class).getAge()); userPreferences.setHeight(ds.child(userID).getValue(UserPreferences.class).getHeight()); userPreferences.setWeight(ds.child(userID).getValue(UserPreferences.class).getWeight()); userPreferences.setGender(ds.child(userID).getValue(UserPreferences.class).getGender()); ``` You're deserializing a UserPreferences object for each and every field you want to populate, which is wasteful. It seems to me that you really just want to deserialize it once and remember the object: ``` UserPreferences userPreferences = dataSnapshot.getValue(UserPreferences.class); ``` Upvotes: 2 <issue_comment>username_2: Regarding the null values, you seem to be using external fields, which will not be set until the Firebase returns the network call after at least a second. Your values will be null in the meantime, so you should not be setting them onto a UI element outside of `onDataChange`. Also, you have a lot of gets/sets going on, when you only need to call one `getValue()` for the class, then additional ones for the fields. Then, you don't seem to want to loop over anything, so you should directly access the user node from the top reference. 
For example, ``` mAuth = FirebaseAuth.getInstance(); mFirebaseDatabase = FirebaseDatabase.getInstance(); myRef = mFirebaseDatabase.getReference(); userID = mAuth.getCurrentUser().getUid(); DatabaseReference testRef = myRef.child("user_preferences/"+userID); // or .child("user_preferences").child(userID) testRef.addValueEventListener(new ValueEventListener() { @Override public void onDataChange(DataSnapshot dataSnapshot) { UserPreferences userPreferences = dataSnapshot.getValue(UserPreferences.class); Log.d(TAG, "onDataChange: " + userPreferences.getGender()); // TODO: Update some UI element here } @Override public void onCancelled(DatabaseError databaseError) { // TODO: Add error handling } }); } ``` If you only want to read the values once, use `testRef.addListenerForSingleValueEvent()` Upvotes: 2 [selected_answer]
2018/03/19
746
2,867
<issue_start>username_0: I am trying to write a function to search for a specific date, entered as a parameter, in a range of cells in an adjacent worksheet. On finding the date, the function should return a string, "found: " and the cell reference. All seems to be working well enough, but the function returns 'nothing' even when there is a (deliberately entered) date, in date format, both in the cell range and the cell referred to when the function is called. Have I missed something critical when calling find when using a Date? A note, the function looks in the same row that it is called from, in the other sheet. This may help explain how i'm setting rng ``` Public Function d_scan(targ As Date) As String Dim ws As Worksheet Dim targetSheet As Worksheet Dim ret As String Dim rng As String Dim scanner As Date Dim found As Range Set targetSheet = ThisWorkbook.Worksheets("2018") Set ws = Application.Caller.Worksheet Let intRow = Application.Caller.Row Let intCol = Application.Caller.Column Let rng = "F" & intRow & ":" & "X" & intRow Set found = targetSheet.Range(rng).Find(What:=targ, LookAt:=xlWhole) If found Is Nothing Then Let ret = "nothing" Else Let ret = "found: " & found End If d_scan = ret End Function ```<issue_comment>username_1: I think you are comparing day/hour/minute/second with day/hour/minute/second and getting no matches (everything's too specific). I used this to massage targ into "today" at 12:00 AM, but you would need to do something to massage the data on the sheet like this as well for the range.find to work. ``` targ = Application.WorksheetFunction.Floor(targ, 1) ``` I suggest using a method other than range.find... Looping perhaps, looking for a difference between targ and the cell that's less than 1? Upvotes: 1 <issue_comment>username_2: date issues are quite subtle and their solution may depend on the actual scenario (what variable type is used, what data format is used in the sheet,...) 
for a start, you may want: * specify all relevant `Find()` method parameters, since undefined ones will be implicitly assumed as per its last usage (even from Excel UI!) * convert `Date` to `String` via the `CStr()` function so, you may want to try this code: ``` Option Explicit Public Function d_scan(targ As Date) As String Dim rng As String Dim found As Range Dim intRow As Long intRow = Application.Caller.Row rng = "F" & intRow & ":" & "X" & intRow Set found = ThisWorkbook.Worksheets("2018").Range(rng).Find(What:=CStr(targ), LookAt:=xlWhole, LookIn:=xlValues) ' specify 'LookIn' parameter, too If found Is Nothing Then d_scan = "nothing" Else d_scan = "found: " & found End If End Function ``` Upvotes: 3 [selected_answer]
2018/03/19
1,758
5,505
<issue_start>username_0: Whenever I try to insert values in form it gives error... > > com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'regno' in 'field list'com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'regno' in 'field list' > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at com.mysql.jdbc.Util.handleNewInstance(Util.java:404) > at com.mysql.jdbc.Util.getInstance(Util.java:387) > at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:939) > at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3878) > at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3814) > at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2478) > at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2625) > at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2547) > at com.mysql.jdbc.StatementImpl.executeUpdateInternal(StatementImpl.java:1541) > at com.mysql.jdbc.StatementImpl.executeLargeUpdate(StatementImpl.java:2605) > at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1469) > at org.apache.jsp.insertRegister\_jsp.\_jspService(insertRegister\_jsp.java:96) > at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:728) > at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432) > at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390) > at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:728) > at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) > at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) > at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) > at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) > at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) > at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) > at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) > at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) > at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171) > at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100) > at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953) > at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) > at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408) > at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1041) > at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:603) > at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > > > My JSP file registerVehicles.jsp is ... 
``` <%@page contentType="text/html" pageEncoding="UTF-8"%> JSP Example | Register New Vehicle | | --- | | Vehicle Registration Number | | | Manufacturer | | || Model | | | Manufactured Date | | | Fuel Type | | | | ``` insertRegister.jsp ``` <%@ page import ="java.sql.*" %> <%@page import="java.io.*, java.util.*,java.text.*"%> <% String vrn=request.getParameter("vrn"); String maker=request.getParameter("maker"); String model=request.getParameter("model"); String mfd=request.getParameter("mfd"); String ft=request.getParameter("ft"); java.util.Date date = Calendar.getInstance().getTime(); DateFormat dateFormat = new SimpleDateFormat("dd-mm-yyyy "); String currDate = dateFormat.format(date); Calendar c = Calendar.getInstance(); c.setTime(new java.util.Date()); // Now use today date. c.add(Calendar.DATE, +90); String validDate = dateFormat.format(c.getTime()); try { Class.forName("com.mysql.jdbc.Driver"); Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/puc", "root", "root"); Statement st=conn.createStatement(); int i=st.executeUpdate("insert into login(regno,maker,model,made_year,fuel_type,curr_date,vaalid_to)"+"values('"+vrn+"','"+maker+"','"+model+"','"+mfd+"','"+ft+"','"+currDate+"','"+ validDate +"')"); out.println("Data is successfully inserted!"); } catch(Exception e) { System.out.print(e); e.printStackTrace(); } %> ``` Can someone help me in finding the reason for this exception?<issue_comment>username_1: The error message `Unknown column 'regno' in 'field list'` indicates that that column does not exist in the table `login`. Either you've typed the column name wrong or, got your table's schema wrong or it simply doesn't exist. Upvotes: 2 <issue_comment>username_2: The error is clearly visible. your table named "login" has no column named "regno". Upvotes: 1 [selected_answer]
2018/03/19
508
1,522
<issue_start>username_0: I would like, from a pair of lists, say: ``` a = ['str1', 'str2', 'str3', 'str4', 'str5'] b = [var1, var2, var3, var4, var5] ``` to be able to create multiple dictionaries, with arbitrarily selected pairs of elements from a (as key) and b (as value). I have found the following method in a comment on an answer to this [question](https://stackoverflow.com/questions/5844672/delete-an-item-from-a-dictionary): ``` full_set = dict(zip(a,b)) subset = {i:full_set[i] for i in a if i not in ['str2', 'str5']} ``` which produces the subset dictionary: ``` {'str1': var1, 'str3': var3, 'str4': var4} ``` This works fine, but I am curious as to whether there is an equally short or shorter method of building subset dictionaries from the two lists, e.g. via specifying the list indices for the elements I want included, without creating a full dictionary containing all elements first. For context, my vars refer to scikit-learn estimator objects.<issue_comment>username_1: You can combine both statements into one: ``` avoid = {'str2', 'str5'} # Sets have better lookup time :) {k:v for k,v in zip(a,b) if k not in avoid} ``` Upvotes: 2 <issue_comment>username_2: Probably convert this to numpy arrays and use fancy indexing like this: ``` import numpy as np a = np.array(['str1', 'str2', 'str3', 'str4', 'str5']) b = np.array([1,2,3,4,5]) indices = [0, 1, 4] d = dict(zip(a[indices],b[indices])) ``` d returns: ``` {'str1': 1, 'str2': 2, 'str5': 5} ``` Upvotes: 1 [selected_answer]
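For reference, the index-based selection from the NumPy answer can also be done in plain Python, without building the full dictionary first — a minimal sketch using `operator.itemgetter` with the question's example data (caveat: `itemgetter` returns a single value rather than a tuple when given only one index):

```python
from operator import itemgetter

a = ['str1', 'str2', 'str3', 'str4', 'str5']
b = [1, 2, 3, 4, 5]
indices = [0, 1, 4]

pick = itemgetter(*indices)           # extracts the chosen positions from any sequence
subset = dict(zip(pick(a), pick(b)))  # pairs only the selected elements
print(subset)  # {'str1': 1, 'str2': 2, 'str5': 5}
```

This keeps the "select by index" spirit of the NumPy answer while staying dependency-free.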
2018/03/19
529
1,818
<issue_start>username_0: I need to make a page in SuiteCRM (v7.9 -- based loosely on Sugar 6.5 CE) that has a list of objects (of a custom module), with checkboxes in front of each one. So far, so good: that's a standard ListView. The catch is that only *some* records should be in the list (filtering on whether there is an associated row in a related custom module/object). This page needs to be distinct from the "regular" list for this module, which should indeed list all records. It seems to me it makes sense to use a custom "action" to access this page view, and I can get my custom action code to fire with the right URL. But I don't see how to hook in the filtering. At first, it looked like the process\_record logic hook might be helpful here, but it just gives the bean for every record to be displayed. Unless there's a flag "display this record" that I'm not seeing, that's not so helpful. Ideally, of course, I'd like to be able to inject a different WHERE clause in my custom controller action before calling ``` parent::action_listview(); ``` to display the page, but I'm not seeing doc to indicate how that might work. I would include source code, but so far, the line above is everything (but boilerplate) that's in the `controller.php` file.
2018/03/19
422
1,352
<issue_start>username_0: I have a sequelize model that uses mysql functions to create guid such as: ``` guid: { type: DataTypes.STRING.BINARY, defaultValue: sequelize.fn('UuidToBin', sequelize.fn('uuid')), primaryKey: true }, inProcess: DataTypes.BOOLEAN, ... ``` I successfully create new records including a binary guid using the create method, ``` MessagesDBModel.create(messageObj) .then((savedMessage) => res.send(200, { status: 200, message: "OK", guid: savedMessage.guid.toString('hex') //outputs [object Object] ``` But, the value of guid cannot be retrieved from the savedMessage object. When I set a breakpoint, savedMessage.dataValues.guid = Fn. How can I access the inserted value of guid instead of the function that created it?
2018/03/19
1,226
3,854
<issue_start>username_0: Earlier I was shown how to achieve a certain layout I wanted to have for my page. However, this messes with my image height. As far as I understand, `height: auto;` should set the height to the right proportion when a certain `width` is set. Here's my code: ``` .floatingImage { width: 50px; height: auto; } #floatingImageContainer { background-color: red; width: 75%; height: 100%; margin: auto; display: flex; flex-wrap: wrap; justify-content: center; } ![](images\miniImages\1.jpg) ![](images\miniImages\2.jpg) ![](images\miniImages\3.jpg) ![](images\miniImages\4.jpg) ![](images\miniImages\5.jpg) ![](images\miniImages\6.jpg) ``` My guess is that it's got to do with the `display` property or maybe the `flex-wrap`, but that was the solution for my last problem and I'm not entirely sure yet how it could affect my image height... I haven't added a margin in the screenshot; however, that wouldn't change the height. Here's a screenshot of the issue: [what the hell](https://i.stack.imgur.com/EMAjB.png) Thank you in advance! New problem: [![I don't know anymore...](https://i.stack.imgur.com/09jTk.jpg)](https://i.stack.imgur.com/09jTk.jpg)<issue_comment>username_1: Use > > align-items: center > > > on the container to prevent it from stretching its children. ```css .floatingImage { width: 50px; height: auto; } #floatingImageContainer { background-color: red; width: 75%; height: 100%; margin: auto; display: flex; flex-wrap: wrap; justify-content: center; align-items: center; } ``` ```html ![](http://lorempixel.com/g/400/200/) ![](http://lorempixel.com/g/400/200/) ![](http://lorempixel.com/g/400/200/) ![](http://lorempixel.com/g/400/200/) ![](http://lorempixel.com/g/400/200/) ![](http://lorempixel.com/g/400/200/) ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: You need to add `align-items` with an appropriate setting to the container for the `auto` height to work, for example `align-items: flex-start;`. 
Otherwise the items will be stretched to the full height of the container by default: ```css html, body { margin: 0; height: 100%; } .floatingImage { width: 50px; height: auto; } #floatingImageContainer { background-color: red; width: 75%; height: 100%; margin: auto; display: flex; flex-wrap: wrap; justify-content: center; align-items: flex-start; } ``` ```html ![](http://placehold.it/300x500) ![](http://placehold.it/300x200) ![](http://placehold.it/200x200) ![](http://placehold.it/300x400) ![](http://placehold.it/400x200) ![](http://placehold.it/300x350) ``` BTW, as mentioned in the comments: Closing tags are invalid HTML - erase them... Upvotes: 1 <issue_comment>username_3: Instead of using `display: flex` to center the images horizontally, which I am guessing was your last issue to fix, just use `display: block` and `text-align: center`. This will not mess with the images. Also, I recommend setting image widths inside the actual `![]()` tag; setting the width property inside the tag with no height does exactly the same as using CSS `height: auto`. I recommend this because, for one, it helps arbitrary user agents (e.g. speech browsers) by relaying the correct aspect ratio to them. These will not normally be able to read the CSS. In addition, this allows the browser to size the images even before the CSS and image resources are loaded. Not supplying a width attribute in the image tag will cause the browser to render the image as 0x0 until it can compute the size. Here is a working example: ```css #floatingImageContainer { background-color: red; width: 75%; height: 100%; display: block; text-align: center } ``` ```html ![](http://lorempixel.com/g/400/200/) ``` Upvotes: -1
2018/03/19
460
1,646
<issue_start>username_0: Is there any library other than `HyperOpt` that supports multiprocessing for a hyper-parameter search? I know that `HyperOpt` can be configured to use `MongoDB`, but it seems like it is easy to get it wrong and spend a week in the weeds. Is there anything more popular and effective?<issue_comment>username_1: Check out Ray Tune! You can use it for multiprocessing and multi-machine executions of random search, grid search, and evolutionary methods. It also has implementations of popular algorithms such as HyperBand. Here's the docs page - [ray.readthedocs.io/en/latest/tune.html](http://ray.readthedocs.io/en/latest/tune.html) As an example to run 4 parallel experiments at a time: ``` import ray import ray.tune as tune def my_func(config, reporter): # add the reporter parameter import time, numpy as np i = 0 while True: reporter(timesteps_total=i, mean_accuracy=i ** config["alpha"]) i += 1 time.sleep(.01) tune.register_trainable("my_func", my_func) ray.init(num_cpus=4) tune.run_experiments({ "my_experiment": { "run": "my_func", "stop": { "mean_accuracy": 100 }, "config": { "alpha": tune.grid_search([0.2, 0.4, 0.6]), "beta": tune.grid_search([1, 2]) } } }) ``` Disclaimer: I work on this project - let me know if you have any feedback! Upvotes: 2 <issue_comment>username_2: Some models (say, RandomForest) have an `n_jobs` parameter to control the number of cores used. You can try `n_jobs=-1`; thus even if hyperopt uses 1 core, each trial would use all the cores, speeding up the process. Upvotes: 1
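If the extra dependencies (Ray, or MongoDB-backed HyperOpt) are the sticking point, a plain random search can also be parallelised with nothing but the standard library's `multiprocessing.Pool`. The sketch below is illustrative only — `objective` is a stand-in quadratic rather than a real model fit, and all function and parameter names are invented for the example:

```python
import random
from multiprocessing import Pool

def objective(params):
    """Stand-in for a real train/validate run; returns (score, params)."""
    alpha, beta = params
    return -(alpha - 0.3) ** 2 - (beta - 1.5) ** 2, params

def sample(rng):
    """Draw one random hyper-parameter candidate."""
    return rng.uniform(0.0, 1.0), rng.uniform(0.0, 3.0)

def random_search(n_trials=32, workers=4, seed=0):
    rng = random.Random(seed)
    candidates = [sample(rng) for _ in range(n_trials)]
    with Pool(processes=workers) as pool:  # evaluate trials in parallel
        results = pool.map(objective, candidates)
    return max(results)                    # tuples compare on score first

if __name__ == "__main__":
    best_score, best_params = random_search()
    print(best_params)
```

Frameworks like Ray Tune or HyperBand schedulers add early stopping and distributed execution on top of this basic pattern; the sketch only covers single-machine parallelism.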
2018/03/19
2,515
9,044
<issue_start>username_0: In an MVP project (Activity, Presenter and Model), the Activity has a Presenter and the Presenter has a Model. When I do @Inject in the Presenter to instantiate the Model, it is never instantiated. Do you need a dependency "cascade"? > > FATAL EXCEPTION: main Process: fipedaggerrxjava, PID: 22258 > java.lang.NullPointerException: Attempt to invoke interface method > 'void > fipedaggerrxjava.mvp.SelectMarcaContractMVP$Model.getMarcas(java.lang.String)' > on a null object reference at > fipedaggerrxjava.module.marca.MarcaPresenter.initData(MarcaPresenter.java:35) > at > fipedaggerrxjava.module.marca.MarcaActivity$1.onCheckedChanged(MarcaActivity.java:63) > > > I already checked in debug, and the Model really is not being instantiated by Dagger, but I cannot understand why. **App** ``` public class App extends Application implements HasActivityInjector{ @Inject public DispatchingAndroidInjector activityDispatchingAndroidInjector; @Override public void onCreate() { super.onCreate(); DaggerAppComponent.builder().build().inject(App.this); } @Override public AndroidInjector activityInjector() { return activityDispatchingAndroidInjector; } } ``` **ActivityBuilder** ``` @Module public abstract class ActivityBuilder { @Binds @IntoMap @ActivityKey(MarcaActivity.class) abstract AndroidInjector.Factory extends Activity bindMarcaActivity (MarcaComponent.Builder builder); } ``` **AppComponent** ``` @Component(modules = {ActivityBuilder.class, AndroidInjectionModule.class, AppModule.class}) @Singleton public interface AppComponent { void inject(App app); } ``` **AppModule** ``` @Module(subcomponents = MarcaComponent.class) public class AppModule { @Provides @Singleton @Named("URL_MARCA") String provideStringURLBase(){ return "https://fipe.parallelum.com.br/api/v1/"; } @Provides @Singleton Context provideContext(App app){ return app; } @Provides @Singleton Gson provideGsonRepositorie(){ return new GsonBuilder() .create(); } @Singleton @Provides 
OkHttpClient provideOkHttpCliente1(){ return new OkHttpClient.Builder() .connectTimeout(20, TimeUnit.SECONDS) .readTimeout(20, TimeUnit.SECONDS) .build(); } @Singleton @Provides RxJavaCallAdapterFactory provideRxJavaCallAdapterFactory(){ return RxJavaCallAdapterFactory.create(); } @Provides @Singleton Retrofit provideRetrofit(OkHttpClient okHttpClient, Gson gson, RxJavaCallAdapterFactory rxAdapter, @Named("URL_MARCA") String stringBaseURL){ return new Retrofit.Builder() .baseUrl(stringBaseURL) .addConverterFactory(GsonConverterFactory.create(gson)) .addCallAdapterFactory(rxAdapter) .client(okHttpClient) .build(); } } ``` **MarcaComponent** ``` @Subcomponent(modules = MarcaModule.class) @PerMarca public interface MarcaComponent extends AndroidInjector{ @Subcomponent.Builder abstract class Builder extends AndroidInjector.Builder {} } ``` **MarcaModule** ``` @Module public class MarcaModule{ @Provides @PerMarca APIFIPE provideAPIFIPE(Retrofit retrofit){ return retrofit.create(APIFIPE.class); } @Provides @PerMarca View provideViewMarca(MarcaActivity activity){ return activity; } @Provides @PerMarca Presenter providePresenterMarca(){ return new MarcaPresenter(); } @Provides @PerMarca Model provideModelMarca(){ return new MarcaModel(); } } ``` **AdapterMarca** ``` public class AdapterMarca extends BaseAdapter { private List mListMarca; @Inject public Context mContext; public AdapterMarca(List listMarca){ this.mListMarca = listMarca; } @Override public int getCount() { return mListMarca.size(); } @Override public Object getItem(int position) { return mListMarca.get(position); } @Override public long getItemId(int position) { return position; } @Override public View getView(int position, View convertView, ViewGroup parent) { View view = LayoutInflater.from(mContext).inflate(R.layout.layout\_list\_item, parent, false); TextView tvNome = view.findViewById(R.id.tv\_marca); tvNome.setText(mListMarca.get(position).getName().toString()); return view; } public void 
addListMarca(List marcaList){ mListMarca.clear(); mListMarca.addAll(marcaList); notifyDataSetChanged(); } ``` } **MarcaActivity** ``` public class MarcaActivity extends BaseActivity implements HasActivityInjector, View { private RadioGroup radioGroupMarca; private String tipoSelect = ""; private List mListMarca; private AdapterMarca mAdapterMarca; private ListView listViewMarca; @Inject public Presenter mMarcaPresenter; @Inject protected DispatchingAndroidInjector activityDispatchingAndroidInjector; @Override protected void onCreate(Bundle savedInstanceState) { AndroidInjection.inject(MarcaActivity.this); super.onCreate(savedInstanceState); setContentView(R.layout.activity\_main); listViewMarca = findViewById(R.id.lv\_marca); radioGroupMarca = findViewById(R.id.rg\_tipo); radioGroupMarca.setOnCheckedChangeListener(new RadioGroup.OnCheckedChangeListener() { @Override public void onCheckedChanged(RadioGroup group, int checkedId) { int id = group.getCheckedRadioButtonId(); switch (id){ case R.id.rb\_carros : tipoSelect = "carros"; mMarcaPresenter.initData(tipoSelect); break; case R.id.rb\_motos : tipoSelect = "motos"; mMarcaPresenter.initData(tipoSelect); break; case R.id.rb\_caminhoes : tipoSelect = "caminhoes"; mMarcaPresenter.initData(tipoSelect); break; } } }); } @Override public AndroidInjector activityInjector() { return activityDispatchingAndroidInjector; } @Override public void onMarcaLoader(List listMarcas) { if(mListMarca==null && listMarcas!=null){ initListView(); } if(mAdapterMarca!=null){ mListMarca.clear(); mListMarca = listMarcas; mAdapterMarca.addListMarca(mListMarca); } } private void initListView(){ mAdapterMarca = new AdapterMarca(mListMarca); listViewMarca.setAdapter(mAdapterMarca); } } ``` **MarcaPresenter** ``` @PerMarca public class MarcaPresenter implements Presenter { @Inject View mMarcaView; @Inject Model mMarcaModel; @Inject public MarcaPresenter(){ } @Override public void initData(String tipoMarca) { mMarcaModel.getMarcas(tipoMarca); } 
@Override public void getMarcas(List listMarcas) { mMarcaView.onMarcaLoader(listMarcas); } @Override public void onShowDialog(String title, String msg) { mMarcaView.onShowDialog(title, msg); } @Override public void onHideShowDialog() { mMarcaView.onHideShowDialog(); } @Override public void onShowToast(String s) { mMarcaView.onShowToast(s); } } ``` **MarcaModel** ``` @PerMarca public class MarcaModel implements Model { @Inject APIFIPE mApiFIPE; @Inject Presenter mMarcaPresenter; @Inject public MarcaModel(){ } @Override public void getMarcas(String tipoVeiculo) { final List marcaList = new ArrayList<>(); Observable> observable = mApiFIPE.getRepositories(tipoVeiculo); observable.subscribe(new Observer>() { @Override public void onCompleted() { mMarcaPresenter.getMarcas(marcaList); } @Override public void onError(Throwable e) { mMarcaPresenter.onShowDialog("Erro", "Falha ao carregar lista de marcas"); } @Override public void onNext(List marcas) { marcaList.addAll(marcas); } }); } } ```
2018/03/19
450
1,809
<issue_start>username_0: I'm developing a new Angular based application, with a login form based on email and password. It has a form for these fields, defined by the following lines: ``` let formConfig = { email: ['', [Validators.required, Validators.email]], password: ['', [Validators.required]], }; this.form = this.formBuilder.group(formConfig) ``` It works just as expected, except for one situation: when the user uses Samsung's (and others') autocomplete to fill the email field, it inserts an empty space after the email, and `Validators.email` assumes that it's an invalid email. My question is: how can I solve this particular situation? I'm pretty sure that I could just use some *existing email validation regex*, but I hate to reinvent the wheel; if the validator exists, creating another one seems crazy. Is it possible to implement some kind of validator that modifies my form control value, stripping out whitespace?<issue_comment>username_1: I don't see why you don't just regex it out using regular javascript before you do other things with it. You could match on the end of the string using the regex / $/; I would use /\s{1,}$/ so you catch one or more "whitespace" characters. There are email validators out there, but of course you'd need to get rid of the whitespace at the end first. I think teaching regex would be out of scope, but that is how I'd tackle the stray input from autocomplete. Another suggestion is turning off autocomplete so you don't get stray stuff, but that would just annoy the user (it annoys me). Upvotes: 3 [selected_answer]<issue_comment>username_2: I experienced the same. For everyone still encountering this issue: adding `type="email"` to your input in combination with `Validators.email` should do the trick. Upvotes: 3
2018/03/19
1,522
5,473
<issue_start>username_0: I'm trying to find the longest palindrome. I have two pointers starting at the first letter of the string. For each letter in the outer loop, I go through all the other letters in the inner loop and use the substring which is the difference between the starting letter(outer loop) and the ending letter(inner loop). I reverse this substring and check if the reversed version is the same as the original version. With that, I know I have found a palindrome. This algorithm is working for most of my test cases except one, and I can't figure out why. ```js function longestPalindrome (str) { const string = str.toLowerCase(); if (str.length < 2) return null; let palindrome = ''; function stringReverser (start, end) { const reversed = string.substr(start, end).split('').reverse().join(''); return reversed; } for (let i = 0; i <= string.length; i++) { for (let j = i; j <= string.length; j++) { if (string.substr(i, j) === stringReverser(i, j)) { if (string.substr(i,j).length > palindrome.length) { palindrome = string.substr(i,j); } } } } if (!palindrome) return null; return palindrome; } let result1 = longestPalindrome('My mom is called annnna') let result2 = longestPalindrome('My dad is a racecar athelete') let result3 = longestPalindrome('That trip with a kayak was quite an adventure!') console.log(result1) console.log(result2) console.log(result3)// should return ' kayak ' but returns 't t' instead. ```<issue_comment>username_1: The mistake in the original implementation is that `substr` arguments are `(begin, length)`, where the original code appears to have a mistaken assumption about the meaning of the second argument. 
See <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/substr> Here is a small change to your example code, which has more correct output: ```js function longestPalindrome (str) { const string = str.toLowerCase(); if (str.length < 2) return null; let palindrome = ''; function stringReverser (start, length) { const reversed = string.substr(start, length).split('').reverse().join(''); return reversed; } for (let i = 0; i < string.length; i++) { for (let j = 1; j <= string.length - i; j++) { if (string.substr(i, j) === stringReverser(i, j)) { if (j > palindrome.length) { palindrome = string.substr(i,j); } } } } if (!palindrome) return null; return palindrome; } let result1 = longestPalindrome('My mom is called annnna') let result2 = longestPalindrome('My dad is a racecar athelete') let result3 = longestPalindrome('That trip with a kayak was quite an adventure!') console.log(result1) console.log(result2) console.log(result3) // now correctly returns ' kayak '
``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I suggest abstracting out the selected substring, and also using an `isPalendrome` function: ``` function longestPalindrome (str) { const inputString = str.toLowerCase(); if (str.length < 2) return null; let longestPalindrome = ''; function isPalendrome(strParam) { return strParam === strParam.split('').reverse().join(''); } for (let i = 0; i <= inputString.length; i++) { for (let j = i; j <= inputString.length; j++) { const thisStr = inputString.slice(i, j); if (!isPalendrome(thisStr)) continue; if (thisStr.length > longestPalindrome.length) longestPalindrome = thisStr; } } return longestPalindrome || null; } let result1 = longestPalindrome('My mom is called annnna') let result2 = longestPalindrome('My dad is a racecar athelete') let result3 = longestPalindrome('That trip with a kayak was quite an adventure!') console.log(result1) console.log(result2) console.log(result3) ``` Upvotes: 2 <issue_comment>username_3: Here is a solution using array helpers. The palindrome helper function checks if a given word is a palindrome. The filter looks for those that return true and then uses reduce to find the longest palindrome. ``` function findLongest(str) { let arr = str.split(' ').filter(word => palindrome(word)); return arr.reduce((a,b) => a.length > b.length ? 
a : b); } function palindrome(str) { return str === str.split('').reverse().join(''); } console.log(findLongest('That trip with a kayak was quite an adventure!')); ``` Upvotes: 0 <issue_comment>username_4: ***Java Code*** The "findlongestPalindrome" function takes a string parameter and returns the longest palindrome; "isPalindrome" validates whether a string is a palindrome. ``` public class findlongestPalindrome { public static void main(String[] args) { System.out.println("-->" + findlongestPalindrome("That trip with a kayak was quite an adventure!")); } public static String findlongestPalindrome(String s){ StringBuffer sb = new StringBuffer(); int len = 0; int maxlen = 0; String maxString = ""; char[] arr = s.toCharArray(); for(int i=0;i<s.length();i++){ for(int j=i;j<s.length();j++){ sb.append(arr[j]); if(isPalindrome(sb.toString())){ len = sb.length(); if(len > maxlen) { maxlen = len; maxString = sb.toString(); } } } sb = new StringBuffer(); } return maxString; } public static boolean isPalindrome(String s){ StringBuffer sb = new StringBuffer(s); return sb.reverse().toString().equals(s); } } ``` Upvotes: 0
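For comparison, the same brute-force scan can be sketched in Python, whose slice `text[i:j]` takes (start, end) indices, exactly the semantics the buggy JavaScript assumed `substr` had (this sketch is illustrative, not one of the original answers):

```python
def longest_palindrome(text):
    # Try every slice text[i:j]; slice bounds are (start, end),
    # unlike JS substr, whose second argument is a length.
    text = text.lower()
    best = ""
    for i in range(len(text)):
        for j in range(i + 1, len(text) + 1):
            candidate = text[i:j]
            if candidate == candidate[::-1] and len(candidate) > len(best):
                best = candidate
    return best or None

# prints ' kayak ' (repr used so the surrounding spaces are visible)
print(repr(longest_palindrome("That trip with a kayak was quite an adventure!")))
```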
2018/03/19
866
3,264
<issue_start>username_0: I was hoping to use InheritedWidget at the root level of my Flutter application to ensure that an authenticated user's details are available to all child widgets. Essentially making the Scaffold the child of the IW like this: ``` @override Widget build(BuildContext context) { return new AuthenticatedWidget( user: _user, child: new Scaffold( appBar: new AppBar( title: 'My App', ), body: new MyHome(), drawer: new MyDrawer(), )); } ``` This works as expected on app start, so on the surface it *seems* that I have implemented the InheritedWidget pattern correctly in my AuthenticatedWidget, but when I return to the home page (MyHome) from elsewhere like this: ``` Navigator.popAndPushNamed(context, '/home'); ``` This call in the build method of MyHome (which worked previously) then results in authWidget being null: ``` final authWidget = AuthenticatedWidget.of(context); ``` It's entirely possible I'm missing some nuances of how to properly implement an IW, but again, it does work initially and I also see others raising the same question (i.e. [here](https://stackoverflow.com/questions/46990200/flutter-how-to-pass-user-data-to-all-views%20here) under the 'Inherited Widgets' heading). Is it therefore not possible to use a Scaffold or a MaterialApp as the child of an InheritedWidget? Or is this maybe a bug to be raised? Thanks in advance!<issue_comment>username_1: > > Is it therefore not possible to use a Scaffold or a MaterialApp as the > child of an InheritedWidget? > > > It is very possible to do this. I was struggling with this earlier and posted some details and sample code [here](https://stackoverflow.com/questions/49364116/best-practice-for-statefulwidget-streams-changenotifiers-and-the-state-object/49366472#49366472). You might want to make your App-level InheritedWidget the parent of the MaterialApp rather than the Scaffold widget. 
I think this has more to do with how you are setting up your MaterialWidget, but I can't quite tell from the code snippets you have provided. If you can add some more context, I will see if I can provide more. Upvotes: 1 <issue_comment>username_2: `MyInherited.of(context)` will basically look into the parent of the **current context** to see if there's a `MyInherited` instantiated. The problem is: your inherited widget is instantiated **within** the current context. => No `MyInherited` as parent => crash The trick is to use a different *context*. There are many solutions here. You could instantiate `MyInherited` in another widget, so that the `context` of your build method will have a `MyInherited` as parent. Or you could potentially use a `Builder` to introduce a fake widget that will pass you its context. Example of a builder: ``` return new MyInheritedWidget( child: new Builder( builder: (context) => new Scaffold(), ), ); ``` --- Another problem, for the same reasons, is that if you insert an inheritedWidget *inside* a route, it will not be available *outside* of this route. The solution is simple here! Put your `MyInheritedWidget` *above* `MaterialApp`. Above MaterialApp: ``` new MyInherited( child: new MaterialApp( // ... ), ) ``` Upvotes: 5 [selected_answer]
2018/03/19
276
1,099
<issue_start>username_0: I have a kubernetes cluster and I am using traefik ingress controller to route traffic to deployments inside the kubenetes cluster. I am able to use `ingress.kubernetes.io/rewrite-target` annotation to change the incoming request path to the one expected by the backend. For example : `/star` is transformed to `/trek` by the rewrite target annotation and the request gets routed to `/trek` and backend processing is successful. What I want to know is if there is a way to change response header so that `/trek` gets changed back to `/star`?<issue_comment>username_1: Did you get an answer to this? It looks like similar functionality is available in Apache Traffic Server: <https://docs.trafficserver.apache.org/en/latest/admin-guide/plugins/header_rewrite.en.html>, but would be good to have it in traefik Upvotes: 0 <issue_comment>username_2: The functionality that does this is [modifiers](https://docs.traefik.io/basics/#modifiers) and specifically `ReplacePath`. Here is [a similar answer](https://stackoverflow.com/a/48546509/48639) with some examples. Upvotes: -1
2018/03/19
799
2,967
<issue_start>username_0: I'm trying to use WinAppDriver, Appium & C# to do some UI automation on an ancient Delphi 5 application. It fires up the app, there's a little splash screen, then a Windows modal box for logging in. The username is already filled out, so just type out the password and press the OK button. ``` var appCapabilities = new DesiredCapabilities(); appCapabilities.SetCapability("app", @"C:\APP\APP1998.exe"); appCapabilities.SetCapability("deviceName", "WindowsPC"); Session = new WindowsDriver<WindowsElement>(new Uri(WindowsApplicationDriverUrl), appCapabilities); Assert.IsNotNull(Session); Assert.IsNotNull(Session.SessionId); Assert.AreEqual("APP1998", Session.Title.ToUpper()); Session.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(15); Session.Keyboard.SendKeys("<PASSWORD>"); ``` These all fail: ``` //The logon dialog OK button Session.FindElementByName("OK").Click(); //The File menu Session.FindElementByName("File").Click(); //The Exit command from the File menu Session.FindElementByName("Exit").Click(); ``` I'm using WinAppDriver 1.0 and Appium 3.0.0.2 with Visual Studio, WinAppDriver and Inspect.exe running as admin. Inspect shows the login screen and the splash screen as separate screens which are not connected in the tree. The page source after you log in is: ``` <?xml version="1.0" encoding="utf-16"?> ``` Coming from a webdriver background, I can't see any IDs in there - no wonder it's not able to find them - or is that a misunderstanding on my part? Is this app just too old for WinAppDriver? Should I give up?[![enter image description here](https://i.stack.imgur.com/cyV1i.png)](https://i.stack.imgur.com/cyV1i.png)<issue_comment>username_1: It's not the best option but I think you can use the sendkeys to access the OK button. Like Session.Keyboard.SendKeys(Keys.Alt + "o" + Keys.Alt); Since the access key is Alt+o. 
Alternately (IDK if this is gonna work) you can try to use the accessibilityId "3741054", like Session.FindElementByAccessibilityId("3741054"); Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use the snippet below to handle the splash screen and any kind of desktop window (e.g. if you have two windows and want to switch): ``` var currentWindowHandle = driver.CurrentWindowHandle; Thread.Sleep(TimeSpan.FromSeconds(5)); var allWindowHandles = driver.WindowHandles; driver.SwitchTo().Window(allWindowHandles[0]); ``` Upvotes: 0 <issue_comment>username_3: I have had far more success with the Actions class than with WebDriver's baked-in .Click() for interactions with WindowElement objects. Also, searching by XPath with more than a single attribute to identify the object works far better, at least for me. So, from my experience working with WinAppDriver every day for the last couple of years, I would try: new Actions(Session).Click(Session.FindElementByXPath("//*[@Name='OK' and @ClassName='TWAOkButton']")).Build().Perform(); Upvotes: 0
2018/03/19
611
2,347
<issue_start>username_0: Is there a way to create traffic policies through CloudFormation or Terraform in AWS? I can find resources for creating hosted zones and record sets, but I want to create a traffic policy to route my traffic 50%/50% between different regions.<issue_comment>username_1: The [AWS CloudFormation Resource Types Reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) does not have an entry for Route 53 traffic policies. There are API calls to CRUD the traffic policies, so it could be done via a Lambda-backed custom resource. Upvotes: 0 <issue_comment>username_2: Terraform's [`aws_route53_record` resource](https://www.terraform.io/docs/providers/aws/r/route53_record.html) supports [weighted traffic policies](https://www.terraform.io/docs/providers/aws/r/route53_record.html#weighted-routing-policy) to allow you to send a percentage of traffic to different targets. If you want to send traffic from different geographical regions to different targets then you can also use the [`geolocation_routing_policy`](https://www.terraform.io/docs/providers/aws/r/route53_record.html#geolocation_routing_policy) or [`latency_routing_policy`](https://www.terraform.io/docs/providers/aws/r/route53_record.html#latency_routing_policy). If you'd prefer to use CloudFormation then weighted record sets are also supported by the [`AWS::Route53::RecordSet`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-route53-recordset.html) resource by using the `Weight` parameter. This CloudFormation resource also supports geolocation and latency-based routing. 
Upvotes: 0 <issue_comment>username_3: Currently there is no support from Terraform or CloudFormation for AWS Route 53 Traffic Policies: * <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_Route53.html> * <https://github.com/terraform-providers/terraform-provider-aws/issues/11256> As a workaround you can save your policies as JSON files and use AWS-CLI to import them. And also you will need the AWS-CLI to associate them as this neither is possible (<https://github.com/terraform-providers/terraform-provider-aws/issues/1159>). Traffic Policy Document Format: <https://docs.aws.amazon.com/Route53/latest/APIReference/api-policies-traffic-policy-document-format.html> Upvotes: 1
2018/03/19
1,239
4,291
<issue_start>username_0: I have a Django 1.11.5 app with Celery 4.1.0 and I receive this all the time: ``` kombu.exceptions.EncodeError: is not JSON serializable ``` my settings.py: ``` CELERY_BROKER_URL = 'amqp://localhost' CELERY_RESULT_BACKEND = 'amqp://localhost' CELERY_ACCEPT_CONTENT = ['application/json'] CELERY_RESULT_SERIALIZER = 'json' CELERY_TASK_SERIALIZER = 'json' CELERY_TIMEZONE = 'Asia/Makassar' CELERY_BEAT_SCHEDULE = {} ``` tasks.py ``` from __future__ import absolute_import, unicode_literals from celery import task from django.contrib.auth.models import User @task(serializer='json') def task_number_one(): user = User.objects.create(username="testuser", email="<EMAIL>", password="<PASSWORD>") return user ``` I call the task in the view: ``` def form_valid(self, form): form.instance.user = self.request.user task_number_one.delay() return super().form_valid(form) ```<issue_comment>username_1: This is because you are using the JSON serializer for task serialization (as indicated by the setting `CELERY_TASK_SERIALIZER = 'json'`), but you are trying to return a model instance (which cannot be serialized into JSON). You have two options: 1) Don't pass the instance, pass the primary key of the instance and then look up the object inside your task. 2) Use the `pickle` task serializer instead. This will allow you to pass objects as arguments to your tasks and return them, but comes with its own [security concerns](http://docs.celeryproject.org/en/latest/userguide/security.html#serializers). Upvotes: 5 <issue_comment>username_2: The error is because Celery expects JSON data from your task function while you returned a `User` instance. **How to solve this?** You are not using that return data anywhere, so you don't have to return it. That is, you can remove `return user` from the task function. 
Or, returning JSON data from the task function will solve this issue as well. **Solution 1** ``` @task(serializer='json') def task_number_one(): user = User.objects.create(username="testuser", email="<EMAIL>", password="<PASSWORD>") ``` **Solution 2** ``` @task(serializer='json') def task_number_one(): user = User.objects.create(username="testuser", email="<EMAIL>", password="<PASSWORD>") # return some json data instead of `USER` instance return {"status": True} # Change is here ``` Upvotes: 5 [selected_answer]<issue_comment>username_3: My celery task: ``` response = {} ... except HTTPError as e: response.update( { 'status': False, 'code': e.status_code, 'error': e.body, }, ) ... return response ``` I had `EncodeError(TypeError('Object of type bytes is not JSON serializable')` and `kombu.exceptions.EncodeError`, although the response is a `dict`, which shouldn't be a problem for JSON encoding. It turned out that `e.body` is of type bytes. I changed it to `e.body.decode('utf-8')` and the problem disappeared. Upvotes: 0 <issue_comment>username_4: Another answer, not directly related to this question but also useful: if you pass an object to a task, you may get the same error; then you can: ```py @shared_task(serializer="pickle") def email_send_task(msg: EmailMultiAlternatives): try: msg.send() except (smtplib.SMTPException, TimeoutError) as e: return f"Email failed with {e}" ``` Upvotes: 0 <issue_comment>username_5: For the following configuration: 1. Django==3.0.11 2. redis==2.10.6 3. celery==4.0.0 4. kombu==4.0.0 I updated the following configuration in the settings.py file and it worked. 
``` CELERY_SETTINGS = { 'CELERY_TIMEZONE': TIME_ZONE, 'CELERY_ENABLE_UTC': True, 'CELERY_RESULT_BACKEND': REDIS_URL, 'CELERY_SEND_TASK_SENT_EVENT': True, 'CELERY_TASK_SERIALIZER': 'pickle', 'CELERY_RESULT_SERIALIZER': 'pickle', 'CELERY_ACCEPT_CONTENT': ['pickle', 'json'], } ``` **Don't forget to update the actual values of TIME_ZONE and REDIS_URL.** The reason could be that Celery 4 uses JSON as the default serializer, while Celery 3 uses pickle by default. The old Django may not be expecting the JSON format from the tasks. So, if you are using a very old Django version, this could help you. Upvotes: 0
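The failure is reproducible with the standard library alone. This sketch uses a stand-in class rather than a real Django model, but it shows why a JSON task serializer rejects a model instance while a plain dict (or just the primary key) is fine:

```python
import json

class FakeUser:
    # Stand-in for a Django model instance; json.dumps cannot encode
    # arbitrary objects like this.
    def __init__(self, pk, username):
        self.pk = pk
        self.username = username

user = FakeUser(1, "testuser")

try:
    # What a JSON result serializer effectively does to the return value:
    json.dumps(user)
except TypeError as exc:
    print(exc)  # e.g. "Object of type FakeUser is not JSON serializable"

# Returning JSON-friendly data (or just the primary key) serializes fine:
print(json.dumps({"status": True, "user_pk": user.pk}))
```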
2018/03/19
513
1,773
<issue_start>username_0: I need to replace a backslash with something else and wrote this code to test the basic concept. Works fine: ``` test_string = str('19631 location android location you enter an area enable quick action honeywell singl\dzone thermostat environment control and monitoring') print(test_string) test_string = test_string.replace('singl\\dzone ','singl_dbl_zone ') print(test_string) 19631 location android location you enter an area enable quick action honeywell singl\dzone thermostat environment control and monitoring 19631 location android location you enter an area enable quick action honeywell singl_dbl_zone thermostat environment control and monitoring ``` However, I have a pandas df full of these (re-configured) strings and when I try to operate on the df, it doesn't work. ``` raw_corpus.loc[:,'constructed_recipe']=raw_corpus['constructed_recipe'].str.replace('singl\\dzone ','singl_dbl_zone ') ``` The backslash remains! ``` 323096 you enter an area android location location environment control and monitoring honeywell singl\dzone thermostat enable quick action ```<issue_comment>username_1: I think it would be easier to remove the backslash itself: ``` In [165]: df Out[165]: constructed_recipe 0 singl\dzone In [166]: df['constructed_recipe'] = df['constructed_recipe'].str.replace(r'\\', '') In [167]: df Out[167]: constructed_recipe 0 singldzone ``` Upvotes: 1 <issue_comment>username_2: There's a difference between `str.replace` and `pd.Series.str.replace`. The former accepts substring replacements, and the latter accepts regex patterns. Using `str.replace`, you'd need to pass a *raw string* instead. ``` df['col'] = df['col'].str.replace(r'\\d', '_dbl_') ``` Upvotes: 3 [selected_answer]
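The distinction the selected answer describes can be demonstrated with the standard library alone (no pandas needed): a literal substring replacement sees `'singl\\dzone'` as a real backslash plus `dzone`, while a regex engine reads the same `\d` as a digit class unless the backslash itself is escaped:

```python
import re

s = r'honeywell singl\dzone thermostat'

# Literal substring replacement (str.replace): '\\d' in the pattern is one
# real backslash followed by 'd', which matches the data.
print(s.replace('singl\\dzone', 'singl_dbl_zone'))

# Regex replacement (what pd.Series.str.replace does): the same pattern now
# means the digit class \d, so nothing matches and the string is unchanged.
print(re.sub('singl\\dzone', 'singl_dbl_zone', s) == s)  # True

# To match a literal backslash in a regex, escape the backslash itself:
print(re.sub(r'singl\\dzone', 'singl_dbl_zone', s))
```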
2018/03/19
321
1,017
<issue_start>username_0: I have this array of tuples containing strings and a URL ``` var notifications:[(body: String, header: String, icon: URL)] = [] ``` Now, I want to append a tuple with an empty URL. I tried ``` notifications.append((body: "some text", header: "some more text", icon: nil)) ``` but that is not allowed. What is the way to do this?
2018/03/19
1,258
4,039
<issue_start>username_0: I am not an expert when it comes to using Excel and Excel functions so any help would be greatly appreciated. I have 2 different data sets which have different time intervals which I would like to show on the same graph. A sample from the 2 data sets is the following: (A1:B14 is the first data set, D1:E14 is the second data set) [![enter image description here](https://i.stack.imgur.com/S3Wpm.png)](https://i.stack.imgur.com/S3Wpm.png) As you can see, the time values from the different data sets do not line up with one another (the second data set contains some missing seconds) and this is causing a problem when it comes to displaying the Memory and CPU values on the same time axis (which would be the x-axis in this case). Is there an efficient way to display both the CPU and Memory values on the same time axis in such a case? And if so, what is the best way to do this? P.S: It is important that no data is removed in the process of lining up the data. Thank you<issue_comment>username_1: Put this formula into C2 then [double-click the fill handle](https://superuser.com/questions/859501/fill-formulas-until-end-of-adjacent-table/859580#859580). ``` =INDEX(E:E, MATCH(A2, D:D, 0)) ``` Leave the #N/A errors. These are discarded in chart data. [![enter image description here](https://i.stack.imgur.com/m5oDG.png)](https://i.stack.imgur.com/m5oDG.png) Upvotes: 0 <issue_comment>username_2: This is only "related" to THIS question (not an answer), but you might find some value in it... (especially the links at the bottom!) When working with times, there are various ways that you might be able to benefit from a list of "every second in a day", such as if you had a list of times and you need to figure out which ones are missing: 1. make a blank worksheet 2. highlight `Column A` by clicking the heading 3. hit `CTRL+1`, click `TIME` and choose a time format that includes seconds 4. now in cell `A1`, enter: `00:00:01` 5. in cell `A2` type: `=A1+$A$1` 6. 
click once on cell `A2` and hit `CTRL+C` to copy 7. hit `F5` and type: `A2:A86400` and hit Enter 8. hit `CTRL+V` to paste (now you have a list of every second in a day) 9. copy and paste this formula into cell `B1` (replacing both occurrences of `(Range with your times)` with the [single-column] range of cells where your times are, using **absolute cell references** (4 dollar signs); i.e., if your times are in `column D` from `Rows 10 to 100` you would enter: `$D$10:$D$100`): ``` =IF(ISERROR(VLOOKUP(A1, (Range with your times) ,1,FALSE)),"",VLOOKUP(A1, (Range with your times) ,1,FALSE)) ``` 10. click once on cell `B1` and hit `CTRL+C` to copy 11. hit `F5` and type: `B2:B86400` and hit `Enter` 12. hit `CTRL+V` to paste Now `column B` will be blank for missing times, and show the times that are in your list. If it doesn't work, go through the steps again, carefully. Like anything with Excel or coding, details matter. (For example, don't paste the formula into cell A1, and also your times must be in one single column, and the *absolute references* need to be properly specified in the formula so it keeps referring to the same cells when you copy the formula.) 
--- **Recommended Tutorials**: -------------------------- **Excel:** * [Learnfree.org Excel 2016 Tutorial](https://www.gcflearnfree.org/excel2016/) + [Lifewire: Excel Step by Step Basic Tutorial](https://www.lifewire.com/excel-step-by-step-basic-tutorial-3123501) **VBA:** * [Excel VBA For Complete Beginners](http://www.homeandlearn.org/) * MSDN (Microsoft Developer's Network) : [Getting Started with VBA](https://msdn.microsoft.com/en-us/library/office/ee814737(v=office.14).aspx) Also: [Free month of training from LinkedIn](https://learning.linkedin.com/in/office?trk=par_acq_MSFThelp-all-tc-homepage-feature-feature-feature_learning&src=mi-inprod&veh=general-office-help&utm_source=microsoft&utm_medium=help-integration&utm_campaign=par_acq_MSFThelp-all-tc-homepage-feature-feature-feature_learning) in partnership w/ Microsoft Upvotes: 1
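The same gap-finding idea (enumerate every second, then check which ones appear in the data) is easy to sketch outside Excel as well. This Python fragment uses made-up sample times to list the seconds missing from a series:

```python
from datetime import datetime, timedelta

# Hypothetical sample: the second data set skips some seconds.
have = {"12:29:02", "12:29:03", "12:29:05", "12:29:07"}

start = datetime(2018, 3, 19, 12, 29, 2)
all_seconds = [(start + timedelta(seconds=i)).strftime("%H:%M:%S") for i in range(6)]
missing = [t for t in all_seconds if t not in have]
print(missing)  # ['12:29:04', '12:29:06']
```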
2018/03/19
1,294
3,921
<issue_start>username_0: I have a problem related to different types of variables at the input stage. My program is simple: I type the temperature in Celsius, the program prints the Celsius and Fahrenheit values, and then loops, asking for the next value in Celsius. If you type "-99999" it will stop. I wanted to change it to stop when I type the word "elo" (it basically means "Bye" in Polish slang :) ) but after a few hours of trying I gave up... I'll appreciate any help! ``` #include <stdio.h> float fahrenheit(long); int main() { int celsius; printf("Type the temperature in celsius: ", &celsius); scanf_s("%ld", &celsius); while (celsius != -99999) { printf("%ld %6.1f\n", celsius, fahrenheit(celsius)); printf("Type the temperature in celsius: ", &celsius); scanf_s("%ld", &celsius); } } float fahrenheit(long celsius) { return (float) 1.8*celsius + 32.0; } ```
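A common language-agnostic strategy for the question above is to read the input as text first, stop on the sentinel word, and only then convert to a number (in C this maps to fgets/strcmp/strtol). Here is a Python sketch of that control flow, with hypothetical helper names:

```python
def fahrenheit(celsius):
    return 1.8 * celsius + 32.0

def handle(token):
    # Treat input as a string first; stop on the sentinel word,
    # otherwise convert it to a number and compute the result.
    if token == "elo":
        return None  # signal "stop the loop"
    return fahrenheit(int(token))

print(handle("100"))  # 212.0
print(handle("elo"))  # None
```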
2018/03/19
2,653
9,066
<issue_start>username_0: I already know how to copy a specific column from another workbook, but now I also need to filter a specific column. I have tried this code but I encounter a "Subscript out of range" error. I need to filter Column C for rows that contain "Mary" and copy the corresponding data. This is a sample of my code; I know there is something wrong with my syntax, especially in using AutoFilter on Column C and in copying different columns and pasting them to another workbook. Please help me to make it right. Thanks ``` Sub RAWtransfertoTRUST() Dim MainWorkfile As Workbook Dim OtherWorkfile As Workbook Dim TrackerSht As Worksheet Dim FilterSht As Worksheet Dim lRow As Long, lRw As Long Application.ScreenUpdating = False Application.DisplayAlerts = False ' set workbook object Set MainWorkfile = ActiveWorkbook ' set the worksheet object Set TrackerSht = MainWorkfile.Sheets("Trust Activities Raw") With TrackerSht lRow = .Cells(.Rows.Count, "B").End(xlUp).Row End With Application.AskToUpdateLinks = False ' set the 2nd workbook object Set OtherWorkfile = Workbooks.Open(Filename:=Application.GetOpenFilename) ' set the 2nd worksheet object Set FilterSht = OtherWorkfile.Sheets("Raw Data") With FilterSht .AutoFilterMode = False .Range("B2:F").AutoFilter Field:=3, Criteria1:="Mary" lRw = .Cells(.Rows.Count, "B").End(xlUp).Row End With ' paste TrackerSht.Range("B" & lRow).PasteSpecial Paste:=xlPasteValues, _ Operation:=xlNone, SkipBlanks:=False, Transpose:=False With FilterSht If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False lRw = .Cells(.Rows.Count, "C").End(xlUp).Row .Range("J1:J" & lRw).Copy ' copy your range End With ' paste TrackerSht.Range("G" & lRow).PasteSpecial Paste:=xlPasteValues, _ Operation:=xlNone, SkipBlanks:=False, Transpose:=False With FilterSht If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False lRw = .Cells(.Rows.Count, "C").End(xlUp).Row ' last row with data in column "C" .Range("N1:Q" & lRw).Copy ' copy your range End With ' paste 
TrackerSht.Range("H" & lRow).PasteSpecial Paste:=xlPasteValues, _ Operation:=xlNone, SkipBlanks:=False, Transpose:=False With FilterSht If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False lRw = .Cells(.Rows.Count, "C").End(xlUp).Row ' last row with data in column "C" .Range("T1:W" & lRw).Copy ' copy your range End With ' paste TrackerSht.Range("L" & lRow).PasteSpecial Paste:=xlPasteValues, _ Operation:=xlNone, SkipBlanks:=False, Transpose:=False With FilterSht If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False lRw = .Cells(.Rows.Count, "C").End(xlUp).Row ' last row with data in column "C" .Range("Y1:Z" & lRw).Copy ' copy your range End With ' paste TrackerSht.Range("P" & lRow).PasteSpecial Paste:=xlPasteValues, _ Operation:=xlNone, SkipBlanks:=False, Transpose:=False With FilterSht If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False lRw = .Cells(.Rows.Count, "C").End(xlUp).Row ' last row with data in column "C" .Range("AB1:AC" & lRw).Copy ' copy your range End With ' paste TrackerSht.Range("R" & lRow).PasteSpecial Paste:=xlPasteValues, _ Operation:=xlNone, SkipBlanks:=False, Transpose:=False End Sub ```<issue_comment>username_1: So, a few issues here. In this code block: ``` With FilterSht .AutoFilterMode = False .Range("B2:F").AutoFilter Field:=3, Criteria1:="Mary" lRw = .Cells(.Rows.Count, "B").End(xlUp).Row End With ``` You are missing a number in the range `B2:F`. If you want to filter the entire column, then you should exclude the number "2" from `B2`. I assume that you were wanting to use the `lRw` that is actually on the next line, so this would need to go *above* your range line, then you would need to include that with your `B2:F` by adding `& lRw`. That line should now look like: ``` .Range("B2:F" & lRw).AutoFilter Field:=2, Criteria1:="Mary" ``` Also, keep in mind that this **is not** including row 2 in your autofilter. 
I assume you were wanting to filter row 2, so you would need to change it to `B1:` if this was the case.

---

Next issue is your copy / paste method. You are not pasting anything, because you never copied it. In the same **With block**, you can add this line: `.AutoFilter.Range.Copy`

---

Here's your final result:

```
Sub RAWtransfertoTRUST()
    Dim MainWorkfile As Workbook, OtherWorkfile As Workbook
    Dim TrackerSht As Worksheet, FilterSht As Worksheet
    Dim lRow As Long, lRw As Long

    Application.ScreenUpdating = False
    Application.DisplayAlerts = False

    Set MainWorkfile = ActiveWorkbook
    Set TrackerSht = MainWorkfile.Sheets("Trust Activities Raw")
    With TrackerSht
        lRow = .Cells(.Rows.Count, "B").End(xlUp).Row
    End With

    Application.AskToUpdateLinks = False
    Set OtherWorkfile = Workbooks.Open(Filename:=Application.GetOpenFilename)
    Set FilterSht = OtherWorkfile.Sheets("Raw Data")

    With FilterSht
        .AutoFilterMode = False
        lRw = .Cells(.Rows.Count, "B").End(xlUp).Row
        .Range("B1:F" & lRw).AutoFilter Field:=3, Criteria1:="Mary"
        .AutoFilter.Range.Copy
    End With
    ' paste
    TrackerSht.Range("B" & lRow).PasteSpecial Paste:=xlPasteValues, _
        Operation:=xlNone, SkipBlanks:=False, Transpose:=False

    With FilterSht
        If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False
        lRw = .Cells(.Rows.Count, "C").End(xlUp).Row
        .Range("J1:J" & lRw).Copy ' copy your range
    End With
    ' paste
    TrackerSht.Range("G" & lRow).PasteSpecial Paste:=xlPasteValues, _
        Operation:=xlNone, SkipBlanks:=False, Transpose:=False

    With FilterSht
        If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False
        lRw = .Cells(.Rows.Count, "C").End(xlUp).Row ' last row with data in column "C"
        .Range("N1:Q" & lRw).Copy ' copy your range
    End With
    ' paste
    TrackerSht.Range("H" & lRow).PasteSpecial Paste:=xlPasteValues, _
        Operation:=xlNone, SkipBlanks:=False, Transpose:=False

    With FilterSht
        If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False
        lRw = .Cells(.Rows.Count, "C").End(xlUp).Row ' last row with data in column "C"
        .Range("T1:W" & lRw).Copy ' copy your range
    End With
    ' paste
    TrackerSht.Range("L" & lRow).PasteSpecial Paste:=xlPasteValues, _
        Operation:=xlNone, SkipBlanks:=False, Transpose:=False

    With FilterSht
        If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False
        lRw = .Cells(.Rows.Count, "C").End(xlUp).Row ' last row with data in column "C"
        .Range("Y1:Z" & lRw).Copy ' copy your range
    End With
    ' paste
    TrackerSht.Range("P" & lRow).PasteSpecial Paste:=xlPasteValues, _
        Operation:=xlNone, SkipBlanks:=False, Transpose:=False

    With FilterSht
        If .FilterMode Or .AutoFilterMode Then .AutoFilterMode = False
        lRw = .Cells(.Rows.Count, "C").End(xlUp).Row ' last row with data in column "C"
        .Range("AB1:AC" & lRw).Copy ' copy your range
    End With
    ' paste
    TrackerSht.Range("R" & lRow).PasteSpecial Paste:=xlPasteValues, _
        Operation:=xlNone, SkipBlanks:=False, Transpose:=False
End Sub
```

Oh, and I *slightly* cleaned up your code formatting :D Upvotes: 2 <issue_comment>username_2: Thanks for all your help, I already resolved my issue. I just filter all the columns, then delete the columns that I don't need. This is my sample code.
```
Sub RAWtransfertoTRUST()
    Dim MainWorkfile As Workbook, OtherWorkfile As Workbook
    Dim TrackerSht As Worksheet, FilterSht As Worksheet
    Dim lRow As Long, lRw As Long

    Application.ScreenUpdating = False
    Application.DisplayAlerts = False

    Set MainWorkfile = ActiveWorkbook
    Set TrackerSht = MainWorkfile.Sheets("Trust Activities Raw")
    With TrackerSht
        lRow = .Cells(.Rows.Count, "C").End(xlUp).Row
    End With

    Application.AskToUpdateLinks = False
    Set OtherWorkfile = Workbooks.Open(Filename:=Application.GetOpenFilename)
    Set FilterSht = OtherWorkfile.Sheets("Raw Data")

    With FilterSht
        .AutoFilterMode = False
        lRw = .Cells(.Rows.Count, "C").End(xlUp).Row
        .Range("B1:W" & lRw).AutoFilter Field:=2, Criteria1:="Mary"
        .AutoFilter.Range.Copy
    End With
    TrackerSht.Range("B" & lRow).PasteSpecial Paste:=xlPasteValues, _
        Operation:=xlNone, SkipBlanks:=False, Transpose:=False

    With TrackerSht
        .Range("G:I,K:M,R:S,X:AD").Delete Shift:=xlToLeft
        .Range("E:E").Copy
        .Range("G:O").PasteSpecial Paste:=xlPasteFormats
        .Range("G2", "G1000").NumberFormat = "dd/mm/yyyy"
        .Range("M2", "M1000").Interior.ColorIndex = 41
        .Range("J2", "J1000").Interior.ColorIndex = 6
    End With
End Sub
```

Upvotes: 2 [selected_answer]
2018/03/19
728
2,745
<issue_start>username_0: I am trying to set location in componentDidMount. I am guessing `this` is not being passed in to the internal function. See my example:

```js
import React, { Component } from 'react';
import GoogleMapReact from 'google-map-react';

const map = { width: '100%', height: '100vh' };

const AnyReactComponent = ({ text }) => { text };

export default class Map extends Component {
  constructor(props) {
    super(props)
    this.state = {
      center: { lat: 40.7446790, lng: -73.9485420 },
      zoom: 11
    }
  };

  componentDidMount(){
    if (navigator.geolocation) {
      navigator.geolocation.getCurrentPosition(
        function(position) {
          console.log(position);
          this.setState({
            center: { lat: position.coords.latitude, lng: position.coords.longitude },
          })
        }
      )
    }
  }

  render() {
    return ( )
  };
};
```

This keeps happening and I have no idea why. Can you help?<issue_comment>username_1: You are usually not supposed to change the state in componentDidMount(). Read the [docs](https://reactjs.org/docs/react-component.html#componentdidmount) for more information Upvotes: 0 <issue_comment>username_2: Change `function (position) {}` to an ES6 arrow function `(position) => {}` to preserve `this` in the closure

```
componentDidMount(){
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
      (position) => {
        console.log(position);
        this.setState({
          center: { lat: position.coords.latitude, lng: position.coords.longitude },
        })
      }
    )
  }
}
```

Upvotes: 0 <issue_comment>username_3: This componentDidMount should be better. An arrow function does not have its own "this", so "this" will refer to the component.

```
componentDidMount(){
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
      (position) => {
        console.log(position);
        this.setState({
          center: { lat: position.coords.latitude, lng: position.coords.longitude },
        })
      }
    )
  }
}
```

Upvotes: 0 <issue_comment>username_4: First of all it is advised not to use setState in componentDidMount. Consider using componentWillMount. Secondly, you're using `this` inside a function. `this` does not refer to the component anymore. You might need to do the infamous this->that conversion.

```
componentWillMount(){
  const that = this;
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
      function(position) {
        console.log(position);
        that.setState({
          center: { lat: position.coords.latitude, lng: position.coords.longitude },
        })
      }
    )
  }
}
```

Upvotes: 1
2018/03/19
813
3,052
<issue_start>username_0: I am super new to using Node.js, NPM and all these modern tools for better productivity and workflow. So here are the details:

```
Node version - v8.10.0
Gulp CLI version - 2.0.1
Gulp Local version - 3.9.1
NPM version - 5.6.0
Windows 7
Node.js installed in D:/ProgramFiles
```

I've tried using gulp and it does work wonderfully with this script

```
var gulp = require('gulp'),
    watch = require('gulp-watch');

gulp.task('default', function(){
    console.log('Gulp task created');
});

gulp.task('html', function() {
    console.log('Something useful here');
});

gulp.task('watch', function() {
    watch('/app/index.html', function() {
        gulp.start('html');
    });
});
```

So typing gulp does respond with the default task message. Typing gulp html does respond too with a console message. However, when i type gulp watch, it does work with following output.

```
Starting 'watch'...
Finished 'watch' after 7.99 ms
```

But whenever i make changes and save the index file, the cmd doesn't update. I've tried using Git Bash and other terminals. I've even installed previous node versions and tried solving this issue using those but no luck so far. I tried editing the dependencies to an older version but that doesn't work too. If anyone of you can help, I'll be thankful.
2018/03/19
664
2,569
<issue_start>username_0: I have a lot of arguments for my script. And along with the argparser, I want users to also have the option to specify those arguments via a config file.

```
parser.add_argument('-a','--config_file_name' ...required=False)
parser.add_argument('-b' ...required=True)
parser.add_argument('-c' ...required=False)
....
```

At this point I just need the logic to implement the following:

* Either the users can type in all the arguments in the command line or
* They can type in the first argument, specify the file name and the code fills in/overwrites all the remaining optional arguments from the config file.

How can this be achieved?<issue_comment>username_1: I don't think this is up to argparse to handle. argparse simply needs to check if the argument for the config file is there and pass it on to your program. You need to handle this in your program, which would mean doing something like:

```
...
arguments = parser.parse_args()
if arguments.config_file_name:
    with open(arguments.config_file_name, 'rb') as f:
        conf_settings = f.read()
    for line in conf_settings.splitlines():
        # parse your config format here.
        pass
```

This way, if the config_file_name is set, you will overwrite any possible given arguments, and if not, the program will be executed with the arguments specified by the user. For example:

```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("a")
args = parser.parse_args()
if args.a:
    # We're getting config from the config file here...
    pass
else:
    # We're getting config from the command line arguments
    pass
# Now we can call all functions with correct configuration applied.
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: One way to solve this problem is with the [`xargs`](http://man7.org/linux/man-pages/man1/xargs.1.html) command line utility. This wouldn't give you exactly the command line interface that you describe, but would achieve the same effect. `xargs` will run a command with arguments that it has read from standard input.
If you pipe your arguments from the config file into `xargs` then your python program will be called with the arguments stored in the file. For example:

config.txt

```
-a -b --input "/some/file/path" --output "/another"
```

You could then use these arguments to your program like so

```
xargs ./my-prog.py --optional-arg < config.txt
```

because of the way that argparse works, it will take the value of the last argument if duplicates are found. Thus, if you have duplicate arguments, the ones from the config file will be chosen. Upvotes: 0
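Combining the ideas above — parse the config-file option first, then let the file's values fill in the remaining defaults — could look like the following sketch. This is an illustration, not either answerer's exact code: the JSON config format, the `parse_args` helper, and the non-required options are my assumptions.

```python
import argparse
import json


def parse_args(argv=None):
    # First pass: look only for the config-file option.
    conf_parser = argparse.ArgumentParser(add_help=False)
    conf_parser.add_argument('-a', '--config_file_name')
    known, _ = conf_parser.parse_known_args(argv)

    # Second pass: the real parser, inheriting -a from the first one.
    parser = argparse.ArgumentParser(parents=[conf_parser])
    parser.add_argument('-b')
    parser.add_argument('-c')

    if known.config_file_name:
        # Values from the file become defaults, so anything typed
        # explicitly on the command line still wins.
        with open(known.config_file_name) as f:
            parser.set_defaults(**json.load(f))

    return parser.parse_args(argv)
```

With a config file containing `{"b": "1", "c": "2"}`, running with only `-a conf.json` picks up both values, while `-a conf.json -b 9` keeps `b` as `"9"` because explicit arguments override the file-supplied defaults.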
2018/03/19
555
2,416
<issue_start>username_0: I would like to know if there is a way to obtain a reference to a view inside a DataTemplate in a ListView in Xamarin.Forms. Supposing I have this xaml: ``` ``` I would like to be able to grab a reference to the StackLayout named "ProductStackLayout" in every row of the ListView. I need to do this when the page is appearing, to dynamically manipulate it's content (for something than can't be achieved with data binding), so I can't take advantage of view references passed in event handlers originating from elements in the DataTemplate itself like ItemTapped or similar. For what I know, in WPF or UWP something like that could be achieved with the help of the VisualTreeHelper class, but I don't believe there is an equivalent of this class in Xamarin.Forms.<issue_comment>username_1: Yeah, It is possible access the view which is created using `DataTemplate` in run-time. Hook `BindingContextChanged` event for the view inside the `DataTemplate` from XAML. In the event call back, the view created from `DataTemplate` can be accessed using sender parameter. You need type cast the sender to access the view, because sender is boxed to object type. Or else, you can go for DataTemplate selector to create views based on your object. Upvotes: 4 [selected_answer]<issue_comment>username_2: You can also cast like this: ``` ITemplatedItemsView templatedItemsView = listView as ITemplatedItemsView; ViewCell firstCell = templatedItemsView.TemplatedItems[0] as ViewCell; StackLayout stackLayout = firstCell.View as StackLayout; ``` Which will give you reference to the views --- But you probably want to react based on the change of the binding context since you otherwise will have to manually change the content of the view. Using `BindingContextChanged` I suspect would make you render the content twice - first the change causes a render like normal - afterwards you render it again. 
So if for instance a change in a string occurs - a label will rerender - afterwards you get the value in BindingContextChanged and perform the render that you actually wanted. You can subclass ListView, which I think would prevent it:

```
public class CustomListView : ListView
{
    protected override void SetupContent(Cell content, int index)
    {
        // render differently depending on content.BindingContext
        base.SetupContent(content, index);
    }
}
```

Upvotes: 2
2018/03/19
310
1,115
<issue_start>username_0: I am trying to set the selected value of a drop down in Angular 5. HTML: ``` {{state.name}} State is required ``` TypeScript: ``` stateFormControl = new FormControl('', [ Validators.required ]); this.vendorForm.controls["state"].setValue(this.state); ``` Even when I set the default value in the FormControl declaration, nothing is being set by default for the drop down.<issue_comment>username_1: Bind to the value attribute in your tag: ``` {{state.name}} ``` Upvotes: 1 <issue_comment>username_2: You should try this: ``` this.stateFormControl.setValue(this.state); ``` or ``` {{state.name}} State is required ``` If you decide to change your html then follow [this tutorial](https://angular.io/guide/reactive-forms#introduction-to-formbuilder) of reactive form So you will build your form with something like this ``` buildForm(){ this.vendorForm = this.fb.group({ state: ['', Validators.required ], // <--- You don't need to instantiate FormControls }); } ``` and you will have this somewhere else ``` ``` Upvotes: 1 [selected_answer]
2018/03/19
2,044
7,484
<issue_start>username_0: What is the best practice for reducing the size of JPEG images in a PDF file, newly created using [iText](https://itextpdf.com)? (My objective is a trade-off between image quality and file size.) The images are created as follows:

```
Image image = new Image(ImageDataFactory.create(imagePath))
```

I would like to provide a scale factor, for instance `0.5`, which halves the number of pixels in a row. Say I generate a PDF with a single 3 MB image. I tried `image.scale(0.5f, 0.5f)`, but the resulting PDF file is still roughly 3 MB. I expected it to become much smaller. Thus I guess the source image, embedded in the PDF file, is not touched. But that is what I need: The total number of pixels in the entire PDF file stored on disk should be reduced. What is the easiest/recommended way to achieve this?<issue_comment>username_1: Scale the image first, then open the scaled image with iText. There is a create method in ImageDataFactory that accepts an AWT image. Scale the image using AWT tools first, then open it like this:

```
String imagePath = "C:\\path\\to\\image.jpg";
java.awt.Image awtImage = ImageIO.read(new File(imagePath));

// scale image here
int scaledWidth = awtImage.getWidth(null) / 2;
int scaledHeight = awtImage.getHeight(null) / 2;
BufferedImage scaledAwtImage = new BufferedImage(scaledWidth, scaledHeight, BufferedImage.TYPE_INT_RGB);
Graphics2D g = scaledAwtImage.createGraphics();
g.drawImage(awtImage, 0, 0, scaledWidth, scaledHeight, null);
g.dispose();

/* Optionally pick a color to replace with transparency.
   Any pixels that match this color will be replaced by transparency. */
Color bgColor = Color.WHITE;

Image itextImage = new Image(ImageDataFactory.create(scaledAwtImage, bgColor));
```

For better tips on how to scale an image, see [How can I resize an image using Java?](https://stackoverflow.com/questions/244164/how-can-i-resize-an-image-using-java) If you still need the original size when adding to PDF, just scale it back up again.

```
itextImage.scale(2f, 2f);
```

Note: This code is untested.

---

**EDIT** in response to comments on bounty

You got me thinking and looking. It appears iText treats importing an AWT image as a raw image. I presume it treats it the same as a BMP, which simply [writes the pixel data using /FlateDecode](https://developers.itextpdf.com/de/node/2615), which is probably significantly less than optimal. The only way I can think of to achieve your requirement would be to use ImageIO to write the scaled image to the file system or a ByteArrayOutputStream as a jpeg, then use the resultant file/bytes to open with iText. Here's an updated example using byte arrays. If you want to get any more fancy with compression levels and such, [refer here](https://stackoverflow.com/questions/17108234/setting-jpg-compression-level-with-imageio-in-java).
```
String imagePath = "C:\\path\\to\\image.jpg";
java.awt.Image awtImage = ImageIO.read(new File(imagePath));

// scale image here
int scaledWidth = awtImage.getWidth(null) / 2;
int scaledHeight = awtImage.getHeight(null) / 2;
BufferedImage scaledAwtImage = new BufferedImage(scaledWidth, scaledHeight, BufferedImage.TYPE_INT_RGB);
Graphics2D g = scaledAwtImage.createGraphics();
g.drawImage(awtImage, 0, 0, scaledWidth, scaledHeight, null);
g.dispose();

ByteArrayOutputStream bout = new ByteArrayOutputStream();
ImageIO.write(scaledAwtImage, "jpeg", bout);
byte[] imageBytes = bout.toByteArray();

Image itextImage = new Image(ImageDataFactory.create(imageBytes));
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: There is a way, listed in this [documentation](https://developers.itextpdf.com/examples/image-examples-itext5/reduce-image), that gives you access to compressing the images and reducing the size of the entire PDF file stored on disk. Hope it helps. Below is an example of the code:

```
/*
 * This example was written by <NAME> in answer to the following question:
 * http://stackoverflow.com/questions/30483622/compressing-images-in-existing-pdfs-makes-the-resulting-pdf-file-bigger-lowagie
 */
package sandbox.images;

import com.itextpdf.text.DocumentException;
import com.itextpdf.text.pdf.PRStream;
import com.itextpdf.text.pdf.PdfName;
import com.itextpdf.text.pdf.PdfNumber;
import com.itextpdf.text.pdf.PdfObject;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;
import com.itextpdf.text.pdf.parser.PdfImageObject;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;
import sandbox.WrapToTest;

/**
 * @author <NAME> (iText Software)
 */
@WrapToTest
public class ReduceSize {

    public static final String SRC = "resources/pdfs/single_image.pdf";
    public static final String DEST = "results/images/single_image_reduced.pdf";
    public static final float FACTOR = 0.5f;

    public static void main(String[] args) throws DocumentException, IOException {
        File file = new File(DEST);
        file.getParentFile().mkdirs();
        new ReduceSize().manipulatePdf(SRC, DEST);
    }

    public void manipulatePdf(String src, String dest) throws DocumentException, IOException {
        PdfReader reader = new PdfReader(src);
        int n = reader.getXrefSize();
        PdfObject object;
        PRStream stream;
        // Look for image and manipulate image stream
        for (int i = 0; i < n; i++) {
            object = reader.getPdfObject(i);
            if (object == null || !object.isStream())
                continue;
            stream = (PRStream)object;
            if (!PdfName.IMAGE.equals(stream.getAsName(PdfName.SUBTYPE)))
                continue;
            if (!PdfName.DCTDECODE.equals(stream.getAsName(PdfName.FILTER)))
                continue;
            PdfImageObject image = new PdfImageObject(stream);
            BufferedImage bi = image.getBufferedImage();
            if (bi == null)
                continue;
            int width = (int)(bi.getWidth() * FACTOR);
            int height = (int)(bi.getHeight() * FACTOR);
            if (width <= 0 || height <= 0)
                continue;
            BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            AffineTransform at = AffineTransform.getScaleInstance(FACTOR, FACTOR);
            Graphics2D g = img.createGraphics();
            g.drawRenderedImage(bi, at);
            ByteArrayOutputStream imgBytes = new ByteArrayOutputStream();
            ImageIO.write(img, "JPG", imgBytes);
            stream.clear();
            stream.setData(imgBytes.toByteArray(), false, PRStream.NO_COMPRESSION);
            stream.put(PdfName.TYPE, PdfName.XOBJECT);
            stream.put(PdfName.SUBTYPE, PdfName.IMAGE);
            stream.put(PdfName.FILTER, PdfName.DCTDECODE);
            stream.put(PdfName.WIDTH, new PdfNumber(width));
            stream.put(PdfName.HEIGHT, new PdfNumber(height));
            stream.put(PdfName.BITSPERCOMPONENT, new PdfNumber(8));
            stream.put(PdfName.COLORSPACE, PdfName.DEVICERGB);
        }
        reader.removeUnusedObjects();
        // Save altered PDF
        PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(dest));
        stamper.setFullCompression();
        stamper.close();
        reader.close();
    }
}
```

Upvotes: 1
2018/03/19
544
1,956
<issue_start>username_0: Q1. **How do I display a table in AngularDart**? I just can't seem to find any examples of this on the Internet for some reason. I'm not familiar with either JS or Dart and I'm not sure if they're compatible or not (interchangeable). At this point any table at all will do but eventually the one in Q2 would be nice. I can get a list to work with this code: ``` {{item.property1}} ``` But if I try to create a table with this code from W3schools it doesn't work.

```
| | |
| --- | --- |
| {{ item.property1 }} | {{ item.property2 }} |
```

By creating the list I can confirm that getting an item from items works so it seems like it's syntax or something? If you know of a tutorial that would be great!

---

Q2. While I'm asking and assuming I can eventually get this to work, does anyone know if it's possible to create tables like this in AngularDart instead of AngularJS? <https://material.angular.io/components/table/examples><issue_comment>username_1: Ok I think I got it. So soon after posting. If I change the table to be like the list it seems to work. Like this:

```
| | |
| --- | --- |
| {{ item.property1 }} | {{ item.property2 }} |
```

So it was definitely a syntax thing. Still it's surprising there's no examples anywhere. Also, if someone can tell me how to build one of those tables (now I'm assuming no) then I would still accept your answer. Upvotes: 1 <issue_comment>username_2: ng-repeat is an AngularJS syntax, while ngFor is the latest Angular syntax to repeat over a collection and render elements.

```
| {{ item.property1 }} | {{ item.property2 }} |
```

here you can find an example to create a table using **[angular-dart](https://webdev.dartlang.org/angular/guide/displaying-data)** To answer your 2nd question, you can use angular-material with ngFor to create a table **[STACKBLITZ EXAMPLE](https://stackblitz.com/edit/material-table-example)** Upvotes: 3 [selected_answer]
2018/03/19
325
1,203
<issue_start>username_0: I am having trouble initializing a List of monobehavior objects inside a non-monobehavior class. I was wondering if this is something that can't be done or if my problem is elsewhere. Thanks.
2018/03/19
416
1,385
<issue_start>username_0: I have a progress bar: [![Progress bar](https://i.stack.imgur.com/JCxwH.png)](https://i.stack.imgur.com/JCxwH.png) For some reason, I can't seem to move the value to the center.

```
.progressBar {
    border: black solid 1px;
    text-align: center;
}

#progressBarText {
    text-align: center;
}

50%
```

It doesn't seem to take effect. How can I debug this further?
2018/03/19
754
2,712
<issue_start>username_0: Can you please explain how to make the code say something else like "Wrong" when an integer is entered instead of a string eg:

```
name = ("Enter you name: ")
if name == int:
    print("Wrong")
```

I tried to use the code I used above but the program just skipped it.<issue_comment>username_1: Try this

```
def is_number(s):
    try:
        float(s)
        print("Wrong")
    except ValueError:
        print(s)
```

Upvotes: 1 <issue_comment>username_2: You can use if statements to find out if the particular integer string is found in the input function. You will use the input function to store the name, but this will automatically evaluate to a string. So, use an if statement to look for any integer (that will be stored as a string in the input function). If any are found, you will get your error message.

```
name = input("Enter your name: ")
if '1' in name:
    print('Wrong')
if '2' in name:
    print('Wrong')
if '3' in name:
    print('Wrong')
if '4' in name:
    print('Wrong')
if '5' in name:
    print('Wrong')
if '6' in name:
    print('Wrong')
if '7' in name:
    print('Wrong')
if '8' in name:
    print('Wrong')
if '9' in name:
    print('Wrong')
if '0' in name:
    print('Wrong')
```

Each time a name is entered, each if statement will check to see if an integer has been stored as a string within the input. If none is stored, no error message will occur. If an integer is stored as a string, an error message will occur. Upvotes: 0 <issue_comment>username_3: You need to add input to ask the user for their name. Then check if it is a number.

```
name = input("Enter you name: ")
if name.isnumeric():
    print("Wrong")
```

It may be better to account for all non-alphabet characters. Then, try to check if the input is alphabetical.

```
name = input("Enter you name: ")
if not name.isalpha():
    print("Wrong")
```

I would also suggest putting the code in a loop to keep asking until it is correct. Upvotes: 1 [selected_answer]<issue_comment>username_4: You need to use the `input` function in order to take input from the user.

```
name = input("Enter you name: ")
```

You can then check if the given information is convertible to an int (since input will always store the value as a string) using a `try except` statement. This will try to cast name to an int. If it succeeds, you know it is an int. Otherwise, it is not an int.

```
try:
    int(name)
    print("Wrong")
except ValueError:
    pass
```

Alternatively, you can use the `isalpha` builtin function (which is probably more simple) to ensure that name only contains alphabetical characters:

```
name = input("Enter your name: ")
if not name.isalpha():
    print("Wrong")
```

Upvotes: 0
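Putting the suggestions above together — validate with `str.isalpha` and loop until the input passes — a sketch might look like this. The `input_func` parameter is my own addition so the loop can be driven without a real terminal; it is not from any of the answers.

```python
def ask_name(prompt="Enter your name: ", input_func=input):
    # Keep asking until the reply is non-empty and every word is alphabetic.
    while True:
        name = input_func(prompt).strip()
        if name and all(part.isalpha() for part in name.split()):
            return name
        print("Wrong")
```

Called as `ask_name()`, it prints "Wrong" and re-prompts on inputs like `"1234"` or an empty string, and returns once something purely alphabetical such as `"Mary"` is entered.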
2018/03/19
663
1,936
<issue_start>username_0: I want to align the description text of the `mat-expansion-panel` component to the right. I have tried using `text-align: right` but nothing is changing. Here's my HTML:

```
Cycle ingénieur en informatique 2014-2017 ....
```

CSS:

```
mat-panel-description {
    text-align: right;
}
```

Here's the actual behavior [![example](https://i.stack.imgur.com/EOYMN.png)](https://i.stack.imgur.com/EOYMN.png) I would like the 2014-2017 to stay stuck to the right.<issue_comment>username_1: The panel header content are contained in a span which is a flexbox, so you can use that to push the contents out to the sides. view:

```
Cycle ingénieur en informatique 2014-2017
```

css (this has to be global css eg. styles.css not in the component style):

```
.right-aligned-header > .mat-content {
    justify-content: space-between;
}

.mat-content > mat-panel-title,
.mat-content > mat-panel-description {
    flex: 0 0 auto;
}
```

Here is a [Stack Blitz demo](https://stackblitz.com/edit/angular-kano2a) Upvotes: 5 [selected_answer]<issue_comment>username_2: Based on this solution: <https://github.com/angular/components/issues/10024>

```
.mat-expansion-panel-header-description,
.mat-expansion-panel-header-title {
    flex-basis: 0;
}
```

and reading the problem again, this worked for me:

```
.mat-expansion-panel-header-description {
    display: flex;
    justify-content: flex-end;
}
```

Upvotes: 4 <issue_comment>username_3: The easiest solution I found is to reset `flex-grow` of `mat-panel-description`:

```css
// my.component.css
mat-panel-description.right-aligned {
    flex-grow: 0;
}
```

```html
// my.component.html
... 2014-2017 ...
```

[StackBlitz](https://stackblitz.com/edit/angular-kano2a-mujabw?file=app/expansion-overview-example.css) Upvotes: 3 <issue_comment>username_4: Add fxLayoutAlign="end" to mat-panel-description. It worked for me.

```
' 2014-2017 '
```

Upvotes: 0
2018/03/19
1,150
2,856
<issue_start>username_0: I have a numpy array:

```
>>> n1 = np.array([1, 1, 2, 1, 4, 5, 3, 8, 2, 9, 9])
```

From this, I can get the number of elements from the beginning up to the highest value before the next lower number begins, like this:

```
>>> wherediff = np.where(n1[1:]-n1[:-1] < 0)
>>> wherediff = wherediff[0] + 1
>>> wherediff
array([3, 6, 8])
```

I can insert a 0 at the beginning of this array:

```
>>> wherediff = np.insert(wherediff, 0, 0)
>>> wherediff
array([0, 3, 6, 8])
```

And I can get the number of elements between each successive value:

```
>>> sum_vals = np.abs(wherediff[1:] - wherediff[:-1])
>>> sum_vals
array([3, 3, 2])
```

Now, I want to generate another numpy array with the following properties:

* for elements 0 through 2 inclusive, I want the value 1 (the number of 1s is `sum_vals[0]`, and I want it in positions `range(wherediff[0], wherediff[1])`)
* for elements 3 through 5 inclusive, I want the value 2 (the number of 2s is `sum_vals[1]`, and I want it in positions `range(wherediff[1], wherediff[2])`)
* for elements 6 through 7 inclusive, I want the value 3
* for the last elements, I want the value 4

I tried this:

```
>>> n3 = []
>>> for i in range(1, wherediff.shape[0]):
...     s1 = set(range(wherediff[i]))
...     s2 = set(range(wherediff[i-1]))
...     s3 = np.setdiff1d(s1, s2)[0]
...     n3.append(np.repeat(i, len(s3)))
```

thinking I'd switch to an array later, but the `setdiff1d` function is not performing as expected. It's doing this:

```
>>> for i in range(1, wherediff.shape[0]):
...     s1 = set(range(wherediff[i]))
...     s2 = set(range(wherediff[i-1]))
...     s3 = np.setdiff1d(s1, s2)[0]
...     print(s3)
...
set([0, 1, 2])
set([0, 1, 2, 3, 4, 5])
set([0, 1, 2, 3, 4, 5, 6, 7])
```

whereas I would want:

```
0 1 2
3 4 5
6 7
8, 9, 10
```

Any ideas?<issue_comment>username_1: If you are using native python sets you may as well do the diff operation without numpy:

```
wherediff = np.array([0, 3, 6, 8])
for i in range(1, wherediff.shape[0]):
    s1 = set(range(wherediff[i]))
    s2 = set(range(wherediff[i-1]))
    s3 = np.array(list(s1 - s2))
    print(s3)
```

If you want to do everything in numpy then this is the way:

```
for i in range(1, wherediff.shape[0]):
    s1 = np.array(range(wherediff[i]))
    s2 = np.array(range(wherediff[i-1]))
    s3 = np.setdiff1d(s1, s2)
    print(s3)
```

Note that you can use `assume_unique=True` here... Upvotes: 0 <issue_comment>username_2: Skip all the setdiff1d stuff and the index manipulation and work with an array of booleans:

```
flags = n1[1:] < n1[:-1]
flags = np.insert(flags, 0, True)
result = np.cumsum(flags)
```

The `cumsum` adds 1 to the sum for every `True`, so once for the first element and once for every time an element of `n1` was less than the previous. Upvotes: 3 [selected_answer]
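To see the accepted cumsum-of-booleans approach end-to-end on the array from the question (variable names follow the answer; the expected output is my own check):

```python
import numpy as np

n1 = np.array([1, 1, 2, 1, 4, 5, 3, 8, 2, 9, 9])

# True at position 0 and wherever the sequence drops below its predecessor;
# cumulatively summing the booleans numbers the non-decreasing runs 1, 2, 3, ...
flags = np.insert(n1[1:] < n1[:-1], 0, True)
result = np.cumsum(flags)
print(result)  # [1 1 1 2 2 2 3 3 4 4 4]
```

Each element is labeled with the index of the run it belongs to, which matches the grouping described in the question: positions 0-2 get 1, positions 3-5 get 2, positions 6-7 get 3, and the remaining positions get 4.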
2018/03/19
1,338
4,679
<issue_start>username_0: I am getting undefined is not an object evaluating \_this.props.navigation. Here is my code. > > I want to use the in multiple screens so I have to > extract it out and call it in any I need it in. > > > I have tried <https://github.com/react-navigation/react-navigation/issues/2198#issuecomment-316883535> to no luck. **Category.js** ``` import React, {Component} from 'react'; import {View, FlatList} from 'react-native'; import {ListItem} from 'react-native-elements' import {AppRegistry, TouchableOpacity, ActivityIndicator} from 'react-native'; import {SearchHeader} from '../SearchHeader'; export default class Category extends Component { constructor() { super(); this.state = { list: [], }; this.onPress = this.onPress.bind(this); } static navigationOptions = { title: 'Categories', headerStyle: {backgroundColor: '#ffb30c'}, }; renderSeparator = () => { return ( ); }; _keyExtractor = (item, index) => item.name; renderHeader = () => { return (); }; renderFooter = () => { if (!this.state.loading) return null; return ( ); }; onPress = (item) => { this.props.navigation.navigate('SpecTypeScreen',{cat:item}); }; search = () => { }; render() { return ( ( this.onPress(item)}> )} keyExtractor={this.\_keyExtractor} ItemSeparatorComponent={this.renderSeparator} ListHeaderComponent={this.renderHeader} ListFooterComponent={this.renderFooter} /> ); } } AppRegistry.registerComponent('CategoryScreen', () => CategoryScreen); ``` **SearchHeader.js** ``` import React, {Component} from 'react'; import Autocomplete from 'react-native-autocomplete-input'; import { AppRegistry, View, StyleSheet, Platform, Text, TouchableOpacity, } from 'react-native'; import {withNavigation} from 'react-navigation'; import colors from './config/colors'; import normalize from './config/normalizeText'; export class SearchHeader extends Component { constructor() { super(); this.state = { list: [], }; } search = (term) => { if (term.length > 2) { fetch("https://myUrl?term=" + 
encodeURI(term)) .then((response) => response.json()) .then((responseJson) => { this.setState({list: responseJson}); console.log(responseJson); }) .catch((error) => { console.error(error) }); } else{ this.setState({list: []}); } }; onPress = (item) => { this.props.navigation.navigate('ProductScreen',{spec:item}); }; render() { return ( ( {specification} )} style={[ styles.input, styles.inputLight, {borderRadius: Platform.OS === 'ios' ? 15 : 20}, {paddingRight: 50} ]}/> ); } } const styles = StyleSheet.create({ container: { borderTopWidth: 1, borderBottomWidth: 1, borderBottomColor: '#000', borderTopColor: '#000', backgroundColor: "#d71201", maxHeight:70 }, containerLight: { borderTopColor: '#e1e1e1', borderBottomColor: '#e1e1e1', }, input: { paddingLeft: 26, paddingRight: 19, margin: 8, borderRadius: 3, overflow: 'hidden', backgroundColor: colors.grey5, fontSize: normalize(14), color: colors.grey3, height: 40, ...Platform.select({ ios: { height: 30, }, android: { borderWidth: 0, }, }), }, inputLight: { backgroundColor: "#fff" }, autocompleteContainer: { backgroundColor:"#fff", marginLeft: 10, marginRight: 10 }, itemText: { fontSize: 15, margin: 5, marginLeft: 20, paddingTop:5, paddingBottom:5 }, descriptionContainer: { backgroundColor: '#F5FCFF', marginTop: 8 }, infoText: { textAlign: 'center' }, titleText: { fontSize: 18, fontWeight: '500', marginBottom: 10, marginTop: 10, textAlign: 'center' }, directorText: { color: 'grey', fontSize: 12, marginBottom: 10, textAlign: 'center' }, openingText: { textAlign: 'center' } }); AppRegistry.registerComponent('SearchHeader', () => SearchHeader); ```<issue_comment>username_1: You need to pass the `navigation` prop down. 
Try this: ``` renderHeader = () => { return ; }; ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You need to wrap your component with `withNavigation(Component)` Example: `AppRegistry.registerComponent('SearchHeader', () => withNavigation(SearchHeader));` The point of withNavigation was so that you wouldn't need to pass navigation as a prop. Especially useful when you're passing through multiple children. Upvotes: 0
2018/03/19
845
3,238
<issue_start>username_0: Im developing an android APP which has a Broadcast-Receiver based on SMS incoming message. I want to track every message from specific senderNumber, and do some stuff with that SMS, for example, retrieve some data from every message. The messsage body I want to analyse is this: > > "Usted ha recibido **5.0** CUC del numero **55391393**.Saldo principal > **1565.0** CUC, linea activa hasta 2019-02-10, vence 2019-03-12" > > > I want to extract with Pattern class, the values marked in bold. But I'm really new in Regular Expressions. Some help? This is my actual code: ``` public class SMSReceiver extends BroadcastReceiver { @Override public void onReceive(Context context, Intent intent) { final Bundle bundle = intent.getExtras(); try { if (bundle != null) { final Object[] pdusObj = (Object[]) bundle.get("pdus"); assert pdusObj != null; for (Object aPdusObj : pdusObj) { SmsMessage currentMessage = SmsMessage.createFromPdu((byte[]) aPdusObj); String senderNum = currentMessage.getDisplayOriginatingAddress(); String message = currentMessage.getDisplayMessageBody(); /* String body = currentMessage.getMessageBody().toString(); String address = currentMessage.getOriginatingAddress(); */ Log.i("SmsReceiver", "senderNum: " + senderNum + "; message: " + message); //Save to DB if (senderNum.equals("Cubacel")) { Toast.makeText(context, "senderNum: " + senderNum + ", message: " + message, Toast.LENGTH_LONG).show(); //Parse this SMS with Regular Expresions } else { //Search for transferred numbers pending } } // end for loop } // bundle is null } catch (Exception e) { Log.e("SmsReceiver", "Exception smsReceiver" + e); } } } ``` This is a sample working code with JS, but I have NO idea how to implement in Java <https://regexr.com/3mgq2><issue_comment>username_1: ``` String re1=".*?"; // Non-greedy match on filler String re2="(5\\.0)"; // Float 1 String re3=".*?"; // Non-greedy match on filler String re4="(55391393)"; // Number 1 String re5=".*?"; // Non-greedy 
match on filler String re6="(1565\\.0)"; // Float 2 Pattern p = Pattern.compile(re1+re2+re3+re4+re5+re6,Pattern.CASE_INSENSITIVE | Pattern.DOTALL); ``` Try this :) Upvotes: 1 <issue_comment>username_2: I think I got it, please tell me if there is a better way: ``` public String[] parseTransfer(String cubacelMessage) { String[] data = new String[2]; Pattern pattern = Pattern.compile("Usted ha recibido (\\d+\\.\\d+) CUC del numero (\\d+).*", Pattern.CASE_INSENSITIVE); Matcher matcher = pattern.matcher(cubacelMessage); matcher.find(); data[0] = matcher.group(1); data[1] = matcher.group(2); return data; } ``` Upvotes: 1 [selected_answer]
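For a quick sanity check of the selected answer's pattern, here is an equivalent sketch in Python against the sample message from the question; the third capture group for the remaining balance is an addition not present in the accepted Java answer:

```python
import re

message = ("Usted ha recibido 5.0 CUC del numero 55391393."
           "Saldo principal 1565.0 CUC, linea activa hasta 2019-02-10, "
           "vence 2019-03-12")

# three capture groups: amount received, sender number, remaining balance
pattern = re.compile(
    r"Usted ha recibido (\d+\.\d+) CUC del numero (\d+)\."
    r"Saldo principal (\d+\.\d+) CUC")

match = pattern.search(message)
if match:  # always check for a match before reading groups
    amount, sender, balance = match.groups()
    print(amount, sender, balance)  # 5.0 55391393 1565.0
```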
2018/03/19
589
2,075
<issue_start>username_0: I'm trying to use a basic on click to expand so when the user clicks on the title it shows more details, but for some reason it isn't working. Here is the page in question: <http://eftposnz.wpengine.com/integrated-eftpos/pos-vendors/> I'm not sure if the issue is with the code itself or if I'm not using it correctly. Here is the code I'm using: ``` $(document).ready(function () { $(".content").hide(); $(".show_hide").on("click", function () { var txt = $(".content").is(':visible') ? 'Read More' : 'Read Less'; $(".show_hide").text(txt); $(this).next('.content').slideToggle(50); show_hide.preventDefault(); }); }); ``` Any help would be greatly appreciated and please let me know if you need any further information.<issue_comment>username_1: I'm not familiar with WordPress, but it seems like the jQuery version it's using (or that you chose to use) won't let you use `$` from the get-go. As you can see on the [jQuery version included on your page](http://eftposnz.wpengine.com/wp-includes/js/jquery/jquery.js?ver=1.12.4), [`jQuery.noConflict()`](https://api.jquery.com/jquery.noconflict/) is called at the end, making `$` unavailable. Here's what you can do, as an easy/safe workaround: ``` (function($) { // your code using $ here })(jQuery); ``` Upvotes: 2 <issue_comment>username_2: Your jQuery is known in the window's scope as `jQuery` instead of `$`, which is a result of `jQuery.noConflict()`. Now you could create `$` yourself by writing this above your code: ``` var $ = jQuery; // yuck! ``` **But that would pollute your global scope!!** It would be cleaner to wrap your code into an anonymous function like this: ``` (function ($) { // your code }(jQuery)); ``` Upvotes: 1 <issue_comment>username_3: The site is currently using WordPress, so `$` is not defined in the window context; it is only available via `jQuery`. 
A solution can be: ``` (function($){ $(function(){ /* Here your code */ }); })(jQuery); ``` Upvotes: 2 [selected_answer]
2018/03/19
602
2,430
<issue_start>username_0: Imagine this scenario - My javascript based web application, which allows users to buy an insurance policy, is accessed by users across the globe. In this application, accurate age calculation is of prime importance as the insurance premiums are calculated based on the age. The age should be calculated as follows - ``` Age = Current date (Pacific timezone) - User's Date of birth ``` I understand that I cannot use javascript's local Date() object to calculate user's age as this returns the local system time and in case the user's system's time is incorrect or the user is in a different timezone the age calculation won't be accurate. I would like to know the best way to tackle this problem. Should I create a web service on my server that returns the current Pacific date? Kindly share your inputs. Thanks in advance!<issue_comment>username_1: It seems unlikely that timezone differences would be at all significant here, unless your insurance premiums go up hourly or daily. (And if so, I don't want your insurance. :) But if you can't trust the user's local clock -- and you can't -- you cannot do this in clientside javascript. It must be done serverside. (This is, of course, true for *any* form validation -- validate on the client for the user's convenience, then again on the server to prevent user shenanigans.) Upvotes: 3 [selected_answer]<issue_comment>username_2: You are missing a vital piece of information. The user's birthplace. All the messing around in the world with reducing the users present local time to your local time will only help to eliminate the potential discrepancy in today's date. To get an age accurate to +/- 1 day you must also consider the timezone of their place of birth. I agree with @username_1 and @robG. This is an unworkable level of precision for age. If I was born in the UK in December and living in New Zealand (+13 h) and holidaying in Alaska (- 1d OR + 13h) when would I celebrate my birthday? 
I suggest you would consider yourself to be one year older on the anniversary of your date of birth at the place you are, even if you remain conscious that this is not quite right. My sister and I were born 366 days apart. Yet I was born at 22:20 and she was born at 00:01 (my father checked the hospital clock with the time service). Technically 365d 1h 41min. Yet the difference in our birthdays is always quoted as 'a year and a day'. Upvotes: 0
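Following the accepted answer's advice to validate server-side, computing "today in Pacific time" and an age from it can be sketched in Python; the cutoff rule (subtract a year until the birthday has occurred) is an assumption, not something the thread specifies:

```python
from datetime import date, datetime
from zoneinfo import ZoneInfo

def age_on(today: date, dob: date) -> int:
    # full years elapsed; subtract one if the birthday hasn't occurred yet
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

# "today" as seen in the Pacific timezone, independent of the client clock
pacific_today = datetime.now(ZoneInfo('America/Los_Angeles')).date()
print(age_on(pacific_today, date(1990, 6, 15)))
```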
2018/03/19
805
1,524
<issue_start>username_0: Here is a test dataframe ``` In [32]: frame = pd.DataFrame(np.random.randn(4, 3)*1000000, columns=list('bde'), index=['Utah', 'Ohio', 'Texas', 'Oregon']) In [33]: frame Out[33]: b d e Utah 1.582808e+05 -351731.845560 -5.832029e+04 Ohio -1.653296e+06 -336185.349586 -1.170889e+05 Texas -4.741239e+04 -964691.055175 -9.489544e+05 Oregon -1.103707e+06 523821.598282 -1.245662e+06 ``` I want to change columns b and d so that the elements are integers, but not column e. ``` In [35]: frame[['b','d']].applymap(int) Out[35]: b d Utah 158280 -351731 Ohio -1653296 -336185 Texas -47412 -964691 Oregon -1103707 523821 ``` The frame changed, but the e column disappeared. How do I get the last column back?<issue_comment>username_1: The answer was simply ``` In [36]: frame[['b','d']] = frame[['b','d']].applymap(int) In [37]: frame Out[37]: b d e Utah 158280 -351731 -5.832029e+04 Ohio -1653296 -336185 -1.170889e+05 Texas -47412 -964691 -9.489544e+05 Oregon -1103707 523821 -1.245662e+06 ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: You could use `assign`: ``` frame.assign(b = frame.b.astype(int), d = frame.d.astype(int)) ``` Output: ``` b d e Utah 524658 -965098 2.762532e+04 Ohio -980245 -629015 1.042148e+06 Texas -180861 -60601 -5.128917e+05 Oregon -752839 469190 -5.036541e+05 ``` Upvotes: 2
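A variant of the same idea, for what it's worth: `astype` with a dict converts only the named columns in one call, so nothing has to be assigned back column by column:

```python
import numpy as np
import pandas as pd

frame = pd.DataFrame(np.random.randn(4, 3) * 1e6, columns=list('bde'),
                     index=['Utah', 'Ohio', 'Texas', 'Oregon'])

# cast only columns b and d to integers; e keeps its float dtype
frame = frame.astype({'b': 'int64', 'd': 'int64'})
print(frame.dtypes)
```

Like `int()` in the `applymap` version, the integer cast truncates toward zero.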
2018/03/19
3,999
5,343
<issue_start>username_0: I am generating a pandas dataframe with some data (some are numpy arrays) and saving the data with the pandas.to\_csv function. However, when reading the csv file to a dataframe again with pandas.read\_csv I notice that pandas added line breaks within the numpy array like so (see last output) ``` import pandas as pd import numpy as np # In[34]: # create the dataframe d = {'col1': [1, 2], 'col2': [3, 4]} df=pd.DataFrame(data=d) df.head() ``` Out: ``` col1 col2 0 1 3 1 2 4 ``` ``` # In[35]: # append array data to dataframe data = np.array([]) data = np.zeros(512) df = df.append({'col1' : data }, ignore_index=True) df.head() # In[37]: # write to csv df.to_csv('records.csv') #read csv df= pd.read_csv('records.csv') df.head() # In[40]: array = df['col1'].values print(array) ``` Out[]: ['1' '2' '[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0.]'] Any ideas of how to fix this or why is this happening? ps. thanks for the comments I included an example that replicates the issue and re-phrased the question since it seems that as commented what I want to do is to store a numpy array in a data frame cell.<issue_comment>username_1: This is how we resolved the issue. ``` array_list = np.array([]) for i in array: data_tmp = np.fromstring(i[1:-1],dtype=np.float,sep=' ') array_list = np.concatenate([array_list, data_tmp]) array_list = array_list.reshape((1,-1)) print(array_list) ``` [OUT] [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] Upvotes: 1 <issue_comment>username_2: You can now use `numpy.printoptions` with NumPy >= 1.15 [[link]](https://numpy.org/doc/stable/reference/generated/numpy.printoptions.html) Code: ``` import numpy as np import pandas as pd df = pd.DataFrame([{'name': 'test', 'value': np.random.rand(10)}]) df.to_csv('to_csv_wo_printoption.csv') with np.printoptions(linewidth=10000): df.to_csv('to_csv_w_printoption.csv') ``` Result without the print option ``` ,name,value 0,test,"[0.73211706 0.3526481 0.8835388 0.97391453 0.48252462 0.82451648 0.19150101 0.32714367 0.13065582 0.93367579]" ``` Result with the print option ``` ,name,value 0,test,[0.73211706 0.3526481 0.8835388 0.97391453 0.48252462 0.82451648 0.19150101 0.32714367 0.13065582 0.93367579] ``` Upvotes: 2
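If the stringified cell only ever holds numbers, a dependency-light recovery is to strip the brackets and split on whitespace, which also swallows the embedded newlines; the shortened cell value below is a stand-in for the real 512-element string that `read_csv` hands back:

```python
import numpy as np

# a shortened stand-in for the string read_csv returns for the array cell;
# note the embedded newline that numpy's line wrapping inserted
cell = '[0. 0. 0.\n 0. 0.]'

# split() breaks on any whitespace, including the '\n'
values = np.array(cell.strip('[]').split(), dtype=float)
print(values)  # [0. 0. 0. 0. 0.]
```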
2018/03/19
922
3,735
<issue_start>username_0: I have set the delegate in the viewDidLoad() method and also checked the IB to see if the delegate shows up when using (control + drag) and it does in fact show as being hooked up. I deleted it as well and added a new SearchBar entirely in IB and hooked it back up. Nothing seems to do the trick. Any suggestions? ``` override func viewDidLoad() { self.searchBar.layer.zPosition = 1 self.searchBar.delegate = self } //this is never being called, breakpoint never hits func searchBar(searchBar: UISearchBar, textDidChange searchText: String) { print("searchText \(searchText)") } //this is never being called, breakpoint never hits func searchBarSearchButtonClicked(searchBar: UISearchBar) { print("search button clicked") self.firebaseQuery() } ```<issue_comment>username_1: <https://www.reddit.com/r/swift/comments/85o75h/bug_uisearchbar_delegate_methods_not_being_called/dvzcbb4/> Thanks to reddit user @applishish I got the issue: I did not put an underscore in front of the parameter -\_- Thanks all for your help! Upvotes: 0 <issue_comment>username_2: Is your custom View Controller class set in Interface Builder? I.e. on the Identity Inspector tab, ensure your desired UIViewController subclass is set in the Custom Class section. Also, try setting a breakpoint in viewDidLoad(). Run the app and if the breakpoint doesn't get hit when you expect, that helps narrow down the problem. Upvotes: 1 <issue_comment>username_3: Is it possible you're setting the delegate incorrectly? In Interface Builder, command-click the search bar, drag the delegate to the yellow ViewController button in the storyboard to set the delegate. 
Remove `self.searchBar.delegate = self` from `viewDidLoad()` [![Set ViewController as UISearchBarDelegate Xcode 9.4 beta](https://i.stack.imgur.com/RLo9q.jpg)](https://i.stack.imgur.com/RLo9q.jpg) I'm also not seeing whether your view controller class has properly conformed to `UISearchBarDelegate`. I would add it in an extension to your controller and put all your code relating to it there. ``` class ViewController: UIViewController{...} extension ViewController: UISearchBarDelegate { // implement all searchBar related methods here: func searchBarSearchButtonClicked(_ searchBar: UISearchBar) { //capture the string if let input = searchBar.text { // do something with string. print(input) } } } ``` Upvotes: 0 <issue_comment>username_4: I also had a problem with it not responding when I set it up using IB. I finally set it up using code instead of IB, and it worked. Try it this way: ``` class MySearchTableViewController: UITableViewController, UISearchResultsUpdating, UISearchBarDelegate { let searchController = UISearchController(searchResultsController: nil) override func viewDidLoad() { super.viewDidLoad() self.searchController.searchBar.autocapitalizationType = UITextAutocapitalizationType.none self.navigationItem.searchController = self.searchController //if this is set to true, the search bar hides when you scroll. self.navigationItem.hidesSearchBarWhenScrolling = false //this is so I'm told of changes to what's typed self.searchController.searchResultsUpdater = self self.searchController.searchBar.delegate = self } func searchBarSearchButtonClicked(_ searchBar: UISearchBar) { //hopefully this gets called when you click Search button } func updateSearchResults(for searchController: UISearchController) { //if you want to start searching on each keystroke, you implement this method. I'm going to wait until they click Search. } } ``` Upvotes: 0
2018/03/19
1,454
5,375
<issue_start>username_0: I am using wiremock to mock github api to do some testing of my service. The service calls github api. For the tests I am setting endpoint property to ``` github.api.endpoint=http://localhost:8087 ``` This host and port are the same as wiremock server `@AutoConfigureWireMock(port = 8087)` so I can test different scenarios like : malformed response, timeouts etc. How can I make this port dynamic to avoid case when it is already used by system ? Is there a way to get wiremock port in tests and reassign endpoint property ? ``` @RunWith(SpringRunner.class) @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT) @AutoConfigureWireMock(port = 8087) @TestPropertySource(properties ={"github.api.endpoint=http://localhost:8087"}) public class GithubRepositoryServiceTestWithWireMockServer { @Value("${github.api.client.timeout.milis}") private int githubClientTimeout; @Autowired private GithubRepositoryService service; @Test public void getRepositoryDetails() { GithubRepositoryDetails expected = new GithubRepositoryDetails("niemar/xf-test", null, "https://github.com/niemar/xf-test.git", 1, "2016-06-12T18:46:24Z"); stubFor(get(urlEqualTo("/repos/niemar/xf-test")) .willReturn(aResponse().withHeader("Content-Type", "application/json").withBodyFile("/okResponse.json"))); GithubRepositoryDetails repositoryDetails = service.getRepositoryDetails("niemar", "xf-test"); Assert.assertEquals(expected, repositoryDetails); } @Test public void testTimeout() { GithubRepositoryDetails expected = new GithubRepositoryDetails("niemar/xf-test", null, "https://github.com/niemar/xf-test.git", 1, "2016-06-12T18:46:24Z"); stubFor(get(urlEqualTo("/repos/niemar/xf-test")) .willReturn(aResponse() .withHeader("Content-Type", "application/json") .withBodyFile("/okResponse.json") .withFixedDelay(githubClientTimeout * 3))); boolean wasExceptionThrown = false; try { GithubRepositoryDetails repositoryDetails = service.getRepositoryDetails("niemar", "xf-test"); } 
catch (GithubRepositoryNotFound e) { wasExceptionThrown = true; } Assert.assertTrue(wasExceptionThrown); } ```<issue_comment>username_1: I am not aware of `@AutoConfigureWireMock` but if you are manually starting WireMock and setting up mocks, while starting Spring you can set up a random port number utilizing Spring random. A sample will look like this in your wiremock class ``` @Component public class wiremock { @Value("${randomportnumber}") private int wiremockPort; public void startWiremockServer() { WireMock.configureFor("localhost", wiremockPort); wireMockServer = new com.github.tomakehurst.wiremock.WireMockServer(wireMockConfig().port(wiremockPort).extensions (MockedResponseHandler.class)); wireMockServer.start(); } } ``` In your test class ``` //however you want to configure spring public class wiremock { @Value("${github.api.endpoint}") private String wiremockHostUrl; //use the above url to get stubbed responses. } ``` in your application.properties file ``` randomportnumber=${random.int[1,9999]} github.api.endpoint=http://localhost:${randomportnumber} ``` Upvotes: 2 <issue_comment>username_2: I know this is a bit of an old post but there is still a documented way to get these ports dynamically. Read more here: [Getting started](http://wiremock.org/docs/getting-started/ "Getting started"). Just scroll down a bit to 'Random port numbers'. 
From the documentation there: What you need to do is to define a Rule like so ``` @Rule public WireMockRule wireMockRule = new WireMockRule(wireMockConfig().dynamicPort().dynamicHttpsPort()); ``` And then access them via ``` int port = wireMockRule.port(); int httpsPort = wireMockRule.httpsPort(); ``` Upvotes: 4 <issue_comment>username_3: One more way to use a dynamic port without conflicts is ``` import org.springframework.util.SocketUtils; int WIREMOCK_PORT = SocketUtils.findAvailableTcpPort(); public WireMockRule wireMockServer = new WireMockRule(WIREMOCK_PORT); ``` If you want to access it from the properties file, there is `wiremock.server.port` provided by WireMock ``` "github.api.endpoint=http://localhost:${wiremock.server.port}" ``` Upvotes: 3 <issue_comment>username_4: You have to set the WireMock port to 0 so that it chooses a random port and then use a reference to this port (`wiremock.server.port`) as part of the endpoint property. ``` @RunWith(SpringRunner.class) @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT) @AutoConfigureWireMock(port = 0) @TestPropertySource(properties = { "github.api.endpoint=http://localhost:${wiremock.server.port}" }) public class GithubRepositoryServiceTestWithWireMockServer { .... } ``` See also [Spring Cloud Contract WireMock](https://cloud.spring.io/spring-cloud-contract/reference/html/project-features.html#features-wiremock). Upvotes: 5 <issue_comment>username_5: If you are using .NET/C#, you can just start the WireMock server with empty arguments like so: ```cs var myMockServer = WireMockServer.Start(); ``` and then get the port number if you need it like this: ```cs int portNumber = myMockServer.Port(); ``` Upvotes: 0
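All of these answers ultimately rely on the same OS facility: binding to port 0 makes the kernel hand out a free ephemeral port, which is what `dynamicPort()` and `findAvailableTcpPort()` do under the hood. The idea, sketched in Python purely for illustration:

```python
import socket

# binding to port 0 asks the OS to pick any free ephemeral port;
# reading the socket name back reveals which one it chose
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(('127.0.0.1', 0))
    port = s.getsockname()[1]

print(port)  # an OS-chosen free port; varies per run
```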
2018/03/19
1,410
4,963
<issue_start>username_0: I am storing datetime in a stringified JSON object in Redis cache like so: ``` "{\"email\":\"<EMAIL>\", \"expiry\": \"2018-03-19T23:00:03.0658822+00:00\"}" ``` In C#, when I query this data from Redis and convert it to string, the timezone information gets automatically stripped. ``` RedisValue cookie = GetRedisDatabase().StringGet("sessionhash"); JObject cookieValue = JObject.Parse(cookie.ToString()); var email = JObject.Parse(cookie.ToString())["email"]; var expiry = JObject.Parse(cookie.ToString())["expiry"].ToString(); ``` The "expiry" string above only contains "2018/03/19 23:00:03". It seems like C# is automatically detecting the string to be of datetime format, and is stripping off timezone information from it. How can I ensure the "expiry" string is "2018-03-19T23:00:03.0658822+00:00"?<issue_comment>username_1: Your final ToString asked for the time without TZ info. Do this ``` RedisValue cookie = GetRedisDatabase().StringGet("sessionhash"); JObject cookieValue = JObject.Parse(cookie.ToString()); var email = JObject.Parse(cookie.ToString())["email"]; var expiry = JObject.Parse(cookie.ToString())["expiry"].ToString("O"); ``` Upvotes: 0 <issue_comment>username_2: DateTime does not know about timezones. Instead it has a DateTimeKind property which tells you if the time is machine local, UTC, or unknown. The ToLocalTime method will convert a known UTC or unknown time to local time, and do nothing if it is already local. You'll need to use something else that keeps the timezone information; I believe DateTimeOffset can track a time with a variable offset, but not the timezone. NodaTime is a library which understands timezones. 
Upvotes: 1 <issue_comment>username_3: ``` internal class Program { private static void Main() { string expiry = "2018-03-19T23:00:03.0658822+00:00"; DateTime parsedExpiry = DateTime.Parse(expiry); Console.WriteLine(parsedExpiry.ToString()); Console.ReadKey(); } } ``` This code converts 19/3/2018 23:00 into 20/3/2018 7:00. It does this because, as per the above answers, `DateTime` doesn't hold on to any TimeZone information. The only information you have is `DateTime.Kind`, which in the case of my code, outputs `Local`. I can use `parsedExpiry.ToUniversalTime()` to get UTC. You could do some extra parsing on the string representation and use the [TimeZoneInfo](https://msdn.microsoft.com/en-us/library/system.timezoneinfo(v=vs.110).aspx "TimeZoneInfo") class to maintain the timezone, but you'll likely need an extra column / storage space to store that info. You *can* use the `Convert` option, but then you'll be storing DateTimes in all different timezones; you'd be better off using `ToUniversalTime` and storing it all in UTC (**best practice**), then converting it to Local time for presentation to the user (or leave it UTC, depending on the application). Upvotes: 1 <issue_comment>username_4: I have a few general rules regarding handling DateTimes: 1. Always store, retrieve and transmit the value in UTC. Windows is pretty good at translating any UTC value to whatever the current user picked as his favourite timezone. You do not want to deal with Timezones [if you can avoid it at all](https://www.youtube.com/watch?v=-5wpm-gesOY). 2. Never store, retrieve and transmit the value as string. 3. In case rule 2 cannot work, at least pick a fixed culture and string encoding at all endpoints. You do not want to add those to your worries. 4. In rare cases (Calendar Apps) it might be beneficial to store the "originally saved timezone". Upvotes: 0 <issue_comment>username_5: Unfortunately you cannot determine the time zone from an ISO date/time string. 
You can only determine the offset. The time zone names and codes [are not unique](https://en.wikipedia.org/wiki/List_of_time_zone_abbreviations)-- for example, "Arabia Standard Time" has an offset of UTC+03, but has the code "AST," which collides with "Atlantic Standard Time" (offset UTC-04). So while you can map in one direction, you can't reliably map in the other. That being said, getting the ***offset*** isn't so bad if you use a `DateTimeOffset` instead of `DateTime`. If the field isn't a DateTimeOffset in your object model, you can create a temporary anonymous type as a template and get it that way. Example: ``` public static void Main() { var input = "{\"email\":\"<EMAIL>\", \"expiry\": \"2018-03-19T23:00:03.0658822+01:00\"}"; var template = new { email = "", expiry = DateTimeOffset.Now }; var result = JsonConvert.DeserializeAnonymousType(input, template); Console.WriteLine("DateTime (GMT): {0:R}\r\nOffset from GMT: {1}", result.expiry, result.expiry.Offset); } ``` Output: ``` DateTime (GMT): Mon, 19 Mar 2018 22:00:03 GMT Offset from GMT: 01:00:00 ``` [Code on DotNetFiddle](https://dotnetfiddle.net/rplHLK) Upvotes: 0
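For comparison, the same round trip in Python: `datetime.fromisoformat` keeps the offset through parse and re-serialize, which is the property the question is after. The fractional seconds are trimmed to microseconds here, since seven digits of ticks is a .NET convention the Python parser does not accept on older versions:

```python
from datetime import datetime

raw = '2018-03-19T23:00:03.065882+00:00'

dt = datetime.fromisoformat(raw)   # offset-aware datetime
print(dt.isoformat())              # 2018-03-19T23:00:03.065882+00:00
print(dt.utcoffset())              # 0:00:00
```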
2018/03/19
1,385
5,018
<issue_start>username_0: The split method for strings only takes a parameter for the delimiter, but how do I easily split by every (or every other, etc) letter? This method works to split by every letter, but seems clumsy for a simple task. ``` a=' '.join(string.ascii_lowercase).split() ``` I suppose a function could do this: ``` def split_index(string,index=1): split=[] while string: try: sect = string[:index] string = string[index:] split.append(sect) except IndexError: split.append(string) return split print(split_index('testing')) # ['t', 'e', 's', 't', 'i', 'n', 'g'] print(split_index('testing',2)) # ['te', 'st', 'in', 'g'] ``` I am surprised if no one has wished for this before, or if there is not a simpler built in method. But I have been wrong before. If such a thing is not worth much, or I have missed a detail, the question can be deleted/removed.
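For reference, the usual idiomatic answer to this splitting question is a slice-based list comprehension — a short sketch (the function name `split_every` is mine):

```python
def split_every(s, n=1):
    # step through the string in fixed-size slices; the last chunk may be shorter
    return [s[i:i + n] for i in range(0, len(s), n)]

chunks = split_every("testing", 2)
```

For `n=1`, plain `list("testing")` does the same job; and for whitespace-free strings, `textwrap.wrap("testing", 2)` from the standard library also gives `['te', 'st', 'in', 'g']`, though it reflows whitespace and so isn't a general substitute.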
2018/03/19
1,404
4,966
<issue_start>username_0: I have a list of strings and stringOutput as example: ``` readonly List carMake = new List { "Toyota", "Honda", "Audi", "Tesla" }; string myFunction() { // do some processing... string stringOutput = CallGetLatestRecord(); // the above returns "a:toyota:c" //Call another function after changing the string to "a:Toyota:c" //I am planning to use \*\*stringOutput.Replace\*\* //but don't know how to get toyota or list items values dynamically callFoo(stringOutput); } ``` So this is what I want. If my: * stringOutput contains **"a:toyota:c"**, I would like to update it to **"a:Toyota:c"** using carMake. * stringOutput contains **"a:audi:c"**, I would like to update it to **"a:Audi:c"** using carMake. How do I convert this using Linq ? Also, note that at runtime I dont know if it is toyota or any string... so I want a generic solution using Linq
2018/03/19
887
3,289
<issue_start>username_0: I'm working on the following practice problem from [GeeksForGeeks](https://www.geeksforgeeks.org/add-two-numbers-without-using-arithmetic-operators/): > > Write a function Add() that returns sum of two integers. The function should not use any of the arithmetic operators (+, ++, –, -, .. etc). > > > The given solution in C# is: ``` public static int Add(int x, int y) { // Iterate till there is no carry while (y != 0) { // carry now contains common set bits of x and y int carry = x & y; // Sum of bits of x and y where at least one of the bits is not set x = x ^ y; // Carry is shifted by one so that adding it to x gives the required sum y = carry << 1; } return x; } ``` Looking at this solution, I understand **how** it is happening; I can follow along with the debugger and anticipate the value changes before they come. But after walking through it several times, I still don't understand **WHY** it is happening. If this was to come up in an interview, I would have to rely on memory to solve it, not actual understanding of how the algorithm works. Could someone help explain why we use certain operators at certain points and what those totals are suppose to represent? I know there are already comments in the code, but I'm obviously missing something...<issue_comment>username_1: At each iteration, you have these steps: ``` carry <- x & y // mark every location where the addition has a carry x <- x ^ y // sum without carries y <- carry << 1 // shift the carry left one column ``` On the next iteration, `x` holds the entire sum *except* for the carry bits, which are in y. These carries are properly bumped one column to the left, just as if you were doing the addition on paper. Continue doing this until there are no more carry bits to worry about. Very briefly, this does the addition much as you or I would do it on paper, *except* that, instead of working right to left, it does all the bits in parallel. 
Upvotes: 4 [selected_answer]<issue_comment>username_2: What you have here is a case of binary math on the representation in memory: <https://www.wikihow.com/Add-Binary-Numbers> Generally when programming in C#, you do not bother with the "how is it represented in memory" level of things. 55% of the time it is not worth the effort, 40% it is worse than just using the builtin functions. And the remaining 5% you should ask yourself why you are not programming in native C++, assembler or something with similarly low-level capabilities to begin with. Upvotes: 0 <issue_comment>username_3: Decimal arithmetic is more complicated than binary arithmetic, but perhaps it helps to compare them. The algorithm that is usually taught for addition is to go through the digits one by one, remembering to "carry a one" when necessary. In the above algorithm, that is not exactly what happens - rather, all digits are added and allowed to wrap, and *all* the carries are collected to be applied all at once in the next step. In decimal that would look like this: ``` 123456 777777 ------ + 890123 001111 << 1 011110 ------ + 801233 010000 << 1 100000 ------ + 901233 000000 done ``` In binary arithmetic, addition without carry is just XOR. Upvotes: 2
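One porting note: the same carry loop in Python needs an extra wrinkle, because Python integers are unbounded — with a negative operand the carry never shifts out of the word, so the loop never terminates. Masking to a fixed word size (32 bits here, my choice, not part of the original answer) reproduces the C# behaviour. A hedged sketch:

```python
def add(x, y, bits=32):
    mask = (1 << bits) - 1              # confine values to a fixed word size
    x, y = x & mask, y & mask
    while y:
        carry = (x & y) << 1            # common set bits, moved one column left
        x = (x ^ y) & mask              # sum of bits where no carry occurs
        y = carry & mask                # carries become the new addend
    # reinterpret the unsigned result as two's-complement signed
    return x if x < (1 << (bits - 1)) else x - (1 << bits)
```

Without the mask, `add(-3, 5)` would loop forever; with it, the carries eventually shift off the top of the word exactly as they do in a fixed-width C# `int`.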
2018/03/19
5,229
10,132
<issue_start>username_0: I am trying to fit empirical CDF plot to two Gaussian cdf as it seems that it has two peaks, but it does not work. I fit the curve with **leastsq** from **scipy.optimize** and **erf** function from **scipy.special**. The fitting only gives constant line at a value of 2. I am not sure in which part of the code that I make mistake. Any pointers will be helpful. Thanks! ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt x = np.array([ 90.64115156, 90.85690063, 91.07264971, 91.28839878, 91.50414786, 91.71989693, 91.93564601, 92.15139508, 92.36714415, 92.58289323, 92.7986423 , 93.01439138, 93.23014045, 93.44588953, 93.6616386 , 93.87738768, 94.09313675, 94.30888582, 94.5246349 , 94.74038397, 94.95613305, 95.17188212, 95.3876312 , 95.60338027, 95.81912935, 96.03487842, 96.2506275 , 96.46637657, 96.68212564, 96.89787472, 97.11362379, 97.32937287, 97.54512194, 97.76087102, 97.97662009, 98.19236917, 98.40811824, 98.62386731, 98.83961639, 99.05536546, 99.27111454, 99.48686361, 99.70261269, 99.91836176, 100.13411084, 100.34985991, 100.56560899, 100.78135806, 100.99710713, 101.21285621]) y = np.array([3.33333333e-04, 3.33333333e-04, 3.33333333e-04, 1.00000000e-03, 1.33333333e-03, 3.33333333e-03, 6.66666667e-03, 1.30000000e-02, 2.36666667e-02, 3.40000000e-02, 5.13333333e-02, 7.36666667e-02, 1.01666667e-01, 1.38666667e-01, 2.14000000e-01, 3.31000000e-01, 4.49666667e-01, 5.50000000e-01, 6.09000000e-01, 6.36000000e-01, 6.47000000e-01, 6.54666667e-01, 6.61000000e-01, 6.67000000e-01, 6.76333333e-01, 6.84000000e-01, 6.95666667e-01, 7.10000000e-01, 7.27666667e-01, 7.50666667e-01, 7.75333333e-01, 7.93333333e-01, 8.11333333e-01, 8.31333333e-01, 8.56333333e-01, 8.81333333e-01, 9.00666667e-01, 9.22666667e-01, 9.37666667e-01, 9.47333333e-01, 9.59000000e-01, 9.70333333e-01, 9.77333333e-01, 9.83333333e-01, 9.90333333e-01, 9.93666667e-01, 9.96333333e-01, 9.99000000e-01, 9.99666667e-01, 1.00000000e+00]) plt.plot(a,b,'r.') # Fitting with 2 
Gaussian from scipy.special import erf from scipy.optimize import leastsq def two_gaussian_cdf(params, x): (mu1, sigma1, mu2, sigma2) = params model = 0.5*(1 + erf( (x-mu1)/(sigma1*np.sqrt(2)) )) +\ 0.5*(1 + erf( (x-mu2)/(sigma2*np.sqrt(2)) )) return model def residual_two_gaussian_cdf(params, x, y): model = two_gaussian_cdf(params, x) return model - y params = [5.,2.,1.,2.] out = leastsq(residual_two_gaussian_cdf,params,args=(x,y)) two_gaussian_cdf(out[0],x) plt.plot(x,two_gaussian_cdf(out[0],x)) ``` which returns this plot [![enter image description here](https://i.stack.imgur.com/sPha2.png)](https://i.stack.imgur.com/sPha2.png)<issue_comment>username_1: You may find `lmfit` (see <http://lmfit.github.io/lmfit-py/>) to be a useful alternative to `leastsq` here as it provides a higher-level interface to optimization and curve fitting (though still based on `scipy.optimize.leastsq`). With lmfit, your example might look like this (cutting out the definition of `x` and `y` data): ``` #!/usr/bin/env python import numpy as np from scipy.special import erf import matplotlib.pyplot as plt from lmfit import Model # define the basic model. I included an amplitude parameter def gaussian_cdf(x, amp, mu, sigma): return (amp/2.0)*(1 + erf( (x-mu)/(sigma*np.sqrt(2)))) # create a model that is the sum of two gaussian_cdfs # note that a prefix names each component and will be # applied to the parameter names for each model component model = Model(gaussian_cdf, prefix='g1_') + Model(gaussian_cdf, prefix='g2_') # make a parameters object -- a dict with parameter names # taken from the arguments of your model function and prefix params = model.make_params(g1_amp=0.50, g1_mu=94, g1_sigma=1, g2_amp=0.50, g2_mu=98, g2_sigma=1.) # you can apply bounds to any parameter #params['g1_sigma'].min = 0 # sigma must be > 0!
# you may want to fix the amplitudes to 0.5: #params['g1_amp'].vary = False #params['g2_amp'].vary = False # run the fit result = model.fit(y, params, x=x) # print results print(result.fit_report()) # plot results, including individual components comps = result.eval_components(result.params, x=x) plt.plot(x, y,'r.', label='data') plt.plot(x, result.best_fit, 'k-', label='fit') plt.plot(x, comps['g1_'], 'b--', label='g1_') plt.plot(x, comps['g2_'], 'g--', label='g2_') plt.legend() plt.show() ``` This prints out a report of ``` [[Model]] (Model(gaussian_cdf, prefix='g1_') + Model(gaussian_cdf, prefix='g2_')) [[Fit Statistics]] # fitting method = leastsq # function evals = 66 # data points = 50 # variables = 6 chi-square = 0.00626332 reduced chi-square = 1.4235e-04 Akaike info crit = -437.253376 Bayesian info crit = -425.781238 [[Variables]] g1_amp: 0.65818908 +/- 0.00851338 (1.29%) (init = 0.5) g1_mu: 93.8438526 +/- 0.01623273 (0.02%) (init = 94) g1_sigma: 0.54362156 +/- 0.02021614 (3.72%) (init = 1) g2_amp: 0.34058664 +/- 0.01153346 (3.39%) (init = 0.5) g2_mu: 97.7056728 +/- 0.06408910 (0.07%) (init = 98) g2_sigma: 1.24891832 +/- 0.09204020 (7.37%) (init = 1) [[Correlations]] (unreported correlations are < 0.100) C(g1_amp, g2_amp) = -0.892 C(g2_amp, g2_sigma) = 0.848 C(g1_amp, g2_sigma) = -0.744 C(g1_amp, g1_mu) = 0.692 C(g1_amp, g2_mu) = 0.662 C(g1_mu, g2_amp) = -0.607 C(g1_amp, g1_sigma) = 0.571 ``` and a plot like this: [![enter image description here](https://i.stack.imgur.com/5AX7x.png)](https://i.stack.imgur.com/5AX7x.png) This fit is not perfect, but it should get you started. Upvotes: 2 [selected_answer]<issue_comment>username_2: Here is how I used the scipy.optimize.differential\_evolution module to generate initial parameter estimates for curve fitting. I have coded the sum of squared errors as the target for the genetic algorithm as shown below. 
This scipy module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, which requires parameter bounds within which to search. In this case, the parameter bounds are automatically derived from the data so that there is no need to provide them manually in the code. ``` import numpy as np import matplotlib import matplotlib.pyplot as plt from scipy.optimize import curve_fit import warnings from scipy.optimize import differential_evolution from scipy.special import erf # bounds on parameters are set in generate_Initial_Parameters() below def two_gaussian_cdf(x, mu1, sigma1, mu2, sigma2): model = 0.5*(1 + erf( (x-mu1)/(sigma1*np.sqrt(2)) )) +\ 0.5*(1 + erf( (x-mu2)/(sigma2*np.sqrt(2)) )) return model # function for genetic algorithm to minimize (sum of squared error) # bounds on parameters are set in generate_Initial_Parameters() below def sumOfSquaredError(parameterTuple): warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm return np.sum((yData - two_gaussian_cdf(xData, *parameterTuple)) ** 2) def generate_Initial_Parameters(): # data min and max used for bounds maxX = max(xData) minX = min(xData) maxY = max(yData) minY = min(yData) parameterBounds = [] parameterBounds.append([minX, maxX]) # parameter bounds for mu1 parameterBounds.append([minY, maxY]) # parameter bounds for sigma1 parameterBounds.append([minX, maxX]) # parameter bounds for mu2 parameterBounds.append([minY, maxY]) # parameter bounds for sigma2 # "seed" the numpy random number generator for repeatable results result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3) return result.x xData = np.array([ 90.64115156, 90.85690063, 91.07264971, 91.28839878, 91.50414786, 91.71989693, 91.93564601, 92.15139508, 92.36714415, 92.58289323, 92.7986423 , 93.01439138, 93.23014045, 93.44588953, 93.6616386 , 93.87738768, 94.09313675, 94.30888582, 94.5246349 , 94.74038397, 94.95613305, 95.17188212, 95.3876312 , 95.60338027, 95.81912935, 
96.03487842, 96.2506275 , 96.46637657, 96.68212564, 96.89787472, 97.11362379, 97.32937287, 97.54512194, 97.76087102, 97.97662009, 98.19236917, 98.40811824, 98.62386731, 98.83961639, 99.05536546, 99.27111454, 99.48686361, 99.70261269, 99.91836176, 100.13411084, 100.34985991, 100.56560899, 100.78135806, 100.99710713, 101.21285621]) yData = np.array([3.33333333e-04, 3.33333333e-04, 3.33333333e-04, 1.00000000e-03, 1.33333333e-03, 3.33333333e-03, 6.66666667e-03, 1.30000000e-02, 2.36666667e-02, 3.40000000e-02, 5.13333333e-02, 7.36666667e-02, 1.01666667e-01, 1.38666667e-01, 2.14000000e-01, 3.31000000e-01, 4.49666667e-01, 5.50000000e-01, 6.09000000e-01, 6.36000000e-01, 6.47000000e-01, 6.54666667e-01, 6.61000000e-01, 6.67000000e-01, 6.76333333e-01, 6.84000000e-01, 6.95666667e-01, 7.10000000e-01, 7.27666667e-01, 7.50666667e-01, 7.75333333e-01, 7.93333333e-01, 8.11333333e-01, 8.31333333e-01, 8.56333333e-01, 8.81333333e-01, 9.00666667e-01, 9.22666667e-01, 9.37666667e-01, 9.47333333e-01, 9.59000000e-01, 9.70333333e-01, 9.77333333e-01, 9.83333333e-01, 9.90333333e-01, 9.93666667e-01, 9.96333333e-01, 9.99000000e-01, 9.99666667e-01, 1.00000000e+00]) # generate initial parameter values initialParameters = generate_Initial_Parameters() # curve fit the data fittedParameters, niepewnosci = curve_fit(two_gaussian_cdf, xData, yData, initialParameters) # create values for display of fitted peak function mu1, sigma1, mu2, sigma2 = fittedParameters y_fit = two_gaussian_cdf(xData, mu1, sigma1, mu2, sigma2) plt.plot(xData, yData) # plot the raw data plt.plot(xData, y_fit) # plot the equation using the fitted parameters plt.show() print(fittedParameters) ``` Upvotes: 0
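A note on why the original model flat-lines at 2: each `0.5*(1 + erf(...))` term is itself a complete CDF rising from 0 to 1, so their sum tops out at 2 instead of 1. A mixture CDF needs weights that sum to 1 — the amplitudes in the accepted lmfit answer play exactly that role. The same idea works with plain `scipy.optimize.curve_fit`; a minimal sketch on synthetic data (all parameter values here are made up for the demo):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def mix_cdf(x, w, mu1, s1, mu2, s2):
    # weights w and (1 - w) sum to 1, so the model rises from 0 to 1
    return w * norm.cdf(x, mu1, s1) + (1 - w) * norm.cdf(x, mu2, s2)

true_params = [0.66, 93.8, 0.5, 97.7, 1.2]   # invented "true" values
x = np.linspace(90, 102, 200)
y = mix_cdf(x, *true_params)

# rough initial guess; bounds keep the weight inside [0, 1]
popt, _ = curve_fit(mix_cdf, x, y, p0=[0.5, 94.0, 1.0, 98.0, 1.0],
                    bounds=([0, 90, 0.01, 90, 0.01], [1, 102, 5, 102, 5]))
```

On the question's real data you would pass its `x` and `y` arrays instead of the synthetic ones.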
2018/03/19
543
1,593
<issue_start>username_0: I am trying to make my `h1` be inline within my box like [this](https://i.stack.imgur.com/04Xa8.png). Currently my H1 text is stacked on top of each other [and looks like this](https://i.stack.imgur.com/LWBtx.png). I want this to be inline rather than stacked on top of one another. I have tried adding `display: inline-block;` and `display: inline;` to my H1, but neither worked. What do I need to add or remove from my H1 or box div to achieve my H1 being inline? HTML ``` <div class="box"> <h1>Centered Text</h1> </div> ``` CSS ``` *{ margin: 0; padding: 0; } body{ background-color: teal; font-family: sans-serif; } .box{ position: absolute; left: 50%; top: 50%; transform: translateX(-50%) translateY(-50%); text-align: center; width: 500px; height: 100px; background-color: red; } h1{ position: absolute; left: 50%; top: 50%; transform: translateX(-50%) translateY(-50%); text-align: center; font-size: 50px; font-weight: lighter; color: white; } ``` [Jfiddle](https://jsfiddle.net/rp8t50cn/1/)<issue_comment>username_1: Try using `position: relative` in h1. [You can check the result](https://jsfiddle.net/svh6jay2/) Upvotes: -1 <issue_comment>username_2: Just add `white-space: nowrap;` to `h1`, see [this fiddle](https://jsfiddle.net/rp8t50cn/4/). Upvotes: 0 <issue_comment>username_3: Your `h1` has `position: absolute`, but no `width` setting. Just add `width: 100%;` to it to make it the width of its container so the text fits into it in one line. <https://jsfiddle.net/80r16xgs/2/> Upvotes: 3 [selected_answer]
2018/03/19
1,197
4,264
<issue_start>username_0: I have large datasets from 2 sources, one is a huge csv file and the other coming from a database query. I am writing a validation script to compare the data from both sources and log/print the differences. One thing I think is worth mentioning is that the data from the two sources is not in the exact same format or the order. For example: Source 1 (CSV files): ``` <EMAIL>,key1,1 <EMAIL>,key1,3 <EMAIL>,key2,1 <EMAIL>,key3,5 <EMAIL>,key3,2 <EMAIL>,key3,2 <EMAIL>,key2,3 <EMAIL>,key3,1 ``` Source 2 (Database): ``` email key1 key2 key3 <EMAIL> 1 1 5 <EMAIL> 3 2 <EMAIL> 1 1 5 ``` The output of the script I want is something like: ``` source1 - source2 (or csv - db): 2 rows total with differences <EMAIL> 3 2 2 <EMAIL> 3 1 source2 - source1 (or db-csv): 2 rows total with differences <EMAIL> 3 2 <EMAIL> 1 1 5 ``` The output format could be a little different to show more differences, more clearly (from thousands/millions of records). I started writing the script to save the data from both sources into two dictionaries, and loop through the dictionaries or create sets from the dictionaries, but it seems like a very inefficient process. I considered using pandas, but pandas doesn't seem to have a way to do this type of comparison of dataframes. Please tell me if there's a better/more efficient way. Thanks in advance!<issue_comment>username_1: You can use `pivot` to reshape the df, then use `drop_duplicates` after `concat`: ``` df2 = df2.applymap(lambda x: pd.to_numeric(x, errors='ignore')) pd.concat([df.pivot(*df.columns).reset_index(), df2], keys=['db','csv']).\ drop_duplicates(keep=False).\ reset_index(level=0).\ rename(columns={'level_0':'source'}) Out[261]: key source email key1 key2 key3 1 db <EMAIL> 3 2 2 1 csv <EMAIL> 3 2 ``` Notice that here I am using `to_numeric` to coerce your df2 to numeric values. Upvotes: 0 <issue_comment>username_2: You were on the right path. What you want is to quickly match the 2 tables. Pandas is probably overkill. 
You probably want to iterate through your first table and create a dictionary. What you **don't** want to do is scan one whole list for every element of the other; even small lists will demand a huge number of searches. The [csv](https://docs.python.org/2/library/csv.html) module is a good one to read your data from disk. For each row, you will put it in a dictionary where the key is the email and the value is the complete row. On a common desktop computer you can iterate over 10 million rows in a second. Now you will iterate through the second table, and for each row you'll use the email to get the data from the dictionary. This way, since a dict is a data structure where you can look up a key in O(1), you'll iterate through N + M rows in total. In a couple of seconds you should be able to compare both tables. It is really simple. Here is some sample code: ``` import csv firstTable = {} with open('firstTable.csv', 'r') as csvfile: reader = csv.reader(csvfile, delimiter=',') for row in reader: firstTable[row[0]] = row # email is in row[0] for row2 in get_db_table2(): email = row2[0] row1 = firstTable[email] # this is a hash lookup; access is very quick my_complex_comparison_func(row1, row2) ``` If you don't have enough RAM to fit all the keys of the first dictionary in memory, you can use the [shelve module](https://docs.python.org/3/library/shelve.html) for the firstTable variable. That'll create an index on disk with very quick access. Since one of your tables is already in a database, maybe what I'd do first is use your database to load the data into a temporary table. Create an index, and do an inner join on the tables (or an outer join if you need to know which rows don't have data in the other table). Databases are optimized for this kind of operation. You can then run a select from Python to get the joined rows and use Python for your complex comparison logic. Upvotes: 1
2018/03/19
755
2,992
<issue_start>username_0: I'm using Anaconda 5.1 and Python 3.6 on a Windows 10 machine. I'm having quite a few problems; I tried to add some useful tools such as lightGBM, tensorflow, keras, bokeh,... to my conda environment but once I've used `conda install -c conda-forge packagename` on all of these, I end up having downgrading and upgrading of different packages that just mess with my installation and I can't use anything anymore after those installations. I wonder if it is possible to have multiple versions of packages & dependencies living alongside each other which won't kill my install? Sorry if my question seems noobish and thanks for your help, Nate<issue_comment>username_1: You could try disabling transitive dependency updates by passing `--no-update-dependencies` or `--no-update-deps` to the `conda install` command. Ex: `conda install --no-update-deps pandas`. Upvotes: 3 <issue_comment>username_2: Alright, by searching around I was able to get everything up and running and it doesn't seem to be in conflict anymore, though I had to uninstall Anaconda, reboot my computer and then reinstall it after my installation broke. As long as packages and dependencies weren't messing around with each other, I was able to install lightgbm, folium and [catboost](https://stackoverflow.com/questions/45165532/how-to-install-yandex-catboost-on-anaconda-x64) in the regular (base) environment and use them. Those were installed straight with `conda install -c conda-forge packagename`, except for catboost which I linked. Do not forget to check for the different versions of conda, python and pip (wheel) which might affect your system. Also, `conda install nb_conda` was installed to be able to select different environments in the Jupyter notebook straight away. I got this from [this helpful post and a mix of the answers below](https://stackoverflow.com/questions/37085665/in-which-conda-environment-is-jupyter-executing). 
Then, when I wanted to install TensorFlow, Keras and Theano, what worked for me were the instructions in the second top comment [in this thread](https://stackoverflow.com/questions/34097988/how-do-i-install-keras-and-theano-in-anaconda-python-on-windows), though you should not forget to install Jupyter again in the new environment you activated. After that, close everything, re-launch everything, and in the top right corner of Jupyter you should be able to pick the different environments and work from there. I hope this will help someone else in the same predicament. Upvotes: 3 [selected_answer]<issue_comment>username_3: I was trying to install the pyrobuf library and it showed a lot of conflicts. What worked for me is ``` conda update --prefix /Users//opt/anaconda3 anaconda ``` Upvotes: -1 <issue_comment>username_4: You can try using different conda environments. For example: `conda create -n myenv` Then you can activate your environment with: `conda activate myenv` and deactivate with: `conda deactivate` Upvotes: -1
2018/03/19
354
997
<issue_start>username_0: Help! I don't have any knowledge of regex, but I need to use it in my SQL query (Amazon Redshift). I have a list like this: [1245,2324,4433] and I would like to get the first number (1245). How can I do that? Regards<issue_comment>username_1: ``` select (regexp_matches('[1245,2324,4433]', '\d+'))[1] ``` Explanation: with `regexp_matches` you select the first number from the string (`\d+`), then you select the first (and only) element from the returned set. If no number is found at all, 0 rows are returned. Note that `regexp_matches` is PostgreSQL; for Redshift it would be ``` regexp_substr('[1245,2324,4433]','\\d+') ``` References: * <https://www.postgresql.org/docs/current/static/functions-string.html> * <https://docs.aws.amazon.com/redshift/latest/dg/REGEXP_SUBSTR.html> * <https://docs.aws.amazon.com/redshift/latest/dg/pattern-matching-conditions-posix.html> Upvotes: 1 <issue_comment>username_2: You can try this: ``` select replace(split_part('[1245,2324,4433]',',',1),'[','') ``` Upvotes: 0
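If you want to sanity-check the pattern outside the database first, the same "first run of digits" idea works in Python's `re` module for a simple pattern like this one:

```python
import re

s = "[1245,2324,4433]"

# first run of digits -- the same idea as Redshift's REGEXP_SUBSTR(col, '\\d+')
m = re.search(r"\d+", s)
first = int(m.group()) if m else None
```

And `re.findall(r"\d+", s)` returns every number in the list if you ever need more than the first.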
2018/03/19
1,028
3,341
<issue_start>username_0: So I am working on code in a class, and I have a problem where my teacher wants me to: *"Write a Python function that will take a list of numbers as a parameter(list can be a mixed list), and returns a new list comprised of the integer values(truncated in the case of a float), of the original list. E.g. the list [5, 6.4, 7.5, 8.8, 2, 2.1] returns [5, 6, 7, 8, 2, 2]"* I've started the function already, but I am stuck on the part of deciphering whether or not a value in the list is an int or float... This is what I have: ``` def int_list(a_list): for x in a_list: if x = int: ``` I don't think we can ask if x is an int or float without saying type(x), but I don't think my teacher wants us using any Python built in library functions. Any help is appreciated. Thanks<issue_comment>username_1: You can do something like: ``` if isinstance(x, float): x = int(x) ``` OR, if you want to avoid the use of a library function, you can do: ``` if x % 1 != 0: x = int(x) ``` For example: try executing `print(5.5%1)` and `print(5%1)`. OR, you can just simply use `x = int(x)` if it is guaranteed that the values will be numbers. Otherwise, you can use try/except to handle exceptions. Upvotes: 2 [selected_answer]<issue_comment>username_2: You don't need a function. Use a list comprehension instead: ``` >>> list = [5, 6.4, 7.5, 8.8, 2, 2.1] >>> nl = [int(x) for x in list] >>> nl [5, 6, 7, 8, 2, 2] ``` You can think of the list comprehension as a function that gets applied to all of its elements: ``` >>> def process_list(a_list, a_func): ... new_list = [] ... for x in a_list: ... new_list.append(a_func(x)) ... return new_list ... >>> process_list(list,int) [5, 6, 7, 8, 2, 2] ``` but as you see, a list comprehension does a lot of this stuff on its own, and you only have to supply a function that translates each value to a new one. (And technically speaking, you don't even *have* to process the original values. 
The list will be iterated over and the function called for each element, but what the function actually does with it is none of Python's concern.) Upvotes: 2 <issue_comment>username_3: ``` sample_list = [5, 6.4, 7.5, 8.8, 2, 2.1] def iterate_list(sample_list): new_list = [] for num in sample_list: num = int(num) new_list.append(num) return new_list print(iterate_list(sample_list)) ``` Your output will be: ``` [5, 6, 7, 8, 2, 2] ``` 1. We have a sample\_list containing the numbers you mentioned. 2. We create a function that iterates over each item in the list, with a parameter called sample\_list. The argument we pass when calling the function will be the sample\_list that we created. 3. We create an empty list called new\_list to store the integer version of each value in sample\_list. 4. We run a for loop that iterates over each number in sample\_list, creating a variable called num that stores the integer value of that particular number. We then append that integer value to new\_list, and then return the new list. 5. We then call the function. In this case I print the output to make sure that I got the proper output, but you do not have to print when you call the function. Upvotes: 0
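One detail the assignment wording glosses over: `int()` truncates toward zero, which is not the same as flooring once negative numbers appear. A quick sketch of the difference (the negative value is an illustrative addition, not from the original list):

```python
import math

values = [5, 6.4, 7.5, 8.8, 2, 2.1, -2.7]

truncated = [int(x) for x in values]       # truncates toward zero
floored = [math.floor(x) for x in values]  # rounds toward negative infinity

print(truncated)  # [5, 6, 7, 8, 2, 2, -2]
print(floored)    # [5, 6, 7, 8, 2, 2, -3]
```

So for the "truncated in the case of a float" requirement, `int()` is the right choice.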
2018/03/19
1,053
2,856
<issue_start>username_0: I'm trying to `scale()` only numeric columns IF a `data.frame` contains a mix of numeric and non-numeric columns of data. (Initially, I am wondering if there could be an `if` statement showing if a `data.frame` contains non-numeric data?) Note that I want to keep the original `data.frame` variables, and only add the new, `scale`d variables with the suffix `".s"` to the original `data.frame`. I have tried **the following**. But it looks like it also populates the non-numeric column `Loc` in the below example? ``` stan <- function(data, scale = TRUE, center = TRUE, na.rm = TRUE){ data <- if(na.rm) data[complete.cases(data), ] ind <- sapply(data, is.numeric) data[paste0(names(data), ".s")] <- lapply(data[ind], scale) return(as.data.frame(data)) } # EXAMPLE: stan(iris) ```<issue_comment>username_1: Using `dplyr`, you can do: ``` library(dplyr) iris %>% mutate_if(is.numeric, funs(s = scale)) ``` which will create the scaled columns with the suffix `_s` (no way to change this to `.s` as far as I know, although you can always do an additional renaming step). Upvotes: 2 <issue_comment>username_2: RE: your question on how to test whether your data frame has any non-numeric columns, you have a couple of ways to do this. Here's one: ``` all(sapply(iris, class) == "numeric") # [1] FALSE ``` You can use that as your test in the `if` statement. It should be true exactly when `scale()` can produce a result. Alternatively, you could `try` the offending `colMeans`, but that ends up being more complicated. 
EDIT: since the OP accepted this as the answer, I'll add @Frank 's comment that answers the first part: > > `f = function(d) {ind <- sapply(d, is.numeric); d[paste0(names(d)[ind], ".s")] <- lapply(d[ind], scale); d}` - Frank > > > Upvotes: 3 [selected_answer]<issue_comment>username_3: Alternative solution: ``` data <- data.frame(iris, scale(Filter(is.numeric, setNames(iris, paste0(names(iris), ".s"))))) ``` Returns: ``` > head(data) Sepal.Length Sepal.Width Petal.Length Petal.Width Species Sepal.Length.s Sepal.Width.s Petal.Length.s Petal.Width.s 1 5.1 3.5 1.4 0.2 setosa -0.8976739 1.01560199 -1.335752 -1.311052 2 4.9 3.0 1.4 0.2 setosa -1.1392005 -0.13153881 -1.335752 -1.311052 3 4.7 3.2 1.3 0.2 setosa -1.3807271 0.32731751 -1.392399 -1.311052 4 4.6 3.1 1.5 0.2 setosa -1.5014904 0.09788935 -1.279104 -1.311052 5 5.0 3.6 1.4 0.2 setosa -1.0184372 1.24503015 -1.335752 -1.311052 6 5.4 3.9 1.7 0.4 setosa -0.5353840 1.93331463 -1.165809 -1.048667 ``` Upvotes: 2
2018/03/19
890
3,039
<issue_start>username_0: We have an Oracle database with the following charset settings > > SELECT parameter, value FROM nls\_database\_parameters WHERE parameter like 'NLS%CHARACTERSET' > > > ``` NLS_NCHAR_CHARACTERSET: AL16UTF16 NLS_CHARACTERSET: WE8ISO8859P15 ``` In this database we have a table with a `CLOB` field, which has a record that starts with the following string, stored obviously in ISO-8859-15: `X²ARB` (here correctly converted to unicode, in particular that 2-superscript is important and correct). Then we have the following trivial piece of code to get the value out, which is supposed to automatically convert the charset to unicode via globalization support in Oracle: ``` private static final String STATEMENT = "SELECT data FROM datatable d WHERE d.id=2562456"; public static void main(String[] args) throws Exception { Class.forName("oracle.jdbc.driver.OracleDriver"); try (Connection conn = DriverManager.getConnection(DB_URL); ResultSet rs = conn.createStatement().executeQuery(STATEMENT)) { if (rs.next()) { System.out.println(rs.getString(1).substring(0, 5)); } } } ``` Running the code prints: * with `ojdbc8.jar` and `orai18n.jar`: `X�ARB` -- incorrect * with `ojdbc7.jar` and `orai18n.jar`: `X�ARB` -- incorrect * with `ojdbc-6.jar`: `X²ARB` -- **correct** By using `UNISTR` and changing the statement to `SELECT UNISTR(data) FROM datatable d WHERE d.id=2562456` I can bring `ojdbc7.jar` and `ojdbc8.jar` to return the correct value, but this would require an unknown number of changes to the code as this is probably not the only place where the problem occurs. 
Is there anything I can do to the client or server configurations to make *all* queries return correctly encoded values without statement modifications?<issue_comment>username_1: Please have a look at [Database JDBC Developer's Guide - Globalization Support](https://docs.oracle.com/database/121/JJDBC/global.htm#JJDBC28643) > > The basic Java Archive (JAR) file ojdbc7.jar, contains all the > necessary classes to provide complete globalization support for: > > > * `CHAR` or `VARCHAR` data members of object and collection for the character sets `US7ASCII`, `WE8DEC`, `WE8ISO8859P1`, `WE8MSWIN1252`, and `UTF8`. > > > To use any other character sets in `CHAR` or `VARCHAR` data members of > objects or collections, you must include orai18n.jar in the `CLASSPATH` > environment variable: > > > `ORACLE_HOME/jlib/orai18n.jar` > > > Upvotes: -1 <issue_comment>username_2: It definitely looks like a bug in the JDBC thin driver (I assume you're using thin). It could be related to LOB prefetch where the CLOB's length, character set id and the first part of the LOB data is sent inband. This feature was introduced in 11.2. As a workaround, you can disable lob prefetch by setting the connection property > > oracle.jdbc.defaultLobPrefetchSize > > > to "-1". Meanwhile I'll follow up on this bug to make sure that it gets fixed. Upvotes: 3 [selected_answer]
2018/03/19
788
3,049
<issue_start>username_0: I have code in one folder, and want to import code in an adjacent folder. I am trying to import a Python file in innerLayer2 into a file in innerLayer1: ``` outerLayer: innerLayer1 main.py innerLayer2 functions.py ``` I created the following function to solve my problem, but there must be an easier way. Also, this only works on Windows, and I need it to work on both Linux and Windows. ``` # main.py import sys def goBackToFile(layerBackName, otherFile): for path in sys.path: titles = path.split('\\') for index, name in enumerate(titles): if name == layerBackName: finalPath = '\\'.join(titles[:index+1]) return finalPath + '\\' + otherFile if otherFile != False else finalPath sys.path.append(goBackToFile('outerLayer','innerLayer2')) import functions ``` Is there an easier method which will work on all operating systems? Edit: I know the easiest method is to put innerLayer2 inside of innerLayer1, but I cannot do that in this scenario. The files have to be adjacent. Edit: After analysing the answers this has received, I have discovered the easiest method and have posted it as an answer below. Thank you for your help.<issue_comment>username_1: The easiest way is: move the `innerLayer2` folder inside the `innerLayer1` folder, add an empty file named `__init__.py` in `innerLayer2`, and in `main.py` use the following: ``` import innerLayer2.functions as innerLayer2 # Eg of usage: # innerLayer2.sum(1, 2) ``` Upvotes: 0 <issue_comment>username_2: If you have to use the current **directory** design, I would suggest using a combination of `sys` and `os` to simplify your code: ``` import sys, os sys.path.insert(1, os.path.join(sys.path[0], '..')) from innerLayer2 import functions ``` Upvotes: 1 <issue_comment>username_3: Use `.` and `..` to address modules within a package structure, as specified by [PEP 328](https://docs.python.org/2.5/whatsnew/pep-328.html) et al.
Suppose you have the following structure: ``` proj/ script.py # supposed to be installed in bin folder mypackage/ # supposed to be installed in sitelib folder __init__.py # defines default exports if any Inner1/ __init__.py # defines default exports from Inner1 if any main.py Inner2/ __init__.py # defines default exports from Inner2 if any functions.py ``` Inner1.main should contain an import line like this: ``` from ..Inner2 import functions ``` Upvotes: 2 <issue_comment>username_4: After analysing the answers I received, I discovered the easiest solution: simply use this syntax to add the outerLayer directory to sys.path, then import functions from innerLayer2: ``` # main.py import sys sys.path.append('..') # adds outerLayer to the sys.path (one layer up) from innerLayer2 import functions ``` Upvotes: 2 [selected_answer]
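For completeness, one pitfall worth hedging against: `sys.path.append('..')` is resolved relative to the current working directory, not the script's location, so it breaks when the script is launched from elsewhere. The helper below (a sketch; the directory names follow the question's layout) derives the parent from the file path itself, which works the same way on Windows and Linux:

```python
import sys
from pathlib import Path

def add_outer_layer(current_file):
    """Append the grandparent directory of current_file to sys.path.

    For outerLayer/innerLayer1/main.py this appends outerLayer, making
    `from innerLayer2 import functions` importable regardless of the
    current working directory or operating system.
    """
    outer = str(Path(current_file).resolve().parent.parent)
    if outer not in sys.path:
        sys.path.append(outer)
    return outer

# In main.py you would call:
# add_outer_layer(__file__)
# from innerLayer2 import functions
```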
2018/03/19
549
1,983
<issue_start>username_0: So I am trying to get the user input (value) of this select option button. Everything I look at online doesn't really tell me how to get this button value so I can use it and manipulate it, please help ![](https://i.stack.imgur.com/vVWbU.png)
2018/03/19
2,391
5,267
<issue_start>username_0: This is not a "vlookup-and-fill-down" question. My source data is excellent at delivering all the data I need, just not in in a usable form. Recent changes in volume mean manually adjusted fixes are no longer feasible. I have an inventory table and a services table. The inventory report does not contain purchase order data for services or non-inventory items. The services table (naturally) does. They are of course different shapes. Pseudo-coding would be something to the effect of `for every inventory$Item in services$Item, replace inventory$onPO with services$onPO`. Sample Data ``` inv <- structure(list(Item = c("10100200", "10100201", "10100202", "10100203", "10100204", "10100205-A", "10100206", "10100207", "10100208", "10100209", "10100210"), onHand = c(600L, NA, 39L, 0L, NA, NA, 40L, 0L, 0L, 0L, 0L), demand = c(3300L, NA, 40L, 40L, NA, NA, 70L, 126L, 10L, 10L, 250L), onPO = c(2700L, NA, 1L, 40L, NA, NA, 30L, 126L, 10L, 10L, 250L)), .Names = c("Item", "onHand", "demand", "onPO"), row.names = c(NA, -11L), class = c("data.table", "data.frame")) svc <- structure(list(Item = c("10100201", "10100204", "10100205-A"), `Rcv'd` = c(0L, 0L, 44L), Backordered = c(20L, 100L, 18L)), .Names = c("Item", "Rcv'd", "Backordered"), row.names = c(NA, -3L), class = c("data.table", "data.frame")) ```<issue_comment>username_1: Starting with the tables: ``` >inv Item OnHand Demand OnPO 1: 10100200 600 3300 2700 2: 10100201 NA NA NA 3: 10100202 39 40 1 4: 10100203 0 40 40 5: 10100204 NA NA NA 6: 10100205-A NA NA NA 7: 10100206 40 70 30 8: 10100207 0 126 126 9: 10100208 0 10 10 10: 10100209 0 10 10 11: 10100210 0 250 250 > svc Item Rcv'd Backordered 1: 10100201 0 20 2: 10100204 0 100 3: 10100205-A 44 18 ``` After far more cursing than I'd like to admit, the simple solution that works on the above test data, and my live data proved to be: ``` # Insert OnHand and OnPO data from svc for (i in 1:nrow(inv)) { if(inv$Item[i] %in% svc$Item) { x <- which(svc$Item == 
inv$Item[i]) inv$OnPO[i] <- svc$Backordered[x] inv$OnHand[i] <- svc$`Rcv'd`[x] } else{} } # cleanup inv[is.na(inv)] <- 0 ``` Is there a simpler or more obvious method that I've overlooked? Upvotes: 0 <issue_comment>username_2: Assuming you want to replace `NA`s in `onPO` with values from `Backordered` here is a solution using `dplyr::left_join`: ``` library(dplyr); left_join(inv, svc) %>% mutate(onPO = ifelse(is.na(onPO), Backordered, onPO)) %>% select(-Backordered, -`Rcv'd`); # Item onHand demand onPO #1 10100200 600 3300 2700 #2 10100201 NA NA 20 #3 10100202 39 40 1 #4 10100203 0 40 40 #5 10100204 NA NA 100 #6 10100205-A NA NA 18 #7 10100206 40 70 30 #8 10100207 0 126 126 #9 10100208 0 10 10 #10 10100209 0 10 10 #11 10100210 0 250 250 ``` Or a solution in base R using `merge`: ``` inv$onPO <- with(merge(inv, svc, all.x = TRUE), ifelse(is.na(onPO), Backordered, onPO)) ``` --- Or using `coalesce` instead of `ifelse` (thanks to @username_3): ``` library(dplyr); left_join(inv, svc) %>% mutate(onPO = coalesce(onPO, Backordered)) %>% select(-Backordered, -`Rcv'd`); ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: In `data.table` world, this is an "update-join". 
Join on "Item" and then update the values in the original set with the values from the new set: ``` library(data.table) setDT(inv) setDT(svc) inv[svc, on="Item", c("onPO","onHand") := .(i.Backordered, `i.Rcv'd`)] #inv original table #svc update table #on= match on specified variable # := overwrite onPO with Backordered # onHand with Rcv'd # Item onHand demand onPO # 1: 10100200 600 3300 2700 # 2: 10100201 0 NA 20 # 3: 10100202 39 40 1 # 4: 10100203 0 40 40 # 5: 10100204 0 NA 100 # 6: 10100205-A 44 NA 18 # 7: 10100206 40 70 30 # 8: 10100207 0 126 126 # 9: 10100208 0 10 10 #10: 10100209 0 10 10 #11: 10100210 0 250 250 ``` Upvotes: 1 <issue_comment>username_4: We could use `eat` from my package [*safejoin*](https://github.com/username_4/safejoin), and "patch" the matches from the rhs into the lhs when columns conflict. We rename `Backordered` to `onPO` on the way so the two columns conflict as desired. ``` # devtools::install_github("username_4/safejoin") library(safejoin) library(dplyr) eat(inv, svc, onPO = Backordered, .conflict = "patch") # Item onHand demand onPO # 1 10100200 600 3300 2700 # 2 10100201 NA NA 20 # 3 10100202 39 40 1 # 4 10100203 0 40 40 # 5 10100204 NA NA 100 # 6 10100205-A NA NA 18 # 7 10100206 40 70 30 # 8 10100207 0 126 126 # 9 10100208 0 10 10 # 10 10100209 0 10 10 # 11 10100210 0 250 250 ``` Upvotes: 0
2018/03/20
446
1,633
<issue_start>username_0: A similar question has been asked before; however, I am not sure the proposed solutions can be applied in my case. I have generated a consumerKey and consumerSecret as per the WooCommerce API documentation. I have confirmed that I can get the results using these keys by calling the below URL in the web browser: ``` https://mywebsite.com/wp-json/wc/v2/products?consumer_key=ck_blahblah&consumer_secret=cs_blahblah ``` However, when I execute the same API call in Postman, using GET and correctly replacing user -> consumerKey and pass -> consumerSecret, I always get 401: woocommerce\_rest\_cannot\_view. I have tried both HTTP and HTTPS with the same error. Any ideas?<issue_comment>username_1: Use this plugin <https://github.com/WP-API/Basic-Auth>, and when you call the API use Basic Authentication with the username and password. Upvotes: 2 <issue_comment>username_2: WooCommerce uses a different authentication method for HTTP and HTTPS. So, if "HTTPS" = 1 is not being passed by Apache/Nginx to your code, it will enforce the HTTP method. Double-check that "HTTPS" is passed to your PHP: 1. Open the file: ./wp-includes/load.php 2. Search for "is\_ssl" 3. Insert "echo 'test\_beg'; echo $\_SERVER['HTTPS']; echo 'test\_end';" 4. Make a request to the API 5. If it returns test\_beg and test\_end without "on" or "1" in the middle, HTTPS is not being passed. This can happen when using a reverse proxy, so you may need to add "SetEnvIf HTTPS on HTTPS=on" to your httpd.conf (if using Apache). I hope it helps :) (remember to delete these 'echo' statements in load.php) Upvotes: 1
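As a side note for debugging this outside Postman: over HTTPS, WooCommerce also accepts the consumer key/secret as query-string parameters, which is what the browser URL in the question relies on. A hedged Python sketch that just builds such a URL — the host and keys are the placeholders from the question:

```python
from urllib.parse import urlencode

def query_string_auth(url, consumer_key, consumer_secret):
    """Append WooCommerce consumer key/secret as query-string parameters.

    Only appropriate over HTTPS, since the credentials are visible in the URL.
    """
    separator = "&" if "?" in url else "?"
    params = urlencode({"consumer_key": consumer_key,
                        "consumer_secret": consumer_secret})
    return url + separator + params

print(query_string_auth("https://mywebsite.com/wp-json/wc/v2/products",
                        "ck_blahblah", "cs_blahblah"))
```

If this URL succeeds while Basic Auth fails, the problem is likely the HTTPS detection issue described in the answer above.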
2018/03/20
550
1,488
<issue_start>username_0: So I have this piece from html ``` XS 10 x 10 cm 5 300 Ft ``` And I want to get that '5 300' out of it. My code to get that: ``` print(item.find('label',{'for':'productX'}).find('span', attrs={'class': 'p'}).find('span')) ``` but it only prints out this: ``` ``` I hope somebody can help Edit: already tried to write .text to the end but it gives nothing ' '.<issue_comment>username_1: You almost got it, you just need to add `.text` to the last `find` function. ``` from bs4 import BeautifulSoup html = """ XS 10 x 10 cm 5 300 Ft """ item = BeautifulSoup(html, "lxml") print(item.find('label',{'for':'productX'}).find('span', attrs={'class': 'p'}).find('span').text) ``` Outputs: ``` 5 300 ``` Upvotes: 1 <issue_comment>username_2: You can try this: ``` from bs4 import BeautifulSoup as soup import re s = """ XS 10 x 10 cm 5 300 Ft """ final_result = re.sub('^\s+|[a-zA-Z\s]+$', '', soup(s, 'lxml').find('span', {'class':'p'}).text) ``` Output: ``` u'5 300' ``` Upvotes: 0 <issue_comment>username_3: Here's one with select, which doesn't give you as many options but is quite readable ``` import bs4 s = """ XS 10 x 10 cm 5 300 Ft """ soup = bs4.BeautifulSoup(s, 'xml') soup.select_one("#_productX_label > span > span").text ``` Output: `'5 300'` --- For your other issue of not being able to use the text property, perhaps the data is being filled out by a js function, or stored in an attribute? Upvotes: 0
2018/03/20
731
2,359
<issue_start>username_0: As the title says, I need to access each `child` element of the map function of all children, `React.Children.map(this.props.children, (child)...` I need this because I want to conditionally render certain props, and also prevent rendering based on certain conditions depending on which child is being rendered at the moment. I have bound this function in the constructor ``` this.renderChildren = this.renderChildren.bind(this); ``` but it's still not working. The only way I can even get this map function to work is if I wrap it in a `return()` function. Any ideas? ``` renderChildren(funcs) { // debugger return ( React.Children.map(this.props.children, (child) => { debugger // *** Need to access `this.state` from in here *** return React.cloneElement(child, { state: this.state, // *** Need access here too callbackFuncs: funcs }) }) ) } ... return({this.renderChildren(callbacks)}) ``` The following will NOT work (not wrapped in a return) ``` renderChildren(funcs) { React.Children.map(this.props.children, (child) => { return React.cloneElement(child, { state: this.state, callbackFuncs: funcs }) }) } ```
2018/03/20
447
1,595
<issue_start>username_0: I'm trying to move the order review section to the top of the WooCommerce checkout page, and this is working: ``` remove_action( 'woocommerce_checkout_order_review', 'woocommerce_order_review', 10 ); add_action( 'woocommerce_before_checkout_form', 'woocommerce_order_review', 20 ); ``` But when the checkout opens, it scrolls down to the order review section rather than to the top of the page.<issue_comment>username_1: This works: ``` remove_action( 'woocommerce_checkout_order_review', 'woocommerce_order_review', 10 ); add_action( 'woocommerce_after_checkout_billing_form', 'woocommerce_order_review', 20 ); ``` Upvotes: 2 <issue_comment>username_2: Moving the review form doesn't automatically move the `Your Order` heading. This is what I added to functions.php: ``` remove_action( 'woocommerce_checkout_order_review', 'woocommerce_order_review', 10 ); add_action( 'woocommerce_before_checkout_form', 'prefix_wc_order_review_heading', 3 ); add_action( 'woocommerce_before_checkout_form', 'woocommerce_order_review', 4 ); /** * Add a heading for order review on checkout page. * This replaces the heading added by WooCommerce since order review is moved to the top of the checkout page. */ function prefix_wc_order_review_heading() { echo '<h3>Your Order</h3>'; } ``` And to hide the existing `Your Order` heading (and add some spacing for the credit card form), I added this to style.css: ``` .woocommerce-checkout #order_review_heading { display: none !important; } .woocommerce-checkout #order_review { margin-top: 2rem !important; } ``` Upvotes: 0